The power of the generalist  

March 13, 2024 | By Dean Sevin Yeltekin

In this Q&A, technology strategist Aditya Singh ’08S (MBA) reflects on how generative AI is changing the world of work and predicts who will succeed in the new normal.

Sevin Yeltekin: What brought you to Simon? 

Aditya Singh: I moved to the U.S. to complete a master’s degree in computer science and found myself at a crossroads. I could become a subject matter expert and lean into more technical work, or I could move into a business role and help clients leverage technology to achieve their goals. I found the business path more appealing, which is how I ended up at Simon to pursue an MBA. I naturally gravitated toward Simon’s data-driven, analytical approach because of my background. What I found was that Simon doesn’t just give you a skillset—it teaches you to think in a certain way. Being called upon to solve problems you know nothing about creates discomfort that leads to growth. That is how you thrive in future positions. 

SY: Now you’re a leader in technology strategy at Microsoft. What does your role entail?

AS: Within Microsoft’s Financial Services Industry sales team, my role is to help clients leverage our full technology suite to achieve their goals. I am a generalist, not a deep subject matter expert. My job is to put all the technology in front of the client like pieces of a Lego set and help them build something meaningful. 

SY: When did generative AI first come onto your radar, and how is it changing the nature of your work?

AS: I was introduced to the concept of AI in the early 2000s when I was doing my master’s degree. I was familiar with models that could be trained to make inferences, but everything changed when companies like OpenAI began training large language models (LLMs) with sweeping applications. Now we have a tool that doesn’t just scour transport data or climate data; it is trained on the entire universe of written language. At Microsoft, I have the privilege of trying new technology before it is commercialized, so there are plenty of generative AI tools that are still in development. But I can say that I currently use M365 Copilot, our AI assistant, to do things like produce meeting transcripts and set reminders. Generative AI is particularly useful for noticing things that happen on a schedule, like when someone sends an expense email every month around the same time. The AI tool might take notice and ask to automate it. If you say yes, that’s one more task off your plate for the day. This is just one example of the many repetitive tasks that AI will soon take over.

SY: How should we fill the time saved by using generative AI to do more routine tasks? 

AS: As time goes by, generative AI will leave humans more productive for things that matter. There will certainly be more opportunities for deep, strategic thinking. But that doesn’t necessarily have to take place in an office. We are all better when we have space to let our minds roam—whether that looks like thinking through a client’s problem on a long walk or spending more time with family. AI creates less pressure to do mundane tasks, but the answer is not to replace every mundane task with another task. That’s not good for people and their organizations in the long run. 

SY: How are your clients using generative AI to enhance their work?

AS: My clients in the financial services industry are using generative AI to tackle a broad range of problems. I often help them leverage our tools to improve employee productivity and reduce communication barriers on platforms like Outlook, Teams, and PowerPoint, where people spend most of their workday. My clients are also looking to embed the power of LLMs into very complex systems so they can in turn serve their own clients more effectively. A lot of this is not visible to the public—it is part of their secret sauce. Overall, I have noted an eagerness in my clients to understand how competitors are using generative AI. This desire to maintain a competitive edge is driving adoption and innovation in an industry that can be slow to evolve. Nothing happens with the snap of a finger, but as AI tools are gradually baked into day-to-day work, people notice colleagues trying them and decide to take the plunge themselves. Then, over time, the entire culture changes. 

SY: When it comes to generative AI, what kind of skills would you want a new hire to bring to the table? 

AS: Generative AI is changing the entire fabric of the global workforce. It will create new industries and require new skills in ways we can’t even predict. But what will stay the same is the kind of employee that companies like Microsoft want to hire. It is the generalist, the person who is trained in multiple areas and has multiple skillsets, who brings the most value to the table. Generalists will combine the power of AI with the weight of their varied, rich experiences to solve problems in a way that a machine never could. When it comes to being nimble and adaptive in solving out-of-the-box challenges, there is still no match for the human mind.


Dean Sevin Yeltekin

Sevin Yeltekin is the Dean of Simon Business School. 


Follow the Dean’s Corner blog for more expert commentary on timely topics in business, economics, policy, and management education. To view other blogs in this series, visit the Dean's Corner Main Page.

 

 

 

 


Generative AI as a Data Analyst 

February 15, 2024 | By Dean Sevin Yeltekin

 

In this Q&A, business analyst Benedikt Statt ’16S (MS) reflects on the benefits and the limitations of using generative AI in a data analytics role. 

Sevin Yeltekin: What motivated you to pursue a Master’s in Marketing Analytics degree from Simon? What were some of the highlights of your experience?

Benedikt Statt: As an undergraduate student in Germany, I developed an interest in the people side of business. The desire to uncover what drives consumer behavior ultimately led me to marketing. I was working in a general marketing role at a small HR consulting firm in Mexico when I made the decision to pursue a one-year program to strengthen my skills in data analytics. At that time, I planned to continue building my career in consulting, so I got involved with Simon Vision. I remember learning early on about how to conduct and analyze surveys in market research class and then immediately putting that skill to use on a consulting project for a local company. Having the opportunity to apply theory and see very tangible results was impactful for me. I didn’t end up in consulting, but I owe my current job at Groupon to the experiences acquired through Simon Vision.

SY: Now you manage pricing and promotions for Groupon’s North American market. What does your role entail, and how do you use Generative AI in your daily work? 

BS: I work with engineering, design, and optimization teams around the globe to accomplish everything from monitoring daily promotion performance and testing website features to developing strategies to target customers with personalized messaging. One of the things I appreciate most about Groupon is that it’s the kind of place where we must prove every hypothesis we come up with, even when there is a consensus. At every step of the way, we use data analytics tools to drive and defend our decisions. A gut feeling isn’t enough. 

I primarily use Tableau, Google Sheets, and SQL databases in my daily work, but I do rely on generative AI to streamline things. I may use it to come up with ideas for improving SQL queries or copy and paste an error to see where I went wrong in my analysis. By using generative AI to reduce manual work, I have more time to dig deeper into data and spot trends, which is often like finding a needle in a haystack. The drawback is that there is more thinking involved in deep analysis, so I find myself more tired at the end of the day. Sometimes it’s helpful to leave some more routine tasks on my plate to break up the more rigorous work.
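To make that workflow concrete, here is a minimal sketch of how an analyst might paste a failing query and its error message into a large language model and ask for a diagnosis. It uses the OpenAI Python client purely as an illustration; the model name, prompt wording, and query are hypothetical assumptions, not a description of the tools Statt actually uses.

```python
# Minimal sketch: asking an LLM to debug and improve a SQL query.
# Model name, prompt, and query are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

query = """
SELECT deal_id, AVG(discount_pct) AS avg_discount
FROM promotions
WHERE run_date >= '2024-01-01'
GROUP BY deal_id
ORDER BY avg_discount DESC;
"""
error = 'ERROR: column "discount_pct" does not exist'

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical model choice
    messages=[
        {"role": "system",
         "content": "You review SQL. Explain errors and suggest faster, correct rewrites."},
        {"role": "user",
         "content": f"This query failed with:\n{error}\n\nQuery:\n{query}"},
    ],
)
print(response.choices[0].message.content)  # suggested diagnosis and rewrite
```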

SY: To what extent can generative AI take on the role of a data analyst? 

BS: I can imagine some data engineering and maintenance roles disappearing or shifting in the future, but there are limitations to what AI can accomplish. It would certainly make my life easier to feed in raw data and have generative AI tools complete all my analysis for me, but AI can't do everything I do, even if it looks like it can in a technical sense. There is a human touch that will always be missing. At Groupon, for example, someone might look at a user interface and come up with an idea to improve the customer experience. We can ask AI to implement the idea, but it takes a human to have the free thinking and creativity to come up with it in the first place. So much of what made my Simon experience meaningful was the gathering of creative minds. Intelligent people from diverse backgrounds combine their ideas to create something new. That is how you end up with places like Silicon Valley, where great ideas come from people, not robots.

SY: How should Simon integrate generative AI into the classroom experience today?

BS: Simon students graduate with a tremendous skillset, and the ability to leverage generative AI tools will be a welcome addition to the extent that they can apply them in real-world settings. When I interview a job candidate, I look for the ability to solve problems. If they use generative AI in that process, great. Those skills will certainly help push the company further. But the most important thing is that core ability to connect the dots between theory and practice. In addition to workshops and class projects, I hope that Simon can steer students toward consulting projects, competitions, and collaborations with local companies to help them connect the dots.


Dean Sevin Yeltekin

Sevin Yeltekin is the Dean of Simon Business School. 


Follow the Dean’s Corner blog for more expert commentary on timely topics in business, economics, policy, and management education. To view other blogs in this series, visit the Dean's Corner Main Page.

 

 

 

 


4 Pillars of Generative AI 

January 10, 2024 | By Dean Sevin Yeltekin

In the first installment of this Q&A series, strategy consultant Jeff Sigel ’01S (MBA) describes four pillars of generative AI usage and warns against common pitfalls. 

Sevin Yeltekin: You started your own consulting firm, Proprioceptive, several months ago. Can you walk us through your career journey to this point?

Jeff Sigel: I was originally a math and physics teacher, which I like to describe as the hardest marketing job I’ve ever had. After several years of teaching, I joined a consulting firm that introduced me to the world of marketing and inspired me to pursue a business education. Simon gave me the training and credibility I needed to grow in a new direction. I discovered brand management in the first year of my MBA program, landed an internship at Kraft Nabisco, and started a journey through food marketing and innovation that included stops at The Hershey Company and Cracker Barrel. 

At Cracker Barrel, I noticed a lack of communication between the finance and data engineering teams. No one was translating consumer data into actionable insights. I raised my hand and volunteered to build an analytics function to bridge the gap. Our CFO would often bring up the topic of integrating generative AI into our operations, in addition to machine learning (ML) tools we were already using for forecasting purposes, but like many organizations, we were not prepared to use those tools in a strategic way. At the end of 2023, I left Cracker Barrel to found a consulting firm called Proprioceptive with a vision for helping companies activate strategy, not just put something to paper. Generative AI is increasingly part of this work.

SY: How have you built expertise in generative AI?

JS: I am an avid listener of audio books. In 2023, I spent much of the year listening to books on topics related to AI and machine learning. My approach is to listen to everything at 2x speed and fly through without being too worried about picking everything up, because what I glean from one book will help me understand the next one better. Using an application called DataCamp, I have also taken weekly classes to learn to code with Python. To supplement this independent learning, I took a class offered by Dan Keating at Simon that brought a fascinating perspective to the table. I learned more about what ChatGPT can do in terms of running Python code and was inspired to explore multimodal generative AI in greater depth. 

SY: How can generative AI enhance human intellect rather than replace it? 

JS: When developing materials on generative AI for new clients, I walk them through four pillars of application:

  1. Knowledge task assistance—Tools like ChatGPT add incredible value when it comes to tasks like coding and report writing. I attended a recent conference looking at generative AI in the pharmaceutical industry, and most presentations touched on report writing. These reports are onerous and cost a fortune, but they are an integral part of every drug trial. Generative AI has the potential to streamline this process. 
  2. Enhanced idea generation—Generative AI can help my clients create in different modes: images, words, numbers, and sounds. Coming up with an image that fits a word is much simpler than taking a piece of music and generating an image that fits, but both are now possible. I also use AI tools for simple brainstorming. Today, I use ChatGPT to create a list of questions I should ask a prospective client, or to generate images for marketing materials. Back in my food innovation days, I might have asked ChatGPT to come up with a way of packaging chocolate that allows someone to carry it around without it melting.
  3. Accelerated personalized education—Generative AI can help my clients bridge the gap between functions. Imagine a salesperson who doesn’t understand what a data analyst does. They could ask an AI assistant to listen in on a conversation and explain what that analyst is saying, or suggest better questions to ask. There are endless examples of ways that generative AI can become a tour guide in unfamiliar territory and help us work more effectively across disciplines and functions.
  4. Automated human-like interaction—With the help of video generation platforms like Synthesia, it is now possible to type in a script and watch a video of a computer-generated person reading it in several languages. If a client is creating a series of training videos, this tool could dramatically improve accuracy and reduce cost.

SY: What are some common pitfalls you help your clients avoid? 

JS: You would never want to start with a hammer and look for ways to use it. In the same way, it is a mistake to take a generative AI tool that seems interesting and look for ways to apply it to business operations. As a marketer, I believe that you always start with the problem. Define the problem and search for a tool that can address it, whether or not it is related to AI. 

Just like clients can become too enamored of generative AI, they can also become too cynical. Maybe I’m too much of an optimist, but I view generative AI through a positive lens. Even with the tremendous social disparities in place today, think about the ways that innovations in fields like medicine have vastly improved living conditions. I’m currently consulting for a data analytics company that is building a new AI-powered app to help doctors create better patient care reports. Another client is working with machine learning models in the drug discovery space to predict new drug candidates. Humans will find a way to abuse every technological advance, but the general arc of history bends toward progress. The good will outweigh the bad. 


Dean Sevin Yeltekin

Sevin Yeltekin is the Dean of Simon Business School. 


Follow the Dean’s Corner blog for more expert commentary on timely topics in business, economics, policy, and management education. To view other blogs in this series, visit the Dean's Corner Main Page.
 


 What ChatGPT means for higher education

April 26, 2023 | By Dean Sevin Yeltekin

 

ChatGPT, a chatbot released by OpenAI last November, has captured the world’s attention with its uncanny ability to compose essays, write song lyrics, play games, answer test questions, and take on other tasks traditionally associated with human intelligence.

The academic community has responded to the advent of this new technology with a mixture of fascination and concern, keenly aware of its potential to transform the way knowledge is produced, conveyed, and retained.

In the Q&A below, Mitch Lovett, senior associate dean of education and innovation at Simon Business School, weighs in on the potential impact of ChatGPT on higher education.

What do you view as ChatGPT’s primary strengths and weaknesses?

Where ChatGPT seems to excel most is in predicting text that might be useful to the user based on information that is available. A student taking a finance class, for example, might ask it to explain and contrast specific kinds of financial tools. ChatGPT might not produce the most elegant answer, but it will produce a B+ answer in a short amount of time. On the flip side, ChatGPT will often make up sources or information while trying to predict something—and, so far, it’s not particularly good at math. It is also not as useful when information doesn’t exist, like in a situation in which someone needs to write the website copy for a brand-new product. When ChatGPT tries to venture outside the space of prediction, it becomes less effective.

Do you think it can ever fully replace human intelligence?

No, but I would be wary of underestimating technology like ChatGPT. A few years ago, we never would have expected to encounter an AI program that can create something people struggle to discern from human art. Over time, it will become capable of more and more. Right now, though, it is just making predictions. It knows that certain things go with other things, but it does not know truth from lies. It doesn’t have a sense of ethics. That’s an area where human judgment is indispensable.

What I think is most profound is how it is going to redefine expertise in some fields. We are going to use ChatGPT to write things for us—text, code, outlines—so that we can complete our work faster. That means some skills will become less important and others more important.

What are some ways ChatGPT can improve the quality of education at Simon?

For both students and professors, one of its primary functions will be to assist in search. Constructing literature reviews will become significantly less time-intensive when using ChatGPT to aggregate sources and summarize information on a given topic. And when we think about ways to integrate it into classroom assignments, like using it to write code or organize information to help solve a case study, it is clear that students will be able to go further in the topics they are learning about. They will be able to do more in an assignment and do it more efficiently. They may not be able to learn more information overall, because the human brain can only absorb so much in a short time, but they will learn different information and get more practice designing solutions rather than implementing them.

I can also imagine ChatGPT serving as a first-line teaching assistant (TA) or discussion board assistant, responding to students in asynchronous classes more quickly and efficiently than their professors and human TAs can. Having an outside curator of information to post questions, responses, and comments will enrich the asynchronous interactions that take place.

What are some of the downsides or pitfalls of using ChatGPT in an academic setting?

There are significant implications when it comes to academic honesty. Theoretically, a student taking an online asynchronous course could use ChatGPT to complete all their assignments and take their exams. Many faculty members at Simon already use various analytics tools to detect plagiarism—for example, by comparing student answers to see if any are unreasonably similar. But as ChatGPT produces writing that is increasingly indistinguishable from something a student would produce, it will become an arms race to detect the use of AI. Some professors will address this by switching to in-person, handwritten exams whenever possible, while also ensuring that the course content is so specific that ChatGPT becomes ineffective. These are strategies that involve trying to block ChatGPT. Others will embrace the use of ChatGPT, but will do so by adjusting assignments and exams to compensate, similar to the concept of allowing a one-page cheat sheet on an exam or holding an open book exam.
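As a toy illustration of that kind of similarity check (my own sketch, not a description of the tools Simon faculty actually use), one could compare submissions pairwise and flag unusually similar pairs; the answers and threshold below are hypothetical.

```python
# Toy similarity check: flag pairs of answers that are unusually similar.
# Data and threshold are illustrative; real plagiarism tools are more sophisticated.
from difflib import SequenceMatcher
from itertools import combinations

answers = {
    "student_a": "Net present value discounts future cash flows at the cost of capital.",
    "student_b": "NPV discounts future cash flows at the firm's cost of capital.",
    "student_c": "Payback period measures how long it takes to recover the initial outlay.",
}

THRESHOLD = 0.8  # hypothetical cutoff for "unreasonably similar"

for (name1, text1), (name2, text2) in combinations(answers.items(), 2):
    ratio = SequenceMatcher(None, text1.lower(), text2.lower()).ratio()
    if ratio >= THRESHOLD:
        print(f"Flag for review: {name1} vs {name2} (similarity {ratio:.2f})")
```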

Of course, there is also the danger of placing too much trust in ChatGPT. A student taking an introductory course in accounting, for example, might ask it to answer an accounting-related question but lack the base knowledge to discern the accuracy of its answers. Without any background in the subject matter, it can be difficult to know if ChatGPT is producing an A- answer or a complete fabrication. This is where our definition of expertise becomes important. Part of educating students is helping them understand how best to use AI and evaluate when the AI might be producing meaningless or less valuable responses.

How do you expect ChatGPT to change the way students learn?

I find it helpful to think about the analogy of using calculators in math. My children’s elementary school found that allowing students to use calculators from an early age weakened their ability to do simple computations in their head, a skill that is helpful in doing more advanced math. In the same way, the introduction of ChatGPT might weaken students’ ability to write. Writing is certainly something we have traditionally viewed as an important skill. But how important is it in relation to the tasks we need people to be able to do in a world with AI?

If ChatGPT is always creating a generic setup to tailor, it allows students to avoid the mundane and repetitive—but doing mundane, repetitive tasks over and over might be helping them develop intuition and judgment. One of the most important tasks that confronts us as educators is figuring out how much of this mundane work is needed to become an expert when AI is available.

On the other hand, we may be overreacting to the fact that future students will learn differently from their predecessors. To be considered an expert on something, it will be less essential to recall facts that will always be at their fingertips and more important to apply judgment and critical thinking. There is little chance of stopping the integration of AI tools like ChatGPT into education, so our job is to decide what fundamental knowledge is necessary for mastering a subject and learn to train students accordingly.

What questions are we not asking about ChatGPT that we should be asking?

ChatGPT raises fundamental questions about truth that we must grapple with. This technology produces text that may or may not be accurate and presents it as fact. When asked the same question twice, it may produce contradictory answers if it incorporates new information in between searches. Will truth start becoming blurrier for the people who use it?

I also think about how search engines like Google form a layer between the user and a website. ChatGPT, on the other hand, will put a filtered version of a website in front of you. You won’t even need to visit a website directly. What are the implications when it gets things wrong? Where do we draw a boundary line between what is a search and what is not? There are plenty of legal questions to consider surrounding ownership and responsibility for the information that is presented.

Mitch Lovett


Mitch Lovett is the senior associate dean of education and innovation and a professor of marketing at Simon Business School.


Follow the Dean’s Corner blog for more expert commentary on timely topics in business, economics, policy, and management education. To view other blogs in this series, visit the Dean's Corner Main Page.

 

 


Research paper explores impact of AI on pricing decisions 

March 14, 2024 | By Bret Ellington

Simon Business School is proud to announce the publication of a new research paper co-authored by Professor Jeanine Miklós-Thal, the Fred H. Gowen Professor of Economics and Management and a CEPR research fellow. The paper, titled “AI, Algorithmic Pricing, and Collusion,” appeared in Competition Policy International (CPI). According to its website, CPI is “an independent knowledge-sharing organization focused on the diffusion of the most relevant information and content on the subjects of antitrust, competition law, and technological regulation.”

The paper delves into the complex intersection of artificial intelligence (AI), machine learning, and pricing decisions, addressing concerns about their potential impact on consumer prices.

In recent years, advancements in AI and machine learning have raised questions about whether such innovations could facilitate collusive pricing practices among firms, ultimately leading to higher prices for consumers. However, the paper challenges these fears by providing a detailed examination of the actual pricing algorithms commonly used by firms.

Through extensive research, Professor Miklos-Thal and her co-author, Catherine Tucker, the Sloan Distinguished Professor of Management at MIT Sloan and an NBER research associate, argue that while certain pricing algorithms may raise competition concerns, others may actually undermine firms' ability to sustain collusive prices. The paper emphasizes the importance of understanding the nuances of different pricing algorithms and their implications for competition in the marketplace.
 

This groundbreaking paper sheds light on the complexities of pricing decisions in the age of AI and machine learning, underscoring the need for a nuanced understanding of pricing algorithms to ensure fair competition and consumer welfare.


Read the full paper here:

AI, Algorithmic Pricing, and Collusion

Bret Ellington

Bret Ellington is a senior copywriter and content creator for the Simon Business School Marketing Department.


 

Follow Simon Business School News for the latest articles in this series at Simon News & Highlights.

 

 AI and the future of work

December 14, 2023 | By Professor Huaxia Rui

Last year, the advent of ChatGPT raised new questions about what Artificial Intelligence (AI) means for human labor. Workers who once felt secure in their jobs began wondering if they would soon go the way of the telegraph operator or the carriage driver. 

In a working paper, my co-authors and I create a visual framework to think about the evolving relationship between AI and jobs. We then use the launch of ChatGPT as a shock to test an idea we call the inflection point conjecture.

A conceptual framework

Before diving in, let’s address three common misunderstandings. 

First, intelligence is not the same as consciousness. While we can define human or artificial intelligence in various job contexts, the same cannot be said for consciousness. In fact, whether consciousness even exists remains debatable. 

Second, there are really two forms of human intelligence: one based on deduction and the other based on induction, much like System 2 (slow thinking) and System 1 (fast thinking) suggested by psychologist Daniel Kahneman. We can think of deduction as a causal inference process based on logic and premises, and induction as a computational process of achieving generalization using data under certain distribution assumptions. Hence, we need to distinguish between Statistical AI and Causal AI, where the former, better known as machine learning, obtains knowledge by detecting statistical regularities in data. Statistical AI gained momentum late last century, thanks to significant progress in statistical and computational learning theories and, of course, to the dramatic increase in computing power and the availability of vast quantities of data. Current AI technologies are largely based on Statistical AI. Despite its limitations in reasoning, Statistical AI has enjoyed enormous success over the past decade or so. It most likely will be the form of AI that revolutionizes the way we live and work in the near future.

Third, current AI technologies are task-specific, not task-generic. Artificial general intelligence (AGI) that can learn any task is probably still decades away, although some have argued that GPT-4’s capabilities show some early signs of AGI. 

We limit our discussions to task-specific Statistical AI and will refer to it as AI from now on.

The power of AI for a given task depends on four factors.

Task learnability—how difficult it is for an AI to learn to complete a task as well as a human worker does. From the perspective of an AI, a task is essentially a function mapping certain task inputs to some desirable task output or, more generally, a distribution of task outputs. The learnability of the task is determined by how complex the mapping is and how difficult it is to learn this mapping from data using computational algorithms. While some tasks are highly learnable because they are so routine, others may require vast amounts of data and/or huge amounts of computational resources for the learning to be successful. In fact, there may even exist tasks that are simply not learnable no matter how much data we have. As a theoretical example, consider the practical impossibility of learning the private key in a public-key cryptosystem, even though one can generate an arbitrary number of labeled instances, i.e., pairs of plaintext and encrypted messages.

We can break down a task’s learnability into its statistical complexity Sf and its computational complexity Cf. Visually, we may represent a task as a point on a task plane whose two coordinates are the statistical and computational complexities of the task. Plotting AI performance (e.g., relative to human performance) for all tasks in a 3-dimensional space, which we refer to as the task intelligence space, we obtain the current intelligence surface, or CIS for short, which represents the overall intelligence level of current AI technologies. The top left panel of Figure 1 illustrates this concept.
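Stated slightly more formally (this is my own shorthand for the description above, not notation taken from the paper):

```latex
% A task f is a mapping from inputs to (a distribution over) outputs.
% Its learnability is summarized by two coordinates on the task plane:
%   S_f : statistical complexity (roughly, how much data is needed to learn f)
%   C_f : computational complexity (roughly, how much computation is needed)
f \;\longmapsto\; (S_f,\, C_f)

% The current intelligence surface assigns to each point of the task plane
% the performance of today's AI relative to a human worker on that task:
\mathrm{CIS}(S_f,\, C_f) \;=\; \frac{\text{AI performance on } f}{\text{human performance on } f}
```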

Figure 1

The two sources of task learnability imply two types of resources needed for AI to successfully learn the task, which lead us to the next two factors.

Data availability—The more data points available to train an AI, the higher the CIS is. Whether it is data about driving conditions and vehicle control to train an autonomous vehicle or documents in different languages to train a translation device, the availability of sufficient amounts of labeled data is of paramount importance for AI to approximate human intelligence. While this may seem obvious given our understanding of the two types of resources required for training AI, its significance in practice can still be striking. For example, the ImageNet project, launched in 2009 and containing more than 14 million annotated images in over 20,000 categories, is of historical importance in the development of AI, especially for vision tasks. Dr. Fei-Fei Li, the founder of ImageNet, is recognized as the godmother of AI at least in part for establishing it. Because the importance of data availability for different tasks depends on their degrees of statistical complexity, as is illustrated in the top right panel of Figure 1, we may also understand the significance of ImageNet for vision tasks by noting the high statistical complexity of image data.

Computation speed—The faster the computation speed is, the higher the CIS is. Similarly, the importance of computation speed for different tasks depends on their degrees of computational complexity, as is shown in the bottom left panel of Figure 1. The recent prominence of graphics processing units, or GPUs, demonstrates the importance of this factor.

Learning techniques—Unlike the first factor, which is an inherent property of a task, or the second and third factors, which are resources, this factor is all about the actual learning and is where unexpected progress is made thanks to human ingenuity. It encompasses a variety of techniques used for learning, which can be broadly categorized into two types: a better hypothesis class or a better learning algorithm. For example, the successes of convolutional neural networks for computer vision tasks and of the transformer architecture for natural language processing are examples of better hypothesis classes. On the other hand, regularization and normalization techniques are examples of learning algorithm improvements. If there is an occupation that will never be replaced by task-specific Statistical AI, we bet on researchers and engineers who innovate in learning techniques. The bottom right panel of Figure 1 illustrates the impact of improvements in learning techniques, the magnitude of which is not necessarily related to task learnability.

In summary, we can understand AI performance through the lens of four factors, illustrated in Figure 2.
 

Figure 2

For a given task, whether AI performance is satisfactory depends on what we mean by satisfactory. To make this concrete, imagine another surface, referred to as the minimal intelligence surface, which represents the minimal level of AI performance for us humans to consider it satisfactory. If the CIS is below the minimal intelligence surface on a task, AI performance on that task is not yet good enough and the task remains a human task. But if the CIS is above the minimal intelligence surface on a task, the task can be left to AI.

Three phases for AI-jobs relations

We consider an occupation as a set of tasks. Depending on the relative position of the CIS and the minimal intelligence surface, we can play out three different scenarios.

Phase 1: Decoupled

This is the phase when human workers are not engaging with AI while doing their jobs. Graphically, the CIS is below the minimal intelligence surface on the region corresponding to the task set of the occupation, as is illustrated in the left panel of Figure 3, where the occupation is represented by six red dots. Therefore, none of the tasks can be satisfactorily completed by AI yet. This phase will likely last a long time for occupations with data availability issues.

Figure 3


Phase 2: Honeymoon

This is the phase when human workers and AI benefit from each other. Graphically, the CIS is above the minimal intelligence surface on some tasks of an occupation but is below the minimal intelligence surface on other tasks of the occupation. In other words, these jobs still have to be done by human workers, but AI can help them by satisfactorily completing some of the tasks required by the jobs. On the other hand, by working side-by-side with human workers, the AI can benefit from new data generated by human workers. In the left panel of Figure 3, we illustrate this phase by representing the occupation using six dots. The three green dots represent the tasks that an AI can do, and the three red dots represent the tasks that only a human can do. Human workers of such an occupation will use AI to complement their work, benefiting from the boost in productivity that comes from offloading some tasks. Ironically, this may also accelerate their own replacement. 

Phase 3: Substitution

In this phase, AI can perform as well as an average human worker but at a much smaller or even negligible marginal cost. Graphically, the CIS is completely above the minimal intelligence surface on the region corresponding to the task set of the occupation. In the left panel of Figure 3, we illustrate this phase by representing the occupation using only green dots. At this phase, the occupation is at risk of becoming obsolete because the marginal cost of AI is often negligible compared to that of humans, making it more efficient for these jobs to be completed by AI rather than by humans.

While the minimal intelligence surface is largely static, the CIS shifts upward over time: even though task learnability is an inherent property of a task, the other three factors progress over time, resulting in improved AI performance. Hence, we can envision most occupations, initially decoupled from AI, gradually entering the honeymoon phase and, for many, eventually moving into the substitution phase. On the other hand, because AI adoption takes time and different organizations have different AI proficiency levels, we may find that the same occupation is simultaneously in different phases, depending on the organization or region. We illustrate this point in the right panel of Figure 3.
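As a toy illustration of this classification (my own sketch, not code from the working paper), one can represent an occupation as a set of tasks, compare the CIS with the minimal intelligence surface at each task, and read off the phase; the surfaces and task complexities below are purely hypothetical.

```python
# Toy illustration of the decoupled / honeymoon / substitution phases.
# The surfaces and task complexities are hypothetical stand-ins,
# not estimates from the paper.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Task:
    name: str
    s: float  # statistical complexity (Sf)
    c: float  # computational complexity (Cf)

def occupation_phase(tasks: List[Task],
                     cis: Callable[[float, float], float],
                     mis: Callable[[float, float], float]) -> str:
    """Compare the current intelligence surface (CIS) with the minimal
    intelligence surface (MIS) on every task of the occupation."""
    ai_satisfactory = [cis(t.s, t.c) >= mis(t.s, t.c) for t in tasks]
    if all(ai_satisfactory):
        return "substitution"  # AI handles every task satisfactorily
    if any(ai_satisfactory):
        return "honeymoon"     # AI helps with some tasks, humans do the rest
    return "decoupled"         # AI is not yet satisfactory on any task

# Hypothetical surfaces: AI performance falls with task complexity,
# while the bar for "satisfactory" performance is flat.
cis = lambda s, c: 1.0 / (1.0 + 0.5 * s + 0.8 * c)
mis = lambda s, c: 0.4

translator = [
    Task("translate a news article", 0.5, 0.2),
    Task("negotiate nuance with a client", 3.0, 2.0),
]
print(occupation_phase(translator, cis, mis))  # -> "honeymoon" in this toy setup
```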

The Inflection Point

Based on the conceptual framework, we further build and analyze an economic model to show the existence of an inflection point for each occupation. Before AI performance crosses the inflection point, human workers always benefit from improvements in AI, but after the inflection point, human workers become worse off whenever AI gets better. This model insight offers a way to test our thinking using data. Let’s consider the occupation of translation and the occupation of web development. Existing evidence suggests that AI has likely crossed the inflection point for translation, but not for web development. Based on the inflection point conjecture, we hypothesized that the launch of ChatGPT has likely benefited web developers but hurt translators. We believe these effects should be discernible in data because the launch of ChatGPT by OpenAI a year ago significantly shocked the CIS, affecting many occupations. Indeed, anecdotal evidence and our own experiences suggest that ChatGPT has increased AI performance for translation and for programming in general. There are even academic discussions of whether ChatGPT, especially the version powered by GPT-4, has shown early signs of AGI, which is the stated mission of OpenAI.
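In stylized form, the conjecture can be written as follows (my own shorthand for the statement above, not the paper's actual model):

```latex
% Let W_o(a) be the welfare (e.g., earnings) of a typical human worker in
% occupation o when AI performance on that occupation's tasks is a.
% The inflection point conjecture says W_o is single-peaked in a:
\exists\, a_o^{*} \ \text{such that} \quad
\frac{dW_o}{da} > 0 \ \ \text{for } a < a_o^{*},
\qquad
\frac{dW_o}{da} < 0 \ \ \text{for } a > a_o^{*}.
% Hypothesis: translation has already crossed a_o^*, web development has not,
% so a positive shock to a (the ChatGPT launch) hurts translators and helps
% web developers.
```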

To test this, my co-authors and I conducted an empirical study to evaluate how the ChatGPT launch affected translators and web developers on a large online freelance platform. Consistent with our hypotheses, we find that translators are negatively affected by the launch in terms of the number of accepted jobs and the earnings from those jobs. In contrast, web developers are positively affected by the same shock.

By nature, some occupations will be slower to enter the substitution phase. 

Occupations that require a high level of emotional intelligence will be slower to enter the substitution phase. At a daycare center, for example, machines may replace human caregivers in changing diapers and preparing bottles, but they will be poor at replicating human empathy and compassion. Humans are born with a neural network that can quickly learn to detect and react to human emotions. That learning probably began tens of millions of years ago and has become ingrained in our hardware. Machines, in contrast, do not have that long evolutionary past and must learn from scratch, if they can learn at all. At a more fundamental level, this might be rooted in the computational complexity of learning to "feel."

Occupations that require unexpected or unusual thinking processes will also be slower to enter the substitution phase or even the honeymoon phase. Humans sometimes come up with original ideas seemingly out of nowhere, without following any pattern. What’s more intriguing is that we may not be able to explain how we came up with that idea. While fascinating for humans, this poses significant challenges to AI because there simply isn’t enough data to learn from. To exaggerate a bit, there is only one Mozart, not one Mozart a year.

What’s next

The relationship between AI and humans is already generating heated public debate because of its profound implications and its potential to disrupt the fabric of our society. At this moment, I still believe there is a future for human workers, not only because of the many limitations of current AI technologies, but also because of our limited understanding of ourselves. Until the moment when we finally understand what it means to be human and the nature of the human spark, we have a role to play in the cosmic drama.

Huaxia Rui

Huaxia Rui is the Xerox Professor of Computers & Information Systems at Simon Business School. 

Follow the Dean’s Corner blog for more expert commentary on timely topics in business, economics, policy, and management education. To view other blogs in this series, visit the Dean's Corner Main Page.  

 


Generative AI, competition, and antitrust

July 19, 2023 | By Jeanine Miklós-Thal

 

These days, everyone is talking about AI, especially generative AI, which can create text or images. Discussions range from concerns about AI causing human extinction to more immediate questions about the potentially disruptive effects of generative AI on higher education.

As a competition economist, I ask myself whether competition in the industries that produce AI is healthy and working in ways that serve consumers and society.

The generative AI industry consists of several layers. So-called foundation models form the upstream layer. A foundation model is a large machine learning model that is trained on broad amounts of data and can be adapted to many different downstream tasks. GPT-3.5 is an example of a foundation model. The downstream layer of the industry consists of AI applications for specific tasks. These applications arise in a wide range of industries, including healthcare, energy, finance, education, social media, law, agriculture, and more. Examples of applications built upon foundation models include the ChatGPT chatbot and GitHub Copilot, which helps developers write software code.

The upstream layer of the industry is currently dominated by two players: OpenAI, in partnership with Microsoft; and Google DeepMind. While Microsoft and Alphabet/Google have been giants in the tech industry for many years, OpenAI was founded in 2015 as a start-up backed by tech heavyweights like Elon Musk and Sam Altman. Given the small number of large players, the AI foundation model market can currently be considered highly concentrated. There are some good economic reasons for this high level of concentration. Developing a foundation model requires immense amounts of data, extensive cloud computing infrastructure, and an army of data engineers. Only a few select firms have the resources needed to build foundation models, and replicating the required investments may be socially inefficient. It should also be noted that the three main players in cloud computing—AWS, Microsoft Azure, and Google Cloud—have a strategic advantage in foundation models, given the importance of cloud infrastructure for training and deploying large-scale AI models.

The downstream layer of the industry consists of applications and tools tailored to specific tasks. Some of the applications, like ChatGPT, are developed by the foundation model owners themselves; others by tech start-ups or by non-tech firms that seek to improve efficiency or solve problems in traditional industries. This is an exciting and dynamic industry, with plenty of innovation and new-firm entry happening.

Is the market concentration in foundation models something to worry about?

One worry is that the owners of foundation models may exploit their market power in the classic way, by setting high prices. For instance, the licensing fees charged to downstream application developers may be higher than they would be in a more competitive market, which could lead to fewer applications being developed and higher final prices charged to end buyers. Market power would slow down the development and adoption of AI applications in this case.

Another worry is that the owners of foundation models may exclude or discriminate against downstream firms viewed as (potential) competitors in certain applications, with the goal of extending their market power in foundation models into other markets. Antitrust regulators should be on the lookout for contracts that are aimed at leveraging market power in the upstream market to monopolize downstream application markets, which is an illegal practice under existing antitrust laws.

Finally, market power is also likely to influence the direction of innovation, as emphasized by Daron Acemoglu and Simon Johnson in their recent book “Power and Progress: Our 1,000 Year Struggle Over Technology and Prosperity.” This would be a source of worry if market power leads firms to focus less on developing socially desirable applications, e.g., in health care or energy, and more on applications that may be harmful to society, e.g., applications that spread misinformation or social media applications with high addiction potential. A related and important question is whether the applications being developed will replace or augment human labor and how market power in AI models plays into this.

What about competition in the downstream applications market?

The availability of foundation models has the potential to facilitate entry and thereby promote competition among application developers. Consider an entrepreneur who wants to offer a service that helps grooms and brides write their wedding vows. Prior to the availability of foundation models for generative AI, the entrepreneur would have had to build their own machine learning model, which would have required significant investments in data and computing as well as engineering talent. Now that a foundation model like GPT is available, the entrepreneur can build upon an out-of-the-box solution, which makes entry significantly easier and less costly. And indeed, several competing firms (notably, ToastWiz and Joy) have begun to offer AI-assisted wedding vow writing tools over the past year.
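To see how thin such a downstream application can be, here is a minimal sketch of a vow-writing tool built on a hosted foundation model; the OpenAI client, model name, and prompt are illustrative assumptions, not details of the products mentioned above.

```python
# Minimal sketch: a downstream "application" as a thin wrapper around a
# hosted foundation model. Model name and prompt are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def draft_vows(partner_name: str, tone: str, memories: list[str]) -> str:
    prompt = (
        f"Write short wedding vows to {partner_name} in a {tone} tone, "
        f"weaving in these memories: {'; '.join(memories)}."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(draft_vows("Alex", "warm and lightly humorous",
                 ["our first road trip", "Sunday pancakes"]))
```

The point is not the specific code but the economics: the costly inputs (data, compute, engineering talent) sit inside the foundation model, so the entrant's own investment is comparatively small.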

Generative AI may also foster new competition in markets that have been dominated by a single firm for many years. For instance, Microsoft may challenge Google’s long-standing single-firm dominance in online search (and, with it, Google’s leadership in online advertising) thanks to the integration of ChatGPT into Bing. Google’s dominance in search may also be challenged by new entrants like You.com, a search engine built on AI that was founded in 2020. It remains to be seen whether one of these search engines will end up replacing Google as the industry leader, whether we will witness competition between multiple search engines, or whether Google will maintain its leadership position. Or online search as we currently know it may be replaced by something different altogether.

In summary, although there are economic reasons for the current market concentration in AI foundation models, this concentration raises several legitimate worries. At the same time, the availability of foundation models has the potential to facilitate entry and foster competition in AI applications, which arise in a vast range of (old and new) industries. Importantly, for these benefits to be fully realized, third-party application developers must be given access to foundation models.

From an antitrust policy perspective, I think the priority of regulators should be to ensure that competition in newly emerging AI-related markets is based on the merits of the products and services provided. Firms should not be able to leverage their market power in existing markets to obtain power in newly emerging markets. Let me conclude by saying that I think that antitrust policy is only a small part of the puzzle here and that a more complete suite of policy tools will likely be needed to address some of the societal issues raised by AI, such as the spread of misinformation.

Note: It is important to acknowledge the inherent difficulty of accurately forecasting the future in technology markets, particularly in the rapidly evolving field of AI. The dynamics of competition, market concentration, and the potential impacts of AI innovation are subject to ongoing changes and complexities, making it challenging to predict precise outcomes and implications.

Jeanine Miklós-Thal

Jeanine Miklós-Thal is a professor in the Economics & Management and Marketing groups at Simon Business School. Her research spans industrial organization, marketing, and personnel economics. 


Follow the Dean’s Corner blog for more expert commentary on timely topics in business, economics, policy, and management education. To view other blogs in this series, visit the Dean's Corner Main Page.

 
