Generative AI, competition, and antitrust

July 19, 2023 | By Jeanine Miklós-Thal

 

These days, everyone is talking about AI, especially generative AI, which can create text or images. Discussions range from concerns about AI causing human extinction to more immediate questions about the potentially disruptive effects of generative AI on higher education.

As a competition economist, I ask myself whether competition in the industries that produce AI is healthy and working in ways that serve consumers and society.

The generative AI industry consists of several layers. So-called foundation models form the upstream layer. A foundation model is a large machine learning model that is trained on vast amounts of broad data and can be adapted to many different downstream tasks. GPT-3.5 is an example of a foundation model. The downstream layer of the industry consists of AI applications for specific tasks. These applications arise in a wide range of industries, including healthcare, energy, finance, education, social media, law, agriculture, and more. Examples of applications built upon foundation models include the ChatGPT chatbot and GitHub Copilot, which helps developers write software code.

The upstream layer of the industry is currently dominated by two players: OpenAI, in partnership with Microsoft; and Google DeepMind. While Microsoft and Alphabet/Google have been giants in the tech industry for many years, OpenAI was founded in 2015 as a start-up backed by tech heavyweights like Elon Musk and Sam Altman. Given the small number of large players, the AI foundation model market can currently be considered highly concentrated. There are some good economic reasons for this high level of concentration. Developing a foundation model requires immense amounts of data, extensive cloud computing infrastructure, and an army of data engineers. Only a few select firms have the resources needed to build foundation models, and replicating the required investments may be socially inefficient. It should also be noted that the three main players in cloud computing (AWS, Microsoft Azure, and Google Cloud) have a strategic advantage in foundation models, given the importance of cloud infrastructure for training and deploying large-scale AI models.

The downstream layer of the industry consists of applications and tools tailored to specific tasks. Some applications, like ChatGPT, are developed by the foundation model owners themselves; others are built by tech start-ups or by non-tech firms seeking to improve efficiency or solve problems in traditional industries. This is an exciting and dynamic industry, with plenty of innovation and new-firm entry.

Is the market concentration in foundation models something to worry about?

One worry is that the owners of foundation models may exploit their market power in the classic way, by setting high prices. For instance, the licensing fees charged to downstream application developers may be higher than they would be in a more competitive market, which could lead to fewer applications being developed and higher final prices charged to end buyers. Market power would slow down the development and adoption of AI applications in this case.

Another worry is that the owners of foundation models may exclude or discriminate against downstream firms viewed as (potential) competitors in certain applications, with the goal of extending their market power in foundation models into other markets. Antitrust regulators should be on the lookout for contracts that are aimed at leveraging market power in the upstream market to monopolize downstream application markets, which is an illegal practice under existing antitrust laws.

Finally, market power is also likely to influence the direction of innovation, as emphasized by Daron Acemoglu and Simon Johnson in their recent book “Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity.” This would be a source of worry if market power leads firms to focus less on developing socially desirable applications, e.g., in healthcare or energy, and more on applications that may be harmful to society, e.g., applications that spread misinformation or social media applications with high addiction potential. A related and important question is whether the applications being developed will replace or augment human labor and how market power in AI models plays into this.

What about competition in the downstream applications market?

The availability of foundation models has the potential to facilitate entry and thereby promote competition among application developers. Consider an entrepreneur who wants to offer a service that helps grooms and brides write their wedding vows. Prior to the availability of foundation models for generative AI, the entrepreneur would have had to build their own machine learning model, which would have required significant investments in data and computing as well as engineering talent. Now that a foundation model like GPT is available, the entrepreneur can build upon an out-of-the-box solution, which makes entry significantly easier and less costly, as the sketch below illustrates. And indeed, several competing firms (notably ToastWiz and Joy) have begun to offer AI-assisted wedding vow writing tools over the past year.
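To make concrete how low the barrier to entry has become, here is a minimal sketch of what the core of such an application could look like. It assumes the pre-1.0 OpenAI Python client (the interface available around the time of writing) and a placeholder API key; the product idea, prompts, and helper function are purely illustrative, not a description of any existing firm's implementation.

```python
# Minimal sketch: a wedding-vow drafting service built on top of a foundation model.
# Assumes the pre-1.0 OpenAI Python client (pip install "openai<1.0") and a valid API key.
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

def draft_vows(partner_name: str, details: str) -> str:
    """Ask the foundation model for a first draft of wedding vows (illustrative prompt)."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # the upstream foundation model the application builds on
        messages=[
            {"role": "system", "content": "You help people write heartfelt, personal wedding vows."},
            {"role": "user", "content": f"Write short vows to {partner_name}. Details: {details}"},
        ],
        temperature=0.8,  # allow some creative variation between drafts
    )
    return response["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(draft_vows("Alex", "We met on a hiking trail; they make terrible puns and great coffee."))
```

The point is not the specific code but the economics it implies: the entrepreneur's main investments shift from data, compute, and model engineering to product design and prompt design, while paying the foundation model owner a usage-based fee.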

Generative AI may also foster new competition in markets that have been dominated by a single firm for many years. For instance, Microsoft may challenge Google’s long-standing single-firm dominance in online search (and, with it, Google’s leadership in online advertising) thanks to the integration of ChatGPT into Bing. Google’s dominance in search may also be challenged by new entrants like You.com, a search engine built on AI that was founded in 2020. It remains to be seen whether one of these search engines will end up replacing Google as the industry leader, whether we will witness sustained competition between multiple search engines, or whether Google will maintain its leadership position. Or perhaps online search as we currently know it will be replaced by something different altogether.

In summary, although there are economic reasons for the current market concentration in AI foundation models, this concentration raises several legitimate worries. At the same time, the availability of foundation models has the potential to facilitate entry and foster competition in AI applications, which arise in a vast range of (old and new) industries. Importantly, for these benefits to be fully realized, third-party application developers must be given access to foundation models.

From an antitrust policy perspective, I think the priority of regulators should be to ensure that competition in newly emerging AI-related markets is based on the merits of the products and services provided. Firms should not be able to leverage their market power in existing markets to obtain power in newly emerging markets. Let me conclude by saying that I think that antitrust policy is only a small part of the puzzle here and that a more complete suite of policy tools will likely be needed to address some of the societal issues raised by AI, such as the spread of misinformation.

Note: It is important to acknowledge the inherent difficulty of accurately forecasting the future in technology markets, particularly in the rapidly evolving field of AI. The dynamics of competition, market concentration, and the potential impacts of AI innovation are subject to ongoing changes and complexities, making it challenging to predict precise outcomes and implications.

Jeanine Miklós-Thal

Jeanine Miklós-Thal is a professor in the Economics & Management and Marketing groups at Simon Business School. Her research spans industrial organization, marketing, and personnel economics. 


Follow the Dean’s Corner blog for more expert commentary on timely topics in business, economics, policy, and management education. To view other blogs in this series, visit the Dean's Corner Main Page.

 
