The State of the AI Economy
2024-09-23
Carl Benedikt Frey
Associate Professor of AI & Work, University of Oxford

Executive summary

The impact of artificial intelligence (AI) on productivity has so far been underwhelming compared with past technological breakthroughs like electricity or the internal combustion engine.

Transformative innovation in AI requires both investment and broad experimentation. AI is currently too resource-intensive for broad experimentation.

The most immediate opportunities for productivity growth from AI lie in fine-tuning foundation models for very specific tasks.


Article

In “The Technology Trap” [1], you explore how the Industrial Revolution initially harmed workers but eventually led to increased wealth and prosperity. How does the impact of AI compare to historic technological breakthroughs?

In terms of its impact on productivity and the economy, AI has been relatively disappointing so far, despite the hype. Its impact is nowhere near what we saw with electricity or the internal combustion engine. We need more progress. We need more innovation in the field.

[Figure: How Artificial Intelligence Works]

We need AI that is more robust—AI that doesn't just memorise data and information that humans have produced but is actually capable of adjusting to new circumstances.

However, some patience is required here, as delays in realising the benefits of new technologies are common. Major productivity gains from computers did not emerge until a decade after their introduction. New technologies often require rethinking business workflows, substantial organisational changes, and acquiring new skills to adapt. This process takes time, so AI's full impact will become visible only gradually.

“The Future of Employment” [2], a study you co-authored more than ten years ago, sparked considerable attention by predicting significant job displacement due to AI. What were your key findings?

Our research suggested that 47% of jobs, mostly lower-skilled ones, could be at risk due to automation. That said, advancements in machine learning and robotics do not necessarily result in net job losses. Technological change can also create new roles by reducing costs and freeing up resources within organisations. Historically, the evolution of work and technology has been a race between creating new job types and replacing existing ones with automation.

[Figure: Growth in AI Research and Innovation]

Your study predated the emergence of Large Language Models (LLMs). The International Monetary Fund (IMF) recently reported that higher-skilled occupations are now at risk [3]. Have LLMs changed the outlook?

At the time of our study, we identified mostly lower-skilled jobs as being at risk. Yet, LLMs have proved beneficial across skill levels. Lower-skilled employees can now quickly acquire new capabilities, such as basic coding, copywriting, or financial analysis. Similarly, highly skilled professionals can enhance their productivity with LLMs, potentially increasing their market share.

However, while fears of a total job apocalypse are exaggerated, there is no guarantee that the new jobs created by AI will outnumber those it displaces. The net effect of AI largely hinges on its application. If AI focuses on automating existing tasks, we could see significant job displacement and wage pressures, reminiscent of the early Industrial Revolution in Britain. Conversely, if leveraged to develop new products and solutions, AI could enhance productivity and create new job opportunities, though that doesn't appear to be the current trajectory.

Right now, nearly all existing AI applications I can think of essentially act as replacement technologies: doing something we already do, but a bit more productively.

If we use AI for automation only, the impact will be job displacement, pressure on wages, and a falling labour share of national income.

There has been quite a debate around the need for regulating AI. What’s your position on that?

There have been several studies highlighting the potential harm AI could cause. For instance, concerns about the role of generative AI (GAI) in spreading misinformation seem legitimate, though we still lack sufficient evidence to fully assess its societal impact.

When regulating AI, legislators must consider certain trade-offs. Emphasising AI dangers and pushing for regulation could risk concentrating the technology in the hands of a few dominant players.

Consider the EU General Data Protection Regulation (GDPR), implemented in 2018. We found that the GDPR had a notable economic impact, particularly on tech companies, which experienced an average profit decline of 2.1%. That decline mostly stemmed from higher compliance costs: firms had to invest in creating or upgrading their IT systems to manage consent, encrypt data, and anonymise confidential information.

While larger firms could absorb these costs, smaller firms (with fewer than 500 employees) struggled disproportionately. Accordingly, it seems that the GDPR led to greater market concentration by benefitting bigger technology companies at the expense of smaller ones.

The effects of the GDPR raise important considerations for future digital regulations, like the EU AI Act. It’s too early to say for sure, but the AI Act could result in similar costs and disproportionately benefit large companies. On the other hand, large companies may also attract more regulatory scrutiny, increasing their risk of sanctions or fines, as well as raising their costs for lobbying and regulatory compliance.

Yet, companies that have already invested in GDPR compliance may find themselves better prepared and may need to spend fewer additional resources to comply with the EU AI Act.

[Figure: The AI Value Chain]

When, if ever, will AI become a transformative technology?

Transformative innovation requires not just investment but also broad experimentation; investment alone is not enough. The field of AI is currently dominated by data- and resource-intensive models, which means that only well-funded companies can really experiment. Policy can help here by making data and compute available to smaller players. Indeed, some scholars have called for a CERN for AI. Yet it is not clear that data-intensive approaches like deep learning are the only path forward. We need more innovation to develop AI that can learn from smaller, curated datasets.

That said, we tend to overestimate the effect of a technology in the short run and underestimate its effect in the long run. This is known as Amara's law [4].

[Figure: Amara's Law]

Considering Amara's law in the context of AI: before you can deploy an AI system within an organisation, you need to make additional investments beyond simply acquiring the system. Setting up a data infrastructure that collects and organises the data of interest, and connects it with other data sources, is often the most labour-intensive and expensive part. Successfully building this infrastructure is not just a question of how much a company can invest, but also of whether it has the right skills, and often those skills must be acquired first.

Looking ahead, where do you see the greatest opportunities for productivity growth driven by AI?

There is still plenty of low-hanging fruit from the adoption of the recent wave of generative AI as more and more companies begin to tailor existing foundation models to specific tasks with specialised datasets.

However, I also think that the large language models behind this progress will run into diminishing returns quite soon, for two reasons. First, the internet may soon be flooded with low-quality, AI-generated content, which is unsuitable for training robust models. Second, several studies show that relying on synthetic data poses risks to the models themselves in the form of model collapse.

Finally, most of the applications of generative AI we see today centre on automation, and the productivity gains from replacing people in existing activities are a one-off. Sustained productivity growth comes from doing new and previously inconceivable things, and this will only happen if AI really transforms our ability to do science and innovate. Many think that it might, but we also thought this would be the case with the PC and the Internet, and their impact on science and innovation has been disappointing so far.

Carl Benedikt Frey is the Dieter Schwarz Associate Professor of AI & Work at the Oxford Internet Institute and a Fellow of Mansfield College, University of Oxford.

Sources

  1. Frey, Carl B., 2019. The Technology Trap: Capital, Labor, and Power in the Age of Automation. Princeton: Princeton University Press.
  2. Frey, Carl B. & Osborne, Michael A., 2017. "The Future of Employment: How Susceptible Are Jobs to Computerisation?", Technological Forecasting and Social Change, Elsevier, vol. 114(C), pages 254-280.
  3. Comunale, Mariarosaria & Manera, Andrea, 2024. "The Economic Impacts and the Regulation of AI: A Review of the Academic Literature and Policy Actions", IMF Working Paper No. 2024/065, International Monetary Fund.
  4. Ratcliffe, Susan, ed., 2016. "Roy Amara 1925–2007, American futurologist", Oxford Essential Quotations, Vol. 1 (4th ed.), Oxford University Press.
  5. Schoeberl, Christian, Toney, Autumn & Dunham, James, 2023. Identifying AI Research, Center for Security and Emerging Technology. Retrieved from: https://doi.org/10.51593/20220030, last accessed 17 July 2024.