Last December, the EU set a global precedent by finalising the Artificial Intelligence Act (AI Act), one of the world’s most comprehensive sets of AI rules.
Europe’s landmark legislation could signal a broader trend towards more responsive AI policies. But while regulation is necessary, it is insufficient. Beyond imposing restrictions on private AI companies, governments must assume an active role in AI development by designing systems and shaping markets for the common good.
AI models are evolving rapidly. When EU regulators released the first draft of the AI Act in April 2021, they hailed it as “future-proof”, only to be left scrambling to update the text in response to the release of ChatGPT a year and a half later. But regulatory efforts are not in vain. For example, the law’s ban on AI in biometric policing will likely remain pertinent regardless of technological advances. Moreover, the risk frameworks contained in the AI Act will help policymakers guard against some of the technology’s most dangerous uses. While AI will develop faster than policy, the law’s fundamental principles will not need to change — though more flexible regulatory tools will be needed to tweak and update rules.
However, thinking of the state as only a regulator misses the larger point. Innovation is not just some serendipitous market phenomenon. Its direction depends on the conditions in which it emerges, and public policymakers can influence these conditions. The rise of a dominant technological design or business model results from a power struggle between various actors — corporations, governmental bodies and academic institutions — with conflicting interests and divergent priorities. Reflecting this struggle, the resulting technology may be more or less centralised, more or less proprietary, and so forth.
The markets that form around new technologies follow the same pattern, with important distributive implications. As the software pioneer Mitch Kapor puts it, “Architecture is politics”. More than regulation, a technology’s design and surrounding infrastructure dictate who can do what with it and who benefits. For governments, ensuring that transformational innovations produce inclusive and sustainable growth is less about fixing markets and more about shaping and co-creating them. When governments contribute to innovation through bold, strategic, mission-oriented investments, they can create new markets and crowd in the private sector.
In the case of AI, the task of directing innovation is currently dominated by large private corporations, leading to an infrastructure that serves insiders’ interests and exacerbates economic inequality. This reflects a longstanding problem. Some of the technology firms that have benefitted the most from public support — such as Apple and Google — have also been among those accused of using their international operations to avoid paying taxes. These unbalanced, parasitic relationships between big firms and the state now risk being further entrenched by AI, which promises to reward capital while reducing the returns to labour.
The companies developing generative AI are already at the centre of debates about extractive behaviour, owing to their unfettered use of copyrighted text, audio and images to train their models.
By centralising value within their own services, they will reduce the value flowing to the artists on whom they rely. As with social media, the incentives are aligned for rent extraction, whereby dominant intermediaries amass profits at others’ expense.
Today’s dominant platforms, such as Amazon and Google, exploited their position as gatekeepers by using their algorithms to extract ever larger fees (“algorithmic attention rents”) for access to users.
Once Google and Amazon became one big “payola” scheme, information quality deteriorated, and value was extracted from the ecosystem of websites, producers and app developers the platforms relied on. Today’s AI systems could take a similar route: value extraction, insidious monetisation and deteriorating information quality.
Governing generative AI models for the common good will require mutually beneficial partnerships, oriented around shared goals and the creation of public value rather than only private value. This will not be possible with redistributive and regulatory states that act only after the fact; we need entrepreneurial states capable of establishing pre-distributive structures that share risks and rewards ex ante.
Policymakers should focus on understanding how platforms, algorithms and generative AI create and extract value, so that they can create the conditions — such as equitable design rules — for a digital economy that rewards value creation.
Mind your history
The Internet is a good example of a technology that has been designed around principles of openness and neutrality. Consider the principle of “end-to-end”, which ensures that the Internet operates like a neutral network responsible for data delivery. While the content being delivered from computer to computer may be private, the code is managed publicly. And while the physical infrastructure needed to access the Internet is private, the original design ensured that, once online, the resources for innovation on the network are freely available.
This design choice, coordinated through the early work of the Defense Advanced Research Projects Agency (among other organisations), became a guiding principle for the development of the Internet, allowing for flexibility and extraordinary innovation in the public and private sectors. By envisioning and shaping new domains, the state can establish markets and direct growth, rather than just incentivising or stabilising it.
It is hard to imagine that private enterprises developing the Internet in the absence of government involvement would have adhered to equally inclusive principles. Consider the history of telephone technology. The government’s role was predominantly regulatory, leaving innovation largely in the hands of private monopolies. Centralisation not only hampered the pace of innovation, but also limited the broader societal benefits that could have emerged.
For example, in 1955, AT&T persuaded the Federal Communications Commission to ban a device designed to reduce noise on telephone receivers, claiming exclusive rights to network enhancements. The same kind of monopolistic control could have relegated the Internet to being merely a niche instrument for a select group of researchers, rather than the universally accessible and transformative technology it has become.
Likewise, the transformation of the Global Positioning System (GPS) from a military tool to a universally beneficial technology highlights the need to govern innovation for the common good. Initially designed by the US Department of Defense to coordinate military assets, GPS had its public signals deliberately degraded on national-security grounds. But as civilian use surpassed military use, the US government, under President Bill Clinton, made GPS more responsive to civil and commercial users worldwide.
That move not only democratised access to precise geolocation technology; it also spurred a wave of innovation across many sectors, including navigation, logistics and location-based services. A policy shift towards maximising public benefit had a far-reaching, transformational impact on technological innovation. But this example also shows that governing for the common good is a conscious choice that requires continuous investment, high coordination, and a capacity to deliver.
To apply this choice to AI innovation, we will need inclusive, mission-oriented governance structures with the means to co-invest with partners that recognise the potential of government-led innovation. To coordinate inter-sectoral responses to ambitious objectives, policymakers should attach conditions to public funding so that risks and rewards are shared more equitably.
That means clear goals to which businesses are held accountable; high labour, social and environmental standards; and profit sharing with the public. Conditionalities can, and should, require Big Tech to be more open and transparent. We must insist on nothing less if we are serious about the idea of stakeholder capitalism.
Ultimately, addressing the perils of AI demands that governments extend their role beyond regulation. Yes, different governments have different capacities, and some are highly dependent on the broader global political economy of AI. The best strategy for the US may not be the best one for the UK, the EU or any other country.
But everyone should avoid the fallacy of presuming that governing AI for the common good is in conflict with creating a robust and competitive AI industry. On the contrary, innovation flourishes when access to opportunities is open and the rewards are broadly shared. — © Project Syndicate, 2024
Mariana Mazzucato, founding director of the UCL Institute for Innovation and Public Purpose, is chair of the World Health Organization’s Council on the Economics of Health for All. Fausto Gernone, a PhD student at the UCL Institute for Innovation and Public Purpose, is on a research visit at the Haas School of Business at the University of California, Berkeley