What the AI pessimists are missing

Michael R. Strain

Pessimism suffuses current discussions about generative artificial intelligence. A YouGov survey in March found that Americans primarily feel “cautious” or “concerned” about AI, whereas only one in five are “hopeful” or “excited.” Around four in ten are very or somewhat concerned that AI could put an end to the human race.  

Such fears illustrate the human tendency to focus more on what could be lost than what could be gained from technological change. Advances in AI will cause disruption.

Creative destruction both creates and destroys, and the process is ultimately beneficial. Often, the problems created by a new technology can be solved by that same technology. We have already seen this with AI and will see more of it in the coming years.

Recall the panic in schools and universities when OpenAI’s ChatGPT first demonstrated natural language writing. Educators rightly worried that generative AI would help students cheat, undermining their education. However, the same technology that allows for this abuse also facilitates its detection and prevention.

Generative AI can help to improve education quality. The longstanding classroom model of education faces serious challenges. Aptitude and preparation vary widely across students within a given classroom, as do learning styles and levels of engagement, attention, and focus. In addition, the quality of teaching varies across classrooms.

AI could address these issues by acting as a private tutor: if a particular student learns math best by playing math games, AI can play math games. If another student learns better by quietly working through problems and asking for help when needed, AI can accommodate that, too.


Economic benefits
If one student is falling behind while another in the same classroom has already mastered the material and grown bored, AI tutors can work on remediation with the former and on more challenging material with the latter. AI systems will also serve as customised teaching assistants, helping teachers develop lesson plans and shape classroom instruction.

The economic benefits of these applications would be substantial. When every child has a private AI tutor, educational outcomes will improve overall, with less-advantaged students and pupils in lower-quality schools likely benefiting disproportionately. These better-educated students will grow into more productive workers who command higher wages.

They also will be wiser citizens, capable of brightening the outlook for democracy. As democracy is a foundation for long-term prosperity, this, too, will have beneficial economic effects.

Many commentators worry that AI will undermine democracy by supercharging misinformation and disinformation. They ask us to imagine a “deep fake” of, say, President Joe Biden announcing that the US is withdrawing from NATO, or perhaps of Donald Trump suffering a medical event. Such a viral video might be so convincing as to affect public opinion in the run-up to the November election.


But while deep fakes of political leaders and candidates for high office are a real threat, concerns about AI-driven risks to democracy are overblown. Again, the technology that allows for deep fakes and other information warfare can also be deployed to counter them.

Such tools are already being introduced. For example, SynthID, a watermarking tool developed by Google DeepMind, imbues AI-generated content with a digital signature invisible to humans but detectable by software. Three months ago, OpenAI added watermarks to all images generated by ChatGPT.

Will AI weapons create a more dangerous world? It is too early to say. However, as with the examples above, the same technology that can create better offensive weapons can also create better defences.

‘Defender’s dilemma’
Many experts believe AI will increase security by mitigating the “defender’s dilemma”: the asymmetry whereby bad actors need to succeed only once, whereas defensive systems must work every time.

In February, Google CEO Sundar Pichai reported that his firm had developed a large language model for cyber defence and threat intelligence. “Some of our tools are already up to 70% better at detecting malicious scripts and up to 300% more effective at identifying files that exploit vulnerabilities,” he wrote.

The same logic applies to national security threats. Military strategists worry that swarms of low-cost, easy-to-make drones, controlled and coordinated by AI, could threaten the large, expensive aircraft carriers, fighter jets, and tanks on which the US military relies. However, the same underlying technology is already being used to create defences against such attacks.

Finally, many experts and citizens are concerned about AI displacing human workers. But, as I wrote a few months ago, this common fear reflects a zero-sum mentality that misunderstands how economies evolve.


Though generative AI will displace many workers, it also will create new opportunities. Work in the future will look vastly different from work today because generative AI will create new goods and services whose production will require human labour. A similar process happened with previous technological advances. As MIT economist David Autor and his colleagues have shown, most of today’s jobs are in occupations introduced after 1940.

The current debate around generative AI focuses disproportionately on the potential disruptions. Yet technological advances both disrupt and create. While bad actors will always exploit new technologies, there is a strong financial incentive to develop tools that mitigate those risks.

The personal computer and the internet empowered thieves, facilitated the spread of false information, and led to substantial labour-market disruptions. Yet very few people today would turn back the clock. History should inspire confidence — but not complacency — that generative AI will lead to a better world. — © Project Syndicate, 2024

Michael R. Strain, Director of Economic Policy Studies at the American Enterprise Institute, is the author, most recently, of The American Dream Is Not Dead (But Populism Could Kill It) (Templeton Press, 2020)
