The 25 employees of San Francisco start-up Atmo Inc had every reason to be afraid. The four-year-old company takes atmospheric data from meteorological sensors and uses artificial intelligence (AI) to make weather predictions that it sells to customers such as the US Air Force and the Philippine government. Atmo says its AI tools generate weather forecasts that are more precise than those produced by crunching data on supercomputers, cost far less to use and can learn from past mistakes.
The start-up seemed to have clear competitive skies — until March 29, when Alphabet Inc published an academic paper titled “Generative AI to quantify uncertainty in weather forecasting” that described its own AI weather model, which it dubbed Seeds. Suddenly Atmo, which had raised a grand total of US$11.2 million ($15.1 million), was facing the prospect of competition from a US$2 trillion behemoth with one of the largest AI operations in the world.
In the week of May 13, Google and Microsoft-backed OpenAI both staged public demonstrations of their upcoming AI tools. They introduced models that can ingest not only words typed on a keyboard but also spoken commands and images, expanding the ways people can interact with computers. Both seem determined to dominate the transformative field of AI.
Alex Levy, Atmo’s co-founder and CEO, says he is unfazed by the formidable competition. After reading Google’s paper, he says, he just shrugged. The article does not guarantee Google is going to start selling a weather forecasting tool anytime soon, he says in a video interview from his office, a bank of impressively detailed weather maps blinking behind him. “Google puts out a very large number of papers, and it’s important to distinguish papers are not products,” he says, adding that “today, you most definitely cannot buy weather forecasting from Google at any price”.
Tech giants to dominate?
Welcome to the age of unbridled AI optimism, a time when the pace of innovation is so rapid that prognostication is almost impossible. AI inventions, in the form of chatbots, coding copilots, and services that summon pictures and video at the behest of a crisp text prompt, are emerging at startling speed. But it seems like the faster things advance, the fuzzier Silicon Valley’s AI picture gets.
The ambiguities begin with the question of whether the landscape will be dominated by a handful of tech giants and a few of their amply funded proxies like OpenAI, which is backed by US$13 billion from Microsoft Corp. Start-ups such as Atmo hope not, but no one knows for sure. Then there is a host of financial questions. Will consumers and companies continue to embrace and even pay for generative AI when it is no longer a novelty? And how much economic value will remain in proprietary large language models such as GPT-4 when companies like Meta Platforms Inc are spending hundreds of millions of dollars to develop powerful models like Llama 3 and planning to open-source them, essentially giving them away for free?
It is also unclear whether the extraordinary rate of improvement that has marked the 17 months since the release of ChatGPT will be sustained. Recently, OpenAI demonstrated its newest flagship AI model, GPT-4o, calling it faster and more capable than its predecessor. (It can respond to spoken queries almost immediately, and it can sing, too.) In a blog post, CEO Sam Altman said the new model “feels like AI from the movies, and it’s still a bit surprising to me that it’s real”.
Other trusted tech visionaries are not as confident as Altman that generative AI will continue to astonish. “It’s unbelievable how much people are asserting they know what’s going on when the thing you need to know is unknowable,” says Eric Ries, author of the industry bible The Lean Startup and a co-founder of the AI research lab Answer AI Lab Inc. “If anyone out there knows the absolute God’s honest truth about how the scaling of this works, they should be out there demonstrating they are right and reaping all the rewards.”
Mammoth costs
Here’s what we do know: The cost of training and running an AI model is enormous. GPT-4 used an estimated US$78 million worth of computing power as it was being trained by OpenAI, according to Stanford University’s Artificial Intelligence Index Report 2024, released in April. Google’s Gemini Ultra cost US$191 million to train. Both models were developed on graphics processors that are expensive and difficult to obtain from Nvidia Corp, which for now is almost the only company that makes them.
These daunting costs are having a significant impact on how the industry evolves. The need for vast amounts of capital and computing power helps explain why OpenAI shifted from being a non-profit to a commercial entity that is closely linked to the world’s most valuable company. Anthropic, another of the handful of prominent start-ups making foundational AI models, has raised US$4 billion from Amazon.com Inc and an additional US$2 billion from Google, and it also relies on the tech giants for chips and cloud infrastructure.
Dario Amodei, Anthropic’s co-founder and CEO, said at the Bloomberg Tech Summit on May 9 that it costs about US$100 million for it to train an AI model and that he expects the cost will eventually reach US$100 billion as models get bigger and require more computing power. He defended the company’s partnerships, saying it is “not plausible that various things could be exclusive”, because Anthropic has relationships with more than one tech giant. “That independence and that choice is one thing that I think differentiates Anthropic from some of these other deals,” he said.
Whether regulators will accept Anthropic’s perspective on such deals remains to be seen. The Federal Trade Commission (FTC) announced earlier this year that it was scrutinising the partnerships between cloud providers and generative AI companies, with FTC chair Lina Khan saying that the agency hoped to “shed light on whether investments and partnerships pursued by dominant companies risk distorting innovation and undermining fair competition”.
Churning out AI start-ups
The scales do seem to be tipped against AI start-ups without relationships with Big Tech. Yet Silicon Valley keeps churning them out. Venture capitalists funded 1,812 new AI companies last year, a 40.6% increase from 2022, according to Stanford. Many of these will almost certainly careen towards failure, the way London-based Stability AI has: it has struggled to pay its bills, and its CEO resigned in March. Others will simply be consolidated into larger AI efforts, as Inflection AI was: the start-up was designing a “kind and supportive chatbot” until Microsoft largely subsumed it earlier this spring for a fraction of what it had raised. “Having the GPU [graphics processing unit] farms to power AI requires deep, massive economies of scale, orders of magnitude more than we have ever seen in computing,” says Aaron Levie, CEO of cloud computing firm Box Inc. “No matter what, you can name three or four winners, plus the chip companies. For everyone else, it’s an open question.”
These are the kind of unfavourable odds that Silicon Valley tends to overlook. A basic tenet of the tech industry is that small, nimble companies can thrive because incumbents tend to be slow to identify and pursue new opportunities. One oft-cited local parable suggests you can make a fortune picking up dimes directly in the path of a slow-moving steamroller — for example, a lumbering tech colossus — if you are nimble enough to get out of the way and not get smushed. (Downside: If you slip and fall, you are a pancake.)
Suno is among the start-ups picking up dimes as fast as it can. It lets paying subscribers create songs and add AI-generated vocals with a written prompt. It has created its own AI model to generate the music but also draws on ChatGPT for lyrics and titles. Relying on OpenAI could be risky, since it may yet release an AI music product — back in 2020, for instance, it published research and code for its own song generator, called Jukebox. “I think it’s still a little too early to decide what the right business model is,” says Suno co-founder and CEO Mikey Shulman, of the company’s foray into offering a subscription. “Our mentality here is, let’s figure out how to delight people. Let’s make sure we build something people really love, and the shape of that product could be so different depending on how things turn out.”
Another start-up, Perplexity, is also using subscriptions to defray computing costs and test demand. It asks users to pay US$20 a month and returns answers that draw on several large language models, such as OpenAI’s GPT-4 and Anthropic’s Claude 3. It adorns its responses with source citations, links to related articles and pertinent follow-up questions, often providing more reliable answers than other chatbots and search engines. The service poses a direct challenge to Google, which at its annual I/O conference on May 14 introduced a revamped search experience that highlights AI-generated summaries drawn from search results. Meanwhile, Bloomberg News has reported that OpenAI is working on its own similar AI search engine.
This can sound like unstable strategic ground for Perplexity, whose plans rely on competing against far larger rivals while using models made by those same rivals to power its own product. Yet in April, Perplexity raised new financing that valued the company at more than US$1 billion, another sign of undaunted AI enthusiasm despite all the ambiguity. And the company recently told Bloomberg News it is bringing in US$20 million in annual recurring revenue. Aravind Srinivas, Perplexity’s co-founder and CEO, is betting that “most of the profits in generative AI” will flow towards services that have this direct relationship with users and less towards foundational models like GPT-4.
Venture capitalist Dave Morin has seen this movie before. He was an executive at Facebook 15 years ago, when the company allowed start-ups to tap into elements of its social network, such as its users’ friend networks and photos. A few years later, Facebook reversed course and vaporised many of the companies that had not forged their own bonds with customers. Avoiding excessive reliance on mercurial tech platforms is “not rocket science. It’s old Silicon Valley thinking”, Morin says. He is an investor in Atmo and says the company is preparing to survive any coming shake-up by honing its focus on weather forecasting.
Of course, like everyone else, Morin does not know for sure. “There is way more ambiguity than people want to admit,” he says. — Bloomberg Businessweek