80% of AI projects are projected to fail. Here's how it doesn't have to be this way

Ilias Chantzos • 5 min read
At least 30% of generative AI projects will be abandoned after proof of concept by the end of 2025. Photo: Unsplash

Every single conversation at recent industry events in Asia – on stage or off it – either starts with or eventually meanders back to Artificial Intelligence (AI). It is clear that the Asia Pacific region is part of the headlong rush to embrace AI.

But we are starting to hear (and see) doubts about AI’s success beyond the hype. A recent RAND report noted that some estimates put the failure rate of AI projects at 80% – far eclipsing the failure rate of IT projects that do not involve AI.

Such estimates are high largely because many enterprises struggle to transition from pilot projects to full-scale implementation and value realisation. “At least 30% of generative AI (GenAI) projects will be abandoned after proof of concept (POC) by the end of 2025, due to poor data quality, inadequate risk controls, escalating costs or unclear business value,” according to Gartner, Inc.

Where should your AI live?

Why do AI initiatives still struggle to move beyond the POC stage? Firstly, there are simply not enough resources to go around – from graphics processing units (GPUs) to the talent needed to create and operate AI.

That is why many organisations are turning to the public cloud to run their AI models, or applying off-the-shelf AI (run on public clouds) to their unique business needs.


However, public clouds come with their own set of challenges:


  • Localisation issues: Deciding where to host and scale AI initiatives, such as models and agents – on-premises or in the cloud – remains a significant challenge.

  • Vendor selection: With multiple technology options like GPUs or specialised ASICs available from various vendors, making an informed choice is complex.

  • Jurisdictional control: Keeping proprietary data, access and analytics under local or regional legal control rather than a foreign jurisdiction is difficult, and getting it wrong creates compliance challenges.

  • Infrastructure foundation: Balancing between cloud-native practices and hybrid multi-cloud strategies can be daunting.

  • Supercomputing strategies: Managing infrastructure for training and inference requires meticulous planning.

These issues often lead enterprises toward costly reserved instance models due to supply-demand imbalances for specialised resources like GPUs.
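
To make the localisation and jurisdiction questions above concrete, here is a minimal sketch of how an organisation might encode such placement rules. Everything in it – the sensitivity labels, the residency check, the burst rule – is an illustrative assumption, not a rule drawn from any specific regulation or vendor framework.

```python
# Illustrative sketch only: a simplified placement check for an AI workload.
# The labels, regions and rules below are hypothetical assumptions.
from dataclasses import dataclass


@dataclass
class AIWorkload:
    name: str
    data_sensitivity: str   # e.g. "public", "internal", "confidential"
    data_residency: str     # jurisdiction the data must stay in, e.g. "EU"
    needs_gpu_burst: bool   # training spikes that exceed on-prem capacity


def placement(workload: AIWorkload, cloud_jurisdictions: set[str]) -> str:
    """Decide where a workload may run under these illustrative rules:
    confidential data stays private; residency must match an available
    cloud region; everything else may burst to the public cloud."""
    if workload.data_sensitivity == "confidential":
        return "private cloud / on-premises"
    if workload.data_residency not in cloud_jurisdictions:
        return "private cloud / on-premises (no compliant region)"
    if workload.needs_gpu_burst:
        return "public cloud (burst capacity)"
    return "either; decide on cost and operations"


if __name__ == "__main__":
    w = AIWorkload("claims-triage-llm", "confidential", "EU", True)
    print(placement(w, {"EU", "US"}))  # -> private cloud / on-premises
```

In practice, rules like these would come out of a governance process, not a developer's intuition – but writing them down at all is what makes the placement decision auditable.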


But many of these organisations cannot – and will not – allow public clouds to process their highly confidential and sensitive data, especially given the slate of regulations catching up to govern the use of data for AI.

The cost of non-compliance

Enterprises today operate in a complex regulatory landscape – GDPR, NIS2, and the EU AI Act are just a few regulations from the European Union (EU) that are steering how data is managed and secured. They are also just a fraction of the regulations currently in the pipeline. The accelerated pace of technological advancement, particularly in AI, is likely to fuel the development of new regulations, adding further complexity over time.

Failure to establish robust, sustainable compliance practices from the outset can lead to significant challenges down the road. What happens, for example, when people leave an organisation and can no longer explain why certain decisions around training, fine-tuning, customisation, security, quality or oversight were taken for a particular AI use case?
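
One way to hedge against that loss of institutional memory is to record the rationale alongside each AI decision, in the spirit of architecture decision records. The sketch below is a hypothetical illustration; the field names and example values are assumptions, not an established standard.

```python
# Illustrative sketch only: a minimal "decision record" for an AI use case,
# so the rationale survives staff turnover. Field names are hypothetical.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class AIDecisionRecord:
    use_case: str
    decision: str                 # e.g. "fine-tune base model on internal corpus"
    rationale: str                # why this choice was made
    risks_considered: list[str] = field(default_factory=list)
    approved_by: str = ""
    decided_on: date = field(default_factory=date.today)


record = AIDecisionRecord(
    use_case="customer-support assistant",
    decision="fine-tune an open-weight model in the private cloud",
    rationale="training data contains confidential customer transcripts",
    risks_considered=["data leakage", "model drift"],
    approved_by="AI governance board",
)
```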

Managing AI is not so different from managing other forms of technology risk. The key question is how to benefit from the technology and the automation it can deliver while focusing human oversight and involvement on the truly high-risk cases, without defeating the purpose of AI in the first place.
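
As a hedged sketch of what focusing human oversight on the truly high-risk cases could look like in practice, consider a simple tiering function that escalates only what needs a human. The tiers and the confidence threshold are illustrative assumptions, not taken from the article or from any regulation.

```python
# Illustrative sketch only: routing AI outputs to human review by risk tier.
# The tiers and the 0.85 threshold are assumptions for illustration.
def requires_human_review(use_case_risk: str, model_confidence: float) -> bool:
    """High-risk use cases always get human oversight; medium-risk ones
    only when the model is unsure; low-risk ones run automatically,
    preserving the automation benefit."""
    if use_case_risk == "high":
        return True
    if use_case_risk == "medium":
        return model_confidence < 0.85  # illustrative threshold
    return False


# A low-risk summarisation task flows straight through, while a
# high-risk credit decision is always escalated to a human.
assert requires_human_review("low", 0.60) is False
assert requires_human_review("high", 0.99) is True
```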

To reduce the odds of failure and drive long-term success of AI amidst all the complexities, enterprises must act on these three key initiatives: 


  • Forge a strong AI foundation
    Establish a comprehensive GenAI policy framework with executive buy-in at the highest level to guide responsible and ethical AI development and deployment.

  • Manage risk while enabling innovation
    Implement a governance structure to review AI use cases, identify risks, provide usage guidelines, and foster experimentation. Understand data, technology, and purpose while ensuring oversight and mitigating risks.

  • Empower your workforce
    Invest in employee education and training programmes to equip staff with the knowledge and skills necessary to leverage AI effectively and ethically within their roles.


The future of AI is private

Right now, many AI use cases resemble the proverbial boy in the bubble. They work well within a secure, isolated and contained space, but face serious challenges interacting with the world outside it. And eventually, that world will leave them far behind.

For AI to survive beyond the POC stage, we need to host it in the right, real-world environment from incubation. Given the data concerns outlined above, there are scenarios where the nature of the data and the risks associated with it make public clouds perfectly viable, and equally there are scenarios in which only a private model gives the right level of assurance. The future of AI, just like computing, will take different architectural forms. For organisations with highly sensitive data or particular customisation requirements, it will be private – largely because they will not have to export that data to a public cloud’s proprietary data store in order to gain the benefits of AI.

Private clouds running private workloads fuelled by your private data will ensure greater control. Equally, there will remain a need for the public cloud to absorb sudden spikes and bursting workloads, or to handle lower-risk use cases. But for AI to succeed, we should be prepared for regulation, geopolitics and risk calculations to drive use cases towards a mixed model, in which the private cloud is a key element that delivers true innovation and value to the business instead of flashy demos.

Ilias Chantzos is the head of Global Privacy at Broadcom
