Fifty-five per cent of organisations in Southeast Asia (Asean) believe they are in the transformative or AI-first stages of AI readiness. However, digital research and advisory company Ecosystm found that only 5% had that degree of AI maturity when it assessed Asean organisations’ preparedness to deploy artificial intelligence (AI) against four key criteria: culture and leadership, skills and people, data foundation, and governance framework.
This gap between perception and reality is perhaps why many organisations struggle to scale AI across the enterprise. “We see a lot of organisations experimenting with and using AI in meaningful ways, but some are stuck in their AI journey after a period of time. They aren’t sure how to proceed when issues, such as data security and privacy, arise as they try to scale the use of AI,” Mohamad Ali, senior vice president of IBM Consulting, shares with DigitalEdge in an interview on the sidelines of the recent IBM Think Singapore 2024.
Proving generative AI’s ROI
Initial use cases for generative AI tend to be for internal purposes, such as code generation, quality checks and process optimisation.
Thailand’s TMBThanachart Bank, for example, is exploring using IBM watsonx Code Assistant to automate the code analysis and refactoring processes in the infrastructure underlying its legacy system. By leveraging generative AI for code modernisation and development, the bank expects to reduce operational costs, increase productivity and address skills gaps.
“Most organisations are just starting with internal use cases, but when they scale those experiments, they quickly realise that the external opportunities — such as using generative AI for customer-facing processes — can be very significant. In insurance companies, we’re seeing them use generative AI to process claims (an internal use case) and also enable their customer service agents to get the right information quickly when a customer calls in to ask about their claims (an external use case),” says Ali.
Regardless of the use case, generative AI must support business goals such as improving productivity, enhancing customer experience and accelerating innovation. Organisations therefore need to prove the ROI of generative AI projects to win management buy-in for enterprise-wide adoption. Doing so, however, can be difficult because some benefits of generative AI, such as better customer experience, are intangible or lack a baseline for comparison.
“Without a baseline, it’ll be difficult to evaluate the impact of generative AI on a process because we don’t know what we’ve improved. So, organisations need to clearly define success metrics (such as reducing the time taken for code development or speeding up the cash collection process) and establish a baseline as a reference point before starting their generative AI journey,” Ali advises.
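Ali’s advice can be illustrated with a minimal sketch: pick a “lower is better” success metric, record its value before the generative AI project starts, and measure the change against that reference point. The metric and figures below are hypothetical, not taken from the article.

```python
# Illustrative sketch: measuring a generative AI project against a
# pre-project baseline. Metric names and numbers are hypothetical.

def improvement(baseline: float, measured: float) -> float:
    """Percentage improvement of a 'lower is better' metric vs its baseline."""
    return (baseline - measured) / baseline * 100

# Hypothetical metric: average days to complete a code-development task
baseline_days = 10.0   # recorded before adopting generative AI
with_ai_days = 7.5     # recorded after the pilot

print(f"Cycle-time reduction: {improvement(baseline_days, with_ai_days):.1f}%")
```

Without the `baseline_days` figure captured up front, the 25% reduction in the example would be impossible to demonstrate, which is exactly the trap Ali warns about.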
Organisations struggling to define measurable ROI and a baseline, he adds, can turn to consulting firms like IBM Consulting, which can share the knowledge and experience gained from similar projects with other clients.
Consulting firms can also help determine the baseline by conducting A/B testing. For example, they could have two teams in the organisation develop an app concurrently — one using traditional coding methods and another employing generative AI. By comparing the two teams, organisations can identify the baseline for metrics like productivity and code quality.
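The A/B approach described above can be sketched as a simple comparison, assuming the traditional team’s output serves as the baseline. The team labels, metrics and figures are all illustrative, not from the article.

```python
# Hypothetical A/B test: two teams build the same app, one with traditional
# coding and one assisted by generative AI. The traditional team's numbers
# become the baseline. All figures are invented for illustration.
baseline = {"features_per_week": 4.0, "defect_rate": 6.0}   # traditional team
genai = {"features_per_week": 6.0, "defect_rate": 4.5}      # AI-assisted team

productivity_uplift = (genai["features_per_week"] / baseline["features_per_week"] - 1) * 100
quality_uplift = (1 - genai["defect_rate"] / baseline["defect_rate"]) * 100

print(f"Productivity: +{productivity_uplift:.0f}% vs baseline")
print(f"Defect rate: -{quality_uplift:.0f}% vs baseline")
```

Running both teams concurrently on the same brief controls for differences in project scope, so the gap between the two columns can be attributed to the tooling rather than the task.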
Scaling generative AI
The study Ecosystm conducted on behalf of IBM reveals that Asean organisations’ top AI priorities include improving data quality and equipping employees with AI skills.
Worryingly, there’s a lack of focus on data governance, which could expose them to regulatory risk. Only 18% of the surveyed Asean organisations have a dedicated AI and data governance role. Two-thirds spread AI responsibility across departments or teams, which could lead to inconsistencies, while 15% don’t have a defined AI policy at all.
“Organisations that treat data governance and security as an afterthought will end up being stuck in their AI journey after the experimentation stage. They will then have to bring in a partner, like IBM Consulting, to help redo their entire generative AI projects the right, scalable way,” claims Ali.
The IBM Consulting Advantage platform is key to helping IBM consultants deliver consistency, repeatability and speed in organisations’ generative AI projects.
Ali explains: “The IBM Consulting Advantage platform lets us harness more of our intellectual property, including an array of AI assistants, in our client engagements. This allows our consultants to be even more creative and productive as they use the platform to deliver greater value faster to thousands of our clients.”
Core to the platform is a library of role-based IBM Consulting Assistants to support consultants in their day-to-day tasks. This includes assistants trained on IBM proprietary data and composed of tailored prompts, models and output formats to put the collective knowledge of IBM and the wider industry at the fingertips of 160,000 IBM consultants. Employees can use strategy assistants to support use case prioritisation and business case development, business analyst assistants to support creating personas for user-centric design, or developer assistants to support code generation and conversion.
Moreover, the IBM Consulting Advantage has embedded capabilities for data security and privacy. This includes the ability to create spaces configured for and limited to a project team. IBM Consulting Assistants can be set up with private instances of generative AI models that do not store data or use it for training the models. They can also alert users if personally identifiable information appears in prompts.
In addition, the Assistants have integrated AI guardrails to help mitigate bias and enable auditable use, such as the ability for a consultant to ask one of the Assistants a question and select an option to check for bias in the answer.
Managing hybrid AI
Ali also highlights that the IBM Consulting Advantage can aid organisations in managing their hybrid AI model, consisting of both small language models (SLMs) and large language models (LLMs). SLMs are trained on fewer parameters than LLMs and on data pertaining to a specific domain. They perform well on specialised business-domain tasks — such as summarisation, question answering and classification — consume fewer resources and can run locally on smaller devices like smartphones.
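As a rough illustration of how such a hybrid setup might route work, the sketch below sends the narrow tasks Ali mentions to a domain-tuned SLM and everything else to a general-purpose LLM. The task labels and model names are hypothetical, not part of any IBM product.

```python
# Minimal sketch of routing tasks in a hybrid SLM/LLM setup: narrow,
# well-defined tasks go to a cheaper domain-tuned SLM, open-ended work
# to a general-purpose LLM. Task labels and model names are hypothetical.
SLM_TASKS = {"summarisation", "question_answering", "classification"}

def pick_model(task: str) -> str:
    """Return the model tier suited to a given task type."""
    return "domain-slm" if task in SLM_TASKS else "general-llm"

print(pick_model("classification"))      # domain-slm
print(pick_model("creative_drafting"))   # general-llm
```

In practice the routing logic would be richer than a set lookup, but the principle is the one Ali describes: not all tasks are created equal, so not all of them need the largest model.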
“Organisations will need to mix and match LLMs and SLMs to create digital assistants [for various tasks as not all tasks are created equal.] With IBM Consulting Advantage, they can manage all those models easily. We’ve also built a metering capability into the platform to deliver something like FinOps for language/AI models. Organisations can, therefore, track the performance of their AI models and the resources consumed [so that they can identify cost-saving opportunities in their AI operations],” says Ali.
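A metering capability of the kind Ali describes could, in its simplest form, count tokens consumed per model and price them, assuming per-token billing. The model names and rates below are invented for illustration and do not reflect IBM Consulting Advantage’s actual implementation.

```python
# Illustrative sketch of "FinOps for AI models": metering token usage per
# model and converting it to cost. Model names and prices are hypothetical.
from collections import defaultdict

PRICE_PER_1K_TOKENS = {"small-domain-model": 0.0005, "large-general-model": 0.01}

class ModelMeter:
    def __init__(self):
        self.tokens = defaultdict(int)

    def record(self, model: str, tokens: int) -> None:
        """Accumulate token usage for a model."""
        self.tokens[model] += tokens

    def cost(self, model: str) -> float:
        """Spend on a model at its per-1,000-token rate."""
        return self.tokens[model] / 1000 * PRICE_PER_1K_TOKENS[model]

meter = ModelMeter()
meter.record("small-domain-model", 120_000)   # e.g. classification on an SLM
meter.record("large-general-model", 30_000)   # e.g. open-ended drafting on an LLM

for model in PRICE_PER_1K_TOKENS:
    print(f"{model}: {meter.tokens[model]:,} tokens, ${meter.cost(model):.2f}")
```

A per-model ledger like this is what makes cost-saving opportunities visible: if the large model is burning budget on tasks the small one handles equally well, the numbers show it.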