Singapore’s $1 billion boost to further artificial intelligence (AI) capabilities underscores the importance of AI in transforming the nation’s socio-economic growth and competitiveness. Businesses will be compelled to act immediately to integrate AI into their current business processes and capture the benefits that it and other innovations can bring. But securing the required investment (whether internal or external) and producing intelligent business outcomes will require careful consideration and a methodical strategy.
A recent survey conducted by Searce to examine the factors behind successful AI implementation reveals that ‘mature’ organisations with successful AI implementation employ several strategies for success. These include investment in high-quality data, clear goals, talent development, and a strategic approach characterised by gradual scaling via small pilot projects.
The survey also reveals that data security, data management and governance, scalability, interoperability and integration with legacy systems are the primary barriers to AI/ML progress. The scarcity of skilled AI and machine learning (ML) professionals to spearhead and administer AI/ML initiatives and to manage change also poses a significant hurdle.
AI, whether generative or not, is a collection of tools. When adopting AI, it is the business outcome that should concern organisations, not the tool. Tech vendors, who dominate the media debate, often steer the conversation towards the tools they sell. Some top-level leaders have even declared that they will adopt AI and then asked their teams to find application areas, putting the cart before the proverbial horse.
While there is a lot of buzz among C-suites about adopting AI, there is a lack of focus on identifying top impact areas and building a solid business case. We have observed that mature organisations are harnessing AI and ML to improve productivity, enhance customer experiences, automate tasks, and deepen predictive capabilities.
Talent
Most importantly, humans will have to be trained and motivated to participate in the complex change management process needed to implement a new AI-based initiative, particularly as Singapore has announced ambitions to triple its number of AI practitioners, including data scientists and engineers, to 15,000.
It is common knowledge that approximately 80% of all AI projects fail. However, they do not fail because the software or the mathematics does not work; they fail because users either do not use the tools or use them incorrectly.
AI-driven organisations prioritise an AI-friendly culture. Our research shows that 70% of organisations with successful AI implementation have a workforce that tends to be well-versed in AI, ensuring widespread AI understanding. They also proactively manage change, with robust change management guidelines easing the integration of AI teams and processes. And they prize and encourage collaboration and learning, with many of these enterprises seeing “communities of practice” thrive, fostering knowledge sharing and upskilling.
Moreover, they are more likely to organise structured AI/ML training programmes, leverage low-code/no-code technology to enhance in-house AI/ML skills, and manage data centrally across teams and functions. They are also pushing their service vendors to adopt a dual delivery approach to cross-pollinate skill sets within the organisation.
At ‘mature’ enterprises, 57% say AI initiatives are led by senior or C-level executives, while non-mature organisations trail well behind in all these categories.
Regulations
Next, there are concerns surrounding the lack of clear and consistent regulations and the safety and ethical considerations of AI systems.
One of the principal obstacles to AI adoption is the current legal uncertainty. Many countries and states are considering regulations, but most have not finalised them. The result is near-complete uncertainty about what is legal, what will be regulated and how, and what the penalties, if any, will be. What seems certain is that different parts of the world will answer these questions very differently. In an industry that largely lives on the internet, with vendors, data centres, and customers in different countries, it is unclear how these rules will be enforced effectively.
As a global reference point, countries can look to Singapore, which has progressively established the Model AI Governance Framework and AI Verify: a set of guidelines built on 11 AI ethics principles that balance facilitating innovation with safeguarding consumer interests, consistent with internationally recognised frameworks from the EU and the OECD.
Businesses buying and applying tools to modify processes will need assurance that those tools are compliant and durable. They do not have that reassurance today, so the key is to ensure adaptability and scalability.
Scalability
The shift to cloud has several significant advantages over on-premises data centres. It transforms spending from a Capex to an Opex model and allows companies to concentrate on their core business without wrestling with IT complexities or hardware intricacies.
Many software tools come baked into the cloud and pre-integrated with each other, minimising the effort required to set up and maintain the environment. Previously difficult and expensive processes like backup and security have become no-brainers, as cloud providers take care of them in the background. Software and data become easily available, anywhere and anytime.
Perhaps the biggest benefit is scaling. If organisations need more or less storage or compute infrastructure, the cloud allows efficient, dynamic, and fully automatic scaling both up and down.
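To make that scaling point concrete, the sketch below mimics the proportional scale-up/scale-down rule that cloud autoscalers commonly apply: desired capacity grows or shrinks with observed load relative to a target. It is a simplified, provider-agnostic illustration; the function name, thresholds, and defaults are assumptions for this example, not any vendor’s actual API.

```python
import math

def recommend_replicas(current_replicas: int,
                       observed_cpu_pct: float,
                       target_cpu_pct: float = 60.0,
                       min_replicas: int = 1,
                       max_replicas: int = 20) -> int:
    """Proportional autoscaling rule: scale capacity with observed load.

    Mirrors the common pattern desired = ceil(current * observed / target),
    clamped to configured minimum and maximum bounds.
    """
    if observed_cpu_pct <= 0:
        return min_replicas
    desired = math.ceil(current_replicas * observed_cpu_pct / target_cpu_pct)
    return max(min_replicas, min(max_replicas, desired))

# Busy period: 4 instances at 90% CPU against a 60% target -> scale up to 6.
print(recommend_replicas(current_replicas=4, observed_cpu_pct=90.0))  # 6
# Quiet period: 4 instances at 15% CPU -> scale down to 1.
print(recommend_replicas(current_replicas=4, observed_cpu_pct=15.0))  # 1
```

In practice this loop runs continuously inside the cloud platform, which is what makes scaling both directions feel automatic to the business.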
Safety of AI
Robust data privacy and security can be assured through advanced technological measures such as encryption, data governance, and other suitable IT tools and environments, and compliance with privacy regulations can be handled in much the same way. The identification and handling of toxic, insulting, or discriminatory content are managed through automated detection, flagging, and removal. Additionally, the automatic detection and removal of personally identifiable information further fortifies the protection of sensitive data.
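As one simplified illustration of the “detect and remove personally identifiable information” step, the sketch below uses plain regular expressions to mask email addresses and phone numbers before text reaches an AI system. The patterns are assumptions chosen for brevity; real deployments lean on far more sophisticated detection, such as named-entity recognition or managed data-loss-prevention services.

```python
import re

# Deliberately simple patterns for illustration only; production PII detection
# typically combines ML-based entity recognition with managed DLP tooling.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact_pii(text: str) -> str:
    """Replace detected PII spans with typed placeholders, e.g. [EMAIL]."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

sample = "Contact Jane at jane.tan@example.com or +65 9123 4567 for the report."
print(redact_pii(sample))
# Contact Jane at [EMAIL] or [PHONE] for the report.
```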
Leveraging strategies like prompt engineering increases the chances of getting accurate responses from AI, but we must acknowledge that AI will make mistakes. The underlying process in which AI is used must therefore be robust enough to absorb the occasional mistake. In our work, we assist our clients in crafting resilient processes that combine human intervention and policy elements to handle these exceptions effectively and gracefully.
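A minimal sketch of what such a resilient wrapper can look like is shown below: the model’s answer is checked against a lightweight validation rule, and anything that fails, or that comes back with low confidence, is routed to a human reviewer instead of flowing straight into the business process. The ask_model and escalate_to_human functions are hypothetical stand-ins for illustration, not a description of any specific tooling.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AIResult:
    answer: str
    confidence: float  # 0.0-1.0, as reported or estimated for the model output

def handle_with_fallback(prompt: str,
                         ask_model: Callable[[str], AIResult],
                         validate: Callable[[str], bool],
                         escalate_to_human: Callable[[str, AIResult], str],
                         min_confidence: float = 0.8) -> str:
    """Accept the AI answer only when it validates and is confident enough;
    otherwise hand the case to a human reviewer so the process absorbs
    the occasional model mistake gracefully."""
    result = ask_model(prompt)
    if result.confidence >= min_confidence and validate(result.answer):
        return result.answer
    return escalate_to_human(prompt, result)

# Example wiring with dummy implementations (assumptions for illustration only).
demo = handle_with_fallback(
    prompt="Classify this invoice as payable or disputed.",
    ask_model=lambda p: AIResult(answer="payable", confidence=0.65),
    validate=lambda a: a in {"payable", "disputed"},
    escalate_to_human=lambda p, r: f"queued for human review (model said '{r.answer}')",
)
print(demo)  # queued for human review (model said 'payable')
```

The design choice matters more than the code: the business process, not the model, decides when an answer is safe to act on automatically.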
To ensure ethical practices in AI, it is crucial to have an established ethical AI framework. Businesses need to meticulously evaluate all projects against this framework, confirm that AI initiatives cause no harm, and document all decisions with ethical implications, as sketched below.
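One way to operationalise that evaluation and documentation step is a simple checklist review that leaves an auditable record of every decision. The criteria names below are placeholders for whatever principles an organisation’s own framework defines, not an established standard.

```python
import json
from datetime import datetime, timezone

# Placeholder criteria; a real framework would define these precisely.
FRAMEWORK_CRITERIA = ["fairness_reviewed", "explainability_documented",
                      "human_oversight_defined", "data_consent_confirmed"]

def evaluate_project(name: str, checklist: dict) -> dict:
    """Assess a project against the framework and record the decision
    so every approval or rejection leaves an auditable trail."""
    failed = [c for c in FRAMEWORK_CRITERIA if not checklist.get(c, False)]
    decision = {
        "project": name,
        "approved": not failed,
        "failed_criteria": failed,
        "reviewed_at": datetime.now(timezone.utc).isoformat(),
    }
    # In practice this record would be written to a governance register.
    print(json.dumps(decision, indent=2))
    return decision

evaluate_project("customer-churn-model", {
    "fairness_reviewed": True,
    "explainability_documented": True,
    "human_oversight_defined": False,   # gap -> project not approved yet
    "data_consent_confirmed": True,
})
```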
Securing investment
To secure investment, it is imperative to ensure that the strategy and processes for implementing AI are holistic, responsible, sustainable, and tied to positive outcomes. This commitment underscores a company’s dedication to responsible and conscientious AI implementation and will go a long way towards securing that investment as well as positive and intelligent business outcomes.
Yash Thakker is the associate director of Cloud Services at Searce