As the saying goes: “A house is only as strong as its foundation”. Similarly, organisations can be resilient and innovative only when a strong IT backbone supports them. Here are the digital transformation areas organisations should prioritise in 2024 to address business demands in a digital-first era.
Democratising AI
Artificial intelligence (AI) was once reserved for select individuals or teams within an organisation. However, the rise of generative AI last year has shown the value of democratising the technology, extending it even to non-technical employees.
Organisations should increasingly find ways to embed AI into their operations in 2024. “AI [shouldn’t be perceived as just another] technology, but more so an ‘underlying operating system’ that is ingrained into every part of our daily lives. Enabling that will result in many profound ways of utilising information and knowledge at scale, leading to massive innovation at parallel lengths,” says Ashley Fernandez, chief data and AI officer of Huawei International.
Nick Eayrs, vice president of Field Engineering at Databricks Asia Pacific and Japan, agrees that organisations should make AI the core of their software and data platforms. “Imagine integrating generative AI into your data storage and analysis tools. This convergence will redefine data analysis, making it as intuitive as engaging in a conversation. Such advancement will revolutionise data teams’ tasks, freeing them from the labour-intensive pattern detection process. We call this new generation of systems ‘Data Intelligence Platforms’, where AI models can deeply understand and interpret the semantics of enterprise data, and users can use natural language to query datasets, allowing non-technical users to derive value from enterprise data.”
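To make the idea concrete, the snippet below is a minimal sketch of natural-language querying of enterprise data: a hosted large language model translates a plain-English question into SQL, which is then run against a local table. It is not Databricks’ platform; the OpenAI client, model name and sales table are all assumptions for illustration.

```python
# Minimal sketch: let a non-technical user ask a question in plain English and
# have an LLM translate it into SQL over a known schema. Purely illustrative.
import sqlite3
from openai import OpenAI  # assumed available; any LLM client would do

SCHEMA = "sales(region TEXT, month TEXT, revenue REAL)"

def ask(question: str, db: sqlite3.Connection) -> list[tuple]:
    client = OpenAI()
    prompt = (
        f"Table schema: {SCHEMA}\n"
        f"Write a single SQLite SELECT statement answering: {question}\n"
        "Return only the SQL."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[{"role": "user", "content": prompt}],
    )
    sql = resp.choices[0].message.content.strip().strip("`")
    # In practice the generated SQL should be validated before execution.
    return db.execute(sql).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute(f"CREATE TABLE {SCHEMA}")
# Usage: ask("Which region had the highest revenue last month?", conn)
```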
Chris Chelliah, senior vice president for Technology and Customer Strategy at Oracle Japan and Asia Pacific, adds: “By embedding AI in the digital infrastructure and applications, organisations can manage costs, optimise performance, improve decision-making, and automate end-to-end processes to enhance overall customer and employee experiences.”
However, Chelliah notes that AI models require extensive processing power and computing capabilities. “[As such, we believe] developers will run more workloads in public clouds offering specialised infrastructure. By training generative AI models at scale and building on an organisation’s data, businesses can become more competitive in the coming years,” he says.
Clearing the cloudy situation
Besides AI models, critical workloads will be increasingly hosted in the cloud. A recent study by Oracle reveals that 97% of enterprises in Asia Pacific are using or planning to use at least two cloud infrastructure providers.
“Underpinning this is the need to optimise spend, reduce concentration risk, and accelerate innovation with flexibility in commercial models and geographical deployments. Most importantly, they are looking for the cloud that can give them the quickest headstart in their data and AI initiatives,” says Chelliah.
Yet, on-premises infrastructure will remain relevant. “Organisations are learning that the cloud is not ideally suited for all applications and data. This is why many companies that made the jump to the cloud are partially repatriating their data, and cloud-native companies are supplementing their cloud infrastructure with on-premises computing and storage resources,” explains Andy Ng, vice president and managing director for the Asia South and Pacific region at Veritas Technologies.
He continues: “Hybrid multi-cloud will play a key role as organisations address the challenges of managing data access between private networks and public clouds, enabling organisations to continue their business transformation journey.”
Sharing the same sentiment, Rob Le Busque, regional vice president for Asia Pacific at Verizon Business Group, advises organisations to take a closer look at rationalising, resetting and perhaps collapsing their cloud infrastructure in 2024.
“Firstly, it will make an organisation’s cloud infrastructure inherently more secure because you know where the data is and can structure governance and policy around it. Secondly, it will bring commercial benefits because organisations can create savings with consolidation and reassessment. Lastly, a consolidated cloud architecture and design principles will likely unlock further investment that can be ploughed back into cloud infrastructure, releasing the burden on capital within their organisation. With a considered approach, IT departments can bring back under control the spending on cloud infrastructure and, importantly, be better aligned with the organisation’s overall business objectives,” he says.
Managing data and AI
Data governance is another key area to focus on this year since data is the lifeblood of organisations, especially when they leverage AI. “Data governance will ensure the effective functioning and secure use of generative AI and AI tools. Companies should prioritise and learn about compliance in [generative AI] and how they can use various tools to best manage risks, protect data, and remain compliant with regulations and standards,” says Simon Tung, general manager of Crayon Singapore.
Remus Lim, vice president for Asia Pacific and Japan at Cloudera, adds: “Trust is critical for AI. Navigating the new world of enterprise AI requires organisations to build trustworthy foundational data and models via an ethical approach, supported by a strong data security and governance framework.”
Having a hybrid data platform can help organisations maintain strong management and governance controls across all their data. “This is crucial to dealing with data bias, causality, correlation, uncertainty and human oversight, as it impacts the system’s ability to reproduce outcomes reliably.
“Ideally, the platform should allow for full analytics on the data at rest and in motion, and be interoperable and compatible with multiple peer engines and competitive products for flexibility and openness in the platform. With a trusted data foundation, organisations can leverage all their data across the public and private clouds to deploy trusted, secure and responsible AI at scale,” shares Lim.
According to IBM’s research, less than 25% of executives globally have operationalised common principles of AI ethics. “Organisations that push forward without considering the intricacies of AI ethics and data integrity risk damaging their reputation for short-term gains,” says Madhavan Vasudevan, chief technical officer of IBM Asia Pacific.
He advises organisations to consider the following factors when adopting and scaling AI:
- Open — Understand the best way to use foundation models and leverage the best models available, considering how data is used, what can be shared beyond the organisation, what is proprietary and what can be shared in the public domain.
- Trusted — Employees, customers and partners must trust the output from generative AI tools built on foundation models.
- Targeted — By using smaller, purpose-built foundation models to address specific use cases, organisations can reduce the cost of running AI while minimising negative effects such as hallucination. Targeted models can also help produce accurate, scalable and adaptable model results.
- Empowering — Make everyone in the ecosystem AI creators by deploying foundation models under their direct control, coupled with an organisation’s data and rapid model-building tools that can easily be tuned, trained, deployed, governed — and running anywhere.
Strengthening cyber defence
As organisations advance their digital transformation efforts, cybersecurity needs to be an even greater priority. This is because the attack surface widens as organisations become more connected (both internally and externally), and because tools like AI can also be turned to cyberattacks.
Organisations can improve their cyber defence by adopting the zero-trust security model, where users and devices are granted access to only the applications and data they need based on their identities and roles.
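In practice, zero trust boils down to evaluating every request against verified identity, role and device posture, and denying access by default. The sketch below illustrates that logic only; the roles, permissions and compliance check are hypothetical, and real deployments rely on an IAM or zero-trust platform rather than hand-rolled code.

```python
# Illustrative zero-trust style check: deny by default, grant only when
# identity, role and device posture all check out. Names are hypothetical.
from dataclasses import dataclass

ROLE_PERMISSIONS = {
    "finance-analyst": {"billing-db:read"},
    "support-engineer": {"ticketing:read", "ticketing:write"},
}

@dataclass
class Request:
    user_id: str
    role: str
    device_compliant: bool  # e.g. disk encrypted, OS patched
    resource: str           # e.g. "billing-db:read"

def is_allowed(req: Request) -> bool:
    if not req.device_compliant:
        return False
    return req.resource in ROLE_PERMISSIONS.get(req.role, set())

# A finance analyst on a compliant laptop may read billing data, nothing else.
print(is_allowed(Request("alice", "finance-analyst", True, "billing-db:read")))  # True
print(is_allowed(Request("alice", "finance-analyst", True, "ticketing:write")))  # False
```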
“For the longest time, cybersecurity has been focused on protecting valuable data such as personal and credit card information, typically targeted by bad actors. In the last three to four years, we have seen a shift to application security (primarily those that sit at the edge or in the cloud), which means organisations need to have the right protocols for those that sit outside the traditional corporate network, including securing and maintaining access, as well as guaranteeing the infallibility of those new applications,” says Verizon Business Group’s Le Busque.
He continues: “The new paradigm is one of identity and access management (IAM) converging with security. This means delivering on zero trust architecture as it boils down to the level of credentials and trust associated with an individual user across the entire IT stack. IT departments need to consider how they can create more agility and focus and — from a governance perspective — demonstrate great detail, fidelity and clarity around who has access to what.”
Adopting IAM as part of zero trust security can also help prevent organisations from falling prey to phishing attacks, which trick users into giving away personal details or other confidential information. “As AI becomes increasingly sophisticated, spear-phishing [a more targeted form of phishing attack] has become prevalent, increasing the vulnerability of organisations and individuals. This will accelerate the adoption of identity management to enhance security and protect against such cyber threats,” says Crayon’s Tung.
Besides that, organisations should streamline their cybersecurity tool portfolio. “Estimates put the average enterprise security toolset at 60 to 80 distinct solutions, with some enterprises reaching as many as 140. That said, too much of a good thing is bad — enterprise security tool sprawl or tool bloat leads to a lack of integration, alert fatigue and management complexity. Moreover, each additional security tool can potentially increase the threat surface. The end-outcome is a weakened security posture, opposite to what was intended,” explains Veritas’s Ng.
He continues: “In 2024, many organisations will likely be pushed to adopt either a ‘one in, one out’ mindset to their enterprise security toolsets or consolidate to more comprehensive integrated solutions that bring together data protection, data governance and data security capabilities.”
This calls for organisations to utilise frameworks such as the Cyber Defense Matrix to map out how their technology, people and processes relate to various security functions and organisational assets. Doing so will help them understand their full toolset, identify overlapping tools and processes, and determine what could be cut from their cybersecurity suite.
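As a rough illustration of that mapping exercise, the sketch below places each tool in a cell of asset class versus security function, then flags cells where tools overlap as consolidation candidates; the tool names and inventory are hypothetical.

```python
# Toy version of a Cyber Defense Matrix review: map tools to (asset, function)
# cells, then surface overlaps (consolidation candidates) and empty cells (gaps).
from collections import defaultdict

FUNCTIONS = ["Identify", "Protect", "Detect", "Respond", "Recover"]
ASSETS = ["Devices", "Applications", "Networks", "Data", "Users"]

inventory = [
    ("EDR-A", "Devices", "Detect"),
    ("EDR-B", "Devices", "Detect"),   # overlaps with EDR-A
    ("DLP-X", "Data", "Protect"),
    ("Backup-Y", "Data", "Recover"),
]

matrix = defaultdict(list)
for tool, asset, function in inventory:
    matrix[(asset, function)].append(tool)

for asset in ASSETS:
    for function in FUNCTIONS:
        tools = matrix[(asset, function)]
        if len(tools) > 1:
            print(f"Overlap in {asset}/{function}: {tools}")
```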
He adds that organisations should also harness the capabilities of AI to automate the detection of and response to malicious activities due to the increasing volume and sophisticated nature of cyber threats today.
Reducing cost and carbon footprint
For organisations to become AI-driven, the key will be figuring out how to implement AI affordably and sustainably.
Alexis Crowell, vice president and CTO for Asia Pacific and Japan at Intel, shares that the cost of developing and maintaining AI workloads can be extremely high. “Training a large language model like ChatGPT could cost millions of dollars, emit 500 tonnes of carbon dioxide (for comparison, the average person emits about 5.31 tonnes of carbon dioxide each year), and consume 700,000 litres of water. And then there’s inferencing — when the application is used, it can cost millions every month if accessed by millions of users.”
As organisations turn AI ideas into reality, the bills will quickly rack up, so the priority will be figuring out how to deploy AI at the best performance for the lowest cost and carbon footprint.
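A back-of-the-envelope calculation shows how quickly those bills compound; the user counts and per-query cost below are purely illustrative assumptions, not figures from Intel or any vendor.

```python
# Hypothetical inference-cost arithmetic: even a modest per-query cost adds up
# at consumer scale. All figures are assumptions for illustration only.
users = 1_000_000            # monthly active users
queries_per_user = 10 * 30   # ten queries a day for a month
cost_per_query = 0.01        # dollars of compute per query (assumed)

monthly_bill = users * queries_per_user * cost_per_query
print(f"${monthly_bill:,.0f} per month")  # $3,000,000 per month
```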
“Not all AI workloads need graphics processing units (GPUs). Also, most organisations will only need to use pre-trained models and fine-tune them with their own smaller, curated datasets. This can be done quickly with AI software running on general-purpose central processing units (CPUs) running other workloads simultaneously. So, CPUs, AI accelerators or field-programmable gate arrays (FPGAs) may make better sense for diverse business needs, power and cost requirements,” says Crowell.
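As a rough sketch of the approach Crowell describes, the example below fine-tunes a small pre-trained model on a few thousand curated examples using the Hugging Face Transformers Trainer, which runs on the CPU when no GPU is present. The model and dataset choices are illustrative assumptions, not Intel guidance.

```python
# Illustrative CPU-friendly fine-tuning of a small pre-trained model on a
# small, curated dataset. Model and dataset are stand-ins for a business task.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "distilbert-base-uncased"  # small model, practical on CPUs
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# A few thousand labelled examples are often enough for a narrow task.
dataset = load_dataset("imdb", split="train[:2000]")
dataset = dataset.map(
    lambda x: tokenizer(x["text"], truncation=True, padding="max_length", max_length=128),
    batched=True,
)

args = TrainingArguments(output_dir="out", num_train_epochs=1,
                         per_device_train_batch_size=8)
Trainer(model=model, args=args, train_dataset=dataset).train()
```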
For instance, South Korean tech giant Naver recently switched from GPU-based servers to CPU-based servers for its location information service. Doing so saved it over US$300,000 ($397,000) per year in operating costs without compromising performance or adding more equipment.
Crowell says: “Businesses [must understand that] different AI workloads require different approaches. For AI to be affordable and sustainable, the priority will be to explore a diverse mix of architecture, hardware and software to reach AI practicality.”
AI can also be used to help reduce an organisation’s carbon footprint. Veritas’s Ng says: “Today, numerous data-points could be used to help organisations identify sustainability opportunities and visualise their progress towards their sustainability goals. However, many organisations face the challenge of capturing timely and trusted data to help create these insights. This is where autonomous data management, based on AI, comes in.
“For many business leaders, automating the collection and management of data could accelerate the conversion of raw data into real, actionable and reliable insights. This swift and more holistic data management can drive real organisational improvement, allowing sustainable solutions to seamlessly be woven into operations and work processes and inform real-time decision-making.”
With more than 46% of executives surveyed by IBM viewing AI as important for advancing their companies’ sustainability reporting and performance efforts, Ng expects to see more organisations deploying AI-driven data management solutions in their transition to a more resilient and sustainable future.