
The key to realising AI’s potential? Ensure data analysts, engineers and IT collaborate efficiently

Matthew Swinbourne • 4 min read
Data stored across hybrid multi-cloud environments needs to be unified so that it can be used easily across multiple formats, structures and access mechanisms. Photo: Unsplash

Organisations in Asia Pacific are increasingly using AI across the business to deliver insights, operational improvements, increased customer satisfaction, and better products and services. In NetApp’s 2024 Cloud Complexity Report, we noted that among Singapore companies driving AI projects, many have already seen significant benefits, including a 55% increase in production rates and a 58% improvement in customer experience.

The foundation of every AI project is data. Effective data management will enable enterprises to harness the potential of AI and scale its benefits to new heights. However, we still see many organisations grappling with the management of disparate data formats (image, video, text, audio), data structures (structured, unstructured, semi-structured), data access mechanisms (block, file, object) and data locations (cloud, on-premises, edge). Unlocking data silos, while key to leveraging AI to its fullest effect from the start, is increasingly challenging in the hybrid and multi-cloud world.

IT's crucial role in AI implementation

AI has clearly intensified the need to manage the increasing number of data types residing in bespoke data silos. Yet, while all organisations possess abundant data, accelerating AI projects requires the right data architecture and storage. The collaboration between data analysts and engineers is the key to enabling such projects to work, and to continue to expand without creating masses of technical debt.

At the heart of it is the vital role IT departments play. Amongst our customers who have been successful with AI, we have witnessed how their IT departments have made access to data so simple that data analysts and developers can do their work without having to concern themselves with the underlying infrastructure.

These infrastructure platforms were built with architectures that work seamlessly across data formats, structures and access mechanisms, allowing diverse data to be quickly integrated into AI workflows.
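To illustrate the idea, such a platform can be thought of as a common interface over different access mechanisms, so AI workflows read data without knowing which backend holds it. The sketch below is purely hypothetical; the class and function names are illustrative, not any vendor's API, and in-memory dictionaries stand in for real file shares and object stores.

```python
from abc import ABC, abstractmethod


class DataSource(ABC):
    """Hypothetical unified interface over different access mechanisms."""

    @abstractmethod
    def read(self, key: str) -> bytes:
        ...


class FileSource(DataSource):
    """Stands in for file-based access (e.g. a mounted share)."""

    def __init__(self, files: dict[str, bytes]):
        self._files = files

    def read(self, key: str) -> bytes:
        return self._files[key]


class ObjectSource(DataSource):
    """Stands in for object-based access (e.g. a cloud bucket)."""

    def __init__(self, bucket: dict[str, bytes]):
        self._bucket = bucket

    def read(self, key: str) -> bytes:
        return self._bucket[key]


def load_for_workflow(sources: list[DataSource], key: str) -> bytes:
    """An AI workflow fetches data by key, regardless of which
    backend (file, object, ...) actually serves it."""
    for src in sources:
        try:
            return src.read(key)
        except KeyError:
            continue
    raise KeyError(key)
```

The point of the abstraction is that the workflow code calls only `load_for_workflow`; adding a new location or access mechanism means adding a `DataSource`, not rewriting the pipeline.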


In particular, architectures that support AI development and runtime must operate at scale and be able to handle all data, regardless of location. Furthermore, they must maintain optimal flexibility and security at every step.

Simplifying data access means increased focus on the outputs of any AI initiative, rather than on the mechanics of delivering the data to the workflow. Everyone up and down the workflow stream gets to perform their tasks without frustration over data access, and they don’t have to worry about the infrastructure. Productivity and effectiveness are maximised, thereby ensuring project success.

Role of data infrastructure in developing GenAI for enterprises


The crucial starting point for implementing generative AI (GenAI) in enterprises is the foundational model: a machine learning model trained on a broad range of generalised, unlabelled data. These models can perform various general tasks such as understanding language, generating text and images, and engaging in natural language conversations.

However, building foundational models in-house requires significant investment in expertise and computational resources. Very few enterprises have the financial means or time to make such substantial investments.

Fortunately, companies can use their own data to augment existing, open-source foundational models using techniques such as retrieval-augmented generation (RAG). This game-changing approach means enterprises do not have to do AI development from scratch: they can leverage foundational models already available as a service on any of the major public clouds, or free-to-use models that can be implemented on-premises.

While these models are trained on vast amounts of public data, they still need a company's proprietary data to provide value for specific use cases. Whether it is by fine-tuning a GenAI model or by using techniques like RAG, on-premises or in the cloud, the ability to integrate data quickly, easily and securely from across the organisation will determine the successful implementation of various productivity and revenue-generating use cases. The right data architecture and its underlying unified storage infrastructure can streamline the entire process.
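As a minimal sketch of the RAG pattern described above: retrieve the most relevant pieces of proprietary data for a query, then prepend them to the prompt sent to a foundational model. Here a toy word-overlap score stands in for the embedding-based vector search a real pipeline would use, and the model call itself is omitted; all names are hypothetical.

```python
def retrieve(query: str, documents: list[str], top_k: int = 1) -> list[str]:
    """Rank documents by naive word overlap with the query.
    A production RAG pipeline would use embeddings and a vector index."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]


def build_prompt(query: str, documents: list[str]) -> str:
    """Augment the user's question with retrieved proprietary context
    before handing it to a foundational model (call not shown)."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

Because the proprietary data enters only at prompt time, the foundational model itself is never retrained, which is what makes the approach affordable compared with building a model in-house.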

Accelerating AI workflows with unified data storage

Enterprises that can simplify data access and create flexible data architectures provide a robust foundation to accelerate predictive and GenAI projects. For Asia Pacific markets like Singapore, where close to three in five companies have AI projects up and running (according to the NetApp 2024 Cloud Complexity Report), we believe a focus on a unified, cloud-integrated storage platform is pivotal in driving business innovation and transformation.

Those successful in implementing flexible, cloud-integrated data architectures will also be the ones most successful in enjoying the value of their investments in AI.

Matthew Swinbourne is the CTO for cloud architecture at NetApp Asia Pacific

© 2024 The Edge Publishing Pte Ltd. All rights reserved.