Factors to consider before we make artificial intelligence pervasive

The Edge Singapore • 8 min read
What will it take to develop artificial intelligence responsibly, securely and sustainably? Photo: Pexels

Even before the hype around ChatGPT, there were already plenty of tools using artificial intelligence (AI) that we use every day. These include digital voice assistants like Siri and Alexa, navigation apps like Google Maps, and recommendation features on Netflix and Amazon. Since AI is still a nascent technology, here are some things to consider as we continue to embed it into our daily lives.

Chua Hock Leng, area vice president for ASEAN and Greater China, Pure Storage

The AI gold rush is a double-edged sword. On the one hand, it can create new value, realise productivity gains, and enhance customer experiences. On the flip side, it has increased carbon risk in organisations.

Large-scale AI workloads and advanced data analytics demand higher energy consumption. Google’s AlphaGo Zero – the successor to AlphaGo, the first computer programme to beat a professional human Go player – reportedly generated 96 tonnes of carbon dioxide over 40 days of research training. This is equivalent to the total carbon footprint of 1,000 hours of air travel.
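As a rough sanity check on the comparison above, the article's two figures imply an emission rate of 96 kg of CO2 per hour of flying, which is in the same ballpark as commonly quoted per-passenger estimates for commercial flights (the per-hour framing here is an illustration, not a sourced figure):

```python
# Figures from the article; the derived per-hour rate is illustrative only.
total_co2_kg = 96 * 1000      # 96 tonnes of CO2, in kilograms
flight_hours = 1000           # hours of air travel cited as equivalent

kg_per_flight_hour = total_co2_kg / flight_hours
print(kg_per_flight_hour)     # 96.0 kg of CO2 per flight hour
```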

However, the right technology choices can mitigate the impact on the environment. For example, all-flash storage has lower energy and cooling requirements and is a more sustainable alternative to legacy storage systems, which consume more electricity and require a bigger physical footprint.

We should also consider the potential for AI to be used to augment sustainability efforts by providing data-driven insights into your organisation’s environmental footprint and where you can lower carbon emissions. So think about AI strategically, re-evaluate technology decisions, and optimise its potential while minimising the environmental impact.


Lim Hsin Yin, managing director for Singapore, SAS Institute

AI – and generative AI in particular – has taken the world by storm. Tools like ChatGPT are democratising AI and mark a major milestone in innovation. While generative AI can provide a competitive edge, improve productivity and increase efficiency, organisations should also take heed of the potential risks before investing in AI technology or solutions. An effective strategy starts with organisational values, chief among them the responsible use of AI.

At SAS, we propose that organisations look at a model based on the principles of human-centricity, transparency, robustness, privacy and security, inclusivity and accountability as a starting point for responsible usage of generative AI.


Our vision is a world where data empowers people to thrive. We pursue that vision through responsible innovation, ensuring trustworthy and ethical AI is embedded in our AI strategy and platform. Core to SAS’s DNA is placing our customers first and ensuring we do the right thing. By implementing generative AI intentionally, organisations can mitigate adverse impacts and promote positive outcomes for individuals and society as well as for their business.

Darren Reid, APJ security business unit director, VMware

The proliferation of AI has introduced a new dimension to cyber threats, amplifying both the sophistication and scale of attacks. Malicious actors are increasingly manipulating AI-powered tools and techniques to launch targeted and evasive attacks that bypass traditional security measures. The rapid advancement of AI-driven attacks poses significant challenges for cybersecurity, necessitating the development of AI-powered defences and robust countermeasures to effectively detect, mitigate, and respond to emerging cyber threats.

Training AI to counter cyber threats presents a different set of challenges. For algorithms to be trained to tackle sophisticated AI-powered attacks, they must first learn from data sets representative of malicious behaviour – data created specifically to avoid detection. This process is known as adversarial learning. Moreover, it is difficult for AI to identify malicious programs with high precision: it often generates errors known as “false positives”, which in turn require human intervention.

AI-powered network detection and response (NDR) represents a robust approach to defending networks against advanced cyber threats. By leveraging the capabilities of AI, NDR systems can significantly enhance the speed, accuracy, and effectiveness of detecting and responding to attacks. These systems analyse vast amounts of network traffic data, identify anomalies, and detect potential threats that may typically go unnoticed by traditional security measures.
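The anomaly-detection idea underpinning NDR can be illustrated with a minimal statistical sketch: flag any measurement that sits far from the baseline of recent traffic. This is a toy z-score detector, not any vendor's implementation; the metric, threshold and sample data are all illustrative assumptions (real NDR systems use far richer models over many signals):

```python
# Toy anomaly detector of the kind an NDR pipeline might apply to a
# per-host traffic metric. All names and values are illustrative.
from statistics import mean, stdev

def find_anomalies(samples, threshold=2.0):
    """Flag samples more than `threshold` standard deviations from the mean.

    The threshold is deliberately low: with a small sample, a single
    extreme outlier inflates the sample stdev, capping attainable z-scores.
    """
    mu, sigma = mean(samples), stdev(samples)
    if sigma == 0:
        return []
    return [x for x in samples if abs(x - mu) / sigma > threshold]

# Bytes transferred per minute by one host; the spike could be exfiltration.
traffic = [1200, 1150, 1300, 1250, 1180, 1220, 98000, 1190]
print(find_anomalies(traffic))  # → [98000]
```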

The integration of AI also enables NDR solutions to gather data comprehensively from multiple security technologies and to automate incident response. This automation streamlines the incident response process, allowing security teams to focus on stopping actual intrusions. AI-powered NDR tools constantly learn and adapt, automatically detecting advanced and ever-evolving threats and delivering detailed analyses of each attack and response.

AI is not solely associated with negative aspects; it also brings numerous positive developments. While it may have enabled more sophisticated cyber-attacks, organisations can ultimately leverage AI to counter those attacks and strike the right balance between cybersecurity and innovation.

Fran Rosch, CEO, ForgeRock


At ForgeRock, we believe trustworthy AI can protect digital identities while simplifying everyone’s access to the connected world. The powerful intelligence that comes with AI will unlock new opportunities by processing vast amounts of data faster and more accurately, so the decisions we make are smarter. But as with any emerging technology, we must safeguard against its misuse and fight for balanced AI laws and protections to ensure our humanity is preserved.

Now is the time to modernise our notion of trust and identity through the lens of AI threats like deep fakes and misinformation. This means finding new ways to use AI to deliver solutions that combat evolving threats to make the world a safer place. Together, we must resist the ‘gold rush’ mentality that AI presents and focus on how to use it responsibly. We approach this new frontier with optimism and commit to explaining how we use AI and collaborating with industry partners to help create a future where access to the digital world is both unhindered and secure.

Ramprakash Ramamoorthy, director of AI research, ManageEngine and Zoho Corporation

AI has transformed IT operations, particularly within the cybersecurity domain. Through AI and machine learning, organisations can now identify aberrant behaviour and maintain a secure environment. AI enables a proactive approach to threat response by detecting deviations in tracked metrics and consistently monitoring entity actions to detect threats at an initial stage.

However, as businesses in South East Asia increasingly rely on AI, they must remain equally cautious. The same transformative cybersecurity capabilities that make AI indispensable to enterprises can also be exploited by threat actors. It is crucial for organisations in the region to make appropriate technology investments and policy decisions proactively.

To fortify their cybersecurity defences, businesses should equip themselves with sophisticated threat intelligence systems and analytics that focus on user behaviour. As AI adoption reaches a tipping point, collaboration within the cybersecurity community is of utmost importance. Researchers, professionals, enterprises and policymakers in South East Asia should work in tandem to strengthen the collective security posture.

As AI develops at a rapid pace, organisations need to keep up with the latest advancements, best practices and potential risks associated with it to effectively harness its power while mitigating vulnerabilities.

Andy Ng, vice president and managing director for Asia South and Pacific Region, Veritas Technologies

While the true potential of AI is yet to be fully discovered, we can assume that its applications will be highly data-intensive, creating the need for enterprises to deploy efficient and responsible data management.

However, in today’s multi-cloud world, companies often struggle to manage the massive data deluge as the traditional data management approaches prove inadequate due to the lack of scalability, speed and visibility. As a result, organisations are revisiting their business processes and looking to integrate AI into their data management strategies with the promising prospect of enhancing efficiency. Done right, an organisation’s AI strategy will be a regular and seamless part of its overall data management strategy.

The panacea for data management in a complex, hybrid, multi-cloud environment is to deploy autonomous data management (ADM) based on AI. For instance, with AI-driven malware scanning and anomaly detection, organisations can manage their data and automate protection from cyber threats such as ransomware. AI also enables the automation of data management processes, minimising human intervention. This results in operational efficiencies, increased uptime, higher service levels and AI-driven insights for effective data archiving and intelligent decision-making.
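One concrete signal such AI-driven malware scanning can exploit: files scrambled by ransomware look statistically like random data, so their byte entropy sits near the 8-bits-per-byte maximum, while ordinary documents score far lower. The sketch below is a minimal illustration of that single heuristic (thresholds and sample data are assumptions, not Veritas's implementation; production systems combine many such signals):

```python
# Minimal entropy heuristic: encrypted/ransomware-scrambled content has
# near-maximal byte entropy. Illustrative only; not a vendor algorithm.
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte: 0.0 (uniform) up to 8.0 (random)."""
    if not data:
        return 0.0
    n = len(data)
    counts = Counter(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

plain = b"quarterly report: revenue up 4% year on year" * 100
scrambled = os.urandom(4096)  # stands in for a ransomware-encrypted file

print(round(shannon_entropy(plain), 2))      # low: repetitive text
print(round(shannon_entropy(scrambled), 2))  # close to 8.0
```

A scanner might flag files whose entropy suddenly jumps toward 8.0 during a burst of writes, which is exactly the deviation-from-baseline behaviour described above.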

The reliance on AI for data management also creates significant security risks if a proper data framework is not in place, since AI-powered systems depend on large sets of data. Hence, it is critical for organisations to ensure the integrity of any data processes that leverage AI and to defend against cyber threats by implementing robust encryption, access controls and authentication mechanisms. Another crucial aspect is addressing biases in AI algorithms to prevent discriminatory outcomes and enhance fairness in decision-making. By effectively managing these security concerns and biases, organisations can unleash AI’s potential in data management to achieve transformative business outcomes.

© 2024 The Edge Publishing Pte Ltd. All rights reserved.