Continue reading this on our app for a better experience

Open in App
Floating Button
Home Digitaledge In Focus

What is required to ensure responsible AI? (Part 1)

Nurdianah Md Nur
Nurdianah Md Nur • 6 min read
What is required to ensure responsible AI? (Part 1)
Every participant in the AI ecosystem has a role to play in ensuring the technology is used safely and ethically. Photo: Pexels
Font Resizer
Share to Whatsapp
Share to Facebook
Share to LinkedIn
Scroll to top
Follow us on Facebook and join our Telegram channel for the latest updates.

 

The Singapore government has collaborated with the private sector to ensure AI safety and ethics through AI Verify (an AI governance testing framework and software toolkit) and Project Moonshot (a generative AI testing toolkit to address large language model safety and security challenges).

However, there is more that AI providers, tech companies and end-user organisations can do to ensure responsible AI usage. Industry leaders share their thoughts below.

Olaf Pietschner, CEO of Asia Pacific, Capgemini:

Singapore’s Project Moonshot is leading the way in ensuring safe and ethical AI. Building on this momentum, AI providers, users and regulators need to collaborate for a brighter AI future. This will be crucial to minimise the risks of AI misuse, such as spreading fake news and deepfakes or even malicious cyberattacks.

Technology companies must prioritise bias detection and mitigation in AI development as transparency in how AI systems reach decisions is vital to build trust and prevent unintended consequences. Meanwhile, end users need education on AI capabilities and limitations. They should be empowered to question AI-generated content, especially in high-risk situations and seek human intervention when needed.

See also: 80% of AI projects are projected to fail. Here's how it doesn't have to be this way

Last but not least, regulators can strengthen consumer safeguards through clear disclosure requirements and user control mechanisms. Collaborative efforts between all stakeholders will help unlock AI’s potential to revolutionise various aspects of our lives while minimising the risks associated with this powerful technology.

Nick Eayrs, vice president, Field Engineering, Asia Pacific and Japan, Databricks:

Organisations should implement rigorous AI red teaming, especially for large language models, to ensure safe development and deployment. This involves automated probing to detect vulnerabilities, biases and privacy issues, followed by human evaluation and detailed analysis. Jailbreaking techniques test the model’s robustness against attacks.

See also: Responsible AI starts with transparency

Prioritising model supply chain security and establishing an ongoing feedback loop are crucial. The red teaming approach categorises probes into security, ethics, robustness and compliance, ensuring comprehensive evaluation. This methodology helps organisations advance their AI use responsibly, adapting to the evolving AI safety and security landscape.

Eunice Huang, head of AI and Emerging Tech Policy, Google Apac:

AI has great potential to help tackle critical global challenges, such as climate change, drug discovery and economic productivity.

But AI also presents potential risks. For example, while generative AI makes creating new and personalised content easier, it presents new challenges for content trustworthiness. To address this, we’ve developed tools such as SynthID — a new tool for watermarking and identifying AI-generated images, text, video and audio — to help people make more informed decisions about interacting with AI-generated content.

At Google, we are working to ensure AI is developed and deployed responsibly. In 2018, we released our AI Principles, outlining our commitment to develop responsibly and establish areas we will not pursue. We regularly share updates on our progress.

AI is an emerging technology; new questions will continue to arise as the technology advances. That’s why we believe it’s essential for industry, government and civil society to work together to ensure a responsible approach.

Edmund Heng, partner, Technology Risk, KPMG in Singapore:

To stay ahead of the latest tech trends, click here for DigitalEdge Section

As AI applications increase, so do the associated risks — data security threats and ethical and regulatory concerns. Companies must adopt responsible and ethical AI practices to navigate these complexities and promote sustained innovation. Local companies should explore leveraging existing tools and frameworks, such as the AI Verify toolkit or the AI Verify Project Moonshot, but more can be done.

Firstly, these companies should establish clear guidelines and guardrails for AI use that are aligned with existing frameworks and policies and cover the development, implementation, and application stages. A robust governance structure must ensure accountability across all lines of defence.

Secondly, companies should use this opportunity to enhance their technology, processes and data management. Some companies have successfully turned potential risks into initiatives to improve tech and governance.

Thirdly, fostering awareness and training for responsible AI use among employees and stakeholders is crucial. This includes careful curation of AI outputs and results. Lastly, continuous monitoring and regular reviews of AI models against the established guidelines are essential to ensure they remain ethical and compliant. These proactive approaches enable companies to adapt and thrive amidst evolving ethical concerns.

Wu Shi Wei, chair of SGTech’s AI and High-Performance Computing Chapter and Huawei Cloud Apac’s chief technology officer:

Tech companies can create specific guidebooks for employees tailored to the various AI software they use in accordance with IMDA’s recommendations. This increased user education would be an important step towards responsible AI usage. SGTech also highly advocates for Digital Trust; part of that is encouraging companies to use the many AI tools they have access to ethically.

Additionally, tech companies can expand hiring for AI-related compliance roles. Many new roles and skills have been borne out of AI, including AI compliance officers and content auditors. Companies can increase accountability by hiring for such skills to ensure AI is used ethically.

SGTech has also been pushing for the larger adoption of a skills-based approach in hiring to keep up with the realistic needs of the digital workforce. Alternatively, companies can set aside budgets to reskill existing employees for roles like data or risk analysts.

Chong Yang Chan, managing director, Asean, Qlik:

Organisations need to maintain high data quality standards to implement AI initiatives successfully. This means ensuring elements like accuracy, diversity and security are put in place to form the foundation of high-quality and reliable data. With reliable data, businesses can streamline the organisation and analysis of their data and seamlessly integrate it into AI systems, yielding accurate and actionable insights.

AI providers and tech companies are vital in helping organisations improve data quality. They can collaborate with governments to develop robust data governance frameworks and offer advanced data management tools and cutting-edge data processing technologies. These solutions should facilitate regular data audits and ensure seamless data integration into AI systems for accurate and actionable insights.

Additionally, vendors should use advanced analytics and machine learning techniques to enhance tools that manage and integrate unstructured data, which can account for up to 80% of enterprise data. By providing these comprehensive solutions, AI providers and tech companies can help organisations mitigate the risks of operational failures, regulatory breaches and financial losses, ensuring a more effective AI strategy that drives better decision-making.

×
The Edge Singapore
Download The Edge Singapore App
Google playApple store play
Keep updated
Follow our social media
© 2024 The Edge Publishing Pte Ltd. All rights reserved.