What is required to ensure responsible AI? (Part 2)

Nurdianah Md Nur • 5 min read
What are some responsible AI practices that organisations should adopt? Photo: Pexels

Responsible AI is about creating systems that are transparent, equitable, and accountable, avoiding biases that can lead to societal harm. As AI becomes ubiquitous, ensuring it upholds these values is crucial for maintaining trust and fostering innovation that benefits everyone. Besides the relevant initiatives by the Singapore government, what are some responsible AI practices that organisations should adopt?

Phoebe Poon, vice president of Product Management, Aicadium:

To ensure responsible use, AI providers and tech companies should implement robust AI governance frameworks that address data quality, security, and ethical guidelines by conducting regular audits, monitoring performance, and deploying bias mitigation strategies. Clear policies must also be established to ensure regulatory compliance and to foster transparency, accountability, and fairness in AI systems.
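As a rough illustration of what one such recurring audit might check, the sketch below computes a demographic parity difference over model decisions. The metric, the group names, and the toy data are assumptions for illustration only, not Aicadium's methodology; real audits span many fairness metrics and run against production predictions.

```python
# Minimal bias-audit sketch: demographic parity difference.
# Group labels and outcomes below are hypothetical toy data.

def selection_rate(outcomes):
    """Fraction of positive decisions (e.g. approvals) in a group."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(outcomes_by_group):
    """Gap between the highest and lowest group selection rates.
    Values near 0 suggest parity; large gaps warrant investigation."""
    rates = [selection_rate(o) for o in outcomes_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical audit run over two demographic groups
gap = demographic_parity_difference({
    "group_a": [1, 1, 0, 1, 0, 1],   # 4/6 approved
    "group_b": [0, 1, 0, 0, 1, 0],   # 2/6 approved
})
print(f"Demographic parity difference: {gap:.2f}")  # 0.33 -> flag for review
```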

It is also becoming increasingly imperative for end-user organisations to adopt industry-specific AI governance frameworks and build interdisciplinary teams comprising data scientists, IT security experts, and regulatory policy specialists, especially for critical industries like manufacturing, finance, or the public sector. Continuous education and active participation in industry associations will keep organisations updated on evolving standards and best practices. 

Additionally, organisations should invest in training programmes to raise awareness about AI ethics and responsible use among employees at all levels. Cultivating a culture of accountability and transparency, along with maintaining an open feedback loop between AI developers and users, will further enhance trust and effectiveness in AI applications.

See also: What is required to ensure responsible AI? (Part 1)

Paul Baptist, VP, Solution Engineering, Asia Pacific and Japan, Alteryx:

Industry frameworks, including our own, are guided by principles that reflect the core values of responsible AI: customer first, accountability, equality, integrity, and empowerment. These principles are a guiding light for safe, reliable AI that puts people first.

While clean and fairly sourced data is the foundation of effective AI, human oversight introduces accountability, and established governance controls ensure that AI systems operate in a trustworthy and responsible way.

Responsible AI is a continuous journey, adapting to new challenges alongside advancements. AI providers have a critical role to play in ensuring responsible development and deployment. This includes rigorous testing, validation of AI outputs, and adaptation to ensure compliance with existing regulations. However, achieving this goal requires a collaborative effort. Open dialogue between AI providers, governments and end-users is essential to identify potential pitfalls and proactively address ethical considerations.

Annabel Lee, head of Policy – Data and ASEAN, Amazon Web Services:

At Amazon, we believe responsibility and innovation must go hand in hand if we are to build the necessary trust in AI and harness the power of these transformative technologies for the public good. We believe it is vital that academic, industry and government partners globally continue to advocate for businesses to adopt a responsible-by-design approach to AI.

At Amazon, we build AI with responsibility in mind at every stage of our comprehensive process. Throughout design, development, deployment, and operations, we consider a range of factors including fairness, explainability, robustness and veracity, privacy and security, governance, transparency, controllability, and safety. We provide our customers with tools and resources to develop and use AI responsibly, helping them transform responsible AI from theory into practice.

Ravi Rajendran, area vice president for Southeast Asia, Elastic:

Responsible AI use requires a multi-pronged approach, starting with transparent development by tech companies. This means building ethical considerations into AI from the very beginning, including tackling bias, safeguarding user privacy, and clearly communicating limitations. This is especially important for financial services institutions and the government, as they often deal with a wealth of private and personal data. For these sectors, fostering a culture of responsible implementation also goes hand in hand with having clear frameworks for auditing and monitoring.

The next layer of safeguards is enhanced observability through tools like search AI platforms. Search functionalities empower analysts to prioritise critical information from alerts and analyse vast amounts of data. This fosters a collaborative approach between humans and AI, ensuring responsible use and maximising AI's positive impact while mitigating potential risks.
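To make that concrete, here is a minimal sketch of how an analyst might surface the highest-severity alerts first using the Elasticsearch Python client. The cluster URL, index name, and field names are assumptions for illustration, not a prescribed Elastic schema.

```python
# Hedged sketch: pull the newest high-severity alerts from a search index.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed local cluster

response = es.search(
    index="alerts",                                   # assumed index name
    query={"range": {"severity": {"gte": 7}}},        # high-severity only
    sort=[{"@timestamp": {"order": "desc"}}],         # newest first
    size=20,
)

# Print a triage view so analysts see critical items first
for hit in response["hits"]["hits"]:
    doc = hit["_source"]
    print(doc["@timestamp"], doc["severity"], doc.get("message", ""))
```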

Dr Ashley Fernandez, chief data and AI officer, Huawei International:

AI Verify's Project Moonshot is a testament to Singapore's efforts at the forefront of "progressive governance" of emerging technologies like large language models. The rapid rate of change in these technology offerings requires versatility in how we govern them to ensure they are fit for use, which is exactly what AI Verify and Project Moonshot demonstrate.

This effort, among many others globally, has necessitated that AI providers establish "locality" performance efforts to ensure their AI systems adhere to local government and national requirements before full-scale deployment or offering. This mode of operating is definitely going to be the de facto standard for all AI/tech providers in the coming months.

Henry Kho, area vice president and general manager for Greater China, Asean and South Korea, NetApp:

Data underpins all AI processes. In addition to public-private partnerships, it is imperative for companies to review their data management strategy to achieve responsible AI.

Broadly speaking, responsible AI is a method for creating AI algorithms that minimise sources of risk and bias throughout the AI lifecycle. To establish effective data governance and strong data practices, companies should consider four key principles: fairness (ensuring unbiased results), interpretability (ensuring data traceability), privacy (maintaining data confidentiality), and security.
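One way to picture these four principles in practice is as gate checks a dataset must pass before it is approved for training. The sketch below is a toy illustration under assumed field names and thresholds, not NetApp's tooling or any standard data-governance API.

```python
# Hedged sketch: the four principles as pre-training gate checks.
# Dataset structure, field names and thresholds are illustrative assumptions.

def passes_governance_gates(dataset: dict) -> tuple[bool, list[str]]:
    checks = {
        # fairness: no demographic group dominates the training data
        "fairness": max(dataset["group_shares"].values()) <= 0.6,
        # interpretability: every record can be traced to its source
        "interpretability": dataset["lineage_recorded"],
        # privacy: personally identifiable fields are masked
        "privacy": dataset["pii_masked"],
        # security: the data is encrypted at rest
        "security": dataset["encrypted_at_rest"],
    }
    failures = [name for name, ok in checks.items() if not ok]
    return not failures, failures

ok, failed = passes_governance_gates({
    "group_shares": {"group_a": 0.45, "group_b": 0.55},
    "lineage_recorded": True,
    "pii_masked": False,          # fails the privacy gate
    "encrypted_at_rest": True,
})
print(ok, failed)  # False ['privacy']
```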

However, given the massive volume of AI datasets, which are often unstructured and spread across disparate data silos, achieving these principles can be challenging. An intelligent data infrastructure can help companies simplify data mobility, protect critical data, and optimise cost and sustainability while enforcing compliance standards.

Responsible AI practices encompass everything and everyone in your AI effort: people, methods, processes and data. With thoughtful planning, organisational commitment, collaboration, and the right data environment, companies can then scale AI initiatives responsibly.
