How to innovate with AI in a privacy-protective way

Simon Dale • 4 min read
AI innovation should be grounded in the principles of accountability, responsibility and transparency. Photo: Unsplash

This year, the Singapore government unveiled national plans to invest more than S$1 billion in artificial intelligence (AI) over the next five years, enabling businesses to capitalise on AI and other technological advances. While this move will further propel enterprises to adopt or expand AI capabilities, it is crucial to ensure that any data fed into the algorithms is used responsibly and transparently.

Data privacy is top of mind

Data has become the new currency in the age of AI, and the efficacy of AI tools relies heavily upon the quality of data sets on which they are trained. At the same time, the use of data in AI initiatives has become a cause for concern amongst consumers in Singapore. The numbers speak for themselves – Adobe’s State of Customer Digital Experience report found that 61% of consumers fear unauthorised use of personal data as brands harness Generative AI, while 64% express concern over excessive data collection.

Data privacy issues can be tricky to navigate, due in part to the complex data regulations involved, which also differ across geographies. As businesses are under enormous pressure to innovate faster than ever to stay ahead of the game, it can be challenging to strike the right balance between technological innovation and data privacy. With the right combination of technical and legal expertise, a global view on policy and bold ambition, businesses can think big and take on new challenges in the evolving AI and privacy landscape.

Striking the balance between innovation and data privacy

When making choices about how to develop, use and regulate the latest technologies, privacy and security are critical considerations. In fact, lessons learned from privacy and security are transferable to AI. By shifting the way we think about data privacy within existing workflows, organisations can apply a privacy-by-design approach throughout the AI lifecycle, from development to use, while investing in emerging privacy-enhancing technologies such as federated learning and differential privacy.


To innovate in a privacy-conscious manner, organisations should leverage current processes for evaluating privacy (such as privacy impact assessments and data protection impact assessments) and reinforce them to account for new and increased risks posed by AI. Practising data minimisation and purpose limitation can help maintain quality of data, while respecting consumer personal data rights.

For businesses working with AI vendors, it is paramount to conduct due diligence on the privacy and security practices of those vendors. Clearly establishing whether the vendor is acting as a service provider (processor) or a business (controller), and implementing use limitations accordingly, is key.

Businesses should also build AI talent and train their staff on the potential harms associated with the use of AI. It is important to assemble cross-disciplinary and diverse teams to develop and review AI use cases. For example, at Adobe, AI-powered features with the highest potential ethical impact are reviewed by a diverse, cross-functional AI Ethics Review Board. Diversity of personal and professional backgrounds and experiences is essential to identifying potential issues from perspectives that a less diverse team might miss.


Privacy is never one-and-done. Companies must test AI systems regularly, assess them on an ongoing basis, keep abreast of new privacy-enhancing technologies, regulations and best practices, and continuously update their privacy, security and related AI practices accordingly.

Using technology to mitigate privacy risks

There are several technologies that will further enhance privacy for businesses. Some of these include:

  • Federated learning, which is an approach to machine learning that allows data to remain on local devices while still benefiting from collective intelligence. The models learn from decentralised data across devices and only the learned patterns are shared, offering a boost to privacy.
  • Differential privacy, which introduces "statistical noise", or randomness, into data, allowing overall trends to be analysed without compromising individual data points.
  • Homomorphic encryption, which is a form of encryption that enables computations to be performed on encrypted data without decrypting it first. Advances in this area could enable AI models to learn from encrypted data, making it a powerful tool for privacy-preserving data analysis.
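Of the three, differential privacy is perhaps the simplest to illustrate in code. The sketch below shows the classic Laplace mechanism: each value is clipped to a known range, so any single person's record can shift the sum by only a bounded amount, and noise calibrated to that bound is added before release. This is a minimal illustration only; the function names and parameters are ours, not from any particular library, and production systems should use audited implementations.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample from a Laplace(0, scale) distribution via inverse-CDF sampling."""
    u = random.random() - 0.5  # uniform on [-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_sum(values, epsilon: float, lower: float, upper: float) -> float:
    """Release the sum of `values` with epsilon-differential privacy.

    Clipping each value to [lower, upper] bounds the effect of adding or
    removing one record to max(|lower|, |upper|) -- the sensitivity that
    calibrates how much Laplace noise must be added.
    """
    clipped = [min(max(v, lower), upper) for v in values]
    sensitivity = max(abs(lower), abs(upper))
    return sum(clipped) + laplace_noise(sensitivity / epsilon)
```

Smaller values of epsilon mean more noise and stronger privacy; the analyst sees an accurate aggregate trend while no individual contribution can be reliably recovered.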

Mastering the ART of AI and privacy

AI innovation should be approached with thoughtfulness and grounded in the principles of accountability, responsibility and transparency (ART). It is not a myth that the ART of innovation can be achieved in a privacy-conscious manner. Doing so will allow organisations not only to harness this transformational technology to their advantage, but also to create and retain trust with customers and employees, ultimately giving those who do it well a competitive advantage in Singapore's digital economy.

Simon Dale is the vice president for Asia at Adobe.

© 2024 The Edge Publishing Pte Ltd. All rights reserved.