Generative AI: Friend or foe?

Nurdianah Md Nur • 9 min read
Generative AI can help develop new ideas, and organisations that neglect the technology will risk losing competitiveness in the long term. Photo: Shutterstock

ChatGPT — which stands for Chat Generative Pre-Trained Transformer — has been drawing the attention of tech enthusiasts and industry leaders alike since its launch in late November last year.

Developed by the artificial intelligence (AI) lab OpenAI, ChatGPT can answer questions, generate convincing essays and more in a human-like manner. It has even solved complex exam questions from law, medical and business schools, including those at the University of Minnesota, Stanford Medical School and the University of Pennsylvania’s Wharton School of Business.

Thanks to ChatGPT going viral, generative AI is seeing a flurry of deal activity from venture capitalists and tech giants. For instance, AI copywriting app Jasper raised a US$125 million ($168 million) Series A round in October 2022. Meanwhile, Microsoft announced a new multiyear, multibillion-dollar investment in OpenAI earlier this year.

Why is there so much interest in generative AI? “Traditional AI/machine learning systems are trained to retrieve information from complex objects. For example, some AI-powered mobile apps can help identify a species of plant based on the photo uploaded. Generative AI does something completely different: It creates a complex object itself. ChatGPT, for example, can create poems and hold a basic human-like conversation,” says Pierre Alquier, professor in the department of information systems, decision sciences and statistics at ESSEC Business School Asia Pacific.

In other words, generative AI can produce original content in response to queries, drawing on the data it has ingested and its interactions with users, and in doing so displays a form of creativity.
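
To make the contrast concrete, here is a minimal sketch of the prompt-in, content-out pattern, using the OpenAI Python client as it existed around the time of writing; the model name and prompt are illustrative assumptions, not recommendations.

```python
# A minimal sketch of generating new content from a prompt, using the
# OpenAI Python library (circa early 2023). Model name and prompt are
# illustrative assumptions.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; supply your own key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # assumed model choice
    messages=[
        {"role": "user", "content": "Write a four-line poem about the sea."}
    ],
)

# Unlike a retrieval system, the reply is newly generated text
print(response["choices"][0]["message"]["content"])
```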

According to management consulting firm McKinsey and Co, this latest class of generative AI systems has emerged from foundation models — large-scale, deep learning models trained on massive, broad, unstructured data sets (such as text and images) that cover many topics. This allows developers to adapt the models for a wide range of use cases with little fine-tuning required for each task. As such, everyone — including those lacking specialised machine learning skills or with no technical background — can benefit from AI.
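
As a rough illustration of that point, the sketch below uses the open-source Hugging Face transformers library, where a single pretrained foundation model can be applied to a task with no task-specific fine-tuning by the user; the model choices here are library defaults, picked for illustration.

```python
# A sketch of applying pretrained foundation models to tasks without
# fine-tuning, via Hugging Face's transformers library. Model choices
# are illustrative defaults.
from transformers import pipeline

# Text generation from a general-purpose pretrained model
generator = pipeline("text-generation", model="gpt2")
print(generator("Generative AI is", max_length=30)[0]["generated_text"])

# The same library serves other tasks with other pretrained models,
# again with no fine-tuning by the user
summariser = pipeline("summarization")
article = (
    "Generative AI systems are built on foundation models, which are "
    "large deep learning models trained on broad, unstructured data sets. "
    "Because the models are so general, developers can adapt them to many "
    "use cases with little task-specific work."
)
print(summariser(article, max_length=30)[0]["summary_text"])
```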

It can be unsettling that generative AI performs some tasks much as a human would and is pushing into the creative realm, long perceived as unique to the human mind. However, the technology still requires human input and intervention to continuously “learn” new information and to be told what is correct and what is not.

Generative AI therefore augments our capabilities, enabling us to work more efficiently and effectively, rather than replacing humans in the workforce outright.

“To my knowledge, generative AI is still far from being reliable enough to complete tasks without human supervision. I spoke with many journalists and translators who were afraid they might be replaced by chatbots like ChatGPT in the near future. But the tool is certainly not reliable enough to replace them for now as it sometimes makes mistakes that clearly show it doesn’t understand what it’s talking about. At most, ChatGPT can assist journalists by writing first drafts or helping to rephrase parts of a text,” Alquier comments.

Beneficiaries of generative AI

The creative industry, highlights Alquier, could utilise generative AI to create artwork and music for video games, write movie scripts, and more.

Artist Glenn Marshall, for example, made a three-minute film using OpenAI’s CLIP, a neural network that efficiently learns visual concepts from natural language supervision. Called The Crow, the movie is about a dancer turning into a crow and is set in a post-apocalyptic-looking world generated by an AI. It won the Jury Award at last year’s Cannes Short Film Festival.

Apart from that, McKinsey and Co observes that generative AI is being used in various ways across business functions, including:

  • Marketing and sales — Crafting personalised marketing, social media, and technical sales content (including text, images, and video); creating assistants aligned to specific businesses, such as retail
  • Operations — Generating task lists for efficient execution of a given activity
  • IT/engineering — Writing, documenting, and reviewing code
  • Risk and legal — Answering complex questions, pulling from vast amounts of legal documentation, and drafting and reviewing annual reports
  • Research and development — Accelerating drug discovery through a better understanding of diseases and the discovery of chemical structures.

“Generative AI has the potential to introduce the next level of efficiency and personalisation for customer service, sales, commerce, marketing, and IT teams. If deployed correctly, with trust and responsibility guardrails, AI, including generative AI, makes the life of the employee more productive and rewarding, and the customer experience more engaging and personalised,” says a spokesperson from Salesforce, a cloud-based software company.

For instance, layering generative AI on top of Salesforce’s Einstein for Service and Customer 360 can help organisations automatically generate personalised responses for human customer service agents to quickly send to customers.

Doing so can also help create smarter chatbots that deeply understand, anticipate, and respond to customer issues. This will power better-informed answers to nuanced customer queries, helping to increase first-time resolution rates.
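
Salesforce has not published code for this, so the sketch below is a generic, hypothetical illustration of the pattern described: a model drafts a personalised reply, and the human agent decides whether it is sent. call_generative_model() and send_to_customer() are stand-in stubs, not real APIs.

```python
# Hypothetical human-in-the-loop sketch: the model drafts, the agent decides.
# call_generative_model() and send_to_customer() are illustrative stubs,
# not Salesforce APIs.

def call_generative_model(prompt: str) -> str:
    # Stub standing in for an LLM call (see the earlier OpenAI sketch)
    return f"Hi! Thanks for reaching out about: {prompt}. Here's how we can help..."

def send_to_customer(name: str, message: str) -> None:
    print(f"Sent to {name}: {message}")

def handle_ticket(name: str, issue: str) -> None:
    draft = call_generative_model(issue)
    print("Suggested reply:", draft)
    # The human agent, not the model, decides what the customer receives
    if input("Send this reply? [y/n] ").strip().lower() == "y":
        send_to_customer(name, draft)

handle_ticket("Alex", "my order arrived damaged")
```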

Potential misuse cases

Since generative AI is a relatively nascent technology, its evolving capabilities and uses create the potential for misapplication, misuse, and unintended or unforeseen consequences.

“Generative AI has the power to transform the way we live and work in profound ways and will challenge even the most innovative companies for years to come. But the technology is not without risks. It gets a lot of things right, but many things wrong too.

“As businesses race to bring generative AI to market, it’s critical that we do so inclusively and intentionally. It’s not enough to deliver the technological capabilities of generative AI. We must prioritise responsible innovation to help guide how this transformative technology can and should be used — and ensure that our employees, partners, and customers have the tools they need to develop and use these technologies safely, accurately, and ethically,” says a spokesperson from Salesforce.

Agreeing, Alquier says: “While most businesses don’t know yet what to do with generative AI, many people may already be using it for the wrong reasons. They could generate fake photos or videos that can ruin someone’s reputation, or generate fake news without even having to write it themselves.”

Misinformation or disinformation — wherein false and out-of-context information is spread to deceive or mislead — is perhaps one of the major dangers of generative AI. McKinsey and Co explains that the technology sometimes “hallucinates”: it confidently generates entirely inaccurate information in response to a user question, and has no built-in mechanism to signal this to the user or challenge the result. Compounding this, the generated content may be biased, since the systems can draw on training data containing unwanted biases that harm, or risk harming, particular classes and groups of people.

It is also worrying that filters are currently unable to catch inappropriate content. For instance, users of an image-generating application — which can create avatars from a person’s photo — received avatar options from the system that portrayed them nude, even though they had input appropriate photos of themselves.

Besides that, generative AI has raised concerns about intellectual property. “When a generative AI model brings forward a new product design or idea based on a user prompt, who can lay claim to it? What happens when it plagiarises a source based on its training data?” asks McKinsey and Co.

This is illustrated by the case of Stability AI, which was sued for violating copyright law earlier this year. Stock photo provider Getty Images claims that Stability AI copied millions of its photos without a licence and used them to train Stable Diffusion (Stability AI’s image generation model) to produce more accurate depictions based on user prompts.

De-risking generative AI

So, what does it take to ensure we build trusted, responsible and ethical generative AI systems that will not negatively impact society?

“There are five guidelines we’re using to guide the development of trusted generative AI at Salesforce and beyond. These guidelines are very much a work in progress, but we’re committed to learning and iterating in partnership with others to find solutions,” says the Salesforce spokesperson.

The first is accuracy. It is crucial to deliver verifiable results that balance accuracy, precision, and recall in the models by enabling customers to train models on their own data.

AI providers should communicate when there is uncertainty about the veracity of the AI’s response and enable users to validate these responses. This can be done by citing sources, explaining why the AI gives the responses it does (such as through chain-of-thought prompts), highlighting areas to double-check (like statistics, recommendations and dates), and creating guardrails that prevent some tasks from being fully automated (such as launching code into a production environment without a human review).
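
One of those ideas, highlighting areas to double-check, can be sketched very simply: scan a generated answer for numbers and dates and surface them for human verification. The regex below is a crude illustration, not a production approach.

```python
# A crude sketch of flagging statistics and dates in a generated answer
# for human verification. The regex is illustrative only.
import re

def flag_for_review(text: str) -> list[str]:
    # Numbers, decimals and percentages are obvious double-check candidates
    return re.findall(r"\d+(?:\.\d+)?%?", text)

answer = "Revenue grew 12.5% in 2022, reaching 340 million."
print("Verify before publishing:", flag_for_review(answer))
# -> Verify before publishing: ['12.5%', '2022', '340']
```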

Secondly, AI companies must make every effort to mitigate bias, toxicity and harmful output from their AI models by conducting bias, explainability and robustness assessments, as well as red teaming. They must also protect the privacy of any personally identifiable information in the training data and create guardrails to prevent additional harm. For example, they could force generated code to be published to a sandbox rather than automatically pushed to production.
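
That last example, forcing generated code into a sandbox, amounts to a simple policy check. A minimal sketch, with assumed environment names and an assumed approval flag:

```python
# A minimal sketch of the sandbox guardrail: generated code may always go
# to a sandbox, but never to production without explicit human approval.
# Environment names and the approval flag are assumptions.

SANDBOX = "sandbox"
PRODUCTION = "production"

def publish(generated_code: str, target: str, human_approved: bool = False) -> str:
    if target == SANDBOX:
        return "Published to sandbox"
    if target == PRODUCTION:
        if not human_approved:
            raise PermissionError(
                "Production deploys of generated code require human review"
            )
        return "Published to production after human review"
    raise ValueError(f"Unknown target: {target}")

print(publish("print('hello')", SANDBOX))                          # allowed
print(publish("print('hello')", PRODUCTION, human_approved=True))  # after review
# publish("print('hello')", PRODUCTION)  # would raise PermissionError
```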

Thirdly, they need to respect data provenance and ensure they have consent to use the collected information to train and evaluate their models. They must also be transparent that an AI has created content when it is autonomously delivered, such as indicating when a response to a consumer is provided by a chatbot instead of a human agent.
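
The disclosure half of that guideline is straightforward to enforce in code. A small sketch, with the label wording as an assumption:

```python
# A small sketch of labelling autonomously delivered AI content before it
# reaches the consumer. The label wording is an assumption.

def deliver(message: str, generated_by_ai: bool) -> str:
    if generated_by_ai:
        return f"[Automated reply from our virtual assistant]\n{message}"
    return message

print(deliver("Your order has shipped.", generated_by_ai=True))
```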

Empowerment is the fourth guideline. Although there are some cases where it is best to fully automate processes, there are other cases where AI should play a supporting role to the human or where human judgment is required. Organisations need to identify the appropriate balance to “supercharge” human capabilities and make these solutions accessible to all.

Finally, as AI providers and businesses strive to create more accurate models, they should develop right-sized models where possible to reduce the carbon footprint. When it comes to AI models, larger does not always mean better: in some instances, smaller, better-trained models outperform larger, more sparsely trained ones, says Salesforce’s spokesperson.

All in all, generative AI can help develop new ideas, and organisations that neglect the technology risk losing competitiveness in the long term. The goal is to use generative AI where it can support human creativity and work, rather than replace them.

To achieve that, McKinsey and Co advises organisations to assemble a cross-functional team — including data science practitioners, legal experts and functional business leaders — to identify the parts of their business where the technology could have the most immediate impact, and to put the right mechanism in place to monitor it. The team also needs to ensure that the generative AI models they plan to use meet legal and community standards, so that the organisation maintains trust with its stakeholders.
