The Internet has become a digital Wild West. As the online world grows, so does the range of risks its users face. According to Microsoft’s Global Online Safety Survey 2024, 57% of teens in Singapore reported encountering an online risk in the past year, and their top concerns are personal risks (77%) and sexual risks (44%).
It is widely recognised that a greater commitment is needed to tackle online child sexual exploitation, and Microsoft has been actively working to prevent it across its services. “Now, we’re also looking at advancing child wellness, such as finding the best way to address online abuses like bullying, harassment or unwanted contact from strangers. We want to provide digital tools that ensure children can thrive online,” Courtney Gregoire, the tech giant’s chief digital safety officer, tells DigitalEdge.
Another key area Microsoft is focusing on is tackling the misuse of the Internet by terrorists and extremists.
“The live streaming of the 2019 Christchurch terrorist attack on two mosques in New Zealand [was a wake-up call for the government, civil society and tech sector] to work together to prevent the exploitation of the Internet and prepare responses to such incidents across online platforms. Microsoft monitors its platforms for content produced by perpetrators with the intent of viral dissemination. If such content is identified, we will implement strategies [based on our content incident protocol] to prevent its widespread distribution,” says Gregoire.
The threat of generative AI
The accessibility and ease of use of generative AI have led to an increase in online harm. “For instance, we are seeing [generative AI being used to produce] synthetic child sexual exploitation material at scale. We have yet to see it being used in the terrorism and violent extremism space as much, but creating deepfakes with fraud implications is another big concern,” Gregoire says.
She adds that Microsoft is taking a holistic approach to addressing generative AI abuse. Firstly, it has put up guardrails to prevent its generative AI technology from being used to create content that violates its terms of service and code of conduct, such as images that promote hate speech.
Secondly, it leverages watermarking and content provenance, which use metadata to establish the origin and authenticity of digital content and “keep the [online] ecosystem healthy.”
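To make the provenance idea concrete: the goal is to bind metadata about a piece of content, such as where it came from, to the content itself in a way that can be verified later. In practice this is standardised through efforts like the C2PA Content Credentials specification, which Microsoft co-founded; the Python sketch below is only a toy illustration of the sign-and-verify flow, using a hypothetical shared key where real schemes rely on public-key certificates.

```python
import hashlib
import hmac
import json

# Hypothetical signing key held by the content creator or platform;
# real provenance schemes use public-key certificates, not a shared secret.
SIGNING_KEY = b"demo-provenance-key"

def attach_provenance(content: bytes, origin: str) -> dict:
    """Bundle a content hash and origin metadata with a signature."""
    manifest = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "origin": origin,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_provenance(content: bytes, manifest: dict) -> bool:
    """Check that both the content and its metadata match the signature."""
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["content_sha256"] == hashlib.sha256(content).hexdigest())
```

Because the signature covers both the content hash and the origin fields, tampering with either the file or its claimed history invalidates the manifest, which is what lets downstream platforms trust the metadata.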
Detecting abusive content across the ecosystem is the third focus area. Gregoire says: “We are using the same technology and approaches we’ve had for a while. This includes hash matching technology, [which assigns images and videos a “hash” or unique digital signature that can be compared against a database, such as a photo database of child sexual abuse material]. Besides that, our threat intelligence monitoring teams will look into online forums on our platforms when they detect people talking about the adversarial use of AI [and stop them if we can get jurisdiction against them].”
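Hash matching itself is simple to sketch. The snippet below shows a minimal exact-match version in Python: it fingerprints a file with SHA-256 and checks the digest against a placeholder database (the hash listed is illustrative, not a real entry). Production systems such as Microsoft’s PhotoDNA go further, using perceptual hashes that still match after resizing or re-encoding, and compare against curated industry databases rather than a local set.

```python
import hashlib

# Placeholder set of known-abusive content hashes (hex digests).
# Real deployments match against curated external databases.
KNOWN_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def matches_known_content(path: str) -> bool:
    """Hash a file and check it against the known-hash database."""
    sha256 = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large videos never need to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            sha256.update(chunk)
    return sha256.hexdigest() in KNOWN_HASHES
```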
She also believes generative AI could help reduce online risks. “We’re in the early stages of leveraging generative AI to understand the nuances of online content. This can help enhance detection of abusive online content with less impact on human content moderators.”
“Generative AI can also help detect inauthentic accounts earlier and prevent their creation. Those accounts are primarily used to perpetrate fraud or other harm by sending spam or phishing emails, or to hide a perpetrator’s identity as they attempt to harm a child,” she adds.
Realising a safer online world
Ultimately, creating a safer Internet for everyone requires more than just tech companies’ efforts.
The online environment reflects the real world, so governments, industries, academia and individuals need to work together to safeguard it. [The challenge here is that each might have a] unique perspective of online safety. So, it is important to identify our North Star clearly and have an open, trusted dialogue about the online harms we’re seeing and how to address them, which might include looking at regulations.
Courtney Gregoire, chief digital safety officer, Microsoft
She continues: “Some of the online harms are interrelated and the worst thing to do is tell the victim to file individual reports of the different abuses they experienced [because of siloed systems or platforms]. By taking a multi-stakeholder collaborative approach, we are putting the victim first, as it clarifies where to make a [comprehensive] report and what the remedies are.”
Regulations, she adds, play a crucial role. “A perpetrator can easily migrate to another service [if one platform prevents them from creating and disseminating harmful content]. So, appropriate regulations ensure all platforms play by the same rules to raise the bar and enable a safer online environment.”