Artificial intelligence could help ‘normalize’ child sexual abuse as graphic images erupt online: experts
Introduction
Artificial intelligence is opening the door to a disturbing trend of people creating realistic images of children in sexual settings, which could increase the number of cases of sex crimes against kids in real life, experts warn.
The Impact of AI Platforms
AI platforms that can mimic human conversation or create realistic images exploded in popularity late last year into 2023 following the release of chatbot ChatGPT, which served as a watershed moment for the use of artificial intelligence. As the curiosity of people across the world was piqued by the technology for work or school tasks, others have embraced the platforms for more nefarious purposes.
The Warning from the National Crime Agency (NCA)
The National Crime Agency (NCA), the U.K.'s lead agency for combating organized crime, warned this week that the proliferation of machine-generated explicit images of children is having a "radicalizing" effect that is "normalizing" pedophilia and disturbing behavior toward kids.
“We assess that the viewing of these images – whether real or AI-generated – materially increases the risk of offenders moving on to sexually abusing children themselves,” NCA Director General Graeme Biggar said in a recent report.
The Scale of the Issue
The agency estimates there are up to 830,000 adults in the U.K., or 1.6% of the adult population, who pose some type of sexual danger to children. That estimate is 10 times greater than the U.K.'s prison population, according to Biggar.
The majority of child sexual abuse cases involve viewing explicit images, according to Biggar, and with the help of AI, creating and viewing sexual images could “normalize” abusing children in the real world.
“[The estimated figures] partly reflect a better understanding of a threat that has historically been underestimated, and partly a real increase caused by the radicalizing effect of the internet, where the widespread availability of videos and images of children being abused and raped, and groups sharing and discussing the images, has normalized such behavior,” Biggar said.
The Situation in the United States
Stateside, a similar surge in the use of AI to create sexual images of children is unfolding.
“Children’s images, including the content of known victims, are being repurposed for this really evil output,” Rebecca Portnoff, the director of data science at Thorn, a nonprofit that works to protect kids, told the Washington Post last month.
“Victim identification is already a needle-in-a-haystack problem, where law enforcement is trying to find a child in harm’s way,” she said. “The ease of using these tools is a significant shift, as well as the realism. It just makes everything more of a challenge.”
Community Guidelines and Workarounds
Popular AI sites that can create images based on simple prompts often have community guidelines preventing the creation of disturbing photos.
Such platforms are trained on millions of images from across the internet that serve as building blocks for AI to create convincing depictions of people or locations that do not actually exist.
Midjourney, for example, calls for PG-13 content that avoids "nudity, sexual organs, fixation on naked breasts, people in showers or on toilets, sexual imagery, fetishes." DALL-E, OpenAI's image-generation platform, allows only G-rated content, prohibiting images that show "nudity, sexual acts, sexual services, or content otherwise meant to arouse sexual excitement." However, people with ill intentions discuss workarounds for creating disturbing images on dark web forums, according to various reports on AI and sex crimes.
The Challenges for Law Enforcement
Biggar noted that AI-generated images of children also throw police and law enforcement into a maze of distinguishing fake images from those of real victims who need assistance.
"The use of AI for this purpose will make it harder to identify real children who need protecting, and further normalize child sexual abuse among offenders and those on the periphery of offending. We also assess that viewing these images – whether real or AI generated – increases the risk of some offenders moving on to sexually abusing children in real life," Biggar said in a comment provided to Fox News Digital.
The Role of AI in Sextortion Scams
AI-generated images can also be used in sextortion scams, with the FBI issuing a warning on the crimes last month.
Deepfakes often involve using deep-learning AI to edit videos or photos of people to make them look like someone else, and they have been used to harass or extort victims, including children.
“Malicious actors use content manipulation technologies and services to exploit photos and videos—typically captured from an individual’s social media account, open internet, or requested from the victim—into sexually-themed images that appear true-to-life in likeness to a victim, then circulate them on social media, public forums, or pornographic websites,” the FBI said in June.
SDGs, Targets, and Indicators
1. Which SDGs are addressed or connected to the issues highlighted in the article?
- SDG 5: Gender Equality
- SDG 16: Peace, Justice, and Strong Institutions
The issues highlighted in the article are connected to SDG 5, which aims to achieve gender equality and empower all women and girls. The article discusses the creation and proliferation of explicit images of children, which can lead to an increase in sex crimes against kids. This issue is related to the protection of children’s rights and the prevention of violence against them, which are key components of SDG 5.
The article is also connected to SDG 16, which focuses on promoting peaceful and inclusive societies for sustainable development, providing access to justice for all, and building effective, accountable, and inclusive institutions at all levels. The article highlights the need for regulation and law enforcement to address the use of artificial intelligence in creating and distributing explicit images of children. This relates to the goal of ensuring access to justice and strong institutions that can effectively combat crimes against children.
2. What specific targets under those SDGs can be identified based on the article’s content?
- Target 5.2: Eliminate all forms of violence against all women and girls in public and private spheres
- Target 16.2: End abuse, exploitation, trafficking, and all forms of violence against children
Based on the content of the article, the specific targets that can be identified are Target 5.2 under SDG 5 and Target 16.2 under SDG 16.
Target 5.2 aims to eliminate all forms of violence against all women and girls in public and private spheres. The creation and distribution of explicit images of children can be seen as a form of violence against girls, as it exposes them to harm and increases the risk of sexual abuse. Achieving this target requires taking measures to prevent and respond to such violence, including regulation and law enforcement.
Target 16.2 aims to end abuse, exploitation, trafficking, and all forms of violence against children. The article highlights the need to address the use of artificial intelligence in creating explicit images of children, which can lead to the exploitation and abuse of children. To achieve this target, it is necessary to strengthen institutions, enhance law enforcement capabilities, and regulate the use of AI in a way that protects children from harm.
3. Are there any indicators mentioned or implied in the article that can be used to measure progress towards the identified targets?
- Number of reported cases of AI-generated explicit images of children
- Number of prosecutions and convictions related to the creation and distribution of AI-generated explicit images of children
- Number of AI platforms implementing guidelines to prevent the creation of disturbing photos
- Number of victims identified and provided with assistance by law enforcement
The article does not explicitly mention specific indicators, but based on the issues discussed, the following indicators can be used to measure progress towards the identified targets:
- Number of reported cases of AI-generated explicit images of children: This indicator can measure the prevalence and detection of such images, providing insight into the scale of the problem.
- Number of prosecutions and convictions related to the creation and distribution of AI-generated explicit images of children: This indicator can assess the effectiveness of law enforcement efforts in addressing the issue and holding perpetrators accountable.
- Number of AI platforms implementing guidelines to prevent the creation of disturbing images: This indicator can measure the adoption of responsible practices by AI platforms to prevent the creation and distribution of explicit images of children.
- Number of victims identified and provided with assistance by law enforcement: This indicator can reflect the effectiveness of law enforcement in identifying and supporting victims of AI-generated explicit images.
4. Table: SDGs, Targets, and Indicators
| SDGs | Targets | Indicators |
|---|---|---|
| SDG 5: Gender Equality | Target 5.2: Eliminate all forms of violence against all women and girls in public and private spheres | Number of reported cases of AI-generated explicit images of children; number of prosecutions and convictions related to the creation and distribution of such images |
| SDG 16: Peace, Justice, and Strong Institutions | Target 16.2: End abuse, exploitation, trafficking, and all forms of violence against children | Number of reported cases of AI-generated explicit images of children; number of prosecutions and convictions related to the creation and distribution of such images; number of AI platforms implementing guidelines to prevent the creation of disturbing images; number of victims identified and provided with assistance by law enforcement |
Source: fox10phoenix.com