
Children make up one-third of internet users worldwide, and this generation will be the first to grow up with digital devices as a constant presence in their lives. With this in mind, national AI strategies and the deployment of AI systems should be designed to accommodate the needs and potential of children.

UNICEF has highlighted the urgent need to study the impacts of generative AI on children. In November 2021, it published its Policy Guidance on AI for Children, setting out requirements that governments, policymakers and businesses must meet when developing, implementing or using child-centered AI.

In the long run, AI systems must be equitable and inclusive, catering to children from all backgrounds, especially marginalized communities. To support children's development and well-being, AI initiatives must also protect data and privacy, and ensure safety. They should provide transparency, explainability, and accountability; empower stakeholders with AI knowledge; prepare children for future AI advancements; and create an enabling environment.

Sooner rather than later, children will be highly exposed to AI systems, and if their data is used, it must be collected and processed responsibly, with clear purposes and safeguards.

Without a doubt, the way AI is shaped today will impact future generations. As UN Secretary-General António Guterres has noted, “present generations have a responsibility to ‘halt and prevent developments that could threaten the survival of future generations … [including] new technologies.’”


Ensuring AI Safety for Children

Innovating responsibly also means creating AI technology that is not only advanced but also safe for everyone, including children. This requires strict guidelines and continuously updated protocols to prevent harmful incidents.

In July 2024, research by University of Cambridge academic Dr. Nomisha Kurian identified a significant “empathy gap” in AI chatbots that puts young users at risk. Recent incidents illustrate the danger. In response to a query, Amazon’s Alexa instructed a 10-year-old to touch a live electrical plug with a coin. Snapchat’s My AI reportedly gave a user posing as a 13-year-old girl tips on how to lose her virginity to an adult. These instances highlight the urgent need for a proactive approach to creating a child-safe AI environment.

With this in mind, Dr. Kurian proposed a comprehensive 28-item framework to help stakeholders, companies and educators develop and deploy AI responsibly and address children’s unique needs and vulnerabilities.

“Children are probably AI’s most overlooked stakeholders,” Dr. Kurian said. “Very few developers and companies currently have well-established policies on child-safe AI. That is understandable because people have only recently started using this technology on a large scale for free.”

It is important to bear in mind that even though chatbots have remarkable language abilities, they may not handle the unpredictable or emotional aspects of a conversation well. This is why a chatbot may say something out of context, or even harmful, to a child using it without supervision.
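
To make the risk concrete, here is a minimal, hypothetical Python sketch of the kind of output guardrail a child-facing chatbot could sit behind. The pattern list, function names, and fallback message are assumptions for illustration only; production systems rely on trained moderation models rather than keyword lists.

```python
# A rough sketch of an output guardrail for a child-facing chatbot.
# The pattern list and function names are illustrative assumptions,
# not any vendor's actual safety layer; real deployments use trained
# moderation models rather than keyword matching.

UNSAFE_PATTERNS = [
    "electrical plug",                        # dangerous "challenges"
    "keep this a secret from your parents",   # classic grooming language
]

FALLBACK = "I'm not able to help with that. Let's talk about something else."

def is_safe_for_child(reply: str) -> bool:
    """Crude screen: reject any reply matching a known unsafe pattern."""
    lowered = reply.lower()
    return not any(pattern in lowered for pattern in UNSAFE_PATTERNS)

def guarded_reply(model_reply: str) -> str:
    """Show the model's reply only if it passes the child-safety screen."""
    return model_reply if is_safe_for_child(model_reply) else FALLBACK

print(guarded_reply("Touch a coin to the exposed electrical plug!"))  # blocked
print(guarded_reply("Here is a fun fact about pandas."))              # allowed
```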


AI's Dark Side Requires Parental Vigilance

Given how accessible AI is today, it is every parent’s duty to keep a close eye on what their child is accessing online. Otherwise, children could fall victim to the dangerous tactics lurking in this virtual world. One of the most common is AI-generated child sexual abuse material (CSAM): the use of AI algorithms to create fabricated, explicit content involving minors.

If left unchecked, AI-generated CSAM could amplify sextortion, as online predators can use these AI-generated images to threaten or coerce children into complying with their demands, whether for money or inappropriate acts.

To address this, in April 2024, leading AI firms including Meta, Microsoft, Amazon, and OpenAI signed the Safety by Design pledge to uphold child safety principles. These guidelines aim to combat the sexual abuse of children and the dissemination of AI-generated CSAM. The pledge commits signatories to integrating safety measures across the entire AI lifecycle, from early development to deployment and maintenance.

Another scenario is AI-driven online grooming, in which deepfakes are used to make a predator appear friendly and trustworthy. As algorithms grow smarter, it becomes easier to detect a user’s behavioral patterns, interests, and even emotional states, making grooming, especially of children, much easier to accomplish.

Parental control tools offered by internet service providers or third-party applications, which filter content, monitor online activity, and block access to inappropriate websites, can help parents navigate this new landscape.
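
As a simplified illustration of how such filtering works, the Python sketch below checks a URL’s host against a blocklist. The domain names and helper functions are invented for the example; commercial parental-control tools use curated category databases and DNS-level filtering rather than a hand-written set.

```python
# A minimal sketch of the blocklist idea behind content filters.
# The domains below are placeholders; real tools rely on curated
# category databases and DNS-level filtering, not hand-written sets.

from urllib.parse import urlparse

BLOCKED_DOMAINS = {"adult-example.com", "gambling-example.com"}

def is_blocked(url: str) -> bool:
    """Block a URL whose host matches, or is a subdomain of, a blocked domain."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

print(is_blocked("https://adult-example.com/page"))       # True
print(is_blocked("https://ads.gambling-example.com/"))    # True (subdomain)
print(is_blocked("https://en.wikipedia.org/wiki/Panda"))  # False
```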

Parents should also stay informed about the latest trends in AI technology and online safety, and regularly update the security settings on the devices and software their children use.


Protecting ‘Generation AI’

According to UNICEF, parents and caregivers are key stakeholders in children’s AI-powered lives. Yet in UNICEF-led workshops, some child participants acknowledged that most parents lack sufficient knowledge of these topics, and said that their parents don’t respect their privacy.

Once children get used to having AI around them, they may feel more inclined to interact with it. Interestingly, separate research has found that children are much more likely to open up to chatbots, treating them as if they were human. Dr. Kurian’s study suggests that the “friendly and lifelike designs” of chatbots encourage children to trust them, and this trust, if exploited, could draw a child into a harmful situation.

Moreover, research by Common Sense Media found that 50% of students aged 12-18 have used ChatGPT for school, but only 26% of their parents knew they were using the technology. At present, children use chatbots informally, since underage users cannot create their own accounts.

On a more positive note, Virginia Tech researchers are working to build and train AI-powered chatbots that help children and teens identify and avoid cyber predators. However, authentic cybergrooming conversation data, which is needed to train the chatbots, is scarce.

Despite this challenge, the researchers plan to use human-centered methods and establish an ethical platform in which adolescents and their parents can collaborate to generate the required data while enhancing their awareness of cybergrooming in the process.
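
For a sense of what researchers could do with such data once collected, here is a hypothetical Python sketch of a first-pass grooming-risk text classifier. The file name, column names, and choice of TF-IDF with logistic regression are placeholders, not the Virginia Tech team’s actual pipeline.

```python
# A hypothetical sketch of training a grooming-risk text classifier
# on labeled conversation data. File name, columns, and model choice
# are placeholders, not the researchers' actual pipeline.

import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Assumed layout: one message per row; label 1 = grooming risk, 0 = benign.
df = pd.read_csv("conversations.csv")

X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["label"], test_size=0.2, random_state=42, stratify=df["label"]
)

# Bag-of-words features feeding a linear classifier: a common first baseline.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=2),
    LogisticRegression(max_iter=1000),
)
model.fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```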


