(Credit: Robalito/Shutterstock)
From digital therapy to virtual romance, new study exposes the hidden world of teen-AI relationships
CHAMPAIGN, Ill. — In an age when teenagers turn to TikTok for life advice and Instagram for validation, a new digital confidant has entered their lives: artificial intelligence. According to new research, parents have little idea what their children are telling these systems, or of the emotional relationships their kids are forming with them.
Researchers from the University of Illinois Urbana-Champaign have uncovered a significant disconnect between how parents think their teenagers use artificial intelligence and the complex reality. Led by information sciences professor Yang Wang and doctoral student Yaman Yu, the work is one of the first comprehensive examinations of how teenagers interact with generative AI (GAI) and the risks that come with it.
“AI technologies are evolving so quickly, and so are the ways people use them,” says Professor Wang, who co-directs the Social Computing Systems Lab, in a statement. “There are some things we can learn from past domains, such as addiction and inappropriate behavior on social media and online gaming.”
The study’s findings paint a picture of teenagers using AI in ways their parents never imagined. While parents typically view AI as primarily an academic tool functioning like a search engine, teens are turning to AI chatbots for emotional support, relationship advice, and social interaction. These AI companions are increasingly embedded in popular platforms like Snapchat and Instagram, where teens incorporate them into group chats and sometimes even develop romantic attachments to them.
“It’s a very heated topic, with a lot of teenagers talking about Character AI and how they are using it,” notes Yu, referring to a platform where users can create and interact with character-based chatbots.
The research team analyzed social media discussions and conducted in-depth interviews with 20 participants – seven teenagers and 13 parents. Their methodology included analyzing 712 posts and 8,533 comments on Reddit related to teenagers’ use of GAI, though only 181 items proved relevant after filtering.
The study revealed significant misconceptions about GAI among both parents and children. Parents were largely unaware of their children’s use of advanced tools like Midjourney and DALL-E for image generation, or Character AI for companionship. More concerning was parents’ limited understanding of the data their children might be sharing with these AI systems.
While parents worried about basic demographic data collection, they “did not fully appreciate the extent of sensitive data their children might share with GAI…including details of personal traumas, medical records and private aspects of their social and sexual lives,” the researchers wrote.
The study found that teenagers have their own set of concerns, including addiction to AI chatbots, the potential for harassment through AI-generated content, and unauthorized use of their personal information. They also expressed broader societal concerns about AI replacing human labor and intellectual property infringement.
What makes this situation particularly challenging is the inadequacy of current safety measures. Unlike traditional social media platforms, GAI systems generate unique content in real time, making harmful material harder to identify and intercept. The researchers emphasize that both the risks and the mitigation strategies are more complex than simply blocking inappropriate content.
Looking ahead, the research team is developing practical solutions. They’re creating a taxonomy of risk categories to help identify early warning signs of risky behavior, such as excessive time spent on GAI platforms or concerning conversation patterns. They’re also collaborating with Illinois psychology professor Karen Rudolph, director of the Family Studies Lab, to establish age-appropriate interventions.
“This is a very cross-disciplinary topic, and we’re trying to solve it in cross-disciplinary ways involving education, psychology and our knowledge of safety and risk management,” Yu explains. “It has to be a technical and a social interaction solution.”
The study will be presented at the IEEE Symposium on Security and Privacy in May 2025.
Paper Summary
Methodology
The study employed two methods of data collection. First, researchers collected Reddit data on April 9, 2024, using the Python Reddit API Wrapper, gathering posts from various subreddits related to teenagers and AI technology. From 712 posts and 8,533 comments initially collected, 181 relevant items were identified after filtering for teen authorship and AI-related content.
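For readers curious about what this kind of collection pipeline looks like in practice, here is a minimal sketch using PRAW, the Python Reddit API Wrapper named in the paper. The subreddit, keyword list, post limit, and credentials below are illustrative placeholders, not the study’s actual parameters or filtering criteria.

```python
# Minimal sketch of Reddit data collection with PRAW (the Python Reddit API
# Wrapper). Subreddit names, keywords, limits, and credentials here are
# illustrative placeholders, not the study's actual parameters.
import praw

reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",          # placeholder OAuth credentials
    client_secret="YOUR_CLIENT_SECRET",
    user_agent="teen-gai-study-sketch/0.1",
)

# Hypothetical keyword filter for AI-related discussion
AI_KEYWORDS = {"chatgpt", "character ai", "midjourney", "dall-e", "gai"}

posts, comments = [], []
for submission in reddit.subreddit("teenagers").new(limit=500):
    text = f"{submission.title} {submission.selftext}".lower()
    if any(keyword in text for keyword in AI_KEYWORDS):
        posts.append({"id": submission.id, "title": submission.title})
        submission.comments.replace_more(limit=0)  # flatten "load more" stubs
        for comment in submission.comments.list():
            comments.append({"post_id": submission.id, "body": comment.body})

print(f"Kept {len(posts)} posts and {len(comments)} comments after filtering")
```

In the actual study, raw collections like this were further filtered for teen authorship and AI relevance, which is how the initial 712 posts and 8,533 comments were winnowed down to 181 relevant items.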
The second phase consisted of semi-structured interviews conducted between January and May 2024 with seven teenagers and 13 parents. Parent participants were recruited through Prolific based on specific criteria: prior experience with generative AI, U.S. residency, English fluency, and having at least one child aged 13-17. Teen participants were recruited through local high schools and Prolific, with parental consent required for participation.
Key Results
The study revealed three key findings. First, teenagers frequently use AI for emotional support and companionship, often treating AI chatbots as therapists or friends. Second, there’s a significant gap between parents’ and teens’ understanding of AI risks – teens worried about addiction to virtual relationships and AI misuse in social groups, while parents focused on data collection and inappropriate content exposure. Third, existing parental controls on AI platforms proved inadequate, primarily offering only basic content restrictions and age verification.
Study Limitations
The researchers identified several key limitations. The rapid evolution of AI technology meant that some platforms, like Character.ai, emerged as significant during interviews but weren’t prominent in the initial Reddit data collection. The use of ChatGPT as an example in interview guides may have primed participants to focus on this platform. Additionally, the study focused on U.S.-based, English-speaking users, potentially missing cultural variations in AI usage and parental control approaches.
Discussion & Takeaways
The researchers recommend developing more comprehensive protection systems, including parent-AI collaborative systems for content moderation, real-time risk monitoring capabilities, and better tools for family communication about AI use. They emphasize the need for age-appropriate content controls and transparent risk disclosures on AI platforms. The study suggests that current approaches to digital safety need significant updates to address the unique challenges posed by generative AI technology.
Paper Information
This research appears as a preprint on arXiv (2406.10461v2), authored by researchers from the University of Illinois at Urbana-Champaign, Pennsylvania State University, The Hockaday School, and Dougherty Valley High School. The study was approved by the researchers’ institutional ethical review board and data protection office.
While explicit funding information is not disclosed in the paper or press release, the research appears to be supported through the participating academic institutions. The authors do not report any conflicts of interest or external funding sources that could influence the study’s findings.