A growing number of grieving parents in the United States are raising concerns about the risks artificial intelligence and social media may pose to children. Their advocacy has intensified in recent years as some families claim that interactions with AI chatbots or harmful online content contributed to the deaths of their children.
One of the most widely discussed cases involves Megan Garcia, who said she lost her 14 year old son Sewell after he developed an intense relationship with an AI chatbot created by Character.AI. According to Garcia, the teenager frequently spoke with the chatbot and later shared suicidal thoughts during their conversations. She alleges the chatbot responded in ways that encouraged harmful actions.
Garcia first shared her story during an online meeting with other parents who had lost children in incidents connected to digital platforms. Many of those families initially blamed social media rather than artificial intelligence. However, the conversation revealed similarities in their experiences.
Julianna Arnold, whose daughter Coco died in 2022 after meeting a man through Instagram who later supplied her with fentanyl, said the stories shared by other parents reflected the same pattern of online risks. She concluded that AI technology could become another major factor in the safety challenges facing young internet users.
Parents unite across social media and AI concerns
For years, advocacy groups have warned about the impact of social media on teenagers. Parents and researchers have linked certain platform features to increased risks of depression, cyberbullying, exploitation, and drug exposure.
Recently, however, concerns have expanded to include AI chatbots and conversational tools. Some parents claim these systems can influence vulnerable users by generating responses that reinforce harmful thoughts.
As a result, parents who previously focused on social media safety have begun collaborating with families raising alarms about AI. Together they are lobbying lawmakers to strengthen rules governing both technologies.
Their activism includes attending congressional hearings, organizing demonstrations, and advocating for stronger child protection laws. Many parents have also gathered outside a major courtroom in Los Angeles where a landmark case seeks to hold technology companies accountable for alleged harms linked to social media use.
Advocates argue that stronger regulation could bring a turning point similar to legal actions against tobacco companies decades ago.
Global concern about youth online safety
Governments around the world have started examining the impact of digital technologies on children. Some countries have already introduced restrictions.
For example, Australia recently banned children under the age of 16 from accessing social media platforms. Other countries, including Malaysia, Spain, and Denmark, are evaluating similar policies as public concern continues to grow.
In the United States, however, progress has been slower. Lawmakers have introduced dozens of proposals to regulate social media platforms, but many have stalled due to industry lobbying and debates over free speech rights for teenagers.
Efforts to regulate artificial intelligence have faced even greater challenges. Some political leaders argue that strict rules could slow innovation and reduce the country’s competitiveness in the global technology race.
Tech companies respond with safety measures
Technology companies say they recognize these concerns and have begun adding new safety tools to their platforms. These include stronger parental controls, age restrictions, and improved monitoring systems.
Google licenses the technology used by Character.AI, and both companies recently settled a lawsuit with Garcia and other families connected to the alleged chatbot incidents. The terms of the settlement were not publicly disclosed.
In a public statement, a company spokesperson said protecting young users remains a central goal when designing AI products.
Meanwhile, Meta Platforms, the parent company of Instagram and Facebook, has also stated that it continues to research youth safety issues and collaborate with experts and law enforcement to address emerging risks.
Despite these assurances, many parents believe existing protections remain insufficient. They argue that companies often prioritize rapid innovation over safety safeguards.
Legislative efforts face political obstacles
Parents have pushed strongly for the passage of the Kids Online Safety Act, a bipartisan proposal designed to require platforms to reduce addictive features and protect minors from harmful content.
The bill passed the United States Senate with strong support but later stalled in the House of Representatives. Advocacy groups have since shifted some of their efforts toward state level legislation.
Research also suggests that chatbot use among teenagers has grown quickly. A survey by the Pew Research Center reported that 64 percent of teenagers had used chatbots by late 2025. This rapid adoption has intensified the urgency among safety advocates.
Parents have organized protests in Washington, including a demonstration outside the National Gallery of Art. During the protest, activists projected a message criticizing the influence of technology companies on government decision making.
Ongoing lawsuits could shape the future of regulation
Legal challenges may play a major role in shaping the future of online safety policy. A high profile lawsuit in Los Angeles accuses several major technology companies, including YouTube, Snap Inc., Meta Platforms, and TikTok, of designing addictive systems that harm young users.
The companies deny the allegations. However, a court ruling in favor of the plaintiffs could result in substantial financial penalties and force companies to redesign key features of their platforms.
Advocates believe such outcomes could significantly influence how digital platforms operate in the future.
Future outlook for AI and online safety
The debate surrounding youth safety in digital environments continues to evolve as artificial intelligence becomes more integrated into everyday technology. Experts expect further scrutiny of AI driven chat systems, recommendation algorithms, and social media design.
At the same time, regulators must balance innovation with protection. Stronger safeguards could help reduce risks, but overly restrictive rules might slow technological development.
For families who have experienced loss, the issue remains deeply personal. Their advocacy has brought renewed attention to the responsibilities technology companies hold as digital tools increasingly shape the lives of young people.




















