Statement on CA Social Media Addiction Trial | A New Era of Tech Accountability

The core mission of the Kapor Center family of organizations is to create a more equitable and inclusive technology ecosystem, and harness technology’s potential to benefit society. For many years, we have supported efforts to raise awareness of Big Tech’s harms affecting the most marginalized and advocate for accountability and policy change. The landmark verdicts this week holding social media companies liable for harms to children and youth represent an important win–and a moment for a broader call-to-action.

We need collective action to document the harms of AI, advance responsible AI policies and accountability mechanisms, and invest in responsible AI solutions. Read our full statement below.

Allison Scott, Ph.D.
CEO
Kapor Foundation

Patrick Armstrong
VP of Technology Policy and Government Affairs
Kapor Center Advocacy

Lili Gangas
Chief Technology Community Officer
Kapor Foundation

A New Era of Tech Accountability: Why AI Guardrails are Urgently Needed to Protect Youth and Foster Innovation

Two landmark verdicts this week sent an unmistakable message to technology companies that, for far too long, have operated without accountability for the harms they have caused by putting profits above people. A New Mexico jury ordered Meta to pay $375M for failing to protect children from online predators, and a California jury found both Meta and Google liable for knowingly designing addictive platforms that damaged the mental health of a young woman who began using their products as a child, and ordered them to pay $6M in damages. These verdicts are long overdue and provide a warning to the entire tech industry.  

When tech companies are allowed to operate without guardrails or accountability, communities are harmed, public trust erodes, and we miss the opportunity to accelerate innovations that could be harnessed to improve people’s lives. 

Despite years of advocates, researchers, parents, and whistleblowers raising alarm bells about harms and demanding action, companies have largely refused to change their policies, practices, and algorithms. Most recently, they have disbanded teams focused on ethics, trust, and safety, and spent millions of dollars to lobby against any regulation of their technologies. Meanwhile, public trust in Big Tech companies and products has been significantly eroded: the overwhelming majority of Americans disapprove of Big Tech CEOs, confidence in Big Tech firms has declined, and Americans are more supportive of government intervention. Countries around the world, like Australia and France, have already moved to ban social media for children outright, with many more countries considering bans. Parents in the United States have played a significant role in advocating for child online safety bills in Congress, but to no avail. Several states have taken action by passing their own restrictions–while fighting against efforts by Big Tech pushing for federal preemption that would remove state-level protections.

The social media companies at the heart of these trials are now key players in AI development. They have provided a model for the AI industry to follow, including adopting anti-regulation stances and refusing to implement practices to protect children. AI companies appear content to pursue the same approach social media companies did, opting to “move fast and break things” by accelerating AI deployment to young people at all costs and despite credible risks.

Some important data points to highlight:

  • Two-thirds of teens have used AI chatbots, and Black and Latino youth are more likely to use chatbots than their peers. U.S. adults are far more concerned about AI technologies than hopeful about their promise.
  • Red flags were raised when two teens died by suicide and their parents filed lawsuits against OpenAI and Character.AI. A wave of additional lawsuits has been filed against AI companies to hold them accountable for their AI chatbots contributing to teen suicide and addiction.
  • Character.AI and Google agreed in January 2026 to settle lawsuits alleging the AI chatbot contributed to mental health crises and suicides among young people.  Snap and TikTok also settled ahead of trial at the beginning of the year, with thousands of cases from teens, parents, and attorneys general still unresolved. 
  • In August 2025, a bipartisan coalition of 44 state attorneys general sent a formal letter to Google, Meta, and OpenAI expressing grave concerns about the safety of children using AI chatbot technologies. 
  • There are over 95 chatbot-specific bills under consideration across 34 states and at the federal level.

This trend is noteworthy. The same wave of public outrage, litigation, and regulatory action that eventually came for social media is already impacting AI–and we have the opportunity to get it right this time by keeping up the pressure for policies to protect young people.

At the Kapor Foundation and the Kapor Center, we believe that technology–including AI–holds great potential for tackling some of society’s most pressing challenges, but it requires a responsible approach to design and development, and careful attention to potential risks to society. In this moment, we must reflect on the lessons learned from what social media companies and product developers got wrong, and on the impacts of government inaction and the absence of meaningful regulation. We must also recognize that we ceded power over the regulation of tech companies to the companies themselves and their lobbyists. As we face significant pressure for unregulated AI growth, resistance to guardrails, and the belief that exploitative models are required for success in a profit-at-all-costs climate, we must mobilize resources, support organizers, shift investments, and promote policy change to protect youth and families from harm.

Investing in policy change is the most urgent and consequential action we can take right now to ensure that we address and mitigate harms so that AI can be used to transform lives.

We believe the path forward is clear and call for three distinct actions: 

  1. Organizing and Advocacy: We need robust, well-resourced advocacy efforts for identifying harms of AI, raising awareness, and advancing policies or other accountability mechanisms. Grassroots organizations, civil rights groups, coalitions of parents, and whistleblowers have mobilized and fought for vulnerable populations; we need to equip them with the resources to continue working with and advocating on behalf of communities.
  2. Policy Change: We need government officials who will be champions at the local, state, and federal levels to enact policies that establish real guardrails, ensure safety, and give people the confidence to adopt and benefit from AI for good. We cannot allow tech lobbyists to limit progress, and we must invest more heavily in pursuing policy priorities that benefit people and protect kids. 
  3. Investing in Responsible AI Development: We need to drive investments in technology solutions that adopt principles for responsible, ethical, and equitable innovation, understanding that responsible AI investments can be both profitable and beneficial to society. 

We all have a role to play. The time to act is now.

The Kapor Foundation and Kapor Center Advocacy would like to specifically thank the youth and parent advocates, grassroots organizers, scholars, journalists, and legal experts who have worked tirelessly for many years to push for greater tech safety and accountability. Your efforts have been central to these wins and to building a more equitable tech sector.

Allison Scott, Ph.D. CEO, Kapor Foundation
Patrick Armstrong, VP of Tech Policy and Government Affairs, Kapor Center Advocacy
Lili Gangas, Chief Technology Community Officer, Kapor Foundation

Learn more about Kapor Foundation’s Responsible AI Principles and Responsible AI and Tech Justice Guide for K-12.