
Gaia Bernstein argues that, despite AI’s promise, its rapid adoption—especially AI
companions—poses serious public-health risks to children and society. She urges holding
tech companies accountable and restricting minors’ access to AI companion bots, while
emphasizing that schools and businesses will shape whether AI benefits or harms society.
Last week I flew from New York to Davos, Switzerland, to participate in the World
Economic Forum. I checked in online, received my boarding pass and bag tag, and dropped
off my bag at the airport. I bought a bottle of water at a self-checkout kiosk. Without
speaking to a single person, I made it through security (using digital ID) all the
way to the gate. I was traveling to Davos to talk about how AI and digitization erode
human connection. But at Davos, the conversation was all about the marvels of AI: its
inevitability and the global race for dominance.
I came to Davos as part of the Human Change Coalition, a global group of experts focused on the risks that irresponsible technological deployment poses to children and to future generations. It includes world-renowned experts and leaders, such as Jonathan Haidt, the author of The Anxious Generation, and Tristan Harris, co-founder of the Center for Humane Technology. Previously, our presentations focused on the harms of excessive screen time and social media. This year, our talks centered on the risks of AI. I spoke on panels and roundtables about AI companions, loneliness, and the transformation of the workplace, education and businesses across the economy.
Human Change Coalition experts identified AI companions—bots with humanlike features—as a major threat to children. Many teens already treat these AI bots as friends, advisers and even romantic partners. I explained that AI companions pose three types of harm. First, safety harms: Some bots have persuaded users to kill themselves and even induced psychosis. Second, addictive harms: AI companies design the bots to keep users online for as long as possible. Third, developmental harms: Since these bots are more available than humans, and agreeable by manipulative design, kids may come to prefer them to people, undermining the development of critical real-life social skills.
All of these harms are public health harms. The way to address the full spectrum of harms is to impose a minimum age of 18 for access to AI companion bots. Removing these AI products from the market for minors is the only way to create incentives for tech companies to design a safe alternative. Our experience with social media showed that waiting could cause irreversible harm to another generation of children.
In my book Unwired, I outlined a roadmap for holding tech companies accountable for their harmful products. What struck me in Davos is that although tech companies are responsible for recklessly distributing their products, other actors can shape what happens next. These actors are intermediaries, including schools and non-tech businesses. They do not sell AI products but adopt them in various ways that impact others.
Let us start with schools. Rebecca Winthrop of the Brookings Institution, an expert with the Human Change Coalition, just released a major report on AI in education, concluding that the risks of AI in education outweigh the benefits. However, the report does highlight that AI can save overworked teachers significant time, particularly on administrative tasks and assessments. The time saved could give teachers the opportunity to give students individualized attention. Here is where schools and districts play an important role. They can either use the opportunity to enhance teacher-student instruction, or cut staff and increase the workload for those who remain. In other words, school authorities have the power to shape how AI will influence education.
Non-tech businesses face similar choices. For example, many businesses have replaced
humans with AI chatbots or AI voice systems in their customer service. Leaders are
pressured to incorporate AI to save costs and stay competitive. Saving costs often
comes at the expense of human jobs. In the automated AI world, I walked through the
airport on my way to Davos without any human interaction. Like schools, businesses
have consequential decisions to make over AI. The outcome is anything but inevitable.
Gaia Bernstein is the Technology, Privacy and Policy Professor of Law, co-director
of the Institute for Privacy Protection, and co-director of the Gibbons Institute of
Law, Science and Technology. Bernstein specializes in law and technology, focusing
on addictive technologies, social media regulation and artificial intelligence regulation.
She is currently serving as a visiting fellow at the Brookings Institution.
For more information, please contact:
Seton Hall Law Office of Communications
973-642-8714
[email protected]




