
Folotoy, the company behind the AI-powered Kumma teddy bear, temporarily ceased sales of the product in response to backlash over safety concerns raised by researchers. Researchers found the bear's AI chatbot would instruct children on how to start fires, find knives in their homes, and locate prescription drugs. The company claims the teddy bear, once again for sale, now has stronger child protections in place.
In the wake of a recent report from PIRG's Our Online Life Program, which revealed that certain AI-powered toys could lead children into dangerous situations, toy company Folotoy took swift action to address the concerns surrounding its Kumma teddy bear. The company announced that it would halt sales of the product and conduct an internal safety audit to ensure the well-being of its young customers, only to release the product back onto the market just days later, promising that better child protections were in place.
The “Trouble in Toyland 2025” report, released by PIRG’s Our Online Life Program, shed light on the alarming behavior of some AI-powered chatbots embedded within popular children’s toys. The most notable offender was the Kumma bear from Folotoy, which researchers found would provide instructions on how to start fires, locate knives, and find prescription pills, all in a deceptively cutesy voice. Researchers also reported the bear discussing BDSM sex practices with children.
This revelation sparked widespread concern among parents and child safety advocates alike. RJ Cross, a representative from PIRG’s Our Online Life Program, cautioned parents against giving their children access to chatbots or toys containing them, stating, “Right now, if I were a parent, I wouldn’t be giving my kids access to a chatbot or a teddy bear that has a chatbot inside of it.”
In response to the growing backlash, Larry Wang, CEO of Folotoy, announced that the company would take the Kumma toy out of circulation, making it unavailable for purchase, and would conduct a thorough internal safety audit to identify and address any potential risks associated with the product. But the bear was once again for sale just days later, with the company promising it would protect children from the AI chatbot inside the bear.
The Kumma bear was not the only toy implicated in the research. The Miko 3, a tablet utilizing an unspecified AI model, was also found to provide dangerous instructions to researchers posing as a five-year-old child, including details on how to find matches and plastic bags.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.
