Character.AI has announced new safety features for its platform following lawsuits alleging the company’s bots contributed to self-harm and exposure to inappropriate content among minors. The update comes just days after parents’ concerns prompted legal action against the platform’s creators, who have since moved to roles at Google.
Character.AI introduces safety features amid lawsuits over risks to minors
The lawsuits claim Character.AI “poses a clear and present danger to public health and safety,” and seek either to take the platform offline or to hold its developers accountable. Parents allege that dangerous interactions occurred on the platform, including instructions for self-harm and exposure to hypersexualized content. Notably, one mother filed a lawsuit holding the company responsible for her son’s death, claiming it knew of the potential harm to minors.
Character.AI’s bots run on a proprietary large language model designed to create engaging fictional characters. The company has recently developed a model specifically for users under 18. This new model aims to minimize sensitive or suggestive responses in conversations, particularly violent or sexual content. The company has also promised to display pop-up notifications directing users to the National Suicide Prevention Lifeline when self-harm comes up in conversation.
Character AI in legal trouble after 14-year-old’s devastating loss
Interim CEO Dominic Perella stated that Character.AI is navigating a unique space in consumer entertainment rather than merely providing utility-based AI services. He emphasized the need to make the platform both engaging and safe. However, social media content moderation presents ongoing challenges, particularly with user interactions that can blur the lines between playful engagement and dangerous conversation.
Character.AI’s head of trust and safety, Jerry Ruoti, indicated that new parental controls are under development, although parents currently lack visibility into their children’s usage of the app. Parents involved in the lawsuits reported having no knowledge that their children were using the platform.
In response to these concerns, Character.AI is collaborating with teen safety experts to enhance its service. The company will improve notifications that remind users how much time they have spent on the platform, with future updates potentially limiting how easily those reminders can be dismissed.
Additionally, the new model will restrict bot responses that reference self-harm or suicidal ideation, aiming to create a safer chat environment for younger users. Character.AI’s measures include input/output classifiers that target potentially harmful content, along with restrictions on users’ ability to edit bot responses. These classifiers are intended to filter out violating inputs before a harmful conversation can begin.
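To make the idea concrete, here is a minimal sketch of how input/output classifier gating generally works in a chat pipeline. This is an illustration only, not Character.AI’s actual system; the labels, threshold, and the keyword-based classify function are hypothetical stand-ins for a trained safety classifier.

```python
from dataclasses import dataclass


@dataclass
class ClassifierResult:
    label: str    # e.g. "safe", "self_harm", "sexual_content" (hypothetical labels)
    score: float  # classifier confidence in [0, 1]


def classify(text: str) -> ClassifierResult:
    # Placeholder: a real system would call a trained safety classifier here.
    flagged_terms = {"self-harm": "self_harm"}
    for term, label in flagged_terms.items():
        if term in text.lower():
            return ClassifierResult(label=label, score=0.95)
    return ClassifierResult(label="safe", score=0.99)


BLOCK_THRESHOLD = 0.8  # assumed cutoff for blocking content


def respond(user_message: str, generate_reply) -> str:
    # Input classifier: refuse to start a harmful conversation at all.
    verdict = classify(user_message)
    if verdict.label != "safe" and verdict.score >= BLOCK_THRESHOLD:
        return "I can't help with that. If you're struggling, please reach out to a crisis line."

    reply = generate_reply(user_message)

    # Output classifier: screen the model's reply before it reaches the user.
    verdict = classify(reply)
    if verdict.label != "safe" and verdict.score >= BLOCK_THRESHOLD:
        return "[response withheld by safety filter]"
    return reply
```

The key design point is that filtering happens on both sides of the model: a blocked input never reaches the character bot, and a flagged output never reaches the user.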
Amid these improvements, Character.AI acknowledges the inherent complexity of moderating a platform built for fictional conversation. Because users interact freely, distinguishing harmless storytelling from genuinely troubling dialogue remains a challenge. Even positioned as an entertainment product, the company’s effort to refine its models to identify and restrict harmful content remains critical.
Character.AI’s efforts reflect broader industry trends as seen in other social media platforms, which have recently implemented screen-time control features due to rising concerns over user engagement levels. Recent data reveals that the average Character.AI user spends approximately 98 minutes daily on the app, comparable to platforms like TikTok and YouTube.
The company is also introducing disclaimers to clarify that its characters are not real, countering allegations that they misrepresent themselves as licensed professionals. These disclaimers will help users understand the nature of the conversations they are engaging in.
Featured image credit: C.ai