California lawmakers are moving to tighten guardrails around the use of artificial intelligence (AI) in healthcare with a new bill aimed at curbing misleading representations by these systems.
Assembly Bill 489, which was the subject of a Senate committee hearing on Monday (Aug. 18), would prohibit AI systems from presenting themselves as licensed healthcare professionals. It extends existing laws that penalize deceptive claims by human practitioners.
Under current state law, it is a crime for unlicensed people to use credentials such as ‘M.D.’ to imply that they are medically licensed. The bill, introduced by Assemblymember Mia Bonta, D-Oakland, would make it a violation for AI tools to employ such terms in both functionality and advertising, unless overseen by a licensed provider.
That scope includes outright misrepresentations, such as a chatbot claiming to be a doctor, as well as more subtle signals, like the use of professional terminology or a clinician-like conversational tone.
The bill would “prohibit the use of specified terms, letters, or phrases to falsely indicate or imply possession of a license or certificate to practice a health care profession, as defined, enforceable against an entity who develops or deploys artificial intelligence technology that uses one or more of those terms, letters, or phrases in its advertising or functionality.”
Supporting the bill is Mental Health America of California.
“This bill incorporates safeguards to ensure youth are not receiving false and potentially harmful information related to their mental health or substance use challenges,” said Danny Thirakul, the organization’s public policy coordinator, in a letter to a state Senate committee.
The bill comes amid rapid adoption of AI-powered platforms and digital tools in healthcare. According to a PYMNTS Intelligence report, three out of 10 Generation Z and millennial patients saw a doctor remotely during their most recent medical visit. Moreover, nearly one in five consumers now use health-tracking apps or devices.
But AB 489 “signals more than just a tweak to existing healthcare law—it’s a glimpse into how the next generation of regulation may shape the future of AI development and deployment in healthcare,” wrote Alaap Shah, co-chair of the AI Practice Group at law firm Epstein Becker Green, in a Friday (Aug. 15) blog post.
Shah said the bill sets up three areas where companies will need to adapt to comply.
Further, the “broad sweep” of the bill requires more clarification, including how it would be implemented in practice, according to a Feb. 20 blog post from law firm Nixon Peabody. For example, will state licensing agencies have the capacity to monitor and oversee compliance?
“Californians deserve truth, honesty and transparency in their healthcare. Generative AI systems are not licensed health professionals, and they shouldn’t be allowed to present themselves as such. It’s a no-brainer to me,” Bonta said in a Feb. 10 statement introducing the bill.
AI systems that seem “human-like” pose unique risks to children, who tend to be more trusting of AI systems and disclose private information to them, Bonta added.
Co‑sponsors include SEIU California and the California Medical Association (CMA).
CMA President Shannon Udovic‑Constant said in an April 1 statement that the bill’s “proactive approach to regulating artificial intelligence ensures that technology supports, rather than replaces, physician decision‑making.”
Read more:
Reddit Post Reignites Debate Over AI’s Role in Medical Advice
Apple Reportedly Developing AI Agent ‘Doctors’ in Latest Health Push
AI Medical Note-Taking Apps Enjoy Healthy Wave of Investment
The post California Bill Targets Misleading AI Used in Healthcare appeared first on PYMNTS.com.