Pennsylvania's attorney general filed suit against Character.AI, alleging the platform allowed chatbots to impersonate licensed psychiatrists without disclosing the bots' AI nature to vulnerable users. Governor Josh Shapiro's office claims the company violated consumer protection laws by enabling deceptive medical claims.

The lawsuit centers on Character.AI's failure to prevent bots from presenting themselves as real mental health professionals. Users interacting with these bots received psychiatric advice without knowing they engaged with artificial intelligence. The state argues this deception creates serious harm, particularly for individuals seeking genuine mental healthcare.

Character.AI markets itself as a platform for creating conversational AI companions. The company offers tools allowing users to build custom chatbots across various personas. Pennsylvania's complaint alleges inadequate safeguards prevented bots from falsely claiming professional medical credentials. The platform hosts thousands of user-created characters, some explicitly marketed as therapists or psychiatrists.

This case joins broader regulatory pressure on AI companies over content moderation and transparency. The Federal Trade Commission has increased scrutiny of AI chatbots making health claims. Earlier FTC actions targeted similar deceptive practices across the tech sector.

Character.AI has positioned itself in the competitive chatbot space alongside OpenAI's ChatGPT and Claude (Anthropic). The company raised significant venture funding, valuing it in the billions. However, the platform faced criticism over moderation challenges and inappropriate interactions involving minors.

Pennsylvania's lawsuit represents the first state-level enforcement action specifically targeting chatbot medical impersonation. The complaint seeks consumer restitution and penalties for unfair practices. Terms of service disclaimers, Pennsylvania argues, don't justify allowing bots to explicitly claim professional medical status.

The case signals states will enforce existing consumer protection laws against AI platforms making false professional claims. Regulators increasingly view AI transparency and accurate representation as baseline requirements, not optional features.

THE BOTTOM LINE: Pennsylvania's action establishes that disclaimers don't protect platforms enabling AI bots to pose as licensed medical professionals. State enforcement could force Character.AI and similar platforms to implement mandatory restrictions on health-related chatbot personas.