Texas Investigates Meta and Character.AI Over Child Mental Health Claims

The Texas Attorney General's office has opened an investigation into Meta's AI Studio and Character.AI over claims that these platforms mislead young users by presenting themselves as mental health resources. The investigation seeks to determine whether the companies engaged in deceptive trade practices by misrepresenting their AI chatbots as legitimate sources of emotional support for children.
Why Is Texas Investigating?
Attorney General Ken Paxton emphasized the risks, stating that AI platforms "can mislead vulnerable users, especially children, into believing they’re receiving legitimate mental healthcare." He argued that the responses from these bots are often generic and based on harvested user data, not professional advice. The probe follows similar concerns raised at the federal level after reports emerged of AI chatbots engaging in inappropriate interactions with minors.
Key Concerns Raised
- Misrepresentation as Therapists: Both Meta and Character.AI are accused of presenting AI personas as professional therapeutic tools, even though they lack proper medical credentials or oversight.
- Popular Bots: User-created bots like "Psychologist" on Character.AI have gained popularity among younger users, despite the platform not being designed for therapy.
- Disclaimers and Safeguards: Meta asserts that it clearly labels its AI and provides disclaimers, yet critics argue many children may not read or understand these warnings.
Data Privacy and Targeted Advertising
The investigation also highlights privacy concerns. Both companies collect and track user data, which can be used for targeted advertising and algorithmic development. Meta's privacy policy allows data sharing with third parties for personalized outputs, and Character.AI's policy details data collection across multiple platforms for service personalization and advertising. These practices raise questions about the handling of minors' data, especially when the platforms are not intended for users under 13.
Legislative Context: KOSA and Industry Pushback
This scrutiny comes amid wider debates about child safety online. Proposed legislation like the Kids Online Safety Act (KOSA) aims to limit data collection and targeted advertising to children. However, strong industry lobbying has so far delayed its passage.
What’s Next?
Texas has issued civil investigative demands, legal orders requiring Meta and Character.AI to produce documents and data as part of the probe. Both companies maintain that their platforms are not designed for users under 13 and include safeguards to direct users to professional help when necessary. However, the effectiveness of these measures is being called into question.
Implications for Businesses and Parents
- For businesses: This case underscores the importance of clear labeling, user education, and strict data privacy controls when deploying AI tools, especially those accessible to minors.
- For parents: It’s a reminder to closely monitor children’s interactions with online AI platforms and be aware of how data may be used or shared.