Summary: According to a Reuters report, the FTC plans to demand internal documents from major AI firms as it examines the mental‑health risks chatbots may pose to children.
Why it matters
Expect tighter disclosure requirements and safety audits around content filters, age gating, and data collection. Platforms offering youth‑facing assistants should prepare for policy changes and potential enforcement remedies.
Key facts
- Focus: child safety, mental health impacts, data use
- Targets: major labs and consumer platforms
- Process: information demands and, potentially, hearings
What to watch
Any formal rulemaking or consent decrees; cross‑agency coordination with education and health regulators; and harmonization with EU/UK approaches.