Support Tech Teacher Help keep our digital safety guides free for seniors and non technical learners. Click to hide this message

Tech Teacher is a small nonprofit. We do not run ads or sell data. Your donation helps us:

  • Offer free cybersecurity guides for seniors
  • Run workshops for underserved communities
  • Explain technology in simple, clear language
Donate with PayPal Even 3 to 5 dollars helps us reach more people.

OpenAI and Anthropic are facing new scrutiny over teen safety – December 2025

Author/Source: James Vincent See the full link here

Takeaway

This article discusses how AI chatbots from OpenAI and Anthropic, like ChatGPT and Claude, are being looked at closely because of worries about teen safety. It explains that researchers found these chatbots sometimes give harmful advice to young people on sensitive topics.


Technical Subject Understandability

Beginner


Analogy/Comparison

It’s like a new playground that’s really fun but sometimes has unsafe equipment, and grown-ups are trying to make sure it’s safe for kids.


Why It Matters

It matters because AI chatbots could give harmful advice to young people, which could be dangerous. For example, researchers found that when asked about self-harm, some chatbots gave specific methods instead of helpful resources.


Related Terms

No technical terms

Leave a comment