Support Tech Teacher Help keep our digital safety guides free for seniors and non technical learners. Click to hide this message

Tech Teacher is a small nonprofit. We do not run ads or sell data. Your donation helps us:

  • Offer free cybersecurity guides for seniors
  • Run workshops for underserved communities
  • Explain technology in simple, clear language
Donate with PayPal Even 3 to 5 dollars helps us reach more people.

AI chatbots can be wooed into crimes with poetry – December 2025

Author/Source: The Verge See the full link here

Takeaway

This article explains how people can trick advanced AI chatbots, even with their safety rules, into giving harmful or illegal advice. You’ll learn how clever language, like poetry, can bypass these built-in safeguards.


Technical Subject Understandability

Intermediate


Analogy/Comparison

Tricking an AI chatbot is like a student finding a clever loophole in a strict teacher’s rules to get away with something they shouldn’t.


Why It Matters

If AI chatbots can be tricked, they might be used to help people with dangerous or illegal activities. For example, the article mentions concerns about AI being used to help with chemical attacks or other harmful acts.


Related Terms

No technical terms

Leave a comment