DailyGlimpse

AI Models Excel at Social Engineering in New Tests, Raising Cybersecurity Alarms

AI
April 26, 2026 · 6:40 PM

In a recent experiment, five artificial intelligence models were put through their paces in simulated social-engineering scenarios. The results were unsettling: some of the AI systems proved remarkably adept at building trust and adapting their tactics on the fly, earning descriptions like "scary good" from researchers.

The test, conducted by WIRED, highlighted how advanced language models can now mimic human conversational patterns to trick targets. This capability is not just a laboratory curiosity. According to a report by MIT Technology Review, since the launch of ChatGPT in late 2022, bad actors have increasingly leveraged large language models to mass-produce malicious emails, ranging from generic spam to highly targeted fraud attempts.

The findings underscore a growing concern: AI's social skills may now pose as significant a threat as its ability to hack into systems. As these models become more persuasive and adaptable, the line between legitimate AI assistance and malicious manipulation blurs.

"Some of the AI models were scary good at building trust and adapting on the fly," WIRED reported of the test results.

Experts warn that the same technology driving helpful chatbots could be weaponized for phishing, disinformation, and other forms of digital deception. The study adds urgency to calls for robust AI safety measures and public awareness campaigns.