#lethaltrifecta

2026-03-09

New, by me: How AI Assistants are Moving the Security Goalposts

AI-based assistants or “agents” — autonomous programs that have access to the user’s computer, files, online services and can automate virtually any task — are growing in popularity with developers and IT workers. But as so many eyebrow-raising headlines over the past few weeks have shown, these powerful and assertive new tools are rapidly shifting the security priorities for organizations, while blurring the lines between data and code, trusted co-worker and insider threat, ninja hacker and novice code jockey.

Read more (and boost please!):

krebsonsecurity.com/2026/03/ho

#openclaw #AI #agentic #aiagents #lethaltrifecta

a graphic and concept called the "lethal trifecta” by Simon Willison, co-creator of the Django Web framework. The lethal trifecta holds that if your system has access to private data, exposure to untrusted content, and a way to communicate externally, then it’s vulnerable to private data being stolen.This image shows three boxes of different colors: access to data, ability to externally communicate, and exposure to untrusted content.
Chema Alonso :verified:chemaalonso@ioc.exchange
2025-11-12
Hacker Newsh4ckernews
2025-09-19
Dr. Juande Santander-Velajuandesant@mathstodon.xyz
2025-09-04

Hope a) this does not enshittify Atlassian with the AI push, and b) this does not make Arc or Día browsers the preferred way to interact with Confluence/Jira.

I also hope that Atlassian is well aware of @simon’s lethal trifecta and does not make it easy to exfiltrate content with those AI agents…

Pessimistic me believes the hope above is unfounded 😞

#Atlassian #BrowserCompany #AI #LLM #LethalTrifecta
mstdn.social/@TechCrunch/11514

Miguel Afonso Caetanoremixtures@tldr.nettime.org
2025-08-09

"The core problem is that when people hear a new term they don’t spend any effort at all seeking for the original definition... they take a guess. If there’s an obvious (to them) definiton for the term they’ll jump straight to that and assume that’s what it means.

I thought prompt injection would be obvious—it’s named after SQL injection because it’s the same root problem, concatenating strings together.

It turns out not everyone is familiar with SQL injection, and so the obvious meaning to them was “when you inject a bad prompt into a chatbot”.

That’s not prompt injection, that’s jailbreaking. I wrote a post outlining the differences between the two. Nobody read that either.

The lethal trifecta Access to Private Data Ability to Externally Communicate Exposure to Untrusted Content

I should have learned not to bother trying to coin new terms.

... but I didn’t learn that lesson, so I’m trying again. This time I’ve coined the term the lethal trifecta.

I’m hoping this one will work better because it doesn’t have an obvious definition! If you hear this the unanswered question is “OK, but what are the three things?”—I’m hoping this will inspire people to run a search and find my description.""

simonwillison.net/2025/Aug/9/b

#CyberSecurity #AI #GenerativeAI #LLMs #PromptInjection #LethalTrifecta #MCPs #AISafety #Chatbots

Client Info

Server: https://mastodon.social
Version: 2025.07
Repository: https://github.com/cyevgeniy/lmst