#MaliciousAI

2025-11-19

“WormGPT did in 30 seconds what used to take hours to build.” - Matt Durrin, LMG Security

Attackers are now using malicious AI to launch holiday scams at a scale we’ve never seen before. And those stolen consumer credentials? They’re flowing straight into SSO, Microsoft 365, and VPN attacks.

We just published a breakdown of this year’s AI-driven holiday fraud surge — plus a free checklist your team can use today: lmgsecurity.com/holiday-hacker

#Cybersecurity #Fraud #AI #MaliciousAI #Holidays #HolidayScams #RemoteWork #Infosec

Fucking weirdo (magnolia@tech.lgbt)
2025-08-27

Ooo, that's a spicy one: using a locally deployed AI agent to develop ransomware scripts on the fly. BOLO. cybersecuritynews.com/first-ai

#MaliciousAI #AIRansomware #Cybersecurity #Ransomware #AI #cyberthreats #TTP

2024-12-30

The thing to keep in mind about Large Language Models (LLMs, what people currently call AI) is that even though human knowledge, in the form of language, is fed into them during training, they store only statistical models of that language, not the knowledge itself. Their responses are constructed through statistical analysis of the context of the language that came before.

Any appearance of knowledge is pure coincidence. Even on the most “advanced” models.

Language is how we convey knowledge, not the knowledge itself. This is why a language model can never actually know anything.
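The point above can be made concrete with a toy example (mine, not the poster's): a bigram model that stores nothing but word-to-word transition counts. It generates plausible-looking sequences purely from statistics of its training text, with no representation of what any of the words mean. Real LLMs are vastly larger and use neural networks rather than lookup tables, but the sketch illustrates the same principle of prediction without knowledge.

```python
import random
from collections import defaultdict

def train_bigram(text):
    """Record, for each word, the words that followed it in the text."""
    model = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        model[a].append(b)  # duplicates encode observed frequency
    return model

def generate(model, start, n=8, seed=0):
    """Emit words by sampling each next word from the observed followers."""
    random.seed(seed)
    out = [start]
    for _ in range(n):
        followers = model.get(out[-1])
        if not followers:
            break  # dead end: this word never had a successor
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the model stores statistics of language the model knows nothing"
m = train_bigram(corpus)
print(generate(m, "the"))
```

Whatever sentence this emits, the model has no idea whether it is true; it only knows which words tended to follow which.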

And this is why they’re so easy to manipulate into conveying objectively false information, in some cases, maliciously so. ChatGPT and all the other big vendors do manipulate their models, and yes, in part, with malice.

#LargeLanguageModels #LLM #AI #NotAI #ChatGPT #ChatGPTIsNotAI #MaliciousAI #NotIntelligent #ArtificialIntelligence
