#LLMVulnerabilities

TechnoTenshi :verified_trans: :Fire_Lesbian:technotenshi@infosec.exchange
2025-06-11

Researchers disclose "EchoLeak", a zero-click AI vuln in M365 Copilot enabling attackers to exfiltrate sensitive data via prompt injection without user interaction. Exploits flaws in RAG design and bypasses key defenses.

aim.security/lp/aim-labs-echol

#AIsecurity #LLMvulnerabilities #CyberRisk #M365

2025-05-15

AI-powered features are the new attack surface! Check out our new blog in which LMG Security’s Senior Penetration Tester Emily Gosney @baybedoll shares real-world strategies for testing AI-driven web apps against the latest prompt injection threats.

From content smuggling to prompt splitting, attackers are using natural language to manipulate AI systems. Learn the top techniques—and why your web app pen test must include prompt injection testing to defend against today’s AI-driven threats.

Read now: lmgsecurity.com/are-your-ai-ba

#CyberSecurity #PromptInjection #AIsecurity #WebAppSecurity #PenetrationTesting #LLMvulnerabilities #Pentest #DFIR #AI #CISO #Pentesting #Infosec #ITsecurity

2025-04-24
Verified by MonsterInsights

Client Info

Server: https://mastodon.social
Version: 2025.04
Repository: https://github.com/cyevgeniy/lmst