Can Your AI Be Hacked by Email Alone?
No clicks. No downloads. Just one well-crafted email, and your Microsoft 365 Copilot could start leaking sensitive data.
In this week’s episode of Cyberside Chats, @sherridavidoff and @MDurrin discuss EchoLeak, a zero-click exploit that turns your AI into an unintentional insider threat. They also reveal a real-world case from LMG Security’s pen testing team where prompt injection let attackers extract hidden system prompts and override chatbot behavior in a live environment.
We’ll also share:
• How EchoLeak exposes a new class of AI vulnerabilities
• Prompt injection attacks that fooled real corporate systems
• Security strategies every organization should adopt now
• Why AI inputs need to be treated like code
🎧 Listen to the podcast: https://www.chatcyberside.com/e/unmasking-echoleak-the-hidden-ai-threat/?token=90468a6c6732e5e2477f8eaaba565624
🎥 Watch the video: https://youtu.be/sFP25yH0sf4
#EchoLeak #Cybersecurity #AIsecurity #Microsoft365 #Copilot #PromptInjection #CISO #InsiderThreats #GenAI #RiskManagement #CybersideChats