#AIsecurity

2025-06-28

🇬🇧🚨 Master AI cybersecurity in 3 days!
SECUIA by HS2 trains devs, pentesters & AI pros to spot & exploit LLM flaws.
🎯 Focus: generative models & AI threats
📅 Upcoming sessions: July 7–9 & Sept 3–5
📩 formation@hs2.fr
#AIsecurity #LLM #leHACK

2025-06-26

Secure your AI models with OpenSSF Model Signing (OMS) 🛡️

Learn how OpenSSF’s AI/ML WG designed OMS to build trust in ML model artifacts.

openssf.org/blog/2025/06/25/an

#AIsecurity #OpenSourceSecurity #ModelSigning #OpenSSF
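The OMS details are in the linked post; as a rough illustration of the underlying trust idea only (not the OMS format itself), a minimal sketch that records SHA-256 digests of model artifacts into a manifest and verifies them later might look like:

```python
import hashlib
from pathlib import Path

def digest_file(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, streamed in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(model_dir: Path) -> dict:
    """Record a digest for every artifact under the model directory."""
    return {
        str(p.relative_to(model_dir)): digest_file(p)
        for p in sorted(model_dir.rglob("*")) if p.is_file()
    }

def verify(model_dir: Path, manifest: dict) -> bool:
    """True only if the on-disk artifacts match the recorded digests."""
    return build_manifest(model_dir) == manifest
```

A real signing scheme additionally signs the manifest with a key tied to the publisher's identity, which is the part OMS standardizes.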

Sentinel Security @SntlSecurity
2025-06-26

LLMs are now part of phishing kits.
The future isn't coming—it's exploiting your inbox in natural language.
📥📎

2025-06-26

Most security tools find vulnerabilities. We FIX them automatically.

AI's rise creates new attack vectors like "slopsquatting" (term coined by Seth Larson) - where AI hallucinates packages that attackers then register.

Example of why automated remediation > detection-only tools.

#DevSecOps #AISecurity 🧵 1/5

2025-06-26

We built automated vulnerability remediation that actually FIXES security issues (not just detects them).

While researching AI-generated code, we discovered something wild: 19.6% of AI package suggestions don't exist. Hackers are pre-registering them. And folks are committing this stuff to your trunk branch.

Traditional scanners miss this completely. We detect AND fix it.

Journey: indiehackers.com/post/built-th

Blog: rsolv.dev/blog/hidden-cost-ai-

#AutomatedRemediation #AISecurity #DevSecOps
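One simple mitigation for slopsquatting (a hedged sketch; the allowlist contents below are placeholders, not a recommendation) is to screen AI-suggested dependencies against a vetted set before anything reaches `pip install`:

```python
# Placeholder allowlist: in practice this would be your org's
# approved-dependency set or an internal package index.
VETTED = {"requests", "numpy", "flask", "pandas"}

def screen_suggestions(suggested: list[str]) -> tuple[list[str], list[str]]:
    """Split AI-suggested package names into approved and suspect.

    Anything not on the vetted list gets quarantined for human review
    instead of being installed on trust.
    """
    approved = [p for p in suggested if p.lower() in VETTED]
    suspect = [p for p in suggested if p.lower() not in VETTED]
    return approved, suspect
```

A suspect name is exactly the case slopsquatting exploits: the package may not exist yet, or may already have been registered by an attacker.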

2025-06-25

👋 Hi y'all! New to infosec.exchange!

We're RSOLV - building automated security vulnerability detection + remediation (yes, a _fix_, not just a red flag)

While researching AI-generated code, we discovered something wild: 19.6% of AI package suggestions don't exist. Hackers are pre-registering them.

Traditional scanners miss this completely. We detect AND fix it.

Journey: indiehackers.com/post/built-th

Blog: rsolv.dev/blog/hidden-cost-ai-

#AutomatedRemediation #AISecurity #DevSecOps

Explore critical need for specialized red-teaming in agentic AI systems, addressing complex attack surfaces and new risks. #RedTeaming #AgenticAI #AISecurity #CloudSecurityAlliance #CSA jpmellojr.blogspot.com/2025/06

Ohmbudsman @ohmbudsman
2025-06-24

4/ ⚠️ A chilling new method dubbed the “Echo Chamber” attack bypasses AI guardrails with soft nudges.
🔗 read.readwise.io/archive/read/

2025-06-24

Can Your AI Be Hacked by Email Alone?

No clicks. No downloads. Just one well-crafted email, and your Microsoft 365 Copilot could start leaking sensitive data.

In this week’s episode of Cyberside Chats, @sherridavidoff and @MDurrin discuss EchoLeak, a zero-click exploit that turns your AI into an unintentional insider threat. They also reveal a real-world case from LMG Security’s pen testing team where prompt injection let attackers extract hidden system prompts and override chatbot behavior in a live environment.

We’ll also share:

• How EchoLeak exposes a new class of AI vulnerabilities
• Prompt injection attacks that fooled real corporate systems
• Security strategies every organization should adopt now
• Why AI inputs need to be treated like code

🎧 Listen to the podcast: chatcyberside.com/e/unmasking-
🎥 Watch the video: youtu.be/sFP25yH0sf4

#EchoLeak #Cybersecurity #AIsecurity #Microsoft365 #Copilot #PromptInjection #CISO #InsiderThreats #GenAI #RiskManagement #CybersideChats
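The last bullet above, treating AI inputs like code, can be illustrated with a deliberately naive sketch: screen untrusted content (such as inbound email) for common injection phrasing before it is ever concatenated into a prompt. Pattern lists like this are trivially bypassed, so this is a first layer at best, not a defense against attacks like EchoLeak:

```python
import re

# Illustrative patterns only; real prompt-injection defense needs
# isolation of untrusted content, not keyword matching.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"disregard (the )?system prompt",
    r"reveal (your )?(hidden )?(system )?prompt",
]

def flag_untrusted_input(text: str) -> list[str]:
    """Return the patterns that matched, so content can be quarantined."""
    return [p for p in INJECTION_PATTERNS
            if re.search(p, text, re.IGNORECASE)]
```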

One of the cogent warnings Daniel raised is that #AI already deceives users.
And from the #InfoSec perspective, models are susceptible to #RewardHacking and #Sycophancy, two of the most potent AI #exploit vectors in the fascinating new field of AI security.

#AIalignment #AIsecurity #alignment

2025-06-22

#FIRSTCON25 great training for #aisecurity

hackmac @hackmac
2025-06-19

EchoLeak: the "can opener" for AI security realities!

It was only a matter of time, and here it is: a zero-click attack on an AI system has become reality. The vulnerability, known as EchoLeak, requires just a single manipulated email (no click, no download, no warning) and Copilot silently exfiltrates sensitive corporate data.

Softsasi @softsasi
2025-06-19

Google is boosting security in India with AI-powered fraud detection! 🛡️ This helps protect users & businesses from phishing, malware & more.

SoftSasi provides custom software, IT consulting & cybersecurity services to thrive in the digital age.

2025-06-17

What Happens When AI Goes Rogue?

From blackmail to whistleblowing to strategic deception, today's AI isn't just hallucinating — it's scheming.

In our new Cyberside Chats episode, LMG Security’s @sherridavidoff and @MDurrin share new AI developments, including:

• Scheming behavior in Apollo’s LLM experiments
• Claude Opus 4 acting as a whistleblower
• AI blackmailing users to avoid shutdown
• Strategic self-preservation and resistance to being replaced
• What this means for your data integrity, confidentiality, and availability

📺 Watch the video: youtu.be/k9h2-lEf9ZM
🎧 Listen to the podcast: chatcyberside.com/e/ai-gone-ro

#AIsecurity #RogueAI #ZeroTrust #Cybersecurity #CybersideChats #LMGSecurity #AIWhistleblower #AIgoals #LLM #ClaudeAI #ApolloAI #AISafety #CISO #CEO #SMB #Cyberaware #Cyber #Tech

Alex Carter @alex_carter
2025-06-17

Can AI be hacked into going rogue?
Can we really trust large language models like ChatGPT?

In our latest Neuro Sec Ops episode, we expose the wild world of LLM jailbreaks, dive into AI guardrails, and unpack the battle between security vs. usability.

🔊 Buckle up — this is AI safety like you’ve never heard it.

🎧 Listen now: open.spotify.com/episode/6jw1a

Alex Carter @alex_carter
2025-06-16

Which AI vulnerability worries you the most?

Papamoscas Cardenali @torivas@mathstodon.xyz
2025-06-15
2025-06-13

Hello World! #introduction

Work in cybersec for 25+ years. Big OSS proponent.

Latest projects:

VectorSmuggle is a comprehensive proof-of-concept demonstrating vector-based data exfiltration techniques in AI/ML environments. This project illustrates potential risks in RAG systems and provides tools and concepts for defensive analysis.
github.com/jaschadub/VectorSmu

SchemaPin protocol for cryptographically signing and verifying AI agent tool schemas to prevent supply-chain attacks (aka MCP Rug Pulls).
github.com/ThirdKeyAI/SchemaPin
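The pinning idea behind SchemaPin can be sketched with a trust-on-first-use toy (this is NOT the SchemaPin protocol itself, which uses real cryptographic signatures rather than bare hashes): pin a tool schema's digest the first time it is seen, and refuse any later schema that no longer matches — the "rug pull" case.

```python
import hashlib
import json

# Illustrative in-memory pin store; a real implementation persists pins
# and verifies publisher signatures instead of trusting first use.
_pins: dict[str, str] = {}

def schema_digest(schema: dict) -> str:
    """Canonicalize the schema as sorted-key JSON, then hash it."""
    canon = json.dumps(schema, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canon.encode()).hexdigest()

def check_schema(tool_name: str, schema: dict) -> bool:
    """Pin on first use; afterwards, reject any schema that changed."""
    d = schema_digest(schema)
    if tool_name not in _pins:
        _pins[tool_name] = d
        return True
    return _pins[tool_name] == d
```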

#ai #AiResearch #aisecurity #rag #mcp #mcpserver

2025-06-13

New AI Security Risk Uncovered in Microsoft 365 Copilot

A zero-click vulnerability has been discovered in Microsoft 365 Copilot—exposing sensitive data without any user interaction. This flaw could allow attackers to silently extract corporate data using AI-integrated tools.

If your organization is adopting AI in productivity platforms, it’s time to get serious about AI risk management:
• Conduct a Copilot risk assessment
• Monitor prompt histories and output
• Limit exposure of sensitive data to AI tools
• Update your incident response plan for AI-based threats

AI can boost productivity, but it also opens new doors for attackers. Make sure your cybersecurity program keeps up. Contact our LMG Security team if you need a risk assessment or help with AI policy development.

Read the article: bleepingcomputer.com/news/secu

#AISecurity #Microsoft365 #Copilot #ZeroClick #DataLeak #CyberRisk #LMGSecurity #AItools #ShadowAI #Cybersecurity #RiskManagement #SMB #CEO #CISO #Infosec #IT
