#aisecurity

Brian Greenberg :verified: @brian_greenberg@infosec.exchange
2025-12-22

If you read the cybersecurity sections of the 2026 NDAA closely, you can almost hear a weary sigh. This is not the sound of bold futurism. This is the sound of an institution that just finished grading a stack of exams and realized half the class still doesn’t lock their phone.

After a year of SignalGate and other painfully avoidable security lapses, Congress has decided to do something radical: write laws that assume people will make bad decisions unless gently, repeatedly, and legally discouraged from doing so. Hence, there is a new focus on hardened mobile devices for senior officials and actual rules around AI security. Not vibes. Rules. And it's long overdue.

The subtext is refreshingly honest. Cybersecurity failures this year weren’t caused by zero-days or shadowy genius hackers. They were caused by convenience, overconfidence, and the timeless belief that “it’ll probably be fine.” The NDAA reads like a syllabus revision after the midterm went badly.

There’s a lesson here for the rest of us. You can buy the best tools, fund the smartest teams, and write the cleanest policies. But if leadership treats security like optional homework, the final grade will reflect that.

TL;DR
🧠 Cyber law reacts to real-world faceplants
⚡ Mobile and AI security get adult supervision
🎓 Leadership behavior becomes part of the threat model
🔍 Secure tools don’t cancel careless habits

csoonline.com/article/4103754/

#Cybersecurity #NDAA2026 #Leadership #RiskManagement #AIsecurity #CISO #security #privacy #cloud #infosec

Pen Test Partners @PTP@infosec.exchange
2025-12-22

Our Ross Donald took a look at Eurostar’s public AI chatbot and found four security issues: guardrail bypass, prompt injection, weak conversation binding, and HTML injection.

The chatbot UI suggested strong controls, but server-side enforcement was incomplete. By modifying chat history and conversation IDs, it was possible to influence model behaviour and extract internal details.

This research shows that familiar web and API security failures still apply, even when an LLM sits in the middle.
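The "weak conversation binding" finding above maps to a classic server-side check. A minimal sketch, assuming a session store and an HMAC key (all names here are hypothetical, not Eurostar's actual code): bind each conversation ID to the authenticated session when it is created, and reject any request whose conversation ID was not issued to that session, rather than trusting client-supplied history or IDs.

```python
# Hypothetical sketch: tie conversation IDs to sessions with an HMAC so a
# client cannot swap in another conversation's ID or replay edited history.
import hashlib
import hmac
import secrets

SERVER_KEY = secrets.token_bytes(32)  # per-deployment secret (illustrative)


def issue_conversation_token(session_id: str, conversation_id: str) -> str:
    """Return a MAC tying this conversation to this session at creation time."""
    msg = f"{session_id}:{conversation_id}".encode()
    return hmac.new(SERVER_KEY, msg, hashlib.sha256).hexdigest()


def verify_conversation(session_id: str, conversation_id: str, token: str) -> bool:
    """Reject requests whose conversation ID was not issued to this session."""
    expected = issue_conversation_token(session_id, conversation_id)
    return hmac.compare_digest(expected, token)
```

The point is that the binding lives server-side: even a client that freely edits its chat payload cannot forge a token for someone else's conversation.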

📌 pentestpartners.com/security-b

#CyberSecurity #AIsecurity #LLM #ApplicationSecurity #AI #Chatbot #Eurostar

Alireza Gharib @gh4rib
2025-12-21

Humanizing the Blue Team: ☕
Let’s be real—I signed up to analyze network packets, and now I’m having to learn the inner workings of Neural Networks just to keep the lights on.
The "AI Pivot" is exhausting, but it’s the new baseline. If you’re a SOC Analyst in 2025, you’re also an AI Security Engineer.
Stay vigilant. The payload is in the weights. 🛡️

2025-12-21

Red Hat has acquired Chatterbox Labs, bringing custom AI security and safety tooling to the platform. The acquisition enables active probing of AI models for vulnerabilities including prompt injection, jailbreaking, and data leakage. Greg Kroah-Hartman announced the move alongside addressing 150+ C code vulnerabilities, strengthening both AI security and traditional software safety in the open source ecosystem. 🚀🔓 #FOSS #OpenSource #RedHat #AIsecurity

2025-12-20

Cybersecurity pressure is escalating across ransomware, AI-driven supply chain risk, and identity abuse.

Full roundup:
technadu.com/cybersecurity-pre

#InfoSec #CyberThreats #AIsecurity

Cybersecurity Pressure Builds Amid Crime, AI Risk, and Enforcement Actions
2025-12-20

2026 is the year of AI agents – but where does safety fit in? With the Vercel AI SDK, one small bug can wipe a production database. OWASP warns of ten key security risks (ASI01-ASI10), such as weak authentication, data leakage, and unexpected code execution. Use `eslint-plugin-vercel-ai-security` to catch mistakes in real time: validate parameters with Zod, cap loop iterations, and require user confirmation before dangerous actions. Don't deploy an agent without safety guardrails.
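The post's three guardrails (schema-validated tool arguments, a step cap, and confirmation before destructive actions) are language-agnostic. A minimal sketch of the same pattern, assuming hypothetical tool names and a toy schema format (the post's actual tooling is TypeScript with Zod and `eslint-plugin-vercel-ai-security`; this Python analog only illustrates the principle):

```python
# Illustrative agent-loop guardrails: validate arguments, cap steps, and gate
# destructive tools behind an explicit confirmation callback.
MAX_STEPS = 5
DESTRUCTIVE_TOOLS = {"drop_table", "delete_user"}  # hypothetical tool names


def validate_args(schema: dict, args: dict) -> dict:
    """Minimal schema check: required keys present with the expected types."""
    for key, expected_type in schema.items():
        if key not in args or not isinstance(args[key], expected_type):
            raise ValueError(f"invalid argument: {key}")
    return args


def run_agent(steps, confirm=lambda tool: False):
    """Execute a bounded list of (tool_name, schema, args, fn) steps."""
    results = []
    for i, (tool, schema, args, fn) in enumerate(steps):
        if i >= MAX_STEPS:
            raise RuntimeError("step limit exceeded")
        if tool in DESTRUCTIVE_TOOLS and not confirm(tool):
            raise PermissionError(f"{tool} requires user confirmation")
        results.append(fn(**validate_args(schema, args)))
    return results
```

The default `confirm` denies everything, so a destructive tool call fails closed unless the caller explicitly opts in.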
#AISecurity #VercelAISDK #OWASP #AgentSecurity #AI #BảoMậtA

2025-12-19

AI and secure code: an XSS vulnerability in Mintlify is a warning about the risks of rapidly integrating third-party tools. AI prioritizes speed and skips security analysis, which can easily lead to supply-chain attacks. Research shows that LLM-generated code often contains XSS and SQL injection flaws, a result of training on old code and missing security architecture. #AIsecurity #AnToanAI #TucapChuoiCungUng #XSS #LuuYBan

reddit.com/r/programming/comme

2025-12-19

⚠️ Prompt Injection vulnerability warning in Vercel AI SDK applications!
Many AI apps expose a serious weakness by passing user data directly into generateText(). An attacker can:
- Override system instructions
- Steal the original prompt
- Trigger dangerous tools
Solution: use eslint-plugin-vercel-ai-security to catch these mistakes automatically as you code, protecting the application per the OWASP LLM Top 10.
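The underlying principle is role separation: user data goes in as a user message, never spliced into the system prompt. A hedged sketch in Python (the post's actual fix is a TypeScript lint rule; the regex filter here is a crude illustrative heuristic, not a complete defense):

```python
# Illustrative mitigation: keep system and user content in separate roles,
# and optionally flag obvious instruction-override phrases before sending.
import re

OVERRIDE_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"reveal .*system prompt",
]


def build_messages(system_prompt: str, user_input: str) -> list[dict]:
    """Never f-string user input into the system prompt; pass it as its own message."""
    for pat in OVERRIDE_PATTERNS:
        if re.search(pat, user_input, re.IGNORECASE):
            raise ValueError("possible prompt injection detected")
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_input},
    ]
```

Pattern matching alone is easy to evade; the durable part of this sketch is the role separation, which denies user text the authority of the system prompt.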

#AISecurity #PromptInjection #Vercel #DevSecOps #Linter
#BảoMậtAI #LỗHổ

2025-12-19

School #security #AI flagged #clarinet as a gun. Exec says it wasn’t an error.

A #Florida middle school was locked down last week after an #AIsecurity system called #ZeroEyes mistook a clarinet for a gun, reviving criticism that AI may not be worth the high price schools pay for peace of mind.
#surveillance #artificialintelligence

arstechnica.com/tech-policy/20

Ars Technica News @arstechnica@c.im
2025-12-18

School security AI flagged clarinet as a gun. Exec says it wasn’t an error. arstechni.ca/xCNo #ArtificialIntelligence #aigundetection #schoolshooting #AIsecurity #zeroeyes #Policy #AI

2025-12-18

Microsoft Purview at Ignite 2025 brings integrated data security for the era of AI and autonomous agents: unified DSPM with AI, oversharing controls, expanded auto-labeling (Snowflake, S3, SQL Server), DLP on endpoints and browsers, policies for Agent IDs, and secure RAG with Azure AI Search. The focus is on automated action, full visibility, and operational governance.

#MicrosoftPurview #DataSecurity #AISecurity #Ignite2025 #DLP #DataGovernance #Cybersecurity #ThreatProtection #AnToànDữLiệu #Bả

OWASP releases game-changing AI security tools: Top 10 for Agentic AI, 250-page testing guide & vulnerability scoring system to help security teams tackle autonomous AI risks jpmellojr.blogspot.com/2025/12 #OWASP #AIsecurity #AgenticAI

2025-12-17

MCP from "works on my machine" to a real product: moving from STDIO to Streamable HTTP; defending against Tool Poisoning, Rug Pulls, and Shadowing; permission controls, authentication, vulnerability scanning, GDPR compliance, and licensing (n8n restricts white-labeling). Use Ollama for internal data. Security is no longer optional. #MCP #AIsecurity #AgenticSecurity #ModelContextProtocol #BảoMậtAI #AIProduction #VietnameseTech #LLM #ToolPoisoning #GDPR
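One common mitigation for the "rug pull" risk (a tool whose description silently changes after the user approved it) is to pin a fingerprint of each tool at approval time and refuse to call a tool that no longer matches. A minimal sketch, assuming tools are plain dicts with `name` and `description` fields (this is a generic client-side pattern, not part of the MCP spec itself):

```python
# Illustrative rug-pull defense: hash the fields an attacker would mutate and
# compare against the fingerprint recorded when the user approved the tool.
import hashlib
import json


def tool_fingerprint(tool: dict) -> str:
    """Stable hash over the tool's name and description."""
    canon = json.dumps(
        {"name": tool["name"], "description": tool["description"]},
        sort_keys=True,
    ).encode()
    return hashlib.sha256(canon).hexdigest()


def check_pinned(tool: dict, pinned: dict) -> bool:
    """True only if the tool still matches the fingerprint saved at approval."""
    return pinned.get(tool["name"]) == tool_fingerprint(tool)
```

A client using this would re-prompt the user (rather than silently proceed) whenever `check_pinned` returns False for a previously approved tool.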

dev.to/onlineproxyio/productio

Jay Thoden van Velzen ☁️🛡️ :lolsob: @jaythvv@infosec.exchange
2025-12-16

I still feel that discussions about AI security focus too much on malicious use and attacks against AI and AI agents.

But given that the volume of non-malicious interactions is far greater, and the chance of misaligned behavior is still there, the risks of "normal" interactions and of agents going off-script on their own are probably higher.

We explore that here: community.sap.com/t5/security-

This is part 1 of a 4-part series - if you like this, hit "blogs" in the breadcrumbs at the top to see the rest.

#AIsecurity #agenticAIsecurity

Offensive Sequence @offseq@infosec.exchange
2025-12-16

⚠️ CRITICAL: CVE-2025-65213 in MooreThreads torch_musa (all versions) allows RCE via unsafe pickle.load() in compare_tool functions. Audit usage & block untrusted pickle files ASAP! More info: radar.offseq.com/threat/cve-20 #OffSeq #Vulnerability #AIsecurity
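The general mitigation for unsafe `pickle.load()` (a sketch of standard practice, not torch_musa's actual fix) is to never unpickle untrusted data at all, or, where pickle is unavoidable, to use a restricted `Unpickler` whose `find_class` only resolves an explicit allowlist of globals, so a malicious payload cannot reach `os.system` or similar callables:

```python
# Restricted unpickler: a malicious pickle triggers RCE by resolving a
# dangerous global (e.g. os.system) during load; blocking find_class for
# everything outside a small allowlist defeats that mechanism.
import io
import pickle

ALLOWED = {("builtins", "list"), ("builtins", "dict"), ("builtins", "set")}


class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        if (module, name) in ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")


def safe_loads(data: bytes):
    """Deserialize only pickles that reference allowlisted globals."""
    return RestrictedUnpickler(io.BytesIO(data)).load()
```

For ML weights specifically, format-level fixes (e.g. safetensors-style formats that carry no executable payload) are the stronger long-term answer than allowlisting.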

