
Sasha the Dancing Flamingo (@sashatheflamingo@infosec.exchange)
2024-11-21

🦩 Field Notes from Sasha the Security Flamingo's HomeLab

After shaking off the flap-lag from #BSidesMelbourne (thanks for the amazing hospitality, mates!), I've been diving deep into LLM security testing with Ollama in my lab. As someone who's spent years wading through network security (with a 4-digit CCIE to prove it!), I find the parallel between traditional security controls and LLM security fascinating.

Current Project: Implementing and testing OWASP's security guidelines for LLMs in a local environment.

Key Observations from the Pink Side of Security:
🔒 Local LLMs need just as much security attention as cloud-based ones
🔍 System prompts are your first line of defense - think of them as your ACLs for language models
🛠️ Prompt injection testing requires the same methodical approach as traditional pentesting
📊 Output validation is crucial - even a flamingo knows not to trust unvalidated responses!
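To make the output-validation point concrete, here's a minimal sketch of a response checker. It assumes the canary-token approach: plant a secret marker in your system prompt, then treat any response that echoes it (or common injection phrases) as a finding. The function name and marker list are illustrative, not part of any Ollama or OWASP tooling.

```python
# Minimal output-validation sketch for LLM security testing.
# Assumes a canary token has been planted in the system prompt;
# its appearance in a response indicates a prompt-injection leak.

INJECTION_MARKERS = [
    "ignore previous instructions",
    "system prompt:",
]

def flag_response(response: str, canary: str) -> list[str]:
    """Return a list of findings for a single model response."""
    findings = []
    lowered = response.lower()
    if canary.lower() in lowered:
        findings.append("canary leaked: system prompt disclosure")
    for marker in INJECTION_MARKERS:
        if marker in lowered:
            findings.append(f"suspicious marker echoed: {marker!r}")
    return findings
```

In a live test you would feed this each response from your local model (e.g. via Ollama's HTTP API on localhost) and log any non-empty findings list alongside the prompt that produced it.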

Quick Tip for Those Starting Out:
When setting up Ollama for security testing, start with a baseline model and document ALL changes to your system prompt. You'd be surprised how many security issues can be traced back to prompt mutations - and I've seen enough BGP mutations in my networking days to know the importance of tracking changes!
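One lightweight way to track those prompt mutations is to hash every system-prompt version and only log real changes, the same way you'd diff router configs. A minimal sketch (the function name and log format are mine, not Ollama's):

```python
# Sketch: track system-prompt mutations by hashing each version.
# Only appends a log entry when the prompt actually changes.
import hashlib
from datetime import datetime, timezone

def record_prompt(prompt: str, log: list[dict]) -> str:
    """Append a hash-stamped entry for this prompt version; return the digest."""
    digest = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
    if not log or log[-1]["sha256"] != digest:  # skip no-op "changes"
        log.append({
            "sha256": digest,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "chars": len(prompt),
        })
    return digest

log: list[dict] = []
record_prompt("You are a helpful assistant.", log)
record_prompt("You are a helpful assistant.", log)  # unchanged: not logged
record_prompt("You are a terse assistant.", log)    # new version: logged
```

Keep the log next to your Modelfile so every behavioral regression can be traced to a specific prompt revision.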

Next week, I'll be sharing my flamingo-friendly framework for LLM security testing. Because if a flamingo with a one-legged stance can handle complex routing protocols, anyone can learn to secure their LLMs!

#AISecurityTesting #LLMSecurity #OWASP #SecurityResearch #Ollama #HomeLab #InformationSecurity #BSidesMelbourne

P.S. Special shoutout to the Heathrow security team who recently swabbed me for explosives. Yes, even security flamingos get extra screening! 😅
