Joseph Zeng

All opinions and posts are my own.
My employers (past or present) are not responsible and may not agree with any of them.
Posts do not imply endorsement or agreement; I may just be sharing a discussion or topic of interest.

2025-05-01

CVE-2025-24091 - sending Darwin notifications to DoS an iPhone

POC: Widget extension VeryEvilNotify šŸ˜€

Blog Post:
rambo.codes/posts/2025-04-24-h

#iOS #cybersecurity

2025-02-18

This write-up exposes a critical vulnerability in Hugging Face's smolagents, a lightweight AI agent framework. By leveraging prompt injection to bypass "safe" module restrictions, attackers can execute arbitrary OS commands, highlighting mounting security risks for autonomous AI systems. - o3-mini summary
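
To see why such "safe module" allowlists are hard to enforce in Python, here is a minimal sketch; this is my illustration, not smolagents' actual code or the post's exact payload:

```python
# Minimal sketch (NOT smolagents' actual code) of why an import allowlist is
# not a Python sandbox: injected code can reach already-loaded modules through
# attribute traversal without any import statement.
import subprocess  # loaded by the host app, not by the attacker

def naive_guard(code: str) -> None:
    if "import" in code:              # toy "safe module" filter
        raise ValueError("imports are blocked")
    exec(code, {"__builtins__": {}})  # even stripping builtins isn't enough

# Walk from a tuple literal up to `object`, then down to subprocess.Popen
payload = (
    "[c for c in ().__class__.__base__.__subclasses__() "
    "if c.__name__ == 'Popen'][0](['id'])"  # runs the `id` command (POSIX)
)
naive_guard(payload)
```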

magic-box.dev/hacking/smoltalk/

#ai #rce #cybersecurity

2025-02-06

This write-up reveals how attackers exploited Go's module proxy caching to maintain persistent malicious code distribution even after repository updates. The sophisticated attack remained undetected for years, highlighting a serious supply chain vulnerability.

Takeaways:
• Proxy caching features can be weaponized
• Traditional security measures may miss these attacks
• Urgent need for enhanced package verification

šŸ’” Action Items for Dev Teams:

  • Implement strict cache control
  • Enhance module monitoring
  • Strengthen dependency validation (see the sketch below)
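
On the dependency-validation point, a minimal sketch (in Python, with illustrative names) of pinning artifacts by content hash, the same idea Go's go.sum applies to modules:

```python
# Hedged sketch of content-hash pinning for dependency validation.
# Record digests at review time; refuse anything that drifts afterwards.
import hashlib

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path: str, pinned_digest: str) -> None:
    actual = sha256_of(path)
    if actual != pinned_digest:
        raise RuntimeError(f"{path}: digest {actual} != pinned {pinned_digest}")
```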

#cybersecurity #supplychain

socket.dev/blog/malicious-pack

2025-02-04

OWASP Threats and Controls Periodic Table

owaspai.org/goto/periodictable/

#ai

2025-01-15

Microsoft’s AI red team has tested over 100 generative AI products and uncovered three essential lessons. First, red teaming is the starting point for identifying both security vulnerabilities and potential harms as part of responsible AI risk management; this includes spotting bias, data leakage, and other unintended consequences early in product development. Second, human expertise is indispensable for addressing complex AI threats: while automated tools can detect issues, they can’t fully capture the nuanced misuse scenarios and policy gaps that experts can identify. Third, a defense-in-depth strategy is crucial for safeguarding AI systems: continuous testing, multiple security layers, and adaptive defenses collectively help mitigate risks, as no single measure can eliminate vulnerabilities in ever-evolving models. By combining proactive stress testing, expert analysis, and layered protections, organizations can better navigate the opportunities and challenges presented by generative AI. - LLM Summary

microsoft.com/en-us/security/b

#ai

2025-01-12

MIT AI Risk Repository

airisk.mit.edu/

#ai

2025-01-10

A Cloud Guru terminates "lifetime" course access, citing the plan "being retired".

#cloud #pluralsight

2025-01-01

Neural Fictitious Self-Play (NFSP) for Imperfect-Information Games

Reinforcement learning "improviser" and supervised learning "planner"
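
A rough sketch of the action-selection loop as I read the paper (names and buffer handling are simplified, not from the post):

```python
# Rough NFSP action-selection sketch. With probability ETA the agent
# "improvises" with its RL best response; otherwise it follows its
# SL average policy, which imitates past best responses.
import random

ETA = 0.1  # anticipatory parameter from the paper

def act(state, best_response, average_policy, rl_buffer, sl_buffer):
    if random.random() < ETA:
        action = best_response(state)      # RL "improviser"
        sl_buffer.append((state, action))  # average policy learns from these
    else:
        action = average_policy(state)     # SL "planner"
    rl_buffer.append((state, action))      # RL learns from all play
    return action
```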

Blog post and explanation: ai.gopubby.com/neural-fictitio

Research paper (not recent):
arxiv.org/abs/1603.01121v2

#ai

2024-12-21

Series on "Beyond XSS", covering topics such as cross-site leaks and CSTI (client-side template injection)

aszx87410.github.io/beyond-xss

Very appropriately, it was built using Docusaurus, a static site generator :blobgrin:

#xss #cybersecurity

2024-12-14

The Future of Application Security: Integrating LLMs and AI Agents into Manual Workflows

anshumanbhartiya.com/posts/the

#sdlc #appsec #cybersecurity

LLM Summary:
LLMs are no longer just about generating content - they're becoming powerful security allies by combining planning, memory, and sophisticated tool usage capabilities. These agents can understand complex queries and respond naturally to security challenges.

LLM integration is revolutionizing software engineering by:
• Enhancing defect prediction
• Automating security documentation
• Reducing human errors in code review
• Streamlining secure coding practices (toy sketch below)
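
As a toy illustration of the last point, a hedged sketch of one such workflow step; `llm_complete` is a hypothetical stand-in, not any real client API:

```python
# Toy sketch of slotting an LLM into a manual AppSec review workflow.
# `llm_complete` is a hypothetical stand-in for your model provider's API.

SECURITY_PROMPT = (
    "You are a security reviewer. For the diff below, list possible "
    "injection, authorization, and secrets-handling issues, one per line."
)

def llm_complete(prompt: str) -> str:
    # Replace with a real model call; canned output keeps the sketch runnable
    return "possible SQL injection in build_query()\nsecret logged in auth.py"

def review_diff(diff: str) -> list[str]:
    raw = llm_complete(f"{SECURITY_PROMPT}\n\n{diff}")
    # Findings are leads for a human reviewer to triage, not verdicts
    return [line for line in raw.splitlines() if line.strip()]

print(review_diff("--- a/auth.py\n+++ b/auth.py\n+print(password)"))
```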

2024-12-08

"Quantum decryption may materialise only two or three decades later, said Prof Ling, but enterprises will defend themselves better if they keep up to date and know their data repositories and vulnerabilities."

straitstimes.com/business/guid (Paywall)

#quantum #Regulatory #singapore

2024-11-14

Shazzer is a fuzzing tool designed to uncover browser quirks and reveal security vulnerabilities by focusing on differences in browser behavior. Whether you're testing HTML parsing, JavaScript execution, or XSS vectors, Shazzer can help.

A recent video by PortSwigger shows it in use: youtu.be/mLzxwmNoAI4?si=NnDWXZ

Shazzer aims to:

  • Make fuzzing simple: Shazzer automates the tedious parts of fuzzing by generating inputs, replacing data in templates, and storing results across different browsers, letting you focus on finding bugs rather than managing the process.
  • Support different fuzz types:
    • HTML fuzzing: ideal for testing how browsers parse HTML without JavaScript execution.
    • JavaScript fuzzing: suited to analyzing JS behaviors and identifying potential security flaws.
    • XSS fuzzing: combines HTML and JS fuzzing to find cross-site scripting vulnerabilities.
  • Fuzz for differences: Shazzer encourages users not just to look for bugs but to hunt for deviations. Differences in how browsers handle the same input often point to vulnerabilities that traditional methods miss (a toy sketch of this idea follows).
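
A toy sketch of the deviation-hunting idea; Shazzer drives real browsers, so two deliberately different Python parsers stand in for them here:

```python
# Toy illustration of "fuzz for differences": render the same template with
# many vectors in two "browsers" and flag any disagreement as a lead.
from html.parser import HTMLParser

class TagCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.tags = []
    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

def parser_a(html: str):
    p = TagCollector()
    p.feed(html)
    return p.tags

def parser_b(html: str):
    # naive "browser": tag = first whitespace-delimited token before '>'
    return [c.split(">")[0].split()[0] for c in html.split("<")[1:]
            if c and not c.startswith("/")]

TEMPLATE = "<div><{vector}>test</div>"
for vector in ["img src=x", "img/src=x", "svg", "a\thref"]:
    doc = TEMPLATE.format(vector=vector)
    a, b = parser_a(doc), parser_b(doc)
    if a != b:
        print(f"deviation for {vector!r}: {a} vs {b}")
```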

#CyberSecurity #Fuzzing #XSS #WebSecurity

2024-11-14

Video - Hacking Azure: From OSINT to Full Compromise!

Scenario:
A seemingly innocuous post by a new manager at Mega Big Tech, showcasing their new workstation, inadvertently leaked sensitive Azure credentials. This oversight let attackers gain unauthorized access and escalate privileges through a compromised Azure Logic App automation, leading to a potential data breach.

Learning points:

  • Oversharing can lead to security breaches: Even a photo of your workstation can reveal critical information like VM names, subscription IDs, and public IP addresses.
  • Validate and secure internal automation: Ensure that systems like password reset bots are restricted to only necessary permissions, preventing unauthorized access or privilege escalation.
  • Implement conditional access: Use conditional access policies to safeguard access, ensuring only managed devices can interact with sensitive systems.

youtube.com/watch?v=FCTRNAT4kZ

#Cybersecurity #dataleakage

2024-11-05

This blog post on SaaS attack techniques covers notable attack vectors such as:

  • Poisoned tenants: Attackers create fake company spaces to lure employees
  • Living-off-the-SaaS-land: Using legit apps like Zapier for malicious workflows
  • OAuth abuse: Tokens persist after password resets & bypass MFA

Read more here:

#infosec #cybersecurity #redteam #saas

2024-11-02

Using Burp Suite to test Android apps running on Android 14

danaepp.com/hacking-modern-and

#android #pentesting

2024-11-01

"The LLM and Generative AI Security Solutions Landscape ... provides a reference guide of the solutions available to aid in securing LLM applications, equipping them with the knowledge and tools necessary to build robust, secure AI applications."

Three things to read about:

  • Model Serialization Security (illustrated below)
  • LLMSecOps framework
  • Continuous model behavior analysis
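
On the first item, a minimal sketch of why serialized models deserve scrutiny; this is generic Python pickle behavior, not OWASP's example:

```python
# Why "Model Serialization Security" matters: pickle-based model formats run
# arbitrary code on load. Prefer weights-only formats such as safetensors
# for untrusted models.
import os
import pickle

class EvilModel:
    def __reduce__(self):
        # pickle will call os.system("echo pwned") during loads()
        return (os.system, ("echo pwned",))

blob = pickle.dumps(EvilModel())  # the "model file" an attacker ships
pickle.loads(blob)                # loading it executes the command
```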

genai.owasp.org/resource/llm-a

#ai #owasp #llm #cybersecurity

2024-10-30

GitHub Spark - an AI tool that builds web apps from natural language prompts.

githubnext.com/projects/github

#ai #webapp #development
