AI Coding Assistants Can be Both a Friend & a Foe
New research shows that GitLab's AI assistant, Duo, can be tricked into writing malicious code and even leaking private source data through hidden instructions embedded in developer content like merge requests and bug reports.
How? Through a classic prompt injection exploit that inserts secret commands into code that Duo reads. This results in Duo unknowingly outputting clickable malicious links or exposing confidential information.
While GitLab has taken steps to mitigate this, the takeaway is clear: AI assistants are now part of your attack surface. If you’re using tools like Duo, assume all inputs are untrusted, and rigorously review every output.
Read the details: https://arstechnica.com/security/2025/05/researchers-cause-gitlab-ai-developer-assistant-to-turn-safe-code-malicious/
#AIsecurity #GitLab #AI #PromptInjection #Cybersecurity #DevSecOps #CISO #Infosec #IT #AIAttackSurface #SoftwareSecurity #CISO