@yossarian this is brilliant.
I just (vibe) coded my first action last week, it comes at the perfect time!
Hardcore embedded C/C++ caveman.
Supply chain cybersecurity, SBOM, vulnerability management.
#embedded #linux #oss #psirt
this hackerbot-claw thing is a good reminder: attackers (and beg bounty spammers) are using zizmor for offensive research, so you should be using it for defense!
The ENISA is looking for (preferably European, I guess) experts for an ad hoc working group on Security Architecture Engineering and Vulnerability Management.
Regarding VM, it will be about "the maintenance of the EU Vulnerability Database, its cooperation with the Common Vulnerabilities and Exposures (CVE) Program, and potential other activities that contribute to serving its stakeholders and providing the expected value based on a delivery of matured services."
Closing date mid-April.
https://www.enisa.europa.eu/working-with-us/ad-hoc-working-groups-calls/enisas-ad-hoc-working-group-on-security-architecture-engineering-and-vulnerability-management
#europe #vulnerabilitymanagement #enisa #euvd
@floehopper did I miss something? I thought Claude was 3 years old?
https://en.wikipedia.org/wiki/Claude_(language_model)#Claude
@andrewnez sometimes, that's all you have, without the luxury to refuse ¯\_(ツ)_/¯
CPE can also cover firmware, closed-source and non-package-managed software, hardware (as in SoCs, chipsets, GPUs, architectures), and full devices or systems. Poorly, for sure, but they can.
Something that PURL people seem to ignore.
@andrewnez note that for the naming part, the (terrible) CPE can (with their well known limitations) do the job. Ex:
a:ietf:ipv6 for https://nvd.nist.gov/vuln/detail/CVE-2025-23018
a:bluetooth:bluetooth_core_specification or a:bluetooth:mesh_profile, ex:
https://nvd.nist.gov/vuln/detail/cve-2020-26555
https://nvd.nist.gov/vuln/detail/CVE-2020-26556
https://nvd.nist.gov/vuln/detail/CVE-2020-26557
https://nvd.nist.gov/vuln/detail/cve-2020-26558
https://nvd.nist.gov/vuln/detail/CVE-2020-26559
https://nvd.nist.gov/vuln/detail/CVE-2020-26560
(BT specifications are rat's nests, see https://www.bluetooth.com/learn-about-bluetooth/key-attributes/bluetooth-security/reporting-security/)
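For concreteness, a minimal sketch of how the shorthand `part:vendor:product` pairs above expand into full CPE 2.3 formatted strings, with unset attributes as wildcards (assuming the standard 11-field formatted-string layout):

```python
# Sketch: expand "part:vendor:product" shorthand into a CPE 2.3
# formatted string. Fields after version (update, edition, language,
# sw_edition, target_sw, target_hw, other) are left as "*" wildcards.

def to_cpe23(part: str, vendor: str, product: str, version: str = "*") -> str:
    """Build a CPE 2.3 formatted string with wildcard trailing attributes."""
    return f"cpe:2.3:{part}:{vendor}:{product}:{version}:*:*:*:*:*:*:*"

print(to_cpe23("a", "ietf", "ipv6"))
# cpe:2.3:a:ietf:ipv6:*:*:*:*:*:*:*:*
print(to_cpe23("a", "bluetooth", "bluetooth_core_specification"))
# cpe:2.3:a:bluetooth:bluetooth_core_specification:*:*:*:*:*:*:*:*
```

Ugly, but it does let you name a protocol spec or a hardware block, which PURL has no coordinate system for.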
Some news from the ENISA SRP (Single Reporting Platform) for cybersecurity incident reporting.
Roadmap and FAQ:
https://www.enisa.europa.eu/topics/product-security-and-certification/single-reporting-platform-srp
Call for tender:
https://www.enisa.europa.eu/procurement/implementation-of-the-single-reporting-platform
What happens when a large open source project dies?
Today in InfoSec Job Security News:
I was looking into an obvious ../.. vulnerability introduced into a major web framework today, and it was committed by username Claude on GitHub. Vibe coded, basically.
So I started looking through Claude commits on GitHub, there's over 2m of them and it's about 5% of all open source code this month.
https://github.com/search?q=author%3Aclaude&type=commits&s=author-date&o=desc
As I looked through the code I saw the same class of vulns being introduced over, and over, again - several a minute.
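For illustration, a hedged sketch of that `../..` vulnerability class: a hypothetical file-serving helper (not the actual framework code) that joins a user-supplied name onto a base directory, plus the containment check that fixes it:

```python
# Illustrative path traversal sketch; BASE and the helper names are
# invented, not taken from any real framework.
from pathlib import Path

BASE = Path("/srv/static").resolve()

def unsafe_read(name: str) -> bytes:
    # Vulnerable: a name like "../../etc/passwd" escapes BASE entirely.
    return (BASE / name).read_bytes()

def safe_read(name: str) -> bytes:
    # Fix: resolve the final path, then verify it is still under BASE.
    target = (BASE / name).resolve()
    if not target.is_relative_to(BASE):  # Python 3.9+
        raise PermissionError(f"path escapes base directory: {name}")
    return target.read_bytes()

# safe_read("../../etc/passwd")  -> raises PermissionError
```

The depressing part is exactly that this check is textbook material, and the same omission keeps getting committed several times a minute.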
RE: https://infosec.exchange/@boblord/116075393614821884
Corollary: Which software companies are CVE Numbering Authorities (CNAs) for their own products, but shouldn't be? And why?
Boost to share trauma.
If you use AI-generated code, you currently cannot claim copyright on it in the US. If you fail to disclose/disclaim exactly which parts were not written by a human, you forfeit your copyright claim on *the entire codebase*.
This means copyright notices and even licenses folks are putting on their vibe-coded GitHub repos are unenforceable. The AI-generated code, and possibly the whole project, becomes public domain.
Source: https://www.congress.gov/crs_external_products/LSB/PDF/LSB10922/LSB10922.8.pdf
@andrewnez not only finite but also already the biggest constraint.
I think people do not realize how deep in the red most hobbyist maintainers (by far the majority, both in code volume and in number of maintainers) already are in terms of resources.
None of the projects I want to work on has gotten a single commit from me in 2025. I did not have the time. And I know they are used to debug production somewhere; I need them myself, and I want them to be better.
But I simply do not have the time.
The first reason, in every study, for projects to go unmaintained is that *the maintainer changed employer and no longer has the time for them*.
Deploying generative AI agents in this way is deeply irresponsible and results in real harms to open source maintainers.
https://sethmlarson.dev/automated-public-shaming-of-open-source-maintainers
@jbm The backlash against Linux kernel advisories is confusing. We wanted transparency; now we have it. More data is always better than a black box. If the new influx of CVEs is breaking your vulnerability management workflow, the problem might be your process, not the advisories.
Thanks for the hard work @gregkh
@adulau @gregkh @bagder "If the new influx of CVEs is breaking your vulnerability management workflow, the problem might be your process, not the advisories."
200% agree.
Oh and BTW: also 200% agree with the kernel policy of "no CVSS; this depends on your use case, which we do not know". CVSS is (mostly) for people who want a list of CVEs in an Excel file so they can drop everything with "CVSS < 7.0". That is compliance, NOT security.
PS:
The _only_ thing that I could ask from the kernel is that it should not be considered a single component (as in "CPE"). It is rather thousands of components/CPEs, one per kernel configuration option. Yes, it would be difficult.
But useful.
At least for me, building everything from source in embedded contexts.
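As a hedged sketch of what per-config-option tracking could look like: the option names and advisory mapping below are invented, but the idea is to parse a build's `.config` and keep only advisories whose affected options are actually enabled:

```python
# Hypothetical sketch: treat each kernel config option as its own
# "component" and filter advisories against one build's .config.
# The advisory-to-option mapping here is invented for illustration.

def enabled_options(config_text: str) -> set[str]:
    """Parse kernel .config text, returning options set to y or m."""
    opts = set()
    for line in config_text.splitlines():
        line = line.strip()
        if line.startswith("CONFIG_") and "=" in line:
            name, value = line.split("=", 1)
            if value in ("y", "m"):
                opts.add(name)
    return opts  # "# CONFIG_X is not set" lines are skipped

def relevant(advisory_configs: set[str], build: set[str]) -> bool:
    """An advisory matters only if the build enables an affected option."""
    return bool(advisory_configs & build)

config = """\
CONFIG_BT=m
CONFIG_NET=y
# CONFIG_INFINIBAND is not set
"""
build = enabled_options(config)
print(relevant({"CONFIG_INFINIBAND"}, build))  # False: not built
print(relevant({"CONFIG_BT"}, build))          # True: bluetooth is built
```

With per-option components, an advisory touching code you never compile drops out of the triage queue automatically instead of clogging the Excel file.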
I keep seeing stories about LLMs finding vulnerabilities. Finding vulnerabilities was never the hard part; the hard part is coordinating the disclosure.
It looks like LLMs can find vulnerabilities at an alarming pace. Humans aren't great at this sort of thing, it's hard to wade through huge codebases, but there are people who have a talent for vulnerability hunting.
This sort of reminds me of the early days of fuzzing. I remember fuzzing libraries and just giving up because they found too many things to actually handle. Eventually things got better and fuzzing became a lot harder. This will probably happen here too, but it will take years.
What about this coordinating thing?
When you find a security vulnerability, you don't open a bug and move on. You're expected to handle it differently. Even before you report it, you need at a minimum a good reproducer and explanation of the problem. It's also polite to write a patch. These steps are difficult, maybe LLMs can help, we shall see.
Then you contact a project, every project will have a slightly different way they like to have security vulnerabilities reported. You present your evidence and see what happens. It's very common for some discussion to ensue and patch ideas to evolve. This can take days or even weeks. Per vulnerability.
So when you hear about some service finding hundreds of vulnerabilities with their super new AI security tool, that's impressive, but the actually impressive part is if they are coordinating the findings. Because the tool probably took an hour or two but the coordination is going to take 10 to 100 times that much time.
The European Open Source Awards ceremony from January 29th, in one loooong recording with yours truly showing up several times.
Most blabbing at 1h24 and onward when @gregkh was up.
@bagder @gregkh and (and I know this is not going to be popular among my peer community of vulnerability management), thank you Greg KH for the best-documented and most automatable CVEs there are today.
And by far.
https://www.youtube.com/watch?v=KXS5KQjWjns&t=5390s