Ugh, I'm glad I've moved on from IT, but I've had many arguments with 'security managers' about bogus Qualys findings. If the CVE is that a user could do a thing in an unexpected way, but they have permission to do that thing, that's a bug, not a vulnerability. IMO it's only a vulnerability if someone who is not allowed to do something can do the forbidden thing.
I used to work in a place where we were constantly looked at by security companies and consultants. The wisdom of that time? Companies don’t hire security firms and consultants to find nothing, so no matter how asinine or impractical a finding is, they’ll still file it, because an empty report is bad for business.
Our security handling was pretty strict, and we had to constantly talk customers off the ledge and kindly inform them that their consultant was blowing crazy swamp gas up their asses. My favorite was a firm that listed every Easter egg as a vulnerability. An open source package could display the list of developers with a secret key combo, and so the customer saw this on their report and raised a stink. The customer had no idea what any of this meant, but their consultant had scared the crap out of them, so we had to layer on a patch to disable the stupid thing.
The author has a point that the NVD has no clue about the security implications of a bug. But can we really expect them to? At a conservative guess, I’d say there are millions of pieces of code floating around. Should the NVD be deeply involved in all of them just to provide the most accurate security score? That’s an impossible ask.
The author also takes issue with the NVD’s stance that they cannot just trust any dude’s email. Is that not a fair take? “Trust me. I’m the maintainer of this project. Do as I say.” Should the NVD now also check each and every email they receive for forgeries? Should they assume that the author of the email would write an assessment in good faith and not downplay a real threat because it looks bad for their project?
“My claims above about this issue can of course be verified by reading the publicly available source code and you can run tests to reproduce my claims.”
(That quote is from another of his blog posts.) Now this is really ludicrous in my opinion. You cannot expect any outsider to read the internals of “over 160,000 lines of feature packed C code (excluding blank lines)” to verify a claim. The NVD simply does not have that kind of time on its hands.
I’m happy I learned something about these magical CVE numbers. My takeaway from this is: The database is good, the scores may not be.
NVD state they task an analyst to review each CVE and assign a score, then do QC to review the analysis before publication.
No one's perfect, but since NVD claim to do QC, they should fix their mistakes. So now let's see how they respond to Daniel Stenberg's objection. The publication and objections are recent, so it's fair to give them a few days to react.
But if they're giving up on doing proper analysis or QC, and are just acting as a vulnerability number registry, then they shouldn't publish CVSS values.
"NVD analysts use the reference information provided with the CVE and any publicly available information at the time of analysis to associate Reference Tags, Common Vulnerability Scoring System (CVSS) v3.1, CWE, and CPE Applicability statements."
"CVSS V3.1 exploitability and impact metrics are assigned based on publicly available information and the guidelines of the specification."
"Analysis results are given a quality assurance check by another more senior analyst prior to being published to the website and data feeds."
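For anyone curious what those "exploitability and impact metrics" actually feed into, here's a rough sketch of the base-score arithmetic taken from the public CVSS v3.1 specification (this is just the formula, not NVD's tooling); the example vector at the bottom is the network/no-privileges/high-impact combination that produces a 9.8:

    import math

    # Base-metric value tables from the CVSS v3.1 specification.
    AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}   # Attack Vector
    AC = {"L": 0.77, "H": 0.44}                          # Attack Complexity
    PR = {                                               # Privileges Required (depends on Scope)
        "U": {"N": 0.85, "L": 0.62, "H": 0.27},
        "C": {"N": 0.85, "L": 0.68, "H": 0.50},
    }
    UI = {"N": 0.85, "R": 0.62}                          # User Interaction
    CIA = {"H": 0.56, "L": 0.22, "N": 0.0}               # Confidentiality / Integrity / Availability

    def roundup(x):
        # Round up to one decimal place, as defined in CVSS v3.1 Appendix A.
        i = round(x * 100000)
        return i / 100000 if i % 10000 == 0 else (math.floor(i / 10000) + 1) / 10.0

    def base_score(av, ac, pr, ui, scope, c, i, a):
        iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
        if scope == "U":
            impact = 6.42 * iss
        else:
            impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
        exploitability = 8.22 * AV[av] * AC[ac] * PR[scope][pr] * UI[ui]
        if impact <= 0:
            return 0.0
        if scope == "U":
            return roundup(min(impact + exploitability, 10))
        return roundup(min(1.08 * (impact + exploitability), 10))

    # AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H  ->  9.8 ("Critical")
    print(base_score("N", "L", "N", "N", "U", "H", "H", "H"))

The arithmetic itself is mechanical; the whole dispute is about which metric values the analyst plugs in.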
“Should the NVD be deeply involved in all of them just to provide the most accurate security score? That’s an impossible ask.”
This is a false dilemma. If the task is truly impossible, that's no justification for attempting it anyway and failing repeatedly, especially when doing so causes negative externalities. Numbered scores with decimal precision are not necessary to the core function of a CVE database, and there are plenty of alternatives that would minimize harm and scale more economically.
You've got a point about the NVD, but this case shows how one could damage the reputation of a product. From the outside, it really looks like bagder didn't care about security; even the 2020 prefix is a bad sign. I'm not sure how the NVD defines CVE scores, but as bagder openly explains, this isn't a security flaw, just a bug he already fixed years ago.
At the last place I worked, we had a cyber security team whose job it was to send us CVEs to investigate. I mean random CVEs that had zero relevance to our systems or the technologies we used. Sometimes they sent us low-level kernel-type CVEs and expected us to explain why we weren’t affected. Mostly it was a waste of time. If they knew how to do their job, they’d have a list of the technologies we used on each project and could filter out the irrelevant stuff instead of wasting developer time.
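That filtering step isn't even hard. Here's a crude sketch of the idea (the record layout below is a made-up stand-in, not the actual NVD feed schema): match incoming CVEs against a per-project technology list and only forward the ones that name something the project actually runs.

    # Sketch: keep only CVEs whose affected products overlap a project's tech list.
    # The dicts below are simplified stand-ins for whatever feed or export you use.

    PROJECT_TECH = {"curl", "openssl", "nginx"}   # hypothetical per-project inventory

    cves = [
        {"id": "CVE-2023-0001", "products": ["linux_kernel"], "cvss": 7.8},
        {"id": "CVE-2023-0002", "products": ["curl", "libcurl"], "cvss": 9.8},
    ]

    def relevant(cve, tech):
        # A CVE is worth forwarding only if it names something we actually run.
        return any(p.lower() in tech for p in cve["products"])

    for cve in cves:
        if relevant(cve, PROJECT_TECH):
            print(f"{cve['id']} (CVSS {cve['cvss']}) -> send to the dev team")
        else:
            print(f"{cve['id']} -> irrelevant, don't forward")

A real setup would match on CPE names and versions rather than bare product strings, but even something this crude would have kept the random kernel CVEs off developers' desks.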