I’m currently taking a graduate seminar on secure coding, and as part of the class we are supposed to write a term paper on a topic of our choice, relating in some way to security, coding, testing, and so on. When given the choice of topic, I’ll often use it as an excuse to do some reading or research that I had been meaning to do earlier but simply hadn’t justified the time for. That’s what I did this time: I chose to write about fuzzing (the process of discovering vulnerabilities in a program by automatically feeding it malformed or random inputs and logging what happens when it breaks).
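For anyone unfamiliar with the technique, a minimal mutation fuzzer can be sketched in a few lines. This is a toy example of my own, not any particular tool mentioned in this post; `toy_parser` is a stand-in for a real target application:

```python
import random

def mutate(data: bytes, n_flips: int = 8, seed: int = None) -> bytes:
    """Return a copy of `data` with a few randomly chosen bytes corrupted.

    The simplest form of mutation fuzzing: start from a known-valid input
    and damage it slightly, hoping the target's parser mishandles it.
    """
    rng = random.Random(seed)
    buf = bytearray(data)
    for _ in range(n_flips):
        pos = rng.randrange(len(buf))
        buf[pos] = rng.randrange(256)  # overwrite with a random byte value
    return bytes(buf)

def fuzz(target, seed_input: bytes, iterations: int = 1000):
    """Feed mutated inputs to `target`, saving any input that crashes it."""
    crashes = []
    for i in range(iterations):
        case = mutate(seed_input, seed=i)  # seeded, so crashes are reproducible
        try:
            target(case)
        except Exception as exc:
            crashes.append((case, exc))  # keep the crashing input for triage
    return crashes

# A toy "parser" standing in for the real target: it blows up whenever
# the mutated input no longer starts with the magic bytes it expects.
def toy_parser(data: bytes):
    if not data.startswith(b"RIFF"):
        raise ValueError("bad magic")
```

In practice the interesting part is everything this sketch omits: running the real target in a debugger, detecting hangs and memory corruption rather than clean exceptions, and triaging which crashes are actually exploitable.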

It’s a very interesting topic, and I have enjoyed reading some of the papers and looking at the different tools that are available. In my outline, however, I wanted a section discussing actual vulnerabilities that have been discovered through fuzzing. The problem is that there seems to be a discrepancy: the security community discusses fuzzing at great length, even detailing methods for fuzzing specific protocols and applications, yet there is relatively little talk of actual vulnerabilities found using it. I know that a lot of recent vulnerabilities have probably been found using fuzzing techniques, but it’s hard to quantify that or pull out good examples when the advisories out there make no mention of the methods used to discover the vulnerability.

So we have full disclosure of a vulnerability’s technical details and impact, but no disclosure of how the researcher got to that point.

I’m not saying that this is something that researchers should feel morally obliged to do (it’s not), and I understand that it would take considerable effort to document the procedure and present it in a useful form. Not to mention cases where internal/proprietary tools are used. Wouldn’t it be nice every once in a while to see it, though?

Even though they’ve done great work and don’t deserve to be picked on, I’m going to pick on Determina Security Research for a moment as an example. They recently published a wonderful write-up on the recent .ANI vulnerability. Now imagine how fascinating it would be to read and follow the story of how they originally found this vulnerability. It has to be more engaging than the current story: “Vendor notification: Dec 20, 2006”. For my own purposes today, reading over this advisory, it looks like it could very well have been the sort of thing a fuzzer could find. It would have been a slam dunk for my paper if they had come out and said something along the lines of “…using the FooFuzz Framework…”. As it stands now, maybe they did, maybe they didn’t :).

Again, I’m not trying to pick on them or call them out; they give presentations at conferences and release tools and advisories for the community. I’m just saying that behind every vulnerability discovery is a story that at least some of us would find very interesting, and it would be nice to see someone tell that story once in a while.

  2 Responses to “Full disclosure… of procedure?”

  1. The Month of Browser Bugs was at least partially the result of fuzzing. HD Moore had been doing some fuzzing of his own, and at CanSecWest last year he met Matthew Murphy, who had developed a nifty CSS fuzzing tool; they did a lightning talk on it, and eventually many of the bugs made it into the MoBB.


    Sorry if I’m not quite coherent. It’s way too late for me to be posting. ;-)

  2. Thanks for the reminder about the MoBB. I remember reading about some of the vulnerabilities being discovered through fuzzing (which makes a *lot* of sense for this sort of thing), and I suppose I promptly forgot about it.

    I can probably work that in as part of the examples in the paper.


© 2012 McGrew Security