Welcome to the first edition of The Hurricane Labs Foundry! I’m Tony Robinson, one of the senior security operations analysts. The goal of this blog is to inform viewers like you(™) about new and innovative information security and Splunk technology around the web, hot information security topics, and various in-house projects and observations that our Splunk and SOC analysts have been working on.
This blog post is in digest format, where I’ll give you a brief description of the topic at hand and links to supporting material, as necessary. So without further ado, let’s get started!
Building Your Own Virtual Machine Lab

Over the past few months I’ve been working on a book — a guide on how to build your own lab network using virtual machines. Originally, I started writing this as a how-to on creating an IPS lab for testing and evaluating Snort or Suricata IDS/IPS rules, which then evolved into an introduction to virtualization. So far, the guide covers instructions on how to build a baseline VM lab for practicing offensive security (e.g. pentesting), defensive security (e.g. malware analysis, reverse engineering, NSM), and/or general IT disciplines (automation, monitoring, etc.).

While the book is far from a final product right now, the vast majority of the material I wanted to include is already there.
Cuckoo and cuckoo-modified

Malware “sandboxing” is the practice of executing malicious files (scripts, executables, etc.) in a controlled environment and observing the results. This is also known as dynamic malware analysis. Cuckoo is an open-source malware sandbox that can utilize VMs from a variety of hypervisors (or even physical machines with the ability to revert to a known-good configuration).

While Cuckoo itself is quite feature-packed, cuckoo-modified is a fork of the original Cuckoo that adds a whole variety of new features. Check it out at: https://github.com/spender-sandbox/cuckoo-modified.
Pastebin Hosting Malicious Executables
It’s no secret that Pastebin and other text sharing websites have been used to host some potentially shady things in the past. These sites have hosted everything from code snippets and raw logs, to private RSA keys, database dumps, malicious scripts, and now malicious executables.
If pastebin-like sites only allow you to host text, then how can they be used to host binary files? The answer is base64 encoding. A recent blog post by the SANS Internet Storm Center states that this practice of using Pastebin to host base64-encoded binaries is on the rise.
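To illustrate the trick, here is a minimal sketch of how any binary blob becomes paste-safe text and back (the payload here is just the first bytes of a Windows PE header, used as a stand-in for a real executable):

```python
import base64

# Any binary payload can be turned into printable text that a paste site
# will happily store. "MZ" is the magic marker of a Windows executable.
payload = b"MZ\x90\x00\x03\x00\x00\x00"

encoded = base64.b64encode(payload).decode("ascii")
print(encoded)  # TVqQAAMAAAA=

# The downloader on the victim side simply reverses the process
# to recover the original executable bytes.
decoded = base64.b64decode(encoded)
assert decoded == payload
```

Incidentally, that `TVqQ` prefix is a useful detection artifact: any base64 blob starting with it is very likely an encoded Windows executable.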
We took the time to write some IDS signatures that should help detect traffic to common pastebin sites, as well as base64-encoded files being requested from those sites. These rules SHOULD work with either Snort or Suricata. As always, it’s recommended to test IDS rules before deploying them to a production IDS/IPS.
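The actual signatures aren’t reproduced here, but a rule in this spirit might look like the following (the sid is a placeholder from the local-rules range, and the `/raw/` URI match is an assumption based on Pastebin’s raw-download URLs — adjust and test before deploying):

```
alert tcp $HOME_NET any -> $EXTERNAL_NET $HTTP_PORTS (msg:"POLICY Raw paste retrieval from pastebin.com"; flow:to_server,established; content:"pastebin.com"; http_header; nocase; content:"/raw/"; http_uri; classtype:policy-violation; sid:1000001; rev:1;)
```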
As you might imagine, you could easily modify the rules, which look for pastebin.com or sprunge.us in the HTTP Host header, to accommodate other pastebin-like sites that serve content over plain HTTP. For now, this is a start for you to experiment with and see whether such rules work for you.
Be aware that if you want to monitor for activity to pastebin sites that offer encryption (e.g. Ghostbin, ZeroBin, GitHub Gist, etc.), you should consider deploying IDS rules that look for DNS queries for these sites instead, since in the vast majority of cases IDS deployments cannot sniff SSL-encrypted traffic.
Oh, and one last thing: these rules are released under the MIT license. Feel free to use them as you see fit, but they are not offered with any guarantee, implied or otherwise.
A DNS Blacklist Project in the Works

DNS blacklisting has been a large component of e-mail security for some time now. If you’re familiar with Spamhaus, the international anti-spam non-profit organization, you are more than likely familiar with their zen.spamhaus.org DNS blacklist service. We are currently working on a lab project to combine the Emerging Threats and abuse.ch blacklists into a similar DNS blacklist service. More details will be made available as the project moves toward general availability!
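For readers unfamiliar with the mechanics, a DNSBL is queried by reversing the octets of the IP address in question and appending the blacklist zone; a listed address resolves to an answer in 127.0.0.0/8, while an unlisted one returns NXDOMAIN. A minimal sketch of building the query name (using the Spamhaus ZEN zone mentioned above; no network lookup is performed here):

```python
def dnsbl_query_name(ip: str, zone: str = "zen.spamhaus.org") -> str:
    """Build the DNS name used to check an IPv4 address against a DNSBL.

    DNSBLs reverse the address's octets and append the blacklist zone;
    resolving the resulting name answers the "is it listed?" question.
    """
    octets = ip.split(".")
    return ".".join(reversed(octets)) + "." + zone

# A lookup for 203.0.113.7 against Spamhaus ZEN becomes:
print(dnsbl_query_name("203.0.113.7"))  # 7.113.0.203.zen.spamhaus.org
```

A mail server would then resolve that name (e.g. with its stub resolver) and reject or score the connection based on the answer.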
Cloudflare and the “Cloudbleed” Vulnerability
Cloudflare, one of the largest CDNs (Content Delivery Networks) in the world, recently experienced a critical memory leak bug that exposed sensitive information for hundreds of major sites across the globe that utilize its CDN services. This vulnerability is being dubbed “Cloudbleed” due to its similarity to “Heartbleed”, the OpenSSL vulnerability discovered in 2014. This class of vulnerability is known as a buffer overrun.
In most cases, when people think buffer overrun/overflow, they think of the classic software attack scenario of writing more data to a memory buffer than that buffer can hold. This usually leads to unpredictable behavior at best, and arbitrary code execution at worst. The type of buffer overrun exhibited by Heartbleed and Cloudbleed, however, involves not memory the attacker can WRITE to, but memory the attacker can READ that they should not have access to. A flaw in how the program tracks a buffer’s size or bounds allows reads to run past the buffer’s intended boundary, resulting in a memory leak: the attacker can read the contents of adjacent memory, which has the potential to contain sensitive data from other HTTP/HTTPS sessions.
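A toy re-creation of the Heartbleed-style over-read may help make this concrete. This is not the actual Cloudflare or OpenSSL code — it is a sketch where process “memory” is modeled as one flat buffer, and a hypothetical echo service trusts a client-supplied length field:

```python
# Toy model of an over-read bug: process "memory" is a single flat buffer
# where the echoed payload sits right next to another session's secret.
MEMORY = bytearray(b"echo-payload:PING" + b"|secret-session-cookie=abc123|")

def heartbeat(payload_offset: int, claimed_len: int) -> bytes:
    # BUG: no check that claimed_len fits inside the actual payload,
    # so the slice reads into adjacent "memory".
    return bytes(MEMORY[payload_offset:payload_offset + claimed_len])

# Honest client: asks for exactly the 4 bytes it sent.
print(heartbeat(13, 4))   # b'PING'

# Malicious client: lies about the length and leaks adjacent memory.
print(heartbeat(13, 34))  # b'PING|secret-session-cookie=abc123|'
```

The fix, as in the real patches, is to validate the claimed length against the buffer’s true bounds before reading.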
It is highly recommended that if you utilize web services that use Cloudflare as their CDN of choice, you consider changing your passwords. At this point, many web services and vendors that utilize Cloudflare have taken the initiative to warn their users of this potentially serious issue, with some going so far as to issue password resets on their own to help mitigate the impact. If you don’t know which of your online services actually utilize Cloudflare’s CDN, operate by the axiom “better safe than sorry” and change the passwords for all of your online services.
First SHA-1 Hash Collision Discovered
Researchers at Google and CWI have demonstrated the first known SHA-1 hash collision. Hashing algorithms read in arbitrary data and return a fixed-length digest of that data, called a hash. These hashes are used to verify the integrity of various kinds of data. A hash collision occurs when two different files produce the same hash digest, which has serious implications for any service that relies on the hashing algorithm to verify data integrity. For example, some years ago the “Flame” APT abused a hash collision in the MD5 hashing algorithm to produce a fake code-signing certificate, signing malicious executables so that they appeared to be trusted by Microsoft.
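An actual SHA-1 collision can’t be reproduced in a few lines (the Google/CWI attack took enormous computation), but the concept is easy to show with a deliberately weak toy hash, where collisions are trivial to find:

```python
import hashlib

def toy_hash(data: bytes) -> int:
    # A deliberately weak "hash": sum of bytes modulo 256. Any reordering
    # of the same bytes collides, since addition is order-independent.
    return sum(data) % 256

a = b"ab"   # 97 + 98 = 195
b = b"ba"   # 98 + 97 = 195 -- different input, same digest: a collision
assert a != b and toy_hash(a) == toy_hash(b)

# Cryptographic hashes are designed to make collisions infeasible to find;
# these two inputs produce completely different SHA-1 digests.
print(hashlib.sha1(a).hexdigest() != hashlib.sha1(b).hexdigest())  # True
```

The SHA-1 result matters precisely because the algorithm was supposed to make finding such a pair computationally infeasible, and the research showed it no longer clears that bar.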
When collisions are found in a hashing algorithm, it is usually considered a death knell for that algorithm and a wakeup call to move away from components that utilize it as soon as feasibly possible. This first SHA-1 collision, while unprecedented, did not take many by surprise: many were already moving to deprecate SSL certificates signed with the SHA-1 algorithm back in 2014, and some cryptography experts were predicting that the first SHA-1 collisions would occur in the near future.
This latest research puts the generation of SHA-1 collisions within feasible and affordable reach of well-financed attackers (e.g. nation-states, organized crime, etc.). If you utilize any systems or services that rely on SHA-1 for integrity checking, it is therefore recommended that you begin moving away from those solutions, or integrate additional integrity checks to mitigate the implications of this research.
Until next time!
Keep an eye out for Volume 2 of the Hurricane Labs Foundry. Follow us on Twitter @hurricanelabs for updates!