New Discoveries for DNS Threat Hunting
Hello, everyone! Some time ago, I put together a blog post on how to make use of DNS (domain name system) logging in your environment, with a focus on hunting based on frequency analysis and using the URL Toolbox's Shannon entropy scoring functions to find high entropy domains. Since then, I've discovered some improvements and other queries I believe may be valuable for your DNS hunting efforts.
This blog post assumes you are using Splunk's Stream DNS sourcetype. You may be able to use other DNS data sources, but your field names and success may vary. Additionally, this blog post, like the one before it, relies heavily on the Splunk URL Toolbox app.
Visualization to detect higher than average number of DNS queries
As mentioned in the previous post, this query uses the timechart function against a data source logging DNS queries to graph the number of DNS queries observed over a period of time.
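As a refresher, a minimal sketch of that query might look something like this (assuming the stream:dns sourcetype and a 10-second span, both of which you should adjust for your environment):

sourcetype="stream:dns"
| timechart span=10s count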
There were some problems with the query that I felt the need to address.
The first and most obvious problem is that in order to make any effective use of this query, you need to have run it over a period of time to establish a baseline of activity so you can actually spot deviations. For example, running the query with a one-week timespan might show a huge increase in DNS queries on Tuesday, which may look unusual, but what if that data was gathered during Microsoft's Patch Tuesday? Naturally, there would be an increased number of queries.
This issue was addressed in the first blog post, but it ties into the second issue with this query–scaling the timespan to account for the massive volume of queries we’re attempting to chart.
Timechart spans/bins and the default 10,000 result limit
Ever see this message when attempting to do a timechart visualization of your DNS queries?

1-1: This isn't a good thing. It means not all of your results are being timecharted; in layman's terms, the warning is telling you there is too much data being collected to chart it all.
I've come to find that increasing the span value can help alleviate this problem. I can't give you a good, hard value to enter because every single network is different, and I have no way of knowing how much DNS data you are trying to dump into a timechart, but it's good to be aware of this limit regardless.
You should adjust the span value until the number displayed in the stats tab is 10,000 or less. For example, on a test server I was given access to, I performed the default query with a span of 10s over a one-week period, and the stats tab stated there were over 60,000 results.

1-2: The spans (data buckets) are too small. There’s too much data being collected for Splunk to create an actual timechart visualization with all of this data being gathered.
So, if 10s spans for 1 week result in a little bit over 60,000 results, what happens if I set span=60s instead?

1-3: Those numbers are a bit more manageable. Sure, the data isn’t quite as granular, but 1 minute spans/buckets for a week’s worth of DNS query data is still pretty granular.
Your goal is to get as close to 10,000 results as possible to avoid truncating huge amounts of data from your timechart, and this second query does that while still giving you a pretty good idea of when DNS traffic spikes may have occurred. The first DNS blog post touched on this, but never really explained why we’re adjusting the timespan, so hopefully this offers a better explanation.

1-4: We can see that roughly every 6 hours or so, there's a spike in DNS traffic, but if we look at Tuesday, June 16th after 4pm, there is a large, sustained spike. You could dig into the small, consistent spikes, or you could pick the single, large spike on Tuesday and see what queries are being performed.
Detection of DNS queries with high levels of entropy (indication of DGA)
In the previous post, I demonstrated a query that attempts to rank and score DNS queries by their Shannon entropy score, a more granular version of a query from the SANS research I looked over:
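The exact query from that post isn't reproduced here, but it was roughly of this shape, using the URL Toolbox's ut_shannon macro against the query field and the old threshold of 2.5 (a simplified sketch, not the verbatim original):

sourcetype="stream:dns"
| `ut_shannon(query)`
| search ut_shannon > 2.5
| stats count by query, ut_shannon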
I've come to find that in most modern networks with tons of DNS traffic, it takes a lot of time to tune down the results and get rid of the excess noise if you lack an exclusion list, such as a top 1 million domains list or something of that nature. Rather than excluding a massive amount of data, we're going to take a different approach to making this query more useful and reduce the volume of data it produces.

1-5: This is likely to return a lot of false positives, because cloud computing providers, content/ad delivery networks, and practically everyone else in between looked at Domain Generation Algorithms and said, "what a novel idea!" The result: 3,100+ statistical results for a week of DNS queries, from a search that has been running all of 10 seconds.
All that said, what’s a better way to formulate this query? Have a look at this:
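Reconstructed from the clause-by-clause breakdown that follows, the improved query looks roughly like this (the record type, TLD list, entropy threshold, and count threshold are all placeholders to tune for your environment, and the ut_shannon macro step assumes the URL Toolbox app is installed):

sourcetype="stream:dns" AND record_type=A AND query IN (*.live, *.tk, *.gq, *.buzz, *.cf, *.ga, *.loan, *.fit, *.ml)
| `ut_shannon(query)`
| search ut_shannon > 3
| eval shannon=round(ut_shannon,2)
| stats count by query, shannon, answer
| where count < 25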
Breakdown of the improved query
This query is a lot more complicated than the DNS query above. Let’s break it down, piece-by-piece.
First things first, we're telling Splunk that within the stream:dns sourcetype, we're only interested in DNS A record requests. Readers can choose to remove the clause AND record_type=A if they want to see all DNS record types. Recall from the last post that there is a query you can use to break down the distribution of DNS query types if necessary:
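A minimal sketch of that distribution query, assuming the same stream:dns sourcetype:

sourcetype="stream:dns"
| stats count by record_type
| sort -count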

1-6: DNS queries broken out by type for a single week (7 days).
Most of the time, your top three DNS record types are going to be A (what IP address belongs to this domain/hostname?), AAAA (IPv6 A records), and PTR (what domain/hostname belongs to this IP address?) records. If users want to query for multiple specific record types at a time, the clause can be changed to AND record_type IN ([different DNS record types here]). For example, to search for A, AAAA, PTR, and TXT records, you could change the clause to AND record_type IN (A, AAAA, PTR, TXT).
The next clause, AND query IN ([list of comma separated domains, TLDs, or substrings that you want to see results for]), defines a list of domains or top-level domains we would like DNS query results returned for. For example, let's say I wanted to return all DNS queries to .live, .tk, .gq, .buzz, .cf, .ga, .loan, .fit, .ml, and a handful of other frequently abused TLDs. That query clause would look like: AND query IN (*.live, *.tk, *.gq, *.buzz, *.cf, *.ga, *.loan, *.fit, *.ml, *.date, *.pw, *.bit, *.club, *.science, *.review, *.top, *.stream, *.download).
Threat hunting is a very manual, very human-centric process. The best thing you can do to enable good hunting is break down huge chunks of data and logs into more manageable portions. Having the power to control how much DNS data comes back for an analyst to sift through is huge because it turns a gargantuan task into something manageable, allowing the analyst to see their progress and not feel overwhelmed.
Instead of attempting to look at all DNS queries, you can choose a single less common top-level domain, something like AND query IN (*.top), or a group of top-level domains as I have specified above. You can also modify the query to look for dynamic DNS queries and sift out high entropy domains from those results as well; maybe try something like AND query IN (*.ddns.net, *.chickenkiller.com, *.3322.org) to dig for dynamic DNS results.
If you are looking for inspiration on what top-level domains to look for, here are some areas to watch out for that can fuel DNS hunting trips:
- Bad Top Level Domain (TLD) lists – Spamhaus provides a regularly updated top 10 list of the worst TLDs with regard to spam operations. SURBL has a list of abused TLDs that is similar to the Spamhaus list.
- Unusual country code TLDs – Consider some of the country code TLDs: .pw, .cn, .ru, .ga, .gq, .ml, .cf, .su, .ly, etc. Queries to these look especially unusual if your enterprise does not have any international customers or locations. Bear in mind, some trendy sites and startups have a habit of using ccTLDs to finish the name of their business, such as bit.ly, bonus.ly, fast.ly, and so on. Websites such as Freenom allow anyone to register a domain name under a variety of country code TLDs for free, no questions asked.
- Sudden increase in domain registrations – Resources such as DN Pedia and whoisds allow you to see newly registered domains. Did you suddenly see a huge jump in .monster domain registrations? It might be a good idea to see if any of your systems have touched that TLD and why.
- Abused dynamic DNS domains – Want to look at dynamic DNS domain queries? Cisco has an interesting (if slightly dated) article on the most abused dynamic DNS domains. A security researcher by the handle “neu5ron” maintains a list of known dynamic DNS domains in a GitHub gist. Alternatively, malware-domains.com provides a list of dynamic DNS domains in a zip file, though it appears the zip file hasn’t been updated since 2018.
The next section we come across that differs from the initial query is search ut_shannon > 3. In a nutshell, this is just setting the Shannon entropy threshold a bit higher than the old value of 2.5. Users can specify a higher or lower number to set the threshold on what this query considers to be a “high entropy” domain name. Setting the number higher excludes more results but may reveal highly unusual domains, while setting the value lower may result in more benign results to sift through.
The next clause of the query is eval shannon=round(ut_shannon,2). This portion of the query uses the eval command's round function against the ut_shannon score, storing the result in a new field named shannon. We tell the round function to round the Shannon entropy score to no more than two decimal places. Rounding to a smaller number of decimal places makes the Shannon entropy score easier to read, and in some cases, it removes duplicate results from this query. For some reason or another, one domain could result in two unique entropy scores, and this was my method of fixing that.

1-7: Here is a version of the query using the URL Toolbox’s ut_shannon function with no rounding.

1-8: Here are the same results with rounding. Much cleaner, right?
The next clause of our query, | stats count by query, shannon, answer, uses the stats function to group and count the results by the query, shannon, and answer fields. Recall that a moment ago we created the shannon field by rounding the ut_shannon field to make the output a bit cleaner. The query field shows us the domain that was queried, while the answer field shows us the IP address returned for the query, if there was an IP address returned at all. The answer field can result in duplicate entries if a domain name is registered to multiple IP addresses.
As a threat hunter and security analyst, I love stats, and the count by [field] option, because it is a hammer by which I can see all threats as a nail. The ability to count the number of times a domain has been queried is invaluable because it can help to establish a pattern or determine if the domain is frequently visited. Chances are if there are hundreds of hits for a particular domain, then it’s probably not malicious–or it’s very malicious and a very noisy beacon.
That brings us to the last clause of our query, | where count < 25. This clause basically says “If there are 25 or more instances of this unique domain and IP address combination, don’t bother displaying it.” As mentioned in the paragraph above, past a certain threshold, a domain can probably be determined to not be malicious. As to whether or not it’s business-relevant or needs to be blocked for other reasons (e.g. acceptable use, etc.), that is entirely up to you. If you’re worried about missing potential results, this clause can be removed, or the threshold could be bumped up a bit higher to 50 or even 100, if necessary.

1-9: The query in action. 20 statistical results in total for a full week of DNS queries. This is much more manageable, wouldn’t you say?
What if I want to just search DNS results and don’t care about domain entropy?
In some cases, you may simply want to see all DNS queries to various gTLDs and ccTLDs, without caring about the entropy score. Maybe you just want to see all the unique and/or less frequently hit results. Well, doing that is extremely easy:
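A minimal sketch, using placeholder TLDs you would swap for your own list:

sourcetype="stream:dns" AND query IN (*.top, *.gq, *.ml)
| stats count by query, answer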
Pretty simple, right? You can use this query to search for domains containing a specific substring, for all results to a specific domain, or for all DNS queries under a specific TLD. For example, let’s say I wanted to look for all instances of .click, .download, and .review domains, but I also wanted to see if there were any hits for *.pastebin.com or *.gist.github.com, or any DNS queries containing dropbox. Here is how that would look:
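The following is a sketch rather than a canonical answer; the exact wildcard patterns (including how you anchor the dropbox substring) are up to you:

sourcetype="stream:dns" AND query IN (*.click, *.download, *.review, *.pastebin.com, *.gist.github.com, *dropbox*)
| stats count by query, answer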
Just like the high entropy domains query above, you can always use a | where clause to filter excessive results, if necessary:
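For example, extending the sketch above with an arbitrary threshold of 50:

sourcetype="stream:dns" AND query IN (*.click, *.download, *.review, *.pastebin.com, *.gist.github.com, *dropbox*)
| stats count by query, answer
| where count <= 50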
This query will limit the results to domains with 50 or fewer queries.
Analyzing DNS queries by length
In the previous post, I identified a query from the SANS whitepaper that could be used to analyze the length of a DNS query and compare it to the average query length, and I made some improvements where I could:
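That earlier version looked roughly like this (a sketch of the general shape rather than the verbatim query):

sourcetype="stream:dns"
| eval qlen=len(query)
| eventstats avg(qlen) as avg stdev(qlen) as stdev
| where qlen>(stdev*3)
| stats count by qlen avg stdev query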
Well, here is a new version of that query that can be used to clean up the output:
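Reconstructed from the clause-by-clause breakdown below (a sketch; the bracketed lists are placeholders for your own include and exclude lists, and the final 30-character threshold is tunable):

sourcetype="stream:dns" AND query IN ([list of domains, TLDs, or substrings to include]) AND NOT query IN ([list of domains, TLDs, or substrings to exclude])
| eval qlen=len(query)
| eventstats avg(qlen) as avg stdev(qlen) as stdev
| where qlen>(stdev*3)
| eval ave=round(avg,2), dev=round(stdev,2)
| stats count by qlen dev ave query
| where qlen > 30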
Let’s break this query down; there’s a lot of ground to cover.
Breakdown of the improved query
Let’s start with the beginning of this query: sourcetype="stream:dns" AND query IN ([list of comma separated domains, TLDs, or substrings that you want to see results for]) AND NOT query IN ([list of comma separated domains, TLDs, or substrings that you do not want to see results for]). There’s a lot going on here, and it’s very important to pay attention, because this query can return an overwhelming amount of data depending on the volume of DNS logging you have. I recommend specifying some of the less commonly used generic or country code top-level domains, known as gTLDs and ccTLDs, respectively.
Here is an example query, with and without filtering any of the results:
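As a hypothetical illustration, an unfiltered run against a handful of ccTLDs might look like the sketch below; the filtered version simply adds an AND NOT query IN (...) clause for known-benign domains and a | where count < 50 at the end (the TLDs shown are placeholders):

sourcetype="stream:dns" AND query IN (*.ga, *.gq, *.ml, *.cf, *.tk)
| eval qlen=len(query)
| eventstats avg(qlen) as avg stdev(qlen) as stdev
| where qlen>(stdev*3)
| eval ave=round(avg,2), dev=round(stdev,2)
| stats count by qlen dev ave query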

1-10: 3,300+ statistical results for one week of DNS queries. This is with no filtering and no reduction of results through a | where clause.

1-11: This is the same query and the same time period, but with benign results removed via the AND NOT query IN ([blah]) clause and the results trimmed with a | where count < 50 clause. Much more manageable, right?
As you can see, filtering out any benign results and/or results with a large number of queries is important to reduce the amount of data to sift through over a period of time. Additionally, I don’t recommend trying to use this query against extremely busy generic top-level domains, such as .com, .net, or .org. This is because there are a huge number of content delivery networks, cloud service providers, and ad delivery networks that use those top-level domains. You’ll spend a lot of time creating a large, complicated filter list in order to keep the results manageable.
The next part of the query, | eval qlen=len(query) | eventstats avg(qlen) as avg stdev(qlen) as stdev | where qlen>(stdev*3), is more or less identical to the initial query from the first blog post. These functions determine the length of each DNS query, calculate the average query length and its standard deviation, and only show results where the query length is greater than three times the standard deviation.
The next portion of our query is | eval ave=round(avg,2), dev=round(stdev,2). This portion uses the round function to round both the average query length and the standard deviation, making the results a bit more presentable.
That brings us to the stats portion of the query: | stats count by qlen dev ave query. This part of the query is mostly unchanged. The only difference is that we’re calling on the ave and dev values we created a moment ago by rounding avg and stdev.
Finally, we come to the | where qlen > 30 portion of the query. Users can make use of the where function to set criteria to restrict how much data comes back. In the case of our query, we are telling Splunk to only show us results where the length of the query is greater than 30 characters. If you wanted to, you could make this a compound statement of criteria.
For example, let’s say you’re still inundated with a massive amount of data, and you want to set a minimum character length, but you also want to discard results with more than 50 queries. That where statement would look like: | where qlen > 30 AND count < 50.

1-12: This is the same sourcetype and timeframe as 1-10 and 1-11, just with a compound where statement further filtering the results.
Bonus round: I have a collection of unusual DNS queries. What do I do now?
Now you have a collection of high entropy domain names. What can you do with it? Research, of course! Here are some actions to consider for enriching the results and determining whether or not they warrant further investigation.
- Compare threat intelligence against the queries – Do you use a threat intelligence service? Does it feature a block list of domains? It might be a good idea to compare that list against the queries you just pulled. If you’re not running TheHive and/or MISP for collecting threat intel data, I’d recommend them.
- Learn more about the pulled domains and IP addresses – We wrote a tool, Machinae, that can be used to poke and prod a couple of different threat intel and reputation sources to determine whether or not an IP address or domain has a negative reputation. Give it a try and use it to help learn more about the domains and IP addresses pulled from your queries.
- Gather more information about IP addresses – In spite of its reputation as a red team tool, Shodan–described as the “hacker search engine”–can be used to gather information about IP addresses. Found a suspicious looking domain? Pull the IP address from the answer column, feed it to Shodan, and see what’s being hosted there.
- Do some DNS recon – DNSdumpster is another interesting tool that can be used to map out related domains and subdomains.
- Enrich DNS data with additional threat intelligence – Don’t be afraid of looking at various threat intelligence sources to further enrich DNS data on your own: PassiveTotal, Any.Run, VirusTotal, public Passive DNS databases, ThreatMiner, Blueliv, AlienVault OTX, and others out there.
Further Recommendations: .onion queries and pastebin-like sites
In addition to the TLD recommendations above, consider poking your network for any “.onion” domain queries. “.onion” is a special top-level domain reserved by the TOR network. It’s not an actual public top-level domain, so if you see any .onion queries on your network it warrants further investigation:
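A simple sketch for hunting those lookups (src_ip here is assumed to be the Stream field identifying the querying host; your field names may vary):

sourcetype="stream:dns" AND query IN (*.onion)
| stats count by src_ip, query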
The same goes for any DNS queries to pastebin.com or other semi-anonymous websites like gist.github.com; pastebin-style websites can be used for data exfiltration, or to easily access “staged” backdoors and/or other hacking tools. If you see DNS queries to a pastebin site, you’ll probably need to pivot away from DNS and switch to netflow and/or proxy logs to see what URLs are being visited and/or what HTTP request methods are being used, in order to determine if text pastes or scripts are being requested FROM the site or posted TO the site for exfiltration.
To do that, it’s a good idea to have timestamps for when the DNS queries were performed to guide your querying of other data sources. Consider querying DNS logs for the specific domain and including the “_time” field in your stats query. For example:
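One way to do that, using pastebin-style domains as placeholders and the assumed src_ip field mentioned above:

sourcetype="stream:dns" AND query IN (*.pastebin.com, *.gist.github.com)
| stats count by _time, src_ip, query, answer

Alternatively, | stats earliest(_time) as first_seen latest(_time) as last_seen count by query, answer will give you first-seen and last-seen timestamps per domain instead of one row per lookup.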
Pivoting
Between what the timestamps and the answer field show you about which external IP address was used for communication, you can pivot to a NetFlow or traffic log data type (e.g. Suricata flow logs, pan:traffic, Cisco ASA, etc.) to confirm how much data was transferred and identify the IP address pairings involved in the connection.
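As a hypothetical example, if the answer field pointed at 203.0.113.10 and you had Palo Alto traffic logs indexed, the pivot might look something like this (the IP is a placeholder, and the field names assume the Palo Alto Networks add-on; adjust for whatever flow or traffic sourcetype you actually have):

sourcetype="pan:traffic" AND dest_ip="203.0.113.10"
| stats sum(bytes_out) as total_bytes_out, sum(bytes_in) as total_bytes_in, count by src_ip, dest_ip, dest_port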
If you have access to an SSL decryption device or an SSL proxy (for example, Blue Coat SG proxies, pan:threat with SSL decryption enabled, etc.), you can also attempt to query the URLs visited. This can lead you to specific GitHub gists and/or Pastebin pastes that were visited.
If you made use of the external threat intelligence sources, you may have been able to grab file hashes and/or related domains to determine if there are other indicators to look for in your data.
Good luck, and happy hunting!
About Hurricane Labs
Hurricane Labs is a dynamic Managed Services Provider that unlocks the potential of Splunk and security for diverse enterprises across the United States. With a dedicated, Splunk-focused team and an emphasis on humanity and collaboration, we provide the skills, resources, and results to help make our customers’ lives easier.
For more information, visit www.hurricanelabs.com and follow us on Twitter @hurricanelabs.
