Welcome to Hurricane Labs Foundry! I’m Tony Robinson, one of the senior security operations analysts at Hurricane Labs. The goal of this blog post is to inform viewers like you(™) about new and innovative Splunk and other technologies around the web, hot information security topics, and various in-house projects that our Splunk and SOC analysts have been working on.
Just like last time, this blog post is in digest format, where I’ll give you a brief description of the topic at hand, and links to supporting material, as necessary. So without further ado, let’s get started!
With thanks to: Tom Kopchak for his contributions.
An article published recently in Vice/Motherboard (Note: Broken links are not being removed as we wish to uphold the intent of this post.) highlights a security nightmare in one of the most unlikely of places: farm tractors. Apparently, tractor hacking is becoming more and more common because manufacturers, such as John Deere, are implementing firmware restrictions that allow parts replacement and service to be performed only by authorized dealers using factory-sanctioned tools. In some cases, nearly any repair or replacement part requires a download from the manufacturer before the tractor will operate.
If this makes you think of restrictions placed on other consumer technology, you’re absolutely correct. So, the next time you look at a tractor, just think of it as a 30,000 pound printer… A printer that’s entirely responsible for the livelihood and viability of a farmer’s business.
This puts farmers in a tough spot. If a tractor breaks down, it’s often not practical for them to take it in for service at an authorized dealer. It’s also not uncommon for a repair to be physically complete, with only the software lockout keeping the tractor out of service. To circumvent this, technicians are using cracked John Deere software purchased on the black market in Ukraine.
This software allows tractor repair technicians to work on the equipment without manufacturer approval. Of course, all of this is in violation of the EULA, which is accepted by the farmer at the time of the equipment purchase (yes, tractors have EULAs now too).
We already have enough problems with malicious firmware for devices designed without security in mind (I’m looking at you, Internet of Things). We’ve seen Internet-connected cars being remotely controlled by unauthorized users, among other things. The last thing we need is to encourage people to install compromised firmware simply to operate a product for the purpose for which it was intended.
To Heavy Forward, or Not To Heavy Forward, That is the Question
For Splunk admins, there has been a lot of discussion recently about whether Heavy Forwarders should be used in between Universal Forwarders and Splunk Indexers. For on-site Splunk deployments, we at Hurricane Labs generally believe that having Universal Forwarders sending data directly to an Indexer, or Index Cluster, is the best option for optimal performance (you might be interested in this Splunk “.conf” conference talk for our justification). For Splunk Cloud deployments, however, your use of Heavy Forwarders in between may vary.
For most Splunk Cloud deployments, we generally recommend having the Universal Forwarders send data through a locally deployed Heavy Forwarder before sending your data to Splunk Cloud. This allows you to rapidly deploy new Splunk applications that parse logs or provide additional analytics for your data. Deploying applications on Splunk Cloud systems requires submitting your desired applications through an approval process. This means it tends to take longer to get the applications you want deployed, and in some cases can result in rejected application requests for any number of reasons.
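As a minimal sketch of that topology (the hostname and stanza names below are placeholders, and port 9997 is simply Splunk’s conventional receiving port), each Universal Forwarder’s outputs.conf points at the local Heavy Forwarder, and the Heavy Forwarder listens for that traffic before forwarding it on to Splunk Cloud — the cloud-side forwarding itself is typically configured by installing the forwarder credentials app that Splunk Cloud provides:

```ini
# outputs.conf on each Universal Forwarder -- send to the local Heavy Forwarder
[tcpout]
defaultGroup = local_hf

[tcpout:local_hf]
server = hf01.example.internal:9997

# inputs.conf on the Heavy Forwarder -- accept Universal Forwarder traffic
[splunktcp://9997]
disabled = 0
```

Because all parsing happens on the Heavy Forwarder in this layout, props.conf and transforms.conf changes can be deployed there immediately, without waiting on the Splunk Cloud application approval process.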
On the other hand, we have found that there are some edge cases where it may make more sense for your Universal Forwarders to transmit directly to Splunk Cloud. We have observed a few cases where users have massive volumes of data they are attempting to transmit to Splunk Cloud through a Heavy Forwarder, but not enough bandwidth to consistently forward it in a reasonable amount of time. This results in large queues of data waiting to be transmitted. In these instances, it may be preferable to let individual Universal Forwarders handle transmitting logs to Splunk Cloud, which prevents high-volume data sources from saturating a single queue used for all of your collected logs.
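In the direct-to-cloud variant, each Universal Forwarder carries the Splunk Cloud forwarding configuration itself — in practice, by installing the Splunk Cloud forwarder credentials app on every forwarder. A hedged sketch of what that app effectively delivers (the endpoint and group name here are illustrative, not a real Splunk Cloud address):

```ini
# outputs.conf on a Universal Forwarder sending straight to Splunk Cloud
# (normally delivered by the Splunk Cloud forwarder credentials app,
#  which also supplies the TLS certificates and related settings)
[tcpout]
defaultGroup = splunkcloud

[tcpout:splunkcloud]
server = inputs.example.splunkcloud.com:9997
```

Since each forwarder maintains its own output queue in this layout, one noisy data source backing up affects only its own host, rather than stalling a shared Heavy Forwarder queue for every log source behind it.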