Getting data into our Splunk system
Most Splunk instances we deal with here at Hurricane Labs are what we call “distributed environments”. This means that there are multiple Splunk instances running as a fleet to serve a greater good. When an environment matures enough to require more than just one server, you will want to make sure you have centralized management from the beginning – trust me, figuring this out before you make the mistake of slapping local configs everywhere is a solid win.
Whether you’re using an all-in-one Splunk server (a single instance handling the indexing, searching, and anything else you want Splunk to do) or a distributed environment, we can all agree on one thing:
You want all the data, from all the sources… or at least some data, from some sources.
So, how do you maintain configurations on multiple endpoints so they all send data to your environment? What happens when you have hundreds of clients and need to change a single line of configuration because you added another indexer? Centralized management for Splunk can extend further than just the Splunk Enterprise infrastructure itself.
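To make that "one line" concrete, here is a sketch of the kind of outputs.conf a forwarder uses to send data to indexers. The hostnames and group name below are placeholders, not from any real environment:

```ini
# outputs.conf (pushed to every forwarder) -- hostnames are examples
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
# Adding a new indexer means appending it to this single line.
# With centralized management, you change it once and push it
# to every client, instead of editing hundreds of servers.
server = idx1.example.com:9997, idx2.example.com:9997, idx3.example.com:9997
```

Port 9997 is the conventional Splunk receiving port, but whatever port your indexers are configured to listen on is what belongs here.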
This is where the concept of Universal Forwarders (UFs) comes into play. These tiny, lightweight versions of Splunk can be installed on almost all recent operating systems, and once installed they can be configured remotely from a Splunk Deployment Server (DS). Each UF acts as a Deployment Client, pointed at the deployment server via settings in a file on the client called deploymentclient.conf. While there are many things you can configure between a deployment server and its clients, we are going to focus on deploying deployment-apps to clients to get data into our Splunk system.
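As a minimal sketch of that relationship, the client side needs only a deploymentclient.conf pointing at the deployment server, and the server side maps clients to deployment-apps in serverclass.conf. The hostnames, server class name, and app name below are examples I've made up for illustration:

```ini
# deploymentclient.conf (on the Universal Forwarder) -- hostname is an example
[deployment-client]

[target-broker:deploymentServer]
# 8089 is Splunk's default management port
targetUri = ds.example.com:8089
```

```ini
# serverclass.conf (on the deployment server) -- names are examples
[serverClass:linux_forwarders]
whitelist.0 = *.example.com

[serverClass:linux_forwarders:app:example_outputs_app]
# Restart the forwarder when this app changes so new settings take effect
restartSplunkd = true
```

With this in place, any app you drop into the deployment server's deployment-apps directory and map in serverclass.conf gets pushed out to matching clients automatically.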
Note: I have also created a video (below) to go along with this blog post for the visual learners of the world. So, feel free to read through the post, watch the screencast, or both!