What the HEC: AWS WAF Logs

Published On: December 26th, 2019

Each year, more companies look to cloud platforms, such as AWS, for their business needs. That shift brings new challenges in Splunk when it comes to ingesting data from these platforms.

As a Managed Services Provider (MSP) focused on Splunk and security, the Hurricane Labs team is often engaged to help get events out of the cloud and into Splunk. Combine two teams that are unfamiliar with each other's product, add the load of jargon and acronyms on both sides, and you're in for a stormy ride. In this series, I'll be your captain and show you how to navigate these stormy situations by bridging the gap between the cloud and Splunk.

Screencast

For those who prefer a more audio-visual style of learning, I’ve created a screencast that walks you through how to configure Splunk and AWS for AWS WAF logs. There’s additional information in this blog that is not covered in the screencast, so make sure to read below as well.

WAF Overview

If you’re looking to secure your front-end web applications, you might be using AWS Web Application Firewall (WAF) to do so.

AWS WAF allows you to set up rules that protect your front end from common web attacks (SQL injection, XSS, etc.). While we won’t be discussing WAF in depth here, we’re definitely interested in getting logs from WAF into Splunk so that we can analyze traffic and integrate with the Web and Intrusion Detection data models in Enterprise Security.

Getting your WAF logs into Splunk is done with an AWS Kinesis Firehose Delivery Stream (that’s a mouthful). The Delivery Stream sends logs over HTTPS to a Splunk server (or load balancer) that listens for the traffic. On the Splunk side, that listener is the HTTP Event Collector, or HEC for short.

HEC works by opening a listening port on the Splunk server(s) and using a token for authentication. Tokens and other settings are handled on a per-input basis, which allows you to differentiate between data sources. We also have a few design considerations to think about when setting this up, because HEC can be used for a lot more than just your AWS WAF data.

HEC Design Considerations

Setup Choices

Note: In each option below, the internet-facing endpoint (the load balancer, or the Splunk server when sending direct) is where a valid, publicly-trusted SSL certificate is required.

HEC Data > Load Balancer > All Splunk Indexers

  • Pros: Resilient + Maximum throughput and performance + Scalable + Splunk best practice
  • Cons: Harder to configure + May cost more

HEC Data > Load Balancer > Set of Splunk Indexers or Heavy Forwarders

  • Pros: Resilient + Good throughput and performance + Scalable
  • Cons: Harder to configure + May cost more

HEC Data > Load Balancer > Single Splunk Enterprise Server (IDX/HF)

  • Pros: Scalable
  • Cons: Harder to configure + Single point of failure + May cost more

HEC Data > Single Splunk Enterprise Server (IDX/HF)

  • Pros: Simple Configuration + May cost less
  • Cons: Single point of failure + May require additional config to be open to the internet

Big Picture

Ultimately, you need to ask yourself a few questions to figure out which design to use:

  • What AWS data will I need to bring into Splunk now and in the future?
  • Will I have any other HEC sources that can use the same infrastructure?
  • Can I tolerate a single point of failure?
  • Do I need scalability?
  • What does my current Splunk environment look like and how will it change?
  • Can I expose my Splunk server(s) to the internet?

The short answer: if you can’t tolerate a single point of failure, need scalability, or have a large amount of data (all HEC sources, not just AWS), then you need a load balancer front end with either an indexer cluster or a set of heavy forwarders on the backend. Something else to consider is that sending to a heavy forwarder instead of directly to an indexer gives significantly lower performance. See this Splunk Blog post: Universal or Heavy, that is the question.

We usually veer toward having a load balancer in front of an indexer cluster. This is Splunk best practice, and it’s not terribly more difficult or expensive compared to the other options. If you don’t set this up the right way the first time around, you are going to put yourself and/or your team into a situation that is difficult to fix as more data sources are added.

Prerequisites

SSL + Internet Connection

Many AWS services, such as Kinesis Firehose, send data over the internet and require a valid SSL connection to the remote target. This means your remote target needs a valid, publicly-trusted certificate AND must be reachable from the internet on your configured HEC or LB listening port. If you’re using a load balancer, the load balancer is the remote target; if you’re sending straight into Splunk, the Splunk server is. It’s often a security best practice to expose a load balancer to the internet rather than the application itself.

Load Balancer Prerequisites

If you’re going to be using a load balancer (AWS or otherwise) to handle the WAF data, then make sure you do the following:

  • Use a Classic Load Balancer if using AWS
  • Set up a valid, publicly-trusted SSL certificate on the LB
  • Enable duration-based sticky sessions and leave cookie expiration disabled
  • Load Balancer Protocol MUST be HTTPS; the port can be whatever you’d like
  • Instance Protocol and port MUST match what you configured in Splunk (HTTPS & 8088 in this demo). Note: For non-AWS load balancers, this is referring to the target port/protocol, commonly referred to as port forwarding
  • You may also want to refer to the AWS Documentation for more information

Networking Prerequisites

Regardless of your setup, you will need to meet the following network requirements:

AWS > Load Balancer Setup

  • Kinesis must be able to reach the load balancer on your configured listening port
  • Load Balancer must be able to reach Splunk HEC server(s) on your configured HEC port/protocol

AWS > Splunk (direct) Setup

  • Kinesis must be able to reach the Splunk HEC server on your configured HEC port/protocol

Remember, Kinesis comes in over the internet; you cannot route it through a VPN or VPC to avoid that. You will either need to allow traffic from all internet sources or restrict it to just the Firehose CIDRs. I’d recommend restricting to the CIDRs. Feel free to check out the documentation on Firehose CIDRs.

Something else to remember is that AWS NACLs and some network appliances are stateless, meaning you may need to explicitly allow return traffic. AWS Security Groups and other network appliances are stateful, so you only need to explicitly allow the incoming traffic.

Also important to note: if you are using a proxy, put in an exception for this traffic so it isn’t intercepted.

Splunk Configuration and Setup

Believe it or not, setting up Splunk is going to be the easiest part of this. You need to do three things:

First, Install the AWS WAF Add-on

The AWS WAF Add-on needs to be installed on your HEC receiver(s) and your Search Head(s). Installation should not require a restart, but you will have to perform a debug/refresh if the app was untarred or unzipped via the CLI. Below are the typical locations for app installations:

SH Cluster: on the deployer in $SPLUNK_HOME/etc/shcluster/apps

Indexer Cluster: on the cluster master in $SPLUNK_HOME/etc/master-apps

Anything else: in $SPLUNK_HOME/etc/apps. Note: Unless you want to use a Deployment Server, in which case the app goes in $SPLUNK_HOME/etc/deployment-apps and gets added to the target’s serverclass (see the sketch below).
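If you do take the deployment server route, a minimal serverclass.conf entry might look like the sketch below. The server class name and whitelist pattern are hypothetical, so match them to your own deployment clients:

[serverClass:hec_receivers]
whitelist.0 = your-hec-receiver*

[serverClass:hec_receivers:app:TA-aws_waf]
restartSplunkd = false
stateOnClient = enabled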

Second, Enable HEC on your HEC receivers

GUI

Settings > Data Inputs > HTTP Event Collector > Global Settings

  • All Tokens: Enabled
  • Enable SSL: Checked
  • HTTP Port Number: Use the default 8088 or a port of your choice. Note: If you’re running Splunk as non-root, you may not be able to use privileged ports such as 443.

CLI

  • Use the app “splunk_httpinput” (in newer versions of Splunk you can put this in any app context) and create a local/inputs.conf in the app:
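A minimal sketch of that local/inputs.conf, mirroring the GUI defaults above (adjust the port if you chose something other than 8088):

[http]
disabled = 0
enableSSL = 1
port = 8088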
  • For SSL, it will use your splunkd certificate by default (path is in serverCert under the [sslConfig] stanza in server.conf). If you need to, you can specify a different SSL certificate under the [http] stanza in inputs.conf.

Lastly, Create the AWS WAF HEC Input

GUI

Settings > Data Inputs > HTTP Event Collector > New Token

  • On the first screen, type a name and enable indexer acknowledgment
  • From the sourcetype dropdown, select “aws:waf”
  • For app context, store in “TA-aws_waf”
  • For the index, I recommend starting with a “test” index, before using your production index. Be sure to come back and change this later if you do!
  • Save the HEC token for setup in AWS.

CLI

  • Within TA-aws_waf create a local/inputs.conf with the following config:
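Something along these lines should work as a sketch. The stanza name after http:// is just the input’s name, so call it whatever you like; when creating the input by hand you supply the token value yourself, so replace the placeholder with a GUID of your own (this is the value you’ll hand to Kinesis later):

[http://aws_waf]
disabled = 0
token = <your-token-guid>
sourcetype = aws:waf
index = test
useACK = 1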
  • Note: When working through the CLI, you may need to either run a debug/refresh or restart Splunk for the changes to take effect.
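Optionally, before heading over to AWS, you can verify the token end to end by sending a test event to HEC yourself. The hostname, port, and token below are placeholders. Because indexer acknowledgment is enabled on this token, each request also needs a channel identifier (any GUID you make up) in the X-Splunk-Request-Channel header:

curl -k "https://your-hec-endpoint:8088/services/collector/event" \
  -H "Authorization: Splunk <your-token-guid>" \
  -H "X-Splunk-Request-Channel: <any-guid>" \
  -d '{"event": "HEC connectivity test"}'

A quick search of your test index should show the event within a few seconds.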

AWS WAF Configuration and Setup

Moving over to AWS is where things may become a bit less straightforward. The way your AWS setup is deployed may be very, very different from mine. I’ll be sticking to the core concepts here, and you can adapt them to fit your environment.

Kinesis Firehose Setup

1. Go into Kinesis and click “Create Delivery Stream”

2. Name the stream “aws-waf-logs-splunk”. Note: Your stream name must begin with “aws-waf-logs-” or this will NOT work.

3. Skip to “Step 3: Choose a destination.” From here, click Splunk, enter your remote endpoint with the listening port you set up, and enter the token that you created in Splunk earlier. Select (or create) an S3 bucket that will store events that fail to send.

4. Click Next to “Step 4: Configure settings.” At the very bottom, under permissions, create a new IAM role or choose an existing one. When this is done, click Allow.

5. Finalize and create the delivery stream

Testing Kinesis Firehose

1. Once your new stream is active from the previous step, click it.

2. Click “Start sending demo data.”

3. Wait a couple of minutes then search “index=test” in Splunk.

4. Once you see an event, click “Stop sending demo data” in AWS. Note: If you do not have data in Splunk at this point, something is not set up correctly. If you click the “Splunk Logs” tab in your Kinesis Delivery Stream, you may get some hints. Otherwise you will need to troubleshoot (see the Splunk and Amazon troubleshooting documentation), likely starting at the network layer.

5. Back in Splunk, change the index from “test” back to your production index. This can be done by following the same CLI/GUI steps from earlier.

Enabling WAF Logging

1. In AWS go over to the “WAF & Shield” service, and find the Web ACL(s) you want to enable logging on.

2. Click the Logging tab, then click the Enable logging button.

3. On the next screen, select the Kinesis Firehose Delivery Stream that you created already. Finish by clicking Create.

4. At this point you should be all set. Wait a few minutes, then start searching in Splunk to make sure your data shows up and that you see field extractions.

Additional Splunk Configuration (optional)

  • Map rule IDs to human-readable names (Search Head). Note: You can adapt this for any other IDs you want to map, too.

$SPLUNK_HOME/etc/apps/TA-aws_waf/lookups/aws_waf_rule_lookup.csv

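For example (the rule IDs and names below are made-up placeholders; use the rule IDs from your own Web ACL, and keep the header row consistent with the transforms and props entries that follow):

rule_id,rule_name
example-rule-id-1,Block SQL Injection
example-rule-id-2,Block XSS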

$SPLUNK_HOME/etc/apps/TA-aws_waf/local/transforms.conf

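A minimal lookup definition pointing at that file:

[aws_waf_rule_lookup]
filename = aws_waf_rule_lookup.csv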

$SPLUNK_HOME/etc/apps/TA-aws_waf/local/props.conf

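And an automatic lookup against the aws:waf sourcetype. The event field name used here (terminatingRuleId) is an assumption; check how the rule identifier is actually extracted in your events and adjust accordingly:

[aws:waf]
LOOKUP-aws_waf_rule_names = aws_waf_rule_lookup rule_id AS terminatingRuleId OUTPUTNEW rule_name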
  • More efficient eventtypes (Search Head). Add your AWS WAF index to the eventtypes named “aws_waf_web” and “aws_waf_ids” (a sketch of the local override follows).
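The change itself is just a local override of those two eventtypes. The lines below are a sketch only; copy the actual search strings from the add-on’s default/eventtypes.conf and prepend your index rather than pasting these verbatim.

$SPLUNK_HOME/etc/apps/TA-aws_waf/local/eventtypes.conf

[aws_waf_web]
search = index=your_waf_index sourcetype=aws:waf

[aws_waf_ids]
search = index=your_waf_index sourcetype=aws:waf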

Conclusion

At this point, hopefully you feel more confident in your ability to get AWS and Splunk working with each other. As a bonus, many of the points presented in this post can be extrapolated to other AWS/HEC data sources as well. You should also have a better sense of how to make choices that prepare you for the future as more data sources switch to HEC.


