Improve Your Game: Leveraging Splunk for Enhanced MMO Performance

Published On: September 19th, 2017

How can parsing help improve your game? Great question.

Parsing in an MMORPG is the act of extracting numbers from the game that are not usually visible to the player. An example use-case for this is “DPS” (damage-per-second), which means the amount of damage your character is dealing to an enemy per second. Beyond the obvious potential for bragging rights, there are practical applications for knowing your DPS along with other metrics such as number of misses, amount healed, or even deaths.

Most high-end content requires a certain amount of DPS to clear, and knowing these numbers can lead to improvement. Access to the data provided by parses is also helpful when trying different pieces of equipment: each piece has its own statistics that are visible to the player, but it isn’t always obvious how they will actually affect the amount of damage (which is better, a faster cast time or a stronger potency, for example?). If you want to improve something you have to be able to measure it, and parsing is a way to do so.

A solution for number comparison. Enter: Splunk

I am one of the few in our group who parses, so I’m constantly asked questions like, “How was my damage that run?” or “Was my damage better than when I’m on Paladin?” I typically have a grasp of what my numbers are and can certainly look to see how others are performing in comparison. However, looking at historical data isn’t easy. Once I close the parsing program I lose that data unless I’ve manually exported it, and even then it is in a fairly ugly XML format, as we’ll see shortly. Digesting this format, if the data is even available to begin with, makes it difficult to tell a teammate how their damage on Warrior tonight compares to the Paladin they used last Tuesday. Naturally, Splunk seemed like a perfect solution.

Why doesn’t everyone simply run a parse?

First, it requires significant effort to set up. A parsing program is typically a third-party piece of software that you must run independent of the game, and it also needs special permissions and firewall access. The installation and configuration alone can be overwhelming enough to discourage the average player. Further, for games that are cross-platform on home consoles such as PlayStation 4, there is no way for those users to install such a program.

Second, parsers often technically break the game’s terms of service. Since they can be considered a third-party add-on, running a parser could lead to punishment. Generally, as long as you aren’t causing any issues there isn’t a problem, but many players refuse to run one because of it (they usually don’t hesitate to ask for their numbers, though).

The final reason, and one we’ve touched on already, is that the results of parsing can be somewhat difficult to manage. The parsing program does a great job of displaying information for the “here and now” situation, but without a dedicated effort to export results and manually review the lengthy XML files it is difficult to perform any sort of historical comparison.

Here’s how Splunk is actually being used

This is where Splunk enters the picture. For those who may be unfamiliar, Splunk is a program that ingests machine data and allows administrators to add knowledge and make it easily searchable. We use this in a very wide variety of applications at Hurricane Labs, and many of us have adopted it at home for personal use, as you’re about to see.

Getting the data into Splunk begins with formatting it as best we can outside of Splunk. The developer of the parsing program I use certainly never intended for this sort of use case… In fact, the ability to export data in XML is noted in the program as something along the lines of “added by user request, I don’t know a practical application for this”. Fortunately we do, since Splunk does a good job of reading XML format out of the box.

Note: Make sure you’re aware of the limitations

There are a few limitations, however, the first being that there is no way to automate the export. You must manually export the results when you’re finished, but since you have a reason to actually do it, this isn’t a hard habit to get into. We can, however, configure Splunk to automatically read in any new files that are exported to a folder, so all we should need to do is export the desired data from the parse program.

The second limitation is based on how the parsing program will let you export results. The program I use creates a new “encounter” for each attempt, then summarizes all attempts for a given encounter. These can either be exported individually, or you can simply export the summary.

For “progression” (when your group is learning strategies for an encounter) it isn’t unusual to see upwards of thirty or so attempts in a night, so exporting these individually doesn’t provide much value that the summary wouldn’t cover.

For encounters that you’ve mastered and are simply trying to optimize, there will likely only be one attempt you’re interested in exporting. Otherwise, the average of all attempts for a given encounter is usually what we’re interested in seeing. This may take a little maintenance throughout the night. For example, if your group accidentally started a fight while a healer was away, you’ll probably want to delete that encounter before exporting to Splunk to avoid an outlier.

Splunk configuration and deployment approach

It may be a good idea to manually read the file into Splunk for the first few runs, just to make sure everything looks as you’d expect. Once you’ve confirmed the configuration is correct within Splunk, a Universal Forwarder package can be installed on the system you play the game/parse on. This will monitor the directory you export the parse results to and automatically read them into Splunk.

Splunk can read XML automatically, and it does a lot of the work for us already. Below is an example raw export:

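The original export isn’t reproduced in this copy of the post, but based on the fields discussed below, a single exported event looks roughly like the following. The exact element names and values are illustrative, reconstructed from the field names used later in this post:

```xml
<Row>
  <EncId>ba5e5ba1</EncId>
  <Ally>T</Ally>
  <Name>YOU</Name>
  <Job>Pld</Job>
  <StartTime>2017-08-27 14:02:11</StartTime>
  <EndTime>2017-08-27 14:09:43</EndTime>
  <EncDPS>1234.56</EncDPS>
  <CritHits>45</CritHits>
  <DirectHitCount>60</DirectHitCount>
  <CritDirectHitCount>12</CritDirectHitCount>
</Row>
```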

When we bring this data in, we define it as XML and Splunk adjusts for it automatically, performing field extractions based on the <Field></Field> format of the log. You will notice that part of this behavior is prepending “Row” to the field names, since it is included in the XML; this is expected.

We’ll clean this up later, but for now we will automatically have these values pulled out for us, such as Row.EncDPS indicating the amount of DPS that was dealt for the encounter. We also want to set up line-breaking (how Splunk determines where to separate events), which keys off the <Row> string, as well as time extraction. We have two timestamps in this log: when the encounter began and when it ended. Since the log is finalized when the encounter ends, that is the value we should use for the event time within Splunk. As such, our props.conf looks like this:

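The props.conf itself isn’t preserved in this copy of the post, but a minimal version matching the description above (break on <Row>, automatic XML field extraction, timestamp from the encounter end time) would look something like this — the sourcetype name and the time format are assumptions:

```ini
[parse:example]
# Each <Row> element is one event
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=<Row>)
# Let Splunk extract fields from the XML automatically
KV_MODE = xml
# Timestamp the event using the encounter's end time
TIME_PREFIX = <EndTime>
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19
```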

This can be configured directly in the configuration files or during the preview when performing a one-time read of the file (for testing). The latter has the advantage of letting you confirm in the preview that events break and the time is extracted correctly.

You’ll notice in the example above that this is the XML output for my character (which is labelled as “YOU” in the program; we’ll fix that later). There isn’t anything in that example that identifies which encounter this was. Encounters are typically identified by the enemy in them. If I were interested in how much damage I did against an enemy named “Mill Bathews”, there’s no way for me to correlate this data other than perhaps knowing that we fought Mill on August 27th at around 2:00 PM. The parsing program I use actually logs the enemy data as a separate event, such as:

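The enemy’s event isn’t shown in this copy either, but it follows the same shape as the player event, with Ally set to “F” and the enemy’s name in the Name field (element names and values are again illustrative):

```xml
<Row>
  <EncId>ba5e5ba1</EncId>
  <Ally>F</Ally>
  <Name>Mill Bathews</Name>
  <EndTime>2017-08-27 14:09:43</EndTime>
  <EncDPS>987.65</EncDPS>
</Row>
```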

Normally, we aren’t concerned with the statistics of the enemy, but the name of the enemy in this log is something we’d definitely want. Luckily, there is the field Row.EncId, which provides a unique identifier that ties these two logs together: for Row.EncId = ba5e5ba1, we have both my character’s data and the enemy’s data, though we just want to pull the name of the enemy from the latter. There is also another handy field here, Row.Ally, with a value of either “T” (for true) or “F” (for false). We can use this field in a search to see only player data or only enemy data, such as:

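A search along those lines, using the sourcetype that appears later in this post, would be:

```spl
sourcetype=parse:example Row.EncId=ba5e5ba1 Row.Ally=F
```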

This would show us just the enemy data from encounter ba5e5ba1, which was between my character and Mill Bathews. I expanded this search to create a table containing Row.EncId and Row.Name and write a lookup table based on the results:

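That search isn’t preserved in this copy, but a version that builds the lookup table as described would look something like the following — the lookup filename enemy_names.csv is an assumption:

```spl
sourcetype=parse:example Row.Ally=F
| table Row.EncId Row.Name
| rename Row.EncId AS EncId, Row.Name AS EnemyName
| outputlookup append=true enemy_names.csv
```

In practice you would likely also dedup on EncId so repeated scheduled runs don’t write duplicate rows into the lookup.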

I scheduled this search to run every few minutes and populate the table with any new EncId values and the enemy names associated with them. Since we have the same EncId in the player data, I created a new field leveraging this lookup table in props.conf:

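The configuration isn’t reproduced in this copy, but an automatic lookup of this sort is defined across transforms.conf and props.conf, roughly as follows (the stanza name and lookup filename are assumptions):

```ini
# transforms.conf -- defines the lookup file
[enemy_names]
filename = enemy_names.csv

# props.conf -- applies the lookup automatically to the sourcetype,
# matching the event's Row.EncId against the lookup's EncId column
[parse:example]
LOOKUP-enemy_name = enemy_names EncId AS Row.EncId OUTPUT EnemyName
```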

Now, even in the player data where the enemy name was absent before, I have a field “EnemyName” populated from the lookup table, matching the Row.EncId value with the appropriate enemy name.

With this information, I can create a basic search that outputs the data we’d be interested in:

sourcetype=parse:example Row.Ally=T
| table Row.EndTime EnemyName Row.Name Row.Job Row.EncDPS Row.CritHits Row.DirectHitCount Row.CritDirectHitCount
| rename Row.EndTime AS "End Time", EnemyName AS "Enemy Name", Row.Name AS "Name", Row.Job AS "Job", Row.EncDPS AS "DPS", Row.CritHits AS "Crit Hits", Row.DirectHitCount AS "Direct Hits", Row.CritDirectHitCount AS "Crit Direct Hits"


Note that we’re using Row.Ally=T in our search to only return player data. We’re not interested in seeing the numbers for the enemy (sorry, Mill).

I can now show our tank how much damage he did in the fight against Mill Bathews last week when he was using his Paladin and how it compares against this week when he used Warrior instead. It’s also interesting to chart overall performance as we become more familiar with encounters, gear up, or try new tactics. Although we can certainly answer specific questions, I wanted to create a dashboard that would automatically present some basic data for each member of the group, and something that could be changed depending on the encounter.

To do this, I created a dashboard with a dropdown input that pulls from the enemy name lookup table created earlier. This passes the token $EnemyName$ into the search to populate the results for each encounter, and a table of results for each attempt is populated in a separate panel for each member (names have been sanitized for privacy purposes):
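The dashboard source isn’t shown in this copy of the post; as a sketch, a dropdown of this kind in Splunk’s Simple XML might look like the following, assuming the lookup filename from earlier (the label text is also an assumption):

```xml
<input type="dropdown" token="EnemyName">
  <label>Encounter</label>
  <!-- Populate the choices from the enemy name lookup built earlier -->
  <search>
    <query>| inputlookup enemy_names.csv | dedup EnemyName | table EnemyName</query>
  </search>
  <fieldForLabel>EnemyName</fieldForLabel>
  <fieldForValue>EnemyName</fieldForValue>
</input>
```

Each panel’s search can then filter on EnemyName="$EnemyName$" to show only attempts against the selected enemy.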

This is an example of the dashboard for multiple runs (one on 8/28 and one on 8/29). Note that the character “D. C.” was absent for our run on the 28th so we only have data from the 29th:

I also developed a few other panels to provide additional metrics such as deaths and healing done. Note that “HPS” represents “heals per second” and is a similar metric to damage per second, but for healing classes:

Once I had the dashboards built, I packaged all the configuration into one app for easy management and, as discussed, set up a separate deployment app to be pushed to my Windows system that runs the parsing program. This app has an input configuration that monitors the specific directory on that system where I export logs from the parsing program, so once I export the data, within a matter of minutes it is searchable within Splunk and populates throughout these dashboards.
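The deployment app’s input configuration isn’t reproduced here; a minimal inputs.conf for the Universal Forwarder would be along these lines (the export path and index name are hypothetical):

```ini
# Monitor the directory the parsing program exports to
[monitor://C:\ParseExports]
# Assign the sourcetype our props.conf expects
sourcetype = parse:example
index = main
disabled = false
```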


Although my example uses a particular parser program from a specific game, the approach can easily be applied to many situations, assuming the data is available. Ingesting performance statistics into Splunk, such as DPS as I’ve done above, provides a great way to archive the data, answer specific questions by searching historical encounters on demand, and create visuals to quickly show fellow group members their personal improvement.


About Hurricane Labs

Hurricane Labs is a dynamic Managed Services Provider that unlocks the potential of Splunk and security for diverse enterprises across the United States. With a dedicated, Splunk-focused team and an emphasis on humanity and collaboration, we provide the skills, resources, and results to help make our customers’ lives easier.

For more information, visit and follow us on Twitter @hurricanelabs.