Visualizing and Enriching the Data
This section explains how to visualize and enrich your data with knowledge.
Using Splunk to Understand Data
This section shows how to explore, categorize, and become familiar with your data. The first step in getting to know data is using Splunk to identify fields in the data. You can think of this like looking at all the pieces in a puzzle, first noticing their shapes. The next step is to categorize data as a preamble to aggregation and reporting. This is like sorting the puzzle pieces into border pieces and interior pieces. The more you are able to understand the data and piece the puzzle together, the clearer the picture becomes. At last, the picture is complete (displaying the data) and you can share it with others.
Identifying Fields: Looking at the Pieces of the Puzzle
Splunk recognizes many common types of data, referred to as source types. If you set the right source type, Splunk can use preconfigured settings to try to identify fields. This is the case with most types of web server logs.
But there are often hidden attributes embedded in machine data. For example, a product category may be part of a URL. By examining events that have certain product categories in their URLs, you can determine response times and error rates for different sections of the site or information about which products are viewed the most.
- Automatic Field Discovery
When you search, Splunk automatically extracts fields by identifying common patterns in the data, such as the presence of an equal sign (=) between a key and a value. For example, if an event contains “… id=11 lname=smith … ” Splunk automatically creates id and lname fields that have the example values. Some fields (such as source, sourcetype, host, _time, and linecount) are always identified.
The Field Discovery switch on the Fields sidebar in the UI turns this behavior on and off. You can see some selected fields (fields that Splunk selected by default or that you have selected), followed by fields that Splunk pulled out because they appeared in multiple events. If you click Edit, Splunk lists more fields that you can add to the group of selected fields. Clicking any field shows you the top values extracted from your search results.
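Once Splunk has discovered a field, you can use it in a search like any other. For example, assuming the id and lname fields extracted from the event above, this search retrieves matching events and tabulates those values (the field names here are only illustrative):

lname=smith | table _time, id, lname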
- Configuring Field Extraction
Configuring field extraction can happen in two ways. You can let Splunk automate the configuration for you by using the Interactive Field Extractor, or you can manually specify the configuration yourself.
1. The Interactive Field Extractor
From any event in your search results, you can start the Interactive Field Extractor (IFX) by selecting Extract Fields from the Event options menu, which you reach by clicking the down arrow to the left of an event in the events list. The IFX appears in another tab or window in your browser. By entering
the kinds of values you seek (such as a client IP address in web logs), Splunk generates a regular expression that extracts similar values (this is especially helpful for the regular expression-challenged among us). You can test the extraction (to make sure it finds the field you’re looking for) and save it with the name of the field.
2. Manually Configuring Field Extraction
From Manager » Fields » Field extractions, you can manually specify regular expressions to extract fields, which is a more flexible but advanced method for extracting fields.
- Search Language Extraction
Another way to extract fields is to use search commands. The most common command for extracting data is the rex command. It takes a regular expression and extracts fields that match that expression. Sometimes the command you use depends on the kind of data from which you’re extracting fields. To extract fields from multiline tabular events (such as command-line output), use multikv, and to extract from XML and JSON data, use spath or xmlkv.
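As an illustration of rex, suppose a product category appears in the URL as a query parameter. A search along these lines could pull it into its own field and report on it (the category_id name and URL format are assumptions; adjust the regular expression to your data):

sourcetype="access*" | rex "category_id=(?<category_id>[^&\s]+)" | top category_id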
Exploring the Data to Understand its Scope
After fields are extracted, you can start exploring the data to see what it tells you. Returning to our analogy of the puzzle, you begin by looking for patterns.
The Search dashboard’s Fields sidebar gives you some immediate information about each field:
- The basic data type of the field, indicated by a character to the left of the field name (“a” is text and “#” is numeric).
- The number of occurrences of the field in the events list (in parentheses following the fieldname).
When you click a field name in the Fields sidebar, a summary of the field pops up, including top values and links to additional charts. You can also narrow the events list to see only events that have a value for that field.
The top command gives you the most common field values, defaulting to the top ten. You can use the top command to answer questions like these:
- What are my top 10 web pages?
sourcetype="access*" | top uri
- Who are the top users for each host?
sourcetype="access*" | top user by host
- What are the top 50 source and destination IP pairs?
…| top limit=50 src_ip, dest_ip
- Exploring Data Using stats
The stats command provides a wealth of statistical information about your data. Here are a few simple ways to use it:
- How many 503 response errors have I had?
sourcetype="access*" status=503 | stats count
- What is the average kilobytes per second for each host?
sourcetype="access*" | stats avg(kbps) by host
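stats can also compute several statistics in one pass. For example, assuming the same kbps field, this search reports the event count along with the average and maximum kilobytes per second for each host:

sourcetype="access*" | stats count, avg(kbps), max(kbps) by host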
Adding Sparklines to the Mix
You can add simple line graphs, known as sparklines, to your tabular results. Sparklines let you quickly visualize a data pattern without creating a separate line chart.
For example, this search uses sparklines to show the number of events over time for each host:
* | stats sparkline count by host
Preparing for Reporting and Aggregation
After you have identified fields and explored the data, the next step is to start understanding what’s going on. By grouping your data into categories, you can search, report, and alert on those categories.
The categories we are talking about are user-defined. You know your data, and you know what you want to get out of your data. Using Splunk, you can categorize your data as many ways as you like. There are two primary ways that Splunk helps with categorizing data: tagging and event types.
Tags are an easy way to label any field value. If the host name bdgpu-login-01 isn’t intuitive, give it a tag, like authentication_server, to make it more understandable. If you see an outlier value in the UI and want to be able to revisit it later and get more context, you might label it follow_up. To tag a field value in the events list, click the down arrow beside the field value you want to tag. You can manage all your tags by going to Manager » Tags.
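Once a tag is in place, you can search on it instead of remembering the raw value. For example, assuming the authentication_server tag above, a search like this counts errors across every host carrying that tag:

tag=authentication_server error | stats count by host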
When you search in Splunk, you start by retrieving events. You implicitly look for a particular kind of event by searching for it. You could say that you were looking for events of a certain type. That’s how “event types” are used: they let you categorize events.
Event types facilitate event categorization using the full power of the search command, meaning you can use Boolean expressions, wildcards, field values, phrases, and so on. In this way, event types are even more powerful than tags, which are limited to field values. But, like tags, how your data is categorized is entirely up to you. You might create event types to categorize events such as where a customer purchased, when a system crashed, or what type of error condition occurred.
Here are some ground rules for a search that defines an event type:
- No pipes. You can’t have a pipe in a search used to create an event type (i.e., it cannot have any search commands other than the implied search command).
- No subsearches. At the end of Chapter 3, we briefly covered the wheel-within-a-wheel that is subsearches; for now, remember that you can’t use them to create event types.
Here’s a simple example. In our ongoing quest to improve our website, we’re going to create four event types based on the status field:
- status="2*" is defined as success.
- status="3*" is defined as redirect.
- status="4*" is defined as client_error.
- status="5*" is defined as server_error.
To create the event type success as we’ve defined it, you would perform a search like this:
sourcetype="access*" status="2*"
Next, choose Create » Event type. The Save As Event Type dialog appears, where you name the event type, optionally assign tags, and click Save.
We create the other three event types in just the same way, and then run a stats count to see the distribution:
sourcetype="access*" | stats count by eventtype
There are relatively few events with an event type of server_error but, nonetheless, they merit a closer look to see if we can figure out what they have in common. Clicking server_error lets us drill down into events of just that event type, where we see 15 events. The server_error events have one rather disturbing thing in common: people are trying to buy something when the server unavailable status occurs. In other words, this is costing us money! It’s time to go talk to the person who administers that server and find out what’s wrong.
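The drilldown a click performs is equivalent to searching on the eventtype field yourself. For example, assuming the server_error event type defined above, a search like this shows which URIs are involved in those failures:

eventtype=server_error | top uri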
Tagging Event Types
Event types can have tags (and so can any field value for that matter). For example, you can tag all error event types with the tag error. You can then add a more descriptive tag about the types of errors relevant to that event type. Perhaps there are three types of errors: one type that indicates early warnings of possible problems, others that indicate an outage that affects the user, and others that indicate catastrophic failure. You can add another tag to the error event types that is more descriptive, such as early_ warning, user_impact, or red_alert, and report on them separately.
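Because tags on event types are themselves searchable, you can then report on each severity separately. For example, assuming the red_alert tag described above, a search like this charts those events over time:

tag::eventtype=red_alert | timechart count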
So far, you have seen several simple ways to visualize data inline:
- Clicking a fieldname in the Fields sidebar to see some quick graphics about a field.
- Using the top and stats search commands.
- Using sparklines to see inline visualizations in the events table results.
Visualizing Data
This section shows you how to create charts and dashboards for visualizing your data.
When you look at a table of data, you may see something interesting. Putting that same data into charts and graphs can reveal new levels of information and bring out details that are hard to see otherwise.
To create charts of your data, after you run a search, select Create » Report.
Splunk offers various chart types: column, line, area, bar, pie, and scatterplots. For example, what product categories are affected most by 404 errors? This search calculates the number of events for each category_id and generates a pie chart:
sourcetype="access*" status="404" | stats count by category_id
The end result of using Splunk for monitoring is usually a dashboard with several visualizations. A dashboard is made up of report panels, which can be a chart, a gauge, or a table or list of search results (often the data itself is interesting to view).
Splunk automatically handles many kinds of drill downs into chart specifics with a simple click on the chart.
The best way to build a dashboard is not from the top down but from the bottom up, one panel at a time. Start by using Splunk’s charting capabilities to show the vital signs in various ways. When you have several individual charts showing different parts of the system’s health, place them onto a dashboard.
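A typical panel search shows one vital sign over time. For example, assuming web access data as in the earlier searches, a search like this could back a panel charting request volume by HTTP status:

sourcetype="access*" | timechart count by status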
To create a dashboard and add a report, chart, or search results to it:
- Run a search that generates a report for a dashboard.
- Select Create » Dashboard panel.
- Give your search a name, and click Next.
- Decide if you want this report to go on a new dashboard or on an existing dashboard. If you’re creating a new dashboard, give it a name. Click Next.
- Specify a title for your dashboard and a visualization (table, bar, pie, gauge, etc.), and when you want the report for the panel to run (whenever the dashboard is displayed or on a fixed schedule).
- Click Next followed by the View dashboard link or OK.
At any time you can view a dashboard by selecting it from the Dashboards & Views menu at the top of the page.
While viewing your dashboard, you can edit it by clicking On in the Edit mode selector and then clicking the Edit menu of any panel you want to edit. From there, you can edit the search that generates a report or how it’s visualized, or delete the panel.
Alerting
An alert is a search that runs periodically with a condition evaluated on the search results. When the condition matches, some actions are executed.
Creating Alerts through a Wizard
To get started with creating an alert, the first step is to search for the condition about which you want to be alerted. Splunk takes whatever search is in the search bar when you create an alert and uses that as a saved search, which becomes the basis for your alert (the “if” in your “if-then”).
With the search you want in the search bar, select Create » Alert. This starts a wizard that makes it easy to create an alert.
Scheduling an Alert
On the Schedule screen of the Create Alerts dialog, you name the alert and specify how you want Splunk to execute it. You can choose whether Splunk monitors for a condition by running a search in real time, by running a scheduled search periodically, or by monitoring in real time over a rolling window.
Here are the use cases for these three options:
- Monitor in real time if you want to be alerted whenever the condition happens.
- Monitor on a scheduled basis for less urgent conditions that you nonetheless want to know about.
- Monitor using a real-time rolling window if you want to know whether a certain number of things happen within a certain time period (it’s a hybrid of the first two options in that sense). For example, trigger the alert as soon as you see more than 20 404s in a 5-minute window.
If you specify that you want to monitor on a schedule or in a rolling window, you must also specify the time interval and the number of results that should match the search to trigger the alert. Alternatively, you could enter a custom condition, which is a search that is executed if the alert condition is met.
The next step is to set limits and specify what to do if the alert is triggered.
What should happen if the alert condition occurs? On the Action screen of the Create Alert dialog, you specify what action or actions you want to take (sending email, running a script, showing triggered alerts in Alerts Manager).
- Send email. Email has the following options:
1. Email addresses. Enter at least one.
2. Subject line. You can leave this as the default, which is Splunk Alert: AlertName. The alert name is substituted for $name$. (This means you could change that subject to: Oh no! $name$ happened.)
3. Include the results that triggered the alert. Click the checkbox to include them either as an attached CSV file or select inline to put them right into the email itself.
- Run a script. You specify the script name, which must be placed in Splunk’s home directory, within /bin/scripts or within an app’s /bin/scripts directory.
- Show triggered alerts in Alert manager, which you reach by clicking Alerts in the upper right corner of the UI.
Tuning Alerts Using Manager
Setting the right limits for alerting usually requires trial and error. It may take some adjustment to prevent too many unimportant alerts or too few important ones. The limits should be tuned so that, for example, one spike in an isolated vital sign doesn’t trigger an alert, but 10 vital signs getting within 10% of their upper limits do.
It’s easy to create alerts quickly using the wizard, but still more options for tuning alerts are available using Manager. Remember that saved searches underlie alerts. As a result, you edit them as you would a saved search. To edit your alert, choose Manager and then Searches and Reports. Select a saved search from the list to display its parameters.
Setting Alert Conditions
Thinking of an alert as an If-Then statement, you have more flexibility on the If side by editing through the Manager. The alert can be set to trigger:
- The number of events, hosts, or sources returned by the search
- A custom condition
Setting Custom Conditions
Although the UI offers flexibility for configuring the most common kinds of alert conditions, sometimes you need even more flexibility in the form of custom conditions.
A custom condition is a search against the results of the alert’s main search. If it returns any results, the condition is true, and the alert is fired.
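For example, suppose the alert’s main search is sourcetype="access*" | stats count by host. A custom condition like this one fires the alert only if some host exceeded 100 events (the threshold is purely illustrative):

search count > 100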
Throttling Alerts
Splunk lets you throttle alerts so that even if they are triggered, they go off only once in a particular time interval. In other words, if the first alert is like the first kernel of popcorn that pops, you don’t want alerts for all those other kernels, which are really related to that first alert. (If popcorn had a second alert, it should go off just after all functional kernels pop and before any of them burn.) This is what throttling does. You can tell Splunk to alert you but not to keep alerting you.
Customizing Actions for Alerting
By writing or modifying scripts, you can set up custom actions for alerts.
For example, you may want an alert to:
- Send an SMS to the people who can help with the problem.
- Create a helpdesk ticket or other type of trouble ticket.
- Restart the server.
All alert actions are based on a script, including sending an email. So is creating an RSS feed. With that in mind, you can see that you can set up alert actions as flexibly as needed using scripting.
The Alerts Manager
Mission control for alerts is the Alert manager. Click Alerts in the upper right corner of the screen to display the Alert manager.
The Alert manager shows the list of most recent firings of alerts (i.e., alert instances). It shows when the alert instance fired, and provides a link to view the search results from that firing and to delete the firing. It also shows the alert’s name, app, type (scheduled, real-time, or rolling window), severity, and mode (digest or per-result). You can also edit the alert’s definition.