14 min read

In this article by Betsy Page Sigman, author of the book Splunk Essentials, we look at Splunk, whose name was inspired by the process of exploring caves, or spelunking. Splunk helps analysts, operators, programmers, and many others explore data from their organizations by obtaining, analyzing, and reporting on it. This multinational company, cofounded by Michael Baum, Rob Das, and Erik Swan, has a core product called Splunk Enterprise, which manages searches, inserts, deletes, and filters, and analyzes machine-generated big data, as well as other types of data. There is also a free version that has most of the capabilities of Splunk Enterprise and is an excellent learning tool.


Understanding events, event types, and fields in Splunk

An understanding of events and event types is important before going further.

Events

In Splunk, an event is not just one of the many local user meetings that are set up between developers to help each other out (although those can be very useful), but also a record of one activity that is recorded in a log file. Each event usually has:

  • A timestamp indicating the date and exact time the event was created
  • Information about what happened on the system that is being tracked

Event types

An event type is a way to allow users to categorize similar events, and it is defined by the user. You can define an event type in several ways; the easiest is through the Splunk Web interface.

One common reason for setting up an event type is to examine why a system has failed. Logins are often problematic for systems, and a search for failed logins can help pinpoint problems. For an interesting example of how to save a search on failed logins as an event type, visit http://docs.splunk.com/Documentation/Splunk/6.1.3/Knowledge/ClassifyAndGroupSimilarEvents#Save_a_search_as_a_new_event_type.

Why are events and event types so important in Splunk? Because without events, there would be nothing to search, of course. And event types allow us to make meaningful searches easily and quickly according to our needs, as we’ll see later.
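As a quick sketch of how an event type is used once it has been saved (the name failed_login here is an assumption for illustration, not a built-in), you can reference it in any later search just as you would a field:

eventtype=failed_login | stats count by host

This counts failed-login events per host without retyping the underlying search each time.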

Sourcetypes

Sourcetypes are also important to understand, as they help define the rules for an event. A sourcetype is one of the default fields that Splunk assigns to data as it comes into the system. It determines what type of data it is, so that Splunk can format it appropriately as it indexes it. This also allows a user who wants to search the data to easily categorize it.

Some of the common sourcetypes are listed as follows:

  • access_combined, for NCSA combined format HTTP web server logs
  • apache_error, for standard Apache web server error logs
  • cisco_syslog, for the standard syslog produced by Cisco network devices (including PIX firewalls, routers, and ACS), usually via remote syslog to a central log host
  • websphere_core, a core file export from WebSphere

(Source: http://docs.splunk.com/Documentation/Splunk/latest/Data/Whysourcetypesmatter)
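For example, once web access logs have been indexed (a hypothetical search; status is a field Splunk extracts from access_combined data), restricting a search to one of these sourcetypes is as simple as naming it:

sourcetype=access_combined status=404

Only events of that sourcetype are scanned, which narrows the search and speeds it up.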

Fields

Each event in Splunk is associated with a number of fields. The core fields of host, source, sourcetype, and timestamp are key to Splunk. These fields are extracted from events at multiple points in the data processing pipeline that Splunk uses, and each field includes a name and a value. The name describes the field (such as userid) and the value says what that field’s value is (susansmith, for example). Some of these fields are default fields, assigned because of where the event came from or what it is. Splunk uses these fields both when data is indexed and when it is searched. For indexing, the default fields added include host, source, and sourcetype. When searching, Splunk is able to select from a bevy of fields that can either be defined by the user or are very basic, such as whether an action results in a purchase (for a website event). Fields are essential for doing the basic work of Splunk – that is, indexing and searching.
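Continuing the userid example above (userid and susansmith are illustrative values here, not Splunk defaults), searching on a field always takes the form of a name=value pair:

userid=susansmith | table _time host userid

This finds every event whose userid field has the value susansmith and displays the timestamp, host, and userid for each.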

Getting data into Splunk

It’s time to spring into action now and input some data into Splunk. Adding data is simple, easy, and quick. In this section, we will use some data and tutorials created by Splunk to learn how to add data:

  1. First, obtain your data by visiting the tutorial data page at http://docs.splunk.com/Documentation/Splunk/6.1.5/SearchTutorial/GetthetutorialdataintoSplunk, which is readily available on Splunk.
  2. Here, download the file tutorialdata.zip. Note that this will be a fresh dataset that has been collected over the last 7 days. Download it, but don’t extract the data from it just yet.
  3. Then log in to Splunk, using admin as the username along with your password.
  4. Once logged in, you will notice that toward the upper-right corner of your screen is the button Add Data, as shown in the following screenshot. Click on this button:

    Button to Add Data

  5. Once you have clicked on this button, you’ll see a screen similar to the following screenshot:

    Add Data to Splunk by Choosing a Data Type or Data Source

  6. Notice here the different types of data that you can select, as well as the different data sources. Since the data we’re going to use is a file, under Or Choose a Data Source, click on From files and directories.
  7. Once you have clicked on this, you can then click on the radio button next to Skip preview, as indicated in the following screenshot, since you don’t need to preview the data now. You then need to click on Continue:

    Preview data

    You can download the tutorial files at: http://docs.splunk.com/Documentation/Splunk/6.1.5/SearchTutorial/GetthetutorialdataintoSplunk

  8. As shown in the next screenshot, click on Upload and index a file, find the tutorialdata.zip file you just downloaded (it is probably in your Downloads folder), and then click on More settings, filling it in as shown in the following screenshot. (Note that you will need to select Segment in path under Host and type 1 under Segment Number.) Click on Save when you are done:

    Can specify source, additional settings, and source type

  9. Following this, you should see a screen similar to the following screenshot. Click on Start Searching; we will look at the data now:

    You should see this if your data has been successfully indexed into Splunk.

  10. You will now see a screen similar to the following screenshot. Notice that the number of events you have will be different, as will the time of the earliest event. At this point, click on Data Summary:

    The Search screen

  11. You should see the Data Summary screen like in the following screenshot. However, note that the Hosts shown here will not be the same as the ones you get. Take a quick look at what is on the Sources tab and the Sourcetypes tab. Then find the most recent data (in this case 127.0.0.1) and click on it.

    Data Summary, where you can see Hosts, Sources, and Sourcetypes

  12. After clicking on the most recent data, which in this case is bps-T341s, look at the events contained there.

    Later, when we use streaming data, we can see how the events at the top of this list change rapidly.

  13. Here, you will see a listing of events, similar to those shown in the following screenshot:

    Events lists for the host value

  14. You can click on the Splunk logo in the upper-left corner of the web page to return to the home page. Under Administrator at the top-right of the page, click on Logout.

Searching Twitter data

We will start here by doing a simple search of our Twitter index, which is automatically created by the app once you have enabled Twitter input (as explained previously). In our earlier searches, we used the default index (which the tutorial data was downloaded to), so we didn’t have to specify the index we wanted to use. Here, we will use just the Twitter index, so we need to specify that in the search.

A simple search

Imagine that we wanted to search for tweets containing the word coffee. We could use the code presented here and place it in the search bar:

index=twitter text=*coffee*

The preceding code searches only your Twitter index and finds all the places where the word coffee is mentioned. (Note that the text field search is not case sensitive, so tweets with either “coffee” or “Coffee” will be included in the search results.)

The asterisks are included before and after the text “coffee” because otherwise we would only get events where just “coffee” was tweeted – a rather rare occurrence, we expect. In fact, when we search our indexed Twitter data without the asterisks around coffee, we got no results.
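To make the difference concrete, compare these two searches (both assume the Twitter index described above):

index=twitter text=coffee
index=twitter text=*coffee*

The first matches only events whose text field is exactly “coffee”; the second matches “coffee” appearing anywhere within the tweet text.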

Examining the Twitter event

Before going further, it is useful to stop and closely examine the events that are collected as part of the search. The sample tweet shown in the following screenshot shows the large number of fields that are part of each tweet. The > was clicked to expand the event:

A Twitter event

There are several items to look closely at here:

  1. _time: Splunk assigns a timestamp to every event. This is done in UTC (Coordinated Universal Time) format.
  2. contributors: The value for this field is null, as are the values of many Twitter fields.
  3. retweeted_status: Notice the {+} here; in the following event list, you will see there are a number of fields associated with this, which can be seen when the + is selected and the list is expanded. This is the case wherever you see a {+} in a list of fields:

    Various retweet fields

In addition to those shown previously, there are many other fields associated with a tweet. The 140-character (maximum) text field that most people consider to be the tweet is actually only a small part of the data collected.
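For instance (assuming the app indexes the raw tweet JSON so that nested fields such as user.screen_name are automatically extracted, as dot-notation fields), you can report on any of these other fields directly:

index=twitter | top limit=10 user.screen_name

This lists the ten accounts that appear most often in your indexed tweets.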

The implied AND

If you want to search on more than one term, there is no need to add AND as it is already implied. If, for example, you want to search for all tweets that include both the text “coffee” and the text “morning”, then use:

index=twitter text=*coffee* text=*morning*

If you don’t specify text= for the second term and just put *morning*, Splunk assumes that you want to search for *morning* in any field. Therefore, you could get that word in another field in an event. This isn’t very likely in this case, although coffee could conceivably be part of a user’s name, such as “coffeelover”. But if you were searching for other text strings, such as a computer term like log or error, such terms could be found in a number of fields. So specifying the field you are interested in would be very important.
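In other words, the following search (a sketch of what you would get by omitting the second text=) matches *morning* in any field of the event, not just the tweet text:

index=twitter text=*coffee* *morning*

Bare terms like *morning* are matched against the whole event, which is why naming the field matters for common strings.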

The need to specify OR

Unlike AND, you must always specify the word OR. For example, to obtain all events that mention either coffee or morning, enter:

index=twitter text=*coffee* OR text=*morning*

Finding other words used

Sometimes you might want to find out what other words are used in tweets about coffee. You can do that with the following search:

index=twitter text=*coffee* | makemv text | mvexpand text | top 30 text

This search first searches for the word “coffee” in a text field, then creates a multivalued field from the tweet, and then expands it so that each word is treated as a separate piece of text. Then it takes the top 30 words that it finds.

You might be asking yourself how you would use this kind of information. This type of analysis would be of interest to a marketer, who might want to use words that appear to be associated with coffee in composing the script for an advertisement. The following screenshot shows the results that appear (1 of 2 pages). From this search, we can see that the words love, good, and cold might be words worth considering:Splunk Essentials

Search of top 30 text fields found with *coffee*

When you do a search like this, you will notice that there are a lot of filler words (a, to, for, and so on) that appear. You can do two things to remedy this: increase the limit for top so that you can see more of the words that come up, or rerun the search excluding some of the filler words. You may also notice that “Coffee” (with a capital C) is listed (on the unshown second page) separately from “coffee”. The reason for this is that while the search itself is not case sensitive (thus both “coffee” and “Coffee” are picked up when you search on “coffee”), putting the text fields through the makemv and mvexpand processes ends up distinguishing on the basis of case. Here is the search rewritten to exclude some of the filler words:

index=twitter text=*coffee* | makemv text | mvexpand text |
search NOT text="RT" AND NOT text="a" AND NOT text="to" AND
NOT text="the" | top 30 text
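The other remedy, raising the limit, uses the top command’s limit option (100 here is an arbitrary choice):

index=twitter text=*coffee* | makemv text | mvexpand text | top limit=100 text

This returns the 100 most frequent words instead of the default 10, making it easier to spot meaningful terms past the filler.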

Using a lookup table

Sometimes it is useful to use a lookup file to avoid having to use repetitive code. It would help us to have a list of all the small words that might be found often in a tweet just by the nature of each word’s frequent use in language, so that we might eliminate them from our quest to find words that would be relevant for use in the creation of advertising. If we had a file of such small words, we could use a command indicating not to use any of these more common, irrelevant words when listing the top 30 words associated with our search topic of interest. Thus, for our search for words associated with the text “coffee”, we would be interested in words like “dark”, “flavorful”, and “strong”, but not words like “a”, “the”, and “then”.

We can do this using a lookup command. There are three types of lookup commands:

lookup: Matches a value of one field with a value of another, based on a .csv file containing the two fields. Consider a lookup table named lutable that contains fields for machine_name and owner, and what happens when the following code snippet is used after a preceding search (indicated by . . . |):

. . . | lookup lutable owner

Splunk will use the lookup table to match each owner’s name with its machine_name and add the machine_name to each event.

inputlookup: Returns all fields in the .csv file as results. If the following code snippet is used, both machine_name and owner would be returned:

. . . | inputlookup lutable

outputlookup: Outputs search results to a lookup table. The following code saves the results of the preceding search directly into a table it creates:

. . . | outputlookup newtable.csv

The command we will use here is inputlookup, because we want to reference a .csv file we can create that will include words that we want to filter out as we seek to find possible advertising words associated with coffee. Let’s call the .csv file filtered_words.csv, and give it just a single text field, containing words like “is”, “the”, and “then”. Let’s rewrite the search to look like the following code:

index=twitter text=*coffee*
| makemv text | mvexpand text
| search NOT [inputlookup filtered_words | fields text ]
| top 30 text

Using the preceding code, Splunk will search our Twitter index for *coffee*, and then expand the text field so that individual words are separated out. Then it will look for words that do NOT match any of the words in our filtered_words.csv file, and finally output the top 30 most frequently found words among those.
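A minimal sketch of filtered_words.csv (the exact word list is up to you) is just a header row naming the text field, followed by one word per line:

text
a
an
the
to
RT

The header must match the field name referenced in the subsearch (fields text above) so that the NOT clause can compare like with like.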

As you can see, the lookup table can be very useful. To learn more about Splunk lookup tables, go to http://docs.splunk.com/Documentation/Splunk/6.1.5/SearchReference/Lookup.

Summary

In this article, we have learned more about how to use Splunk to search data and create reports and dashboards. Splunk Enterprise Software, or Splunk, is an extremely powerful tool for searching, exploring, and visualizing data of all types. Splunk is becoming increasingly popular as more and more businesses, both large and small, discover its ease of use and usefulness. Analysts, managers, students, and others can quickly learn how to use the data from their systems, networks, web traffic, and social media to make attractive and informative reports.

This is a straightforward, practical, and quick introduction to Splunk that should have you making reports and gaining insights from your data in no time.
