5 min read

Log management and analysis for many organizations start and end with just three letters: E, L, and K, which stand for Elasticsearch, Logstash, and Kibana. In today's tutorial, we will learn about analyzing CloudTrail logs with the ELK stack.

This tutorial is an excerpt from the book AWS Administration – The Definitive Guide – Second Edition, written by Yohan Wadia. This book will help you enhance your application delivery skills with the latest AWS services while also securing and monitoring the environment workflow.

These three open source products are used together to aggregate, parse, search, and visualize logs at enterprise scale:

  • Logstash: Logstash is primarily used as a log collection tool. It is designed to collect, parse, and store logs originating from multiple sources, such as applications, infrastructure, operating systems, tools, services, and so on.
  • Elasticsearch: With all the logs collected in one place, you now need a query engine to filter and search through them for particular events. That’s exactly where Elasticsearch comes into play. Elasticsearch is a search server based on the popular information retrieval library, Lucene. It provides a distributed, full-text search engine along with a RESTful web interface for querying your logs (a minimal query sketch follows this list).
  • Kibana: Kibana is an open source data visualization plugin, used in conjunction with Elasticsearch. It provides you with the ability to create and export your logs into various visual graphs, such as bar charts, scatter graphs, pie charts, and so on.
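Because Elasticsearch exposes a RESTful interface, you can query it from just about any HTTP client. Here is a minimal sketch using Python's requests library; the endpoint, the index pattern (logs-*), and the field name (message) are placeholders for illustration, not values from this tutorial:

import requests

ES_ENDPOINT = "http://localhost:9200"  # placeholder Elasticsearch endpoint

# Full-text search for the word "error" across any index matching logs-*
query = {"query": {"match": {"message": "error"}}}
resp = requests.get(ES_ENDPOINT + "/logs-*/_search", json=query)
resp.raise_for_status()

for hit in resp.json()["hits"]["hits"]:
    print(hit["_source"])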

You can easily download and install each of these components in your AWS environment, and get up and running with your very own ELK stack in a matter of hours! Alternatively, you can leverage AWS's own Elasticsearch service! Amazon Elasticsearch is a managed ELK service that enables you to quickly deploy, operate, and scale an ELK stack as per your requirements. Using Amazon Elasticsearch, you eliminate the need for installing and managing the ELK stack’s components on your own, which in the long run can be a painful experience.
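As a rough illustration of what "managed" means here, the sketch below creates a small Elasticsearch domain through boto3 instead of the console. The domain name, version, and sizing are assumptions for demonstration only; the CloudFormation template used later in this tutorial handles all of this for you:

import boto3

es = boto3.client("es")  # the Amazon Elasticsearch Service API client

response = es.create_elasticsearch_domain(
    DomainName="demo-logs",      # hypothetical domain name
    ElasticsearchVersion="5.5",  # assumed version; pick one your tools support
    ElasticsearchClusterConfig={
        "InstanceType": "t2.micro.elasticsearch",  # demo-sized; use larger in production
        "InstanceCount": 1,
    },
    EBSOptions={"EBSEnabled": True, "VolumeType": "gp2", "VolumeSize": 10},
)
print(response["DomainStatus"]["ARN"])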

For this particular use case, we will leverage a simple CloudFormation template that will essentially set up an Amazon Elasticsearch domain to filter and visualize the captured CloudTrail Log files, as depicted in the following diagram:

[Diagram: AWS CloudTrail log analysis architecture]
  1. To get started, log in to the CloudFormation dashboard, at https://console.aws.amazon.com/cloudformation.
  2. Next, select the option Create Stack to bring up the CloudFormation template selector page. Paste http://s3.amazonaws.com/concurrencylabs-cfn-templates/cloudtrail-es-cluster/cloudtrail-es-cluster.json into the Specify an Amazon S3 template URL field, and click on Next to continue.
  3. In the Specify Details page, provide a suitable Stack name and fill out the following required parameters:
    • AllowedIPForEsCluster: Provide the IP address that will have access to the nginx proxy and, in turn, have access to your Elasticsearch cluster. In my case, I’ve provided my laptop’s IP. Note that you can change this IP at a later stage, by visiting the security group of the nginx proxy once it has been created by the CloudFormation template.
    • CloudTrailName: Name of the CloudTrail that we set up at the beginning of this chapter.
    • KeyName: You can select a key-pair for obtaining SSH access to your nginx proxy instance.
    • LogGroupName: The name of the CloudWatch Log Group that will act as the input to our Elasticsearch cluster.
    • ProxyInstanceTypeParameter: The EC2 instance type for your proxy instance. Since this is a demonstration, I’ve opted for the t2.micro instance type. Alternatively, you can select a different instance type as well.
  4. Once done, click on Next to continue. Review the settings of your stack and hit Create to complete the process (a scripted equivalent of these steps is sketched below).
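If you prefer scripting the stack creation, here is a hedged boto3 equivalent of the steps above. All parameter values are placeholders; the Capabilities argument is an assumption, in case the template creates IAM resources:

import boto3

cfn = boto3.client("cloudformation")

cfn.create_stack(
    StackName="cloudtrail-es-demo",  # hypothetical stack name
    TemplateURL=(
        "http://s3.amazonaws.com/concurrencylabs-cfn-templates/"
        "cloudtrail-es-cluster/cloudtrail-es-cluster.json"
    ),
    Parameters=[
        # placeholder values -- use the ones you would enter in the console
        {"ParameterKey": "AllowedIPForEsCluster", "ParameterValue": "203.0.113.10"},
        {"ParameterKey": "CloudTrailName", "ParameterValue": "my-trail"},
        {"ParameterKey": "KeyName", "ParameterValue": "my-keypair"},
        {"ParameterKey": "LogGroupName", "ParameterValue": "CloudTrail/DefaultLogGroup"},
        {"ParameterKey": "ProxyInstanceTypeParameter", "ParameterValue": "t2.micro"},
    ],
    Capabilities=["CAPABILITY_IAM"],  # assumption: the template creates IAM resources
)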

The stack takes a good few minutes to deploy, as a new Elasticsearch domain is created. You can monitor the progress of the deployment by either viewing the CloudFormation stack's Outputs tab or, alternatively, by viewing the Elasticsearch dashboard. Note that, for this deployment, a default t2.micro.elasticsearch instance type is selected for deploying Elasticsearch. You should change this value to a larger instance type before deploying the stack for production use.
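To script the wait as well, a boto3 waiter can poll until the stack reports CREATE_COMPLETE (using the placeholder stack name from the earlier sketch):

import boto3

cfn = boto3.client("cloudformation")

# Blocks until the stack reaches CREATE_COMPLETE (raises on failure/rollback)
cfn.get_waiter("stack_create_complete").wait(StackName="cloudtrail-es-demo")
print("Stack created")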

You can view information on Elasticsearch Supported Instance Types at http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/aes-supported-instance-types.html.

With the stack deployed successfully, copy the Kibana URL from the CloudFormation Outputs tab:

"KibanaProxyEndpoint": "http://<NGINX_PROXY>/_plugin/kibana/"

The Kibana UI may take a few minutes to load. Once it is up and running, you will need to configure a few essential parameters before you can actually proceed. Select Settings and hit the Indices option. Here, fill in the following details:

  • Index contains time-based events: Enable this checkbox to index time-based events
  • Use event times to create index names: Enable this checkbox as well
  • Index pattern interval: Set the Index pattern interval to Daily from the drop-down list
  • Index name or pattern: Type [cwl-]YYYY.MM.DD into this field (you can verify that matching indices exist using the sketch after this list)
  • Time-field name: Select the @timestamp value from the drop-down list
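Before creating the index pattern, you can sanity-check that daily cwl-YYYY.MM.DD indices actually exist by listing them through Elasticsearch's _cat API. A small sketch with Python's requests library; replace the placeholder proxy address before running:

import requests

ES_ENDPOINT = "http://<NGINX_PROXY>"  # placeholder: your nginx proxy address

# List every index matching cwl-*, one per day if the log export is working
resp = requests.get(ES_ENDPOINT + "/_cat/indices/cwl-*", params={"format": "json"})
resp.raise_for_status()

for index in resp.json():
    print(index["index"], index["docs.count"])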

Once completed, hit Create to complete the process. With this, you should now start seeing logs populate in Kibana's dashboard. Feel free to have a look around and try out the various options and filters provided by Kibana:

[Screenshot: Kibana dashboard]

Phew! That was definitely a lot to cover! But wait, there’s more!

AWS provides yet another extremely useful governance and configuration management service, AWS Config. To know more, check out the book AWS Administration – The Definitive Guide – Second Edition.

Read Next

The Cloud and the DevOps Revolution

Serverless computing wars: AWS Lambdas vs Azure Functions
