Developing IoT applications has never been easier thanks to the cloud. All the major cloud vendors provide IoT tools; in this tutorial you’ll learn how to build a complete IoT application with AWS IoT.

This article is an excerpt from the book, Enterprise Internet of Things Handbook, written by Arvind Ravulavaru. 

End-to-end communication

To get started with AWS IoT, you need an AWS account. If you don’t have an AWS account, you can create one at https://aws.amazon.com/.

Once you have created your account, you can log in and navigate to the AWS IoT Console.

Setting up the IoT Thing

Once you are on the AWS IoT Console page, make sure you have selected a region that is close to your location. I have selected the US East (N. Virginia) region as shown in the following screenshot:

  1. Now, click on the Get started button in the center of the page. From the side menu, navigate to Manage | Things and you should see a screen as shown here:
  2. Next, click on the Register a thing button and you should see a screen as shown here:
  3. Right now, we are going to onboard only one Thing. So, click on Create a single thing.
  4. On the next screen, we will start filling in the form by naming the device. I have called my device Pi3-DHT11-Node. You can give your Thing any name, but do remember to update the code where applicable.
  5. Next, we are going to apply a Type. Since this is our first device, we are going to create a new Type. Click on Create a thing type and fill in the form as shown in the following screenshot:
  6. If we have different types of devices, such as motion sensors, door sensors, or DHT11 sensors, we can create a Type to easily group our nodes.
  7. Click on the Create thing type button and this will create a new type; select that value as the default.
  8. Next, we are going to add this device to a group of Raspberry Pi 3 DHT11 nodes. You can group your devices as per your requirements and classification.
  9. Now, click on Create group and create it with the following values:

We have added two attributes to identify this group easily, as shown in the previous screenshot.

  10. Click on the Create thing group button; this will create a new group. Select that value as the default. These are the only things we are going to set up in this step. Your form should look something like this:
  11. At the bottom of the page, click on the Next button.
  12. Now, we need to create a certificate for the Thing. AWS uses certificate-based authentication and authorization to create a secure connection between the device and AWS IoT Core.
For more information, refer to MQTT Security Fundamentals: X509 Client Certificate Authentication: https://www.hivemq.com/blog/mqtt-security-fundamentals-x509-client-certificate-authentication.

The current screen should look as shown here:

  13. Under One-click certificate creation (recommended), click on the Create certificate button. This will create three certificates as illustrated in the following screenshot:
Do not share these certificates with anyone. They are effectively the username and password your device uses to post data to AWS IoT.
  14. Once the certificates are created, download the following: the certificate for this Thing, its public key, its private key, and the root CA for AWS IoT.
My keys start with db80b0f635. Yours may start with something else.
  15. Once you have downloaded the keys, click on the Activate button.
  16. Once the activation is successful, click on Attach a policy. Since we did not create any policies, you will see a screen similar to what is shown here:

No issues with that. We will create a policy manually and associate it with this certificate in a moment.

  17. Finally, click on the Register Thing button and a new Thing named Pi3-DHT11-Node will be created. Click on Pi3-DHT11-Node and you should see something like this:
  18. We are not done with the setup yet. We still need to create a policy and attach it to the certificate to proceed.
  19. Navigate back to the Things page, and from the side menu on this page, select Secure | Policies:
  20. Now, click on Create a policy and fill in the form as demonstrated in the following screenshot:
In the previously demonstrated policy, we are allowing any kind of IoT operation to be performed, on any resource, by the device that uses this policy. This is a dangerous setup, especially in production; however, it is okay for learning purposes.
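For reference, the resulting policy document looks roughly like this; it is the permissive allow-everything policy described above, so tighten the Action and Resource fields for anything beyond experimentation:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "iot:*",
      "Resource": "*"
    }
  ]
}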
  21. Click on the Create button and this will create a new policy. Now, we are going to attach this policy to a certificate.
  22. Navigate to Secure | Certificates and, using the options available at the top-right of the certificate we created, we are going to attach the policy:
  23. Click on Attach policy on the previous screen and select the policy we have just created:
  24. Now, click on Attach to complete the setup.

With this, we are done with the setup of a Thing.

In the next section, we are going to use Node.js on Raspberry Pi 3 as a client to send data to AWS IoT.

Setting up Raspberry Pi 3 on the DHT11 node

Now that we have our Thing set up in AWS IoT, we are going to complete the remaining steps on the Raspberry Pi so it can send data.

Things needed

You will need the following hardware to set up Raspberry Pi 3 as the DHT11 node: a Raspberry Pi 3, a DHT11 temperature and humidity sensor, and a few jumper wires (plus, optionally, a breadboard).

If you are new to the world of Raspberry Pi GPIO interfacing, take a look at the Raspberry Pi GPIO Tutorial: The Basics Explained video tutorial on YouTube, at https://www.youtube.com/watch?v=6PuK9fh3aL8.

Connect the DHT11 sensor to Raspberry Pi 3 as shown in the following diagram:

Next, start Raspberry Pi 3 and log in to it. On the desktop, create a new folder named AWS-IoT-Thing. Open a new Terminal and cd into this folder.

Setting up Node.js

If Node.js is not installed, please refer to the following steps:

  1. Open a new Terminal and run the following commands:
$ sudo apt update
$ sudo apt full-upgrade
  2. This will upgrade all the packages that need upgrades. Next, we will install Node.js; this book uses the Node.js 7.x line:
$ curl -sL https://deb.nodesource.com/setup_7.x | sudo -E bash -
$ sudo apt install nodejs
  3. This will take a moment to install, and once the installation is done, you should be able to see the versions of Node.js and npm by running the following commands:
$ node -v
$ npm -v

Developing the Node.js Thing app

Now, we will set up the app and write the required code:

  1. From the Terminal, once you are inside the AWS-IoT-Thing folder, run the following command:
$ npm init -y
  2. Next, we will install aws-iot-device-sdk (http://npmjs.com/package/aws-iot-device-sdk) from NPM. This module has the required client code to interface with AWS IoT. Execute the following command:
$ npm install aws-iot-device-sdk --save
  3. Next, we will install rpi-dht-sensor (https://www.npmjs.com/package/rpi-dht-sensor) from NPM. This module will help in reading the DHT11 temperature and humidity values. Let’s run the following command:
$ npm install rpi-dht-sensor --save
  4. Your final package.json file should look like this:
{
 "name": "AWS-IoT-Thing",
 "version": "1.0.0",
 "main": "index.js",
 "scripts": {
  "test": "echo \"Error: no test specified\" && exit 1"
 },
 "keywords": [],
 "author": "",
 "license": "ISC",
 "description": "",
 "dependencies": {
  "aws-iot-device-sdk": "^2.2.0",
  "rpi-dht-sensor": "^0.1.1"
 }
}
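
Before moving on to the main application, you can optionally sanity-check the DHT11 wiring with a tiny standalone script. This is only a sketch (the test-dht11.js file name is mine, not from the book); it relies on the same rpi-dht-sensor API that the main code below uses:
// test-dht11.js -- quick check that the DHT11 on GPIO2 can be read
var rpiDhtSensor = require('rpi-dht-sensor');

var dht = new rpiDhtSensor.DHT11(2); // data pin wired to GPIO2

var readout = dht.read();
console.log('Temperature:', readout.temperature.toFixed(2), 'C');
console.log('Humidity:', readout.humidity.toFixed(2), '%');
Run it with sudo node test-dht11.js; a failed read typically shows up as 0.00 values, in which case re-check the wiring and try again.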

Now that we have the required dependencies installed, let’s continue:

  1. Create a new file named index.js at the root of the AWS-IoT-Thing folder. Next, create a folder named certs at the root of the AWS-IoT-Thing folder and move the four certificates we have downloaded there. Your final folder structure should look something like this:
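In text form, the layout is roughly the following; the certificate file names come from my download (the db80b0f635 prefix) and will differ in your case:
AWS-IoT-Thing
├── certs
│   ├── db80b0f635-certificate.pem.crt
│   ├── db80b0f635-private.pem.key
│   ├── db80b0f635-public.pem.key
│   └── RootCA-VeriSign-Class 3-Public-Primary-Certification-Authority-G5.pem
├── node_modules
├── index.js
└── package.json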
  2. Open index.js in any text editor and update it as shown in the following code snippet:
var awsIot = require('aws-iot-device-sdk'); 
var rpiDhtSensor = require('rpi-dht-sensor'); 
 
var dht = new rpiDhtSensor.DHT11(2); // `2` => GPIO2 
const NODE_ID = 'Pi3-DHT11-Node'; 
const INIT_DELAY = 15; 
const TAG = '[' + NODE_ID + '] >>>>>>>>> '; 
 
console.log(TAG, 'Connecting...'); 
 
var thingShadow = awsIot.thingShadow({ 
  keyPath: './certs/db80b0f635-private.pem.key', 
  certPath: './certs/db80b0f635-certificate.pem.crt', 
  caPath: './certs/RootCA-VeriSign-Class 3-Public-Primary-Certification-Authority-G5.pem', 
  clientId: NODE_ID, 
  host: 'a1afizfoknpwqg.iot.us-east-1.amazonaws.com', 
  port: 8883, 
  region: 'us-east-1', 
  debug: false, // optional to see logs on console 
}); 
 
thingShadow.on('connect', function() { 
  console.log(TAG, 'Connected.'); 
  thingShadow.register(NODE_ID, {}, function() { 
    console.log(TAG, 'Registered.'); 
    console.log(TAG, 'Reading data in ' + INIT_DELAY + ' seconds.'); 
    setTimeout(sendData, INIT_DELAY * 1000); // wait for `INIT_DELAY` seconds before reading the first record 
  }); 
}); 
 
function fetchData() { 
  var readout = dht.read(); 
  var temp = readout.temperature.toFixed(2); 
  var humd = readout.humidity.toFixed(2); 
 
  return { 
    "temp": temp, 
    "humd": humd 
  }; 
} 
 
function sendData() { 
  var DHT11State = { 
    "state": { 
      "desired": fetchData() 
    } 
  }; 
 
  console.log(TAG, 'Sending Data..', DHT11State); 
 
  var clientTokenUpdate = thingShadow.update(NODE_ID, DHT11State); 
  if (clientTokenUpdate === null) { 
    console.log(TAG, 'Shadow update failed, operation still in progress'); 
  } else { 
    console.log(TAG, 'Shadow update success.'); 
  } 
 
  // keep sending the data every 30 seconds 
  console.log(TAG, 'Reading data again in 30 seconds.'); 
  setTimeout(sendData, 30000); // 30,000 ms => 30 seconds 
} 
 
thingShadow.on('status', function(thingName, stat, clientToken, stateObject) { 
  console.log('received ' + stat + ' on ' + thingName + ':', stateObject); 
}); 
 
thingShadow.on('delta', function(thingName, stateObject) { 
  console.log('received delta on ' + thingName + ':', stateObject); 
}); 
 
thingShadow.on('timeout', function(thingName, clientToken) { 
  console.log('received timeout on ' + thingName + ' with token:', clientToken); 
});

In the previous code, we are using awsIot.thingShadow() to connect to the AWS Thing we created. We pass the following options to awsIot.thingShadow():

  • keyPath: This is the location of private.pem.key, which we have downloaded and placed in the certs folder.
  • certPath: This is the location of certificate.pem.crt, which we have downloaded and placed in the certs folder.
  • caPath: This is the location of RootCA-VeriSign-Class 3-Public-Primary-Certification-Authority-G5.pem, which we have downloaded and placed in the certs folder.
  • clientId: This is the name of the Thing we have created in AWS IoT, Pi3-DHT11-Node.
  • host: This is the URL to which the Thing needs to connect. This URL is specific to your account and region. To get the host, navigate to your Thing and click the Interact tab as shown in the following screenshot:
The highlighted URL is the host.
  • port: We are using TLS-secured MQTT, so the port will be 8883.
  • region: The region in which you have created the Thing. You can find this in the URL of the page. For example, https://console.aws.amazon.com/iot/home?region=us-east-1#/thing/Pi3-DHT11-Node.
  • debug: This is optional. If you want to see some logs rolling out from the module during execution, you can set this property to true.

We will connect to the preceding host with our certificates. In the thingShadow.on('connect') callback, we call thingShadow.register() to register. We need to register only once per connection. Once the registration is completed, we will start to gather the data from the DHT11 sensor and, using thingShadow.update(), we will update the shadow. In the thingShadow.on('status') callback, we will get to know the status of the update.

Save the file and execute the following command:

$ sudo node index.js

We should see something like this:
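The exact readings will differ, but given the console.log() calls in index.js, the output has roughly this shape (the temperature and humidity values here are placeholders):
[Pi3-DHT11-Node] >>>>>>>>>  Connecting...
[Pi3-DHT11-Node] >>>>>>>>>  Connected.
[Pi3-DHT11-Node] >>>>>>>>>  Registered.
[Pi3-DHT11-Node] >>>>>>>>>  Reading data in 15 seconds.
[Pi3-DHT11-Node] >>>>>>>>>  Sending Data.. { state: { desired: { temp: '24.00', humd: '41.00' } } }
[Pi3-DHT11-Node] >>>>>>>>>  Shadow update success.
[Pi3-DHT11-Node] >>>>>>>>>  Reading data again in 30 seconds.
received accepted on Pi3-DHT11-Node: { ... }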

As you can see from the previous logs on the console screen, the device first connects and then registers itself. Once the registration is done, we wait for 15 seconds before transmitting the first record. After that, we send a new reading every 30 seconds and continue the process.

We are also listening for status and delta events to make sure that what we have sent has been successfully updated.

Now, if we head back to the AWS IoT Thing page in the AWS Console and click on the Shadow tab, we should see the last record that we sent reflected here:
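The shadow state shown there contains the desired values we keep publishing, along these lines (since the device never updates the reported state, a matching delta section may also appear; the values will match your latest reading):
{
  "desired": {
    "temp": "24.00",
    "humd": "41.00"
  }
}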

Underneath that, you can see the metadata of the document, which should look something like:

{ 
 "metadata": { 
  "desired": { 
   "temp": { 
    "timestamp": 1517888793 
   }, 
   "humd": { 
    "timestamp": 1517888793 
   } 
  } 
 }, 
 "timestamp": 1517888794, 
 "version": 16 
}

The preceding JSON represents the metadata of the shadow document, assuming that the Thing keeps sending the same structure of data at all times: for each attribute, it shows the timestamp at which that attribute was last updated, and the version increments with every update.

Now that the Thing is sending data, let us actually read the data coming from this Thing.

Reading the data from the Thing

There are two approaches to getting the shadow data:

  • Using the MQTT(S) protocol, by publishing to the shadow get topic and listening on the accepted/rejected topics
  • Using the Device Shadow REST API over HTTPS

The following example uses the MQTT(S) approach to fetch the shadow data. Whenever we want to fetch the data of a Thing, we publish an empty packet to the $aws/things/Pi3-DHT11-Node/shadow/get topic. Depending on whether the request was accepted or rejected, we will get a response on $aws/things/Pi3-DHT11-Node/shadow/get/accepted or $aws/things/Pi3-DHT11-Node/shadow/get/rejected, respectively.

For testing the data fetch, you can either use the same Raspberry Pi 3 or another computer. I am going to use my MacBook as a client that is interested in the data sent by the Thing.

In my local machine, I am going to create the following setup, which is very similar to what we have done in Raspberry Pi 3:

  1. Create a folder named test_client. Inside the test_client folder, create a folder named certs and get a copy of the same four certificates we have used in Raspberry Pi 3.
  2. Inside the test_client folder, run the following command on the Terminal:
$ npm init -y
  3. Next, install the aws-iot-device-sdk module using the following command:
$ npm install aws-iot-device-sdk --save
  4. Create a file inside the test_client folder named index.js and update it as shown here:
var awsIot = require('aws-iot-device-sdk'); 
 
const NODE_ID = 'Pi3-DHT11-Node'; 
const TAG = '[TEST THING] >>>>>>>>> '; 
 
console.log(TAG, 'Connecting...'); 
 
var device = awsIot.device({ 
  keyPath: './certs/db80b0f635-private.pem.key', 
  certPath: './certs/db80b0f635-certificate.pem.crt', 
  caPath: './certs/RootCA-VeriSign-Class 3-Public-Primary-Certification-Authority-G5.pem', 
  clientId: NODE_ID, 
  host: 'a1afizfoknpwqg.iot.us-east-1.amazonaws.com', 
  port: 8883, 
  region: 'us-east-1', 
  debug: false, // optional to see logs on console 
}); 
 
device.on('connect', function() { 
  console.log(TAG, 'device connected!'); 
  device.subscribe('$aws/things/Pi3-DHT11-Node/shadow/get/accepted'); 
  device.subscribe('$aws/things/Pi3-DHT11-Node/shadow/get/rejected'); 
  // Publish an empty packet to topic `$aws/things/Pi3-DHT11-Node/shadow/get` 
  // to get the latest shadow data on either `accepted` or `rejected` topic 
  device.publish('$aws/things/Pi3-DHT11-Node/shadow/get', ''); 
}); 
 
device.on('message', function(topic, payload) { 
  payload = JSON.parse(payload.toString()); 
  console.log(TAG, 'message from ', topic, JSON.stringify(payload, null, 4)); 
});
Update the device information as applicable.
  5. Save the file and run the following command on the Terminal:
$ node index.js

We should see something similar to what is shown in the following console output:
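The payload depends on the latest shadow contents, but the message logged for the accepted topic has roughly this shape (trimmed here; the metadata section, version, and readings will differ):
[TEST THING] >>>>>>>>>  Connecting...
[TEST THING] >>>>>>>>>  device connected!
[TEST THING] >>>>>>>>>  message from  $aws/things/Pi3-DHT11-Node/shadow/get/accepted {
    "state": {
        "desired": {
            "temp": "24.00",
            "humd": "41.00"
        }
    },
    "version": 16,
    "timestamp": 1517888794
}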

This way, any client that is interested in the data of this Thing can use this approach to get the latest data.
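
If you would rather use the HTTPS approach mentioned earlier, the same shadow document can be fetched with a plain REST call authenticated by the device certificate. The following is only a sketch; swap in your own endpoint and certificate file names (the request is allowed by the permissive policy we attached earlier):

$ curl --tlsv1.2 \
    --cert ./certs/db80b0f635-certificate.pem.crt \
    --key ./certs/db80b0f635-private.pem.key \
    --cacert './certs/RootCA-VeriSign-Class 3-Public-Primary-Certification-Authority-G5.pem' \
    'https://a1afizfoknpwqg.iot.us-east-1.amazonaws.com:8443/things/Pi3-DHT11-Node/shadow'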

You can also use an MQTT library in the browser itself to fetch the data from a Thing. But do keep in mind this is not advisable as the certificates are exposed. Instead, you can have a backend microservice that can achieve the same for you and then expose the data via HTTPS.

With this, we conclude the section on posting data to AWS IoT and fetching it. In the next section, we are going to work with rules.

Building the dashboard

Now that we have seen how a client can read the data of our Thing on demand, we will move on to building a dashboard, where we show data in real time.

For this, we are going to use Elasticsearch and Kibana.

Elasticsearch

Elasticsearch is a search engine based on Apache Lucene. It provides a distributed, multi-tenant capable, full-text search engine with an HTTP web interface and schema-free JSON documents.

Read more about Elasticsearch at http://whatis.techtarget.com/definition/ElasticSearch.

Kibana

Kibana is an open source data visualization plugin for Elasticsearch. It provides visualization capabilities on top of the content indexed on an Elasticsearch cluster. Users can create bar, line, and scatter plots, or pie charts and maps on top of large volumes of data.

Read more about Kibana at https://www.elastic.co/products/kibana.

As we have seen in the architecture diagram, we are going to create a rule in AWS IoT. The job of the rule is to listen to an AWS topic and then send the temperature and humidity values from that topic to an Elasticsearch cluster that we are going to create using the AWS Elasticsearch Service (https://aws.amazon.com/elasticsearch-service/).

The cluster we are going to provision on AWS will also have a Kibana setup for easy visualizations.

We are going to use Kibana and build the visualization and then a dashboard from that visualization.

We are going to use Elasticsearch and Kibana for a basic use case.

The reason I have chosen Elasticsearch over building a simple web application that displays charts is that, with Elasticsearch and Kibana, we can do much more than just build dashboards. This is where IoT analytics comes in.

We are not going to explore IoT analytics per se, but this setup should give you an idea and get you started.

Setting up Elasticsearch

Before we proceed further, we are going to provision a new Elasticsearch cluster.

Do note that the cluster we are going to provision is under the free tier and has limited resources. Read more about the limitations at https://aws.amazon.com/about-aws/whats-new/2017/01/amazon-elasticsearch-service-free-tier-now-available-on-t2-small-elasticsearch-instances/. Neither Packt Publishing nor I am in any way responsible for any billing that happens as a by-product of running any example in this book. Please read the pricing terms carefully before continuing.

To set up Elasticsearch, head over to the Amazon Elasticsearch Service console, or use the Services menu on the AWS console page to reach the Amazon Elasticsearch Service console page. You should see a screen similar to what is shown here:

  1. Click on the Create a new domain button and fill in the next screen, as shown in the following screenshot:
  2. Click on the Next button. Under the Node configuration section, update it as illustrated here:
If you are planning to run bigger queries, I would recommend checking Enable dedicated master and setting it up.
  3. Under the Storage configuration section, update it as illustrated in the following screenshot:
  4. Leave the remaining sections as the defaults and click Next to continue.
  5. On the Setup access screen, under Network configuration, select Public access, and for the access policy, select Allow open access to the domain and accept the risks associated with this configuration.
We are using this setup to quickly work with services and to not worry about credentials and security. DO NOT use this setup in production.
  6. Finally, click Next, review the selections we have made, and click Confirm.

Once the process has started, it will take up to 10 minutes for the domain to be provisioned.

Once the domain is provisioned, we should see a similar screen to what is illustrated here:

Here we have our Endpoint, to which data will be sent for indexing. And we also have the URL for Kibana.

When we click on the Kibana URL and it loads, we will be presented with the following screen:

The previous screen will change once we start indexing data. In the next section, we are going to create an IAM role.

Setting up an IAM Role

Now that we have Elasticsearch and Kibana up and running, we will get started with setting up an IAM role. We will use this role in the IoT rule to put data into Elasticsearch:

  1. To get started, head over to https://console.aws.amazon.com/iam. From the side menu, click on the Roles link and you should see a screen like this:
  2. Select AWS service from the top row and then select IoT. Click Next: Permissions to proceed to the next step:
  3. All the policies needed for AWS IoT to access resources across AWS are preselected. The one we are interested in is under AWSIoTRulesActions and the Elasticsearch Service; all we need here is the ESHttpPut action (see the sample policy after this list).
  4. Finally, click the Next: Review button and fill in the details as shown in the following screenshot:
  5. Once we click Create role, a new role with the name provided will be created.
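If you later want to scope the role down to just what this rule needs, rather than keeping the broader preselected policies, the permission policy can be as small as the following sketch (the account ID 123456789012 is a placeholder, and the domain name matches the pi3-dht11-dashboard domain we created earlier):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "es:ESHttpPut",
      "Resource": "arn:aws:es:us-east-1:123456789012:domain/pi3-dht11-dashboard/*"
    }
  ]
}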

Now that we have Elasticsearch up and running, as well as the IAM role needed, we will create the IoT Rule to index incoming data into Elasticsearch.

Creating an IoT Rule

To get started, head over to AWS IoT and to the region where we have registered our Thing:

  1. From the menu on the left-hand side, click on Act and then click the Create a rule option. On the Create rule screen, we will fill in the details with the following values:
  • Name: ES_Indexer
  • Description: Index AWS IoT topic data to Elasticsearch service
  • SQL version: 2016-03-23
  • Attribute: cast(state.desired.temp as decimal) as temp, cast(state.desired.humd as decimal) as humd, timestamp() as timestamp
  • Topic filter: $aws/things/Pi3-DHT11-Node/shadow/update
  • Condition: (leave empty)
  2. Once we fill the form in with the information mentioned in the previous list, we should see the rule query statement as demonstrated here:
SELECT cast(state.desired.temp as decimal) as temp, cast(state.desired.humd as decimal) as humd, timestamp() as timestamp FROM '$aws/things/Pi3-DHT11-Node/shadow/update'
  3. This query selects the temperature and humidity values from the $aws/things/Pi3-DHT11-Node/shadow/update topic and casts them to a decimal or float data type. Along with that, we select the timestamp.
  4. Now, under Select one or more actions, click Add action, and then select Send a message to the Amazon Elasticsearch Service. Click on Configure action.
  5. On the Configure action screen, fill in the details as illustrated here:
  • Domain Name: pi3-dht11-dashboard
  • Endpoint: will get auto-selected
  • ID: ${newuuid()}
  • Index: sensor-data
  • Type: dht11
  • IAM role name: iot-rules-role
  6. Once the details mentioned in the list are filled in, click on the Create action button to complete the setup.
  7. Finally, click on Create rule and a new rule should be created.

Elasticsearch configuration

Before we continue, we need to configure Elasticsearch to create a mapping. The timestamp that we generate in AWS IoT is of the type long. So, we are going to create a mapping field named datetime with the type date.

From a command line with cURL (https://curl.haxx.se/) present, execute the following command:

    curl -XPUT 'https://search-pi3-dht11-dashboard-tcvfd4kqznae3or3urx52734wi.us-east-1.es.amazonaws.com/sensor-data?pretty' -H 'Content-Type: application/json' -d'
    {
      "mappings" : {
        "dht11" : {
          "properties" : {
            "timestamp" : { "type" : "long", "copy_to": "datetime" },
            "datetime" : {"type": "date", "store": true }
          }
        }
      }
    }
    '

Replace the URL of Elasticsearch in the previous command as applicable. This will take care of creating a mapping when the data comes in.
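
To confirm that the index and mapping were created, you can read the mapping back (again, replace the endpoint with your own):

    curl 'https://search-pi3-dht11-dashboard-tcvfd4kqznae3or3urx52734wi.us-east-1.es.amazonaws.com/sensor-data/_mapping?pretty'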

Running the Thing

Now that the entire setup is done, we will start pumping data into Elasticsearch:

  1. Head back to Raspberry Pi 3, which was sending the DHT11 temperature and humidity data, and run our application. We should see the data being published to the shadow topic:
  2. Head over to the Elasticsearch page, to the pi3-dht11-dashboard domain, and to the Indices tab, and you should see the screen illustrated here (you can also verify this from the command line, as shown after this list):
  3. Next, head over to the Kibana dashboard. Now we will configure the Index pattern as shown in the following screenshot:
  4. Do not forget to select the time filter; use the datetime field we mapped earlier. Click on Create and you should see the fields on the next screen, as shown here:
  5. Now, click on the Discover tab on the left-hand side of the screen and you should see the data coming in, as shown in the following screenshot:
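For the command-line check mentioned in step 2, a quick count query against the index will do (replace the endpoint with your own):

    curl 'https://search-pi3-dht11-dashboard-tcvfd4kqznae3or3urx52734wi.us-east-1.es.amazonaws.com/sensor-data/_count?pretty'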

Building the Kibana dashboard

Now that we have the data coming in, we will create a new visualization and then add that to our dashboard:

  1. Click on the Visualize link from the side menu and then click on Create a Visualization. Then, under Basic Charts, select Line.
  2. On the Choose search source screen, select the sensor-data index and this will take us to the graph page.
  3. In the Metrics section of the Data tab, set the following as the first metric (for example, the Average of the temp field):
  4. Click on the Add metrics button and set up the second one (for example, the Average of the humd field) as follows:
  5. Now, under Buckets, select X-Axis and select the following (a Date Histogram on the datetime field):
  6. Click on the Play button above this panel and you should see a line chart as follows:

This is our temperature and humidity data over a period of time. As you can see, there are plenty of options to choose from regarding how you want to visualize the data:

  7. Now, click on the Save option at the top of the menu on the page and name the visualization Temperature & Humidity Visualization.
  8. Now, using the side menu, select Dashboard, then Create Dashboard, click on Add, and select Temperature & Humidity Visualization.
  9. Now, click on Save from the top-most menu on the page and name the dashboard Pi3 DHT11 dashboard.

Now we have our own dashboard, which shows the temperature and humidity metrics:

This wraps up the section on building a visualization using an IoT rule, Elasticsearch, and Kibana.

With this, we have seen the basic features and implementation process needed to work with the AWS IoT platform.

If you found this post useful, do check out the book,  Enterprise Internet of Things Handbook, to build a robust IoT strategy for your organization.

Read Next:

5 reasons to choose AWS IoT Core for your next IoT project

Should you go with Arduino Uno or Raspberry Pi 3 for your next IoT project?

How to run and configure an IoT Gateway

3 COMMENTS

  1. Hi, thanks for the great article. However, I run into a problem at “Elasticsearch configuration” section.

    Where do I write the cURL command at the Elasticsearch in AWS? I tried to run the same command you wrote at my Windows / Linux PC and RPI3 terminal program, but every time it will return this error message:

    curl: (6) Could not resolve host: search-rpi3test-oozdwlcvxt216pjhwmbqlaaq3a.ap-southeast-1.es.amazonaws.com ( the URL is gotten from the Elastic Search End Point at the Overview page ).

    Any suggestion on how to do the mapping ? Thanks !

  2. Thanks for this tutorial!

    I just wanted to help others who might try to use it:
    Please stop AWS-IoT-Thing process before starting to configure Elasticsearch part because you will be unable to run the curl command (that one for datetime formatting). You will get the error that index already exists (“resource_already_exists_exception”, status 400).
    In case you were actually running it like me, stop the AWS-IoT-Thing and run the script:

    curl -X "DELETE" 'https://<your-elasticsearch-endpoint>/sensor-data?pretty'

    this will remove the data and you will be able to create the datetime property with the curl code specified in this tutorial.
