In today’s tutorial, we will look at how to build a sensor application to measure the ambient light.
Preparing our Sensor project
We will create a new Universal Windows Platform application project. This time, we call it Sensor. We can use the Raspberry Pi 3, even though we will only use the light sensor and motion detector (PIR sensor) in this project. We will also add the latest version of a new NuGet package, Waher.Persistence.FilesLW. This package will help us with data persistence. It takes our objects and stores them in a local object database. We can later load the objects back into memory and search for them. This is all done by analyzing the metadata available in the class definitions, so there’s no need to do any database programming. Go ahead and install the package in your new project.
The Waher.Persistence.Files package contains similar functionality, but it also performs data encryption and dynamic code compilation of object serializers. These features require .NET Standard 1.5, which is not compatible with the Universal Windows Platform in its current state. That is why we use the Light Weight version of the same library, which only requires .NET Standard 1.3. The Universal Windows Platform supports .NET Standard up to 1.4. For more information, visit https://docs.microsoft.com/en-us/dotnet/articles/standard/library#net-platforms-support.
Initializing the inventory library
The next step is to initialize the libraries we have just included in the project. The persistence library includes an inventory library (Waher.Runtime.Inventory) that helps with dynamic type-related tasks, such as keeping track of the available types and interfaces, and which types implement which interfaces in the runtime environment. This functionality is used by the object database defined in the persistence libraries. The object database figures out how to store, load, search for, and create objects, using only their class definitions appended with a minimum of metadata in the form of attributes. So, one of the first things we need to do during startup is to tell the inventory environment which assemblies it, and by extension the persistence library, can use. We do this as follows:
Log.Informational("Starting application.");

Types.Initialize(
    typeof(FilesProvider).GetTypeInfo().Assembly,
    typeof(App).GetTypeInfo().Assembly);
Here, Types is a static class defined in the Waher.Runtime.Inventory namespace. We initialize it by providing an array of assemblies it can use. In our case, we include the assembly of the persistence library, as well as the assembly of our own application.
Initializing the persistence library
We then go on to initialize our persistence library. It is accessed through the static Database class, defined in the Waher.Persistence namespace. Initialization is performed by registering one object database provider. This database provider will then be used for all object database transactions. In our case, we register our local files object database provider, FilesProvider, defined in the Waher.Persistence.Files namespace:
Database.Register(new FilesProvider(
    Windows.Storage.ApplicationData.Current.LocalFolder.Path +
    Path.DirectorySeparatorChar + "Data",
    "Default", 8192, 1000, 8192, Encoding.UTF8, 10000));
The first parameter defines a folder where database files will be stored. In our case, we store database files in the Data subfolder of the application local data folder. Objects are divided into collections. Collections are stored in separate files and indexed differently, for performance reasons. Collections are defined using attributes in the class definition. Classes lacking a collection definition are assigned the default collection, which is specified in the second argument.
Objects are then stored in B-tree ordered files. Such files are divided into blocks into which objects are crammed. For performance reasons, the block size, defined in the third argument, should be correlated to the sector size of the underlying storage medium, which is typically a power of two. This minimizes the number of reads and writes necessary. In our example, we’ve chosen 8,192 bytes as a suitable block size. The fourth argument defines the number of blocks the provider can cache in the memory. Caching improves performance, but requires more internal memory. In our case, we’re satisfied with a relatively modest cache of 1,000 blocks (about 8 MB).
Binary Large Objects (BLOBs), that is, objects that cannot efficiently be stored in a block, are stored separately in BLOB files. These are binary files consisting of doubly linked blocks. The fifth parameter controls the block size of BLOB files. The sixth parameter controls the character encoding to use when serializing strings. The seventh, and last parameter, is the maximum time the provider will wait, in milliseconds, to get access to the underlying database when an operation is to be performed.
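To keep the arguments straight, here is the same registration call once more, annotated with a comment per argument based on the description above (nothing new is added; the values are unchanged):

Database.Register(new FilesProvider(
    Windows.Storage.ApplicationData.Current.LocalFolder.Path +
    Path.DirectorySeparatorChar + "Data", // Folder where database files are stored
    "Default",       // Default collection name
    8192,            // Block size of B-tree files, in bytes
    1000,            // Number of blocks kept in the memory cache
    8192,            // Block size of BLOB files, in bytes
    Encoding.UTF8,   // Character encoding used when serializing strings
    10000));         // Maximum wait time for database access, in milliseconds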
Sampling raw sensor data
After the database provider has been successfully registered, the persistence layer is ready to be used. We now continue with the first step in acquiring the sensor data: sampling. Sampling is normally done using a short regular time interval. Since we use the Arduino, we get values as they change. While such values can be an excellent source for event-based algorithms, they are difficult to use in certain kinds of statistical calculations and error-correction algorithms. To set up the regular sampling of values, we begin by creating a Timer object from the System.Threading namespace, after the successful initialization of the Arduino:
this.sampleTimer = new Timer(this.SampleValues, null, 1000 - DateTime.Now.Millisecond, 1000);
This timer will call the SampleValues method every thousand milliseconds, starting the next second. The second parameter allows us to send a state object to the timer callback method. We will not use this, so we let it be null. We then sample the values, as follows:
private async void SampleValues(object State)
{
    try
    {
        ushort A0 = this.arduino.analogRead("A0");
        PinState D8 = this.arduino.digitalRead(8);
        ...
    }
    catch (Exception ex)
    {
        Log.Critical(ex);
    }
}
We define the method as asynchronous at this point, even though we still haven’t used any asynchronous calls. We will do so later in this chapter. Since the method does not return a Task object, exceptions are not propagated to the caller. This means that they must be caught inside the method to avoid unhandled exceptions closing the application.
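As an aside, if you also want to stop sampling cleanly, for instance when the application is suspended or shut down, it is enough to dispose of the timer. The following is a minimal sketch, assuming you call it from your own suspension or shutdown handling; this cleanup is not part of the original example:

// Stop the periodic sampling by disposing of the timer.
if (this.sampleTimer != null)
{
    this.sampleTimer.Dispose();
    this.sampleTimer = null;
}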
Performing basic error correction
Values we sample may include different types of errors, some of which we can eliminate in the code to various degrees. There are systematic errors and random errors. Systematic errors are most often caused by the way we’ve constructed our device, how we sample, how the circuit is designed, how the sensors are situated, how they interact with the physical medium and our underlying mathematical model, or how we convert the sampled value into a physical quantity. Reducing systematic errors requires a deeper analysis that goes beyond the scope of this book.
Random errors are errors that are induced stochastically and are often unbiased. They can be induced due to a lack of resolution or precision, by background noise, or through random events in the physical world. While background noise and the lack of resolution or precision in our electronics create a noise in the measured input, random events in the physical world may create spikes. If something briefly flutters past our light sensor, it might register a short downwards spike, even though the ambient light did not change. You’ll learn how to correct for both types of random errors.
Canceling noise
Since the digital PIR sensor already has error correction built in, we will only focus on how to cancel noise from our analog light sensor. Noise can be canceled electronically, using, for instance, low-pass filters. It can also be canceled algorithmically, using a simple averaging calculation over a short window of values. The averaging calculation will increase our resolution, at the cost of a small delay in the output.
Statistically, the expected average value is the same as the expected value, if the input is a steady signal overlaid with random noise.
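To spell out the reasoning (this aside is not from the original text): if each sample can be written as x_i = s + e_i, where s is the steady signal and the noise terms e_i have zero mean, then

E[(1/N) · (x_1 + … + x_N)] = s + (1/N) · (E[e_1] + … + E[e_N]) = s

so averaging over the window does not bias the measurement; it only reduces the variance of the noise.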
The implementation is simple. We need the following variables to set up our averaging algorithm:
private const int windowSize = 10;
private int?[] windowA0 = new int?[windowSize];
private int nrA0 = 0;
private int sumA0 = 0;
After sampling the value, we first shift the window one step, and add our newly sampled value at the end. We also update our counters and sums. This allows us to quickly calculate the average value of the entire window, without having to loop through it each time:
if (this.windowA0[0].HasValue)
{
    this.sumA0 -= this.windowA0[0].Value;
    this.nrA0--;
}

Array.Copy(this.windowA0, 1, this.windowA0, 0, windowSize - 1);
this.windowA0[windowSize - 1] = A0;
this.sumA0 += A0;
this.nrA0++;

double AvgA0 = ((double)this.sumA0) / this.nrA0;
int? v;
Removing random spikes
We now have a value that is 10 times more accurate than the original, in cases where our ambient light is not expected to vary quickly. This is typically the case, if ambient light depends on the sun and weather. Calculating the average over a short window has an added advantage: it allows us to remove bad measurements, or spikes. When a physical quantity changes, it normally changes continuously, slowly, and smoothly. This will have the effect that roughly half of the measurements, even when the input value changes, will be on one side of the average value, and the other half on the other side. A single spike, on the other hand, especially in the middle of the window, if sufficiently large, will stand out alone on one side, while the other values remain on the other. We can use this fact to remove bad measurements from our window. We define our middle position first:
private const int spikePos = windowSize / 2;
We proceed by calculating the number of values on each side of the average, if our window is sufficiently full:
if (this.nrA0 >= windowSize - 2)
{
    int NrLt = 0;
    int NrGt = 0;

    foreach (int? Value in this.windowA0)
    {
        if (Value.HasValue)
        {
            if (Value.Value < AvgA0)
                NrLt++;
            else if (Value.Value > AvgA0)
                NrGt++;
        }
    }
If we only have one value on one side, and this value happens to be in the middle of the window, we identify it as a spike and remove it from the window. We also make sure to adjust our average value accordingly:
    if (NrLt == 1 || NrGt == 1)
    {
        v = this.windowA0[spikePos];

        if (v.HasValue)
        {
            if ((NrLt == 1 && v.Value < AvgA0) ||
                (NrGt == 1 && v.Value > AvgA0))
            {
                this.sumA0 -= v.Value;
                this.nrA0--;
                this.windowA0[spikePos] = null;

                AvgA0 = ((double)this.sumA0) / this.nrA0;
            }
        }
    }
}
Since we only remove a spike once it reaches the middle of the window, it may pollute the average of the entire window up to that point. We therefore recalculate the average over the first half of the window, where any spikes have already been removed. This part of the window is smaller, so the resolution gain is not as big, but in return the average is not polluted by single spikes. We will still have increased the resolution by a factor of five:
int i, n;

for (AvgA0 = i = n = 0; i < spikePos; i++)
{
    if ((v = this.windowA0[i]).HasValue)
    {
        n++;
        AvgA0 += v.Value;
    }
}

if (n > 0)
{
    AvgA0 /= n;
Converting to a physical quantity
It is not sufficient for a sensor to have a numerical raw value of the measured quantity. It only tells us something if we know something more about the raw value. We must, therefore, convert it to a known physical unit. We must also provide an estimate of the precision (or error) the value has.
To avoid creating a complex mathematical model that converts our measured light intensity into a known physical unit, which would go beyond the scope of this book, we convert it to a percentage value. Since we’ve gained a factor of five in precision using our averaging calculation, we can report two decimals of precision, even though the input value only has 1,024 levels (10 bits), which on its own corresponds to about one decimal of precision:
    double Light = (100.0 * AvgA0) / 1024;
    MainPage.Instance.LightUpdated(Light, 2, "%");
}
Illustrating measurement results
The following image shows how our measured quantity behaves. The light sensor is placed in broad daylight on a sunny day, so it’s saturated. Things move in front of the sensor, creating short dips. The thin blue line is a scaled version of our raw input A0. Since this value is event based, it is reported more often than once a second. The red curve is our measured, and corrected, ambient light value, in percent. The dots correspond to our once-per-second values. Notice that the first two spikes are removed and don’t affect the measurement, which remains close to 100%. Only the larger dips affect the measurement. Also, notice the small delay inherent in our algorithm. It is most noticeable when there are abrupt changes:
If we, on the other hand, have a very noisy input, our averaging algorithm helps our measured value to stay more stable. Perhaps the physical quantity goes below some sensor threshold, and input values become uncertain. In the following image, we see how the floating average varies less than the noisy input:
Calculating basic statistics
A sensor normally reports more than the measured momentary value. It also calculates basic statistics on the measured input, such as peak values. It also makes sure to store measured values regularly, to allow its users to view historical measurements. We begin by defining variables to keep track of our peak values:
private int? lastMinute = null;
private double? minLight = null;
private double? maxLight = null;
private DateTime minLightAt = DateTime.MinValue;
private DateTime maxLightAt = DateTime.MinValue;
We then make sure to update these after having calculated a new measurement:
DateTime Timestamp = DateTime.Now;

if (!this.minLight.HasValue || Light < this.minLight.Value)
{
    this.minLight = Light;
    this.minLightAt = Timestamp;
}

if (!this.maxLight.HasValue || Light > this.maxLight.Value)
{
    this.maxLight = Light;
    this.maxLightAt = Timestamp;
}
Defining data persistence
The last step in this process is to store our values regularly. Later, when we present different communication protocols, we will show how to make these values available to users. Since we will use an object database to store our data, we need to create a class that defines what to store. We start with the class definition:
[TypeName(TypeNameSerialization.None)]
[CollectionName("MinuteValues")]
[Index("Timestamp")]
public class LastMinute
{
    [ObjectId]
    public string ObjectId = null;
}
The class is decorated with a couple of attributes from the Waher.Persistence.Attributes namespace. The CollectionName attribute defines the collection in which objects of this class will be stored. The TypeName attribute defines if we want the type name to be stored with the data. This is useful, if you mix different types of classes in the same collection. We plan not to, so we choose not to store type names. This saves some space. The Index attribute defines an index. This makes it possible to do quick searches. Later, we will want to search historical records based on their timestamps, so we add an index on the Timestamp field. We also define an Object ID field. This is a special field that is like a primary key in object databases. We need it to be able to delete objects later.
Next, we define some member fields. If you want, you can use properties as well, if you provide both getters and setters for the properties you wish to persist. By providing default values, and decorating the fields (or properties) with the corresponding default value, you can optimize storage somewhat. Only members with values different from the declared default values will then be persisted, to save space:
[DefaultValueDateTimeMinValue]
public DateTime Timestamp = DateTime.MinValue;

[DefaultValue(0)]
public double Light = 0;

[DefaultValue(PinState.LOW)]
public PinState Motion = PinState.LOW;

[DefaultValueNull]
public double? MinLight = null;

[DefaultValueDateTimeMinValue]
public DateTime MinLightAt = DateTime.MinValue;

[DefaultValueNull]
public double? MaxLight = null;

[DefaultValueDateTimeMinValue]
public DateTime MaxLightAt = DateTime.MinValue;
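As mentioned above, properties work just as well as fields, as long as they have both getters and setters. Purely as an illustration (this variant is not part of the original example), the Light member could also be written as an auto-property:

[DefaultValue(0)]
public double Light { get; set; } = 0;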
Storing measured data
We are now ready to store our measured data. We use the lastMinute field defined earlier to know when we pass into a new minute. We use that opportunity to store the most recent value, together with the basic statistics we’ve calculated:
if (!this.lastMinute.HasValue)
    this.lastMinute = Timestamp.Minute;
else if (this.lastMinute.Value != Timestamp.Minute)
{
    this.lastMinute = Timestamp.Minute;
We begin by creating an instance of the LastMinute class defined earlier:
LastMinute Rec = new LastMinute()
{
    Timestamp = Timestamp,
    Light = Light,
    Motion = D8,
    MinLight = this.minLight,
    MinLightAt = this.minLightAt,
    MaxLight = this.maxLight,
    MaxLightAt = this.maxLightAt
};
Storing this object is very easy. The call is asynchronous and can be executed in parallel, if desired. We choose to wait for it to complete, since we will be making database requests after the operation has completed:
await Database.Insert(Rec);
We then clear our variables used for calculating peak values, to make sure peak values are calculated within the next period:
this.minLight = null;
this.minLightAt = DateTime.MinValue;
this.maxLight = null;
this.maxLightAt = DateTime.MinValue;
}
Removing old data
We cannot continue storing new values without also having a plan for removing old ones. Doing so is easy. We choose to delete all records older than 100 minutes. This is done by first performing a search, and then deleting objects that are found in this search. The search is defined by using filters from the Waher.Persistence.Filters namespace:
foreach (LastMinute Rec2 in await Database.Find<LastMinute>(
    new FilterFieldLesserThan("Timestamp", Timestamp.AddMinutes(-100))))
{
    await Database.Delete(Rec2);
}
You can now execute the application, and monitor how the MinuteValues collection is being filled.
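If you prefer to verify this from code rather than by inspecting the database files, a temporary check such as the following can be added to the sampling code. This is only a debugging sketch, not part of the original example; it reuses the Database.Find call and the FilterFieldLesserThan filter shown above:

// Count the minute records stored so far (all timestamps lie in the past).
int Count = 0;

foreach (LastMinute Rec2 in await Database.Find<LastMinute>(
    new FilterFieldLesserThan("Timestamp", DateTime.Now)))
{
    Count++;
}

Log.Informational("Minute records stored so far: " + Count.ToString());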
We created a simple sensor app for the Raspberry Pi using C#. You read an excerpt from the book, Mastering Internet of Things, written by Peter Waher. This book will help you augment your IoT skills with the help of engaging and enlightening tutorials designed for Raspberry Pi 3.