Aaron Kuehler

80% Scientist, 20% Artist. Theorist and Practitioner

Home Temperature Monitor

Introduction

A few weeks ago, I purchased a Raspberry Pi. After reading Eben Upton's Raspberry Pi User Guide, particularly the two chapters that focus on the GPIO, I had an idea for my first project. This post covers the first iteration of a home temperature monitoring project I put together using a Raspberry Pi, an MCP9808, an old Mac Mini (early 2008), InfluxDB, Grafana, a bit of Python, and runit.

The Sensor Hardware

For this project I chose to use the MCP9808 Breakout Board from Adafruit – an excellent source for components, circuits, and ideas. I chose this unit for a few reasons:

  1. It's controlled over I²C – the Raspberry Pi's GPIO supports the required I²C bus over pins 3 (Serial Data Line, SDA) and 5 (Serial Clock Line, SCL)
  2. It runs in the 2.7V to 5.5V power and logic range – the Raspberry Pi provides power lines at 3.3V and 5V
  3. It was pretty cheap (< $5 USD) – My soldering skills are a little rusty.

Circuit Assembly

The MCP9808 Breakout Board ships from Adafruit mostly assembled. This particular kit requires only that you solder the included 8 pin header strip to the breakout board.

I used a GPIO Breakout and a breadboard to connect the Raspberry Pi to the MCP9808; this approach makes the wiring easier to manage, mistakes easier to correct, and is less permanent than soldering the sensor directly to the Raspberry Pi. To read temperatures from the MCP9808, only the power pin, ground, and the I²C SDA and SCL pins are required:

GPIO Pin #   GPIO Pin Name   MCP9808 Pin Name
1            3.3V            VDD
3            SDA             SDA
5            SCL             SCL
6            GND             GND

The remaining, optional, pins are not used in this project. They provide workarounds for I²C addressing conflicts when multiple devices share the same bus, and a pin for alerting when the sensor reads a temperature above or below a threshold.
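Those address pins work by offsetting the MCP9808's fixed base I²C address of 0x18, which is how up to eight sensors can share one bus. A quick sketch of the scheme:

```python
# The MCP9808's I2C address is its base address (0x18) plus the value
# encoded by the A0/A1/A2 address pins, allowing up to 8 sensors per bus.
MCP9808_BASE_ADDRESS = 0x18

def mcp9808_address(a0=0, a1=0, a2=0):
    """Compute the sensor's I2C address from its address-pin levels (0 or 1)."""
    return MCP9808_BASE_ADDRESS | (a2 << 2) | (a1 << 1) | a0

print(hex(mcp9808_address()))            # 0x18 (all pins low, the breakout's default)
print(hex(mcp9808_address(a0=1, a2=1)))  # 0x1d
```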

The Datastore

I knew that I wanted to back the project with a persistent datastore. This would allow me to capture data points and later analyze them for general trends, cross-reference heating/cooling patterns with weather events, etc.

I chose InfluxDB because of its time-centric query language and storage model. I installed InfluxDB on an old Mac Mini (Early 2009 with OSX 10.10) I had sitting under my desk. Getting a basic installation of InfluxDB up and running is well documented; since I already use Homebrew to manage most of my 3rd-party dependencies and a formula for InfluxDB exists, installation was completed by issuing brew install influxdb.

Configure the InfluxDB database

With InfluxDB installed, I set up a database for storing my temperature readings and a database user to manage it. I used my InfluxDB instance's web console to do this; by default it runs on port 8083 of the InfluxDB host.

  1. Create the new database; I named mine mcp9808_test
  2. Create the database admin user; I named mine mcp9808
    1. Click on the database name in the Databases list view
    2. Create a New Database User

Raspberry Pi Configuration

Now that the hardware and datastore are set up, there's a bit of OS configuration needed on an out-of-the-box Raspberry Pi in order to communicate with the MCP9808 over the I²C bus.

Enable I²C

By default, the Raspberry Pi does not load the required kernel modules to use the I²C bus. To enable I²C communication over the GPIO, I added the following two lines to /etc/modules

i2c-bcm2708
i2c-dev

Then reboot the Raspberry Pi

sudo reboot

After the system initializes, it should be able to recognize that the MCP9808 is connected. I used the i2cdetect CLI tool to do so:

sudo i2cdetect 1 # channel 1 is the default on the Raspberry Pi B+ model
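A responding sensor shows up as a hex address in i2cdetect's grid. As a sketch of what to look for, the following parses an illustrative (not captured) printout in i2cdetect's format and picks out the responding addresses:

```python
import re

# Illustrative i2cdetect-style output; an MCP9808 with its address pins
# tied low appears at 0x18. ("--" means no device responded; "UU" would
# mean a kernel driver already owns that address.)
sample_output = """\
     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00:          -- -- -- -- -- -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- 18 -- -- -- -- -- -- --
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
"""

def detected_addresses(output):
    """Collect the hex addresses that responded in an i2cdetect grid."""
    addresses = []
    for line in output.splitlines():
        if ':' not in line:
            continue  # skip the column-header row
        _, cells = line.split(':', 1)  # drop the row-offset prefix (e.g. "10:")
        addresses += [int(cell, 16) for cell in re.findall(r'\b[0-9a-f]{2}\b', cells)]
    return addresses

print([hex(a) for a in detected_addresses(sample_output)])  # ['0x18']
```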

The Sensor Software

Adafruit provides an MCP9808 wrapper and an I²C abstraction. I made use of both in the main driver script for this project.

Install build dependencies

sudo apt-get update
sudo apt-get install build-essential python-dev python-smbus

Install the Adafruit_Python_MCP9808 wrapper

cd ~/Downloads
git clone https://github.com/adafruit/Adafruit_Python_MCP9808.git
cd Adafruit_Python_MCP9808
sudo python setup.py install

This will also install the I²C abstraction as the MCP9808 wrapper depends on it.

Read, Report, Repeat

Next I wrote a little Python script, poll.py, to read from the MCP9808 on an interval and report its findings to the mcp9808_test InfluxDB database.

#!/usr/bin/python
import time
import Adafruit_MCP9808.MCP9808 as MCP9808
from influxdb import InfluxDBClient

# Generates the necessary payload to post
# temperature data into InfluxDB
def temperature_data(degrees_c):
  return [
      {
        'points': [[c_to_f(degrees_c)]],
        'name': 'Temperature Readings',
        'columns':['degrees_f']}]

# Converts temperature representations in Celsius
# to Fahrenheit
def c_to_f(c):
  return c * 9.0 / 5.0 + 32.0

# Initializes communication with the MCP9808
# over the I2C bus.
sensor = MCP9808.MCP9808()
sensor.begin()

# Defines the interval on which the capture logic
# will occur
capture_interval = 60.0 # Every 60 seconds

# Establishes a connection to the mcp9808_test
# InfluxDB instance
influxClient = InfluxDBClient('<influx-db-host>', 8086, 'mcp9808', '<my_mcp9808_influxdb_user_password>', 'mcp9808_test')

# Read, Report, Repeat
while True:
  temp = sensor.readTempC()
  print "Temperature {0:0.3} F".format(c_to_f(temp))
  influxClient.write_points(temperature_data(temp))
  time.sleep(capture_interval)

Now it can be run using the following command; note that the script needs to be run as the root user of the Raspberry Pi in order to interact with the GPIO.

sudo python <path_to>/poll.py
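Before leaving the script running, the pure-Python helpers in poll.py can be sanity-checked off-device. This sketch re-declares c_to_f and temperature_data from the script above so they can be exercised with no sensor or database attached:

```python
# Re-declared from poll.py for off-device testing.
def c_to_f(c):
    return c * 9.0 / 5.0 + 32.0

def temperature_data(degrees_c):
    return [
        {
            'points': [[c_to_f(degrees_c)]],
            'name': 'Temperature Readings',
            'columns': ['degrees_f']}]

print(c_to_f(0.0))    # 32.0
print(c_to_f(100.0))  # 212.0
print(temperature_data(20.0))  # one point of 68.0 degrees_f
```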

For-ev-er

Hooray! Everything was up and running… until I kicked the on/off switch of the power strip under my desk. At that point I realized I wanted to ensure that the poll.py script ran as long as the Raspberry Pi had power. To achieve this, I used the runit process supervisor.

  1. Install runit
    
    sudo apt-get install runit
    
  2. Initialize the poll.py Process supervisor
    
    sudo mkdir -p /etc/sv/mcp9808_poll
    sudo mkdir -p /etc/sv/mcp9808_poll/log/main
    sudo touch /etc/sv/mcp9808_poll/run /etc/sv/mcp9808_poll/log/run
    sudo chmod +x /etc/sv/mcp9808_poll/run /etc/sv/mcp9808_poll/log/run
    
  3. Edit /etc/sv/mcp9808_poll/run and define the process
    
    #!/bin/sh
    exec 2>&1
    
    # Note that we intend runit to run as root
    # so we don't need to sudo here
    exec <path_to>/poll.py
    
  4. Edit /etc/sv/mcp9808_poll/log/run
    
    #!/bin/bash
    exec svlogd -tt ./main
    

Now the polling process can be started and managed by runit by executing the following:

sudo sv start mcp9808_poll

ANALYZE ALL THE THINGS!

Sensor hardware, check! Datastore, check! Glue code, check! OK, cool; but now what?

InfluxDB Graphing

Well, now that the Raspberry Pi is reporting temperature readings every minute, it's time to start analyzing the data. As I mentioned before, I chose InfluxDB because of its ability to collate and aggregate time series data. For this project it makes sense to aggregate the degrees_f datapoints posted by the poll.py script over some given time interval.

For example, I like to look at the lowest recorded temperature readings for each hour over the course of a day. I might write a query to do this like so:

SELECT min(degrees_f) FROM "Temperature Readings" WHERE time > now() - 1d GROUP BY time(1h)
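The GROUP BY time(1h) clause buckets the raw points into hour-wide windows and applies min() within each. The same aggregation can be sketched in plain Python (the readings below are made up for illustration):

```python
from collections import defaultdict

# Hypothetical (unix_timestamp_seconds, degrees_f) readings.
readings = [
    (3600, 68.2), (3660, 67.9), (5400, 68.5),  # hour 1
    (7200, 66.1), (7500, 66.8),                # hour 2
]

def min_per_hour(points):
    """Bucket points into 1-hour windows and keep the minimum of each."""
    buckets = defaultdict(list)
    for ts, value in points:
        buckets[ts // 3600].append(value)
    return {hour: min(values) for hour, values in sorted(buckets.items())}

print(min_per_hour(readings))  # {1: 67.9, 2: 66.1}
```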

When a query is issued from the InfluxDB web console, the results are rendered in two formats. First, a graph of the data returned by the query is rendered; the x-axis always represents time and the y-axis the selected columns of the query. Second, InfluxDB renders a table of the datapoints matching the query and their timestamps.

This is really good for point analysis of things like high, low, and average temperatures for a given time interval, but it's an entirely manual process. What I really want is to see this data over a rolling time window, indefinitely. This is where Grafana comes into play.

Grafana

Grafana is a web application which provides metrics dashboards and graph editing. It can be configured, rather easily, to use InfluxDB as a metric source. Like the InfluxDB web UI, it provides the ability to define queries against InfluxDB time series and plot the results, but it also allows those graphs to be attached to dashboards and their content to auto-refresh on a time interval.

Grafana is written mostly in JavaScript; as such, it only needs a web server to host it. I chose to run Nginx on the Mac Mini as well. Nginx is fairly easy to install and configure on Mac OS X with Homebrew:

  1. Create a new InfluxDB database called grafana to store dashboard configurations made through the UI
    1. Create a grafana user in this database
  2. Install Nginx
    
    brew install nginx
    ...
    ln -sfv /usr/local/opt/nginx/*.plist ~/Library/LaunchAgents
    launchctl load ~/Library/LaunchAgents/homebrew.mxcl.nginx.plist
    
  3. Download grafana and untar/unzip the application source into Nginx's public directory
  4. Copy the default grafana configuration
    
    cp config.sample.js config.js
    
  5. Edit config.js and tell grafana to use InfluxDB as its metric and dashboard datasource
    
    ...
      return new Settings({
    
          /* Data sources
          * ========================================================
          * Datasources are used to fetch metrics, annotations, and serve as dashboard storage
          *  - You can have multiple of the same type.
          *  - grafanaDB: true    marks it for use for dashboard storage
          *  - default: true      marks the datasource as the default metric source (if you have multiple)
          *  - basic authentication: use url syntax http://username:password@domain:port
          */
    
          // InfluxDB example setup (the InfluxDB databases specified need to exist)
          datasources: {
            influxdb: {
              type: 'influxdb',
              url: "http://<influxdb_host>:8086/db/mcp9808_test",
              username: 'mcp9808',
              password: '<mcp9808_user_password>'
            },
            grafana: {
              type: 'influxdb',
              url: "http://<influxdb_host>:8086/db/grafana",
              username: 'grafana',
              password: '<grafana_user_password>',
              grafanaDB: true
            },
          ...
        });
    

Grafana is now available at http://<nginx_host>/grafana

The last thing to do is define the Grafana dashboard and use the datapoints from the Temperature Readings series in InfluxDB.

  1. Add a new "graph" panel to the dashboard
  2. Define the metric query and graph attributes
  3. Return to the dashboard and select a time period against which the query should be run and an auto refresh interval

And voilà! A view of the temperature readings for the last day that updates every minute.

Conclusion

It's kind of hacky, but for about $50 USD and an afternoon of research, installation, configuration, and coding, I have a very crude implementation of a digital thermometer and a way to collate historical temperature data about one particular area of my house. Future iterations of this project will most likely include cleanup and organization of the poll.py script, infrastructure and security enhancements (I'd really like to build a web application between the poll.py script and the datastore to add notifications of temperature events, etc.), and the addition of a few more sensors throughout the house.

iOS Environments Variables

I find that there are usually at least two distinct environments in which any
iOS project is built: generally one for development and another for
production. Each usually requires its own set of configuration: resource URLs
for data fetching and manipulation, 3rd-party service authentication keys, and
the like. Usually I see these types of configuration defined through
conditional macros inline with application code, like so:

#ifdef DEBUG
const NSString* myAppAPIBaseUrl = @"http://development.myapp.com"; // development services
#else
const NSString* myAppAPIBaseUrl = @"http://www.myapp.com"; // production services
#endif

[[MyAppAPIClient alloc] initWithBaseUrl:myAppAPIBaseUrl];

In this approach, the URL used to initialize the MyAppAPIClient is determined
by the truthy presence of the DEBUG preprocessor macro at build time,
implicitly describing two separate configurations in which the application is
built – presumably DEBUG (development) and otherwise (production). This
ad-hoc, imperative approach is generally scattered across the code base,
co-locating configuration with the class(es) that use it. Maintaining values
in this manner proves challenging over time: configuration requirements
change, and new application components come to share configuration values,
making the existence of one class implementation dependent upon another
solely for its constant definitions.

What if the application code and configuration could live independent of each
other? This would imply that configuration changes are less likely to introduce
regressions into the application code. For example, imagine the
above example rewritten to simply ask for the value of myAppAPIBaseUrl:

NSString* myAppAPIBaseUrl = [[Environment sharedInstance] fetch:@"MyAppAPIBaseUrl"];
[[MyAppAPIClient alloc] initWithBaseUrl:myAppAPIBaseUrl];

It is the responsibility of the [[Environment sharedInstance] fetch:@"MyAppAPIBaseUrl"]
message to determine the appropriate value to return based on configuration
living apart from the application code.

As it turns out, implementing this behavior is not at all complicated:

Capture the build configuration used to build the Application bundle

  • Add a new String value to the <application_name>-Info.plist to capture the name of the Build Configuration (environment name) used to create the application bundle at build time:

Configure the Environments

  • Add an Environments.plist to the application. This serves as the centralized listing of environment specific configuration values for each environment the application supports.

  • Add a Dictionary type entry to the Environments.plist for each build configuration name the application supports; in this example the two default build configurations Apple creates for any iOS project: Debug and Release

  • Define the application's configuration concerns with the appropriate values for each 'environment' key in the Environments.plist
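The resulting Environments.plist is just a dictionary of per-configuration dictionaries. As a language-neutral illustration (using Python's plistlib rather than the post's Objective-C), the structure and the lookup the Environment object performs look roughly like this; the MyAppAPIBaseUrl key and URLs come from the post's example:

```python
import plistlib

# The Environments.plist layout described above: one dictionary per
# build configuration, each holding that environment's values.
environments = {
    'Debug': {'MyAppAPIBaseUrl': 'http://development.myapp.com'},
    'Release': {'MyAppAPIBaseUrl': 'http://www.myapp.com'},
}

# Round-trip through plist XML, as the app would read it from its bundle.
data = plistlib.dumps(environments)
loaded = plistlib.loads(data)

# The Environment object selects the sub-dictionary named by the build
# configuration recorded in the Info.plist, then fetches keys from it.
configuration = 'Debug'
environment = loaded[configuration]
print(environment['MyAppAPIBaseUrl'])  # http://development.myapp.com
```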

Define the Environment object:

// Environment.h
@interface Environment : NSObject
+ (Environment*) sharedInstance;
- (id) fetch:(NSString*)key;
@end
// Environment.m
#import "Environment.h"

static Environment *sharedInstance = nil;

@interface Environment()
@property (strong, nonatomic) NSDictionary* environment;
@end


@implementation Environment

+ (Environment*) sharedInstance {

    static Environment *_sharedInstance = nil;
    static dispatch_once_t oncePredicate;

    dispatch_once(&oncePredicate, ^{
        _sharedInstance = [[Environment alloc] init];
    });

    return _sharedInstance;
}


- (id)init
{
    self = [super init];
    if (self) {
        NSBundle* bundle = [NSBundle mainBundle];

        // Read in the 'Environment' name used to build the application (Debug or Release)
        NSString* configuration = [bundle objectForInfoDictionaryKey:@"Configuration"];

        // Load the Environments.plist
        NSString* environmentsPListPath = [bundle pathForResource:@"Environments" ofType:@"plist"];
        NSDictionary* environments = [[NSDictionary alloc] initWithContentsOfFile:environmentsPListPath];

        // Read the values for the 'Environment' name into the 'environment' property
        NSDictionary* environment = [environments objectForKey:configuration];
        self.environment = environment;
    }

    return self;
}

- (id)fetch:(NSString*)key {

    /**
     * If the key is present in the environment, then return its value;
     * otherwise return nil.
     */

    return [self.environment objectForKey:key];
}
@end

Example

With the above code in place we can run the application to make sure
everything's wired up correctly. In my example, I've added the following
logger statement to the application's launch lifecycle:

@implementation MAIOSAppDelegate
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
    NSLog(@"Using %@ as the API Base URL", [[Environment sharedInstance] fetch:@"MyAppAPIBaseUrl"]);
    ...
}
@end

When the application is built in the Development mode, with the Debug build
configuration, the log statement outputs:

MyAppIOS[12289:60b] Using http://development.myapp.com as the API Base URL

And when using the Production mode, with the Release build configuration, the
log statement outputs:

MyAppIOS[12351:60b] Using http://www.myapp.com as the API Base URL

Code Example Source