Chris Mullins: I occasionally write about things. Usually these things are about computers.

Custom Prusa IKEA Lack Enclosure Parts (Sun, 09 Sep 2018)

Earlier this year, Prusa released their take on a 3D printer enclosure made from the famous IKEA Lack tables and printable parts.

There is a wealth of printable accessories for this enclosure.  I’ve found these really nice:

I’ve designed a few parts of my own that I’m pretty happy with.  I would not be surprised to learn there are equivalent or better alternatives to these.  I did try looking, but not too hard.  I was happy to have the design challenge.

Fan Mount

Thingiverse link.

Enclosures get hot enough to screw with PLA print quality.  I added a ventilation fan which is capable of keeping the temperature in safe ranges (~27 C).

This is a mount for a standard 120x120mm computer case fan.  I’m using this Corsair AF120 fan*.

The mount slides into a centered cutout approximately 129x129mm on one of the acrylic sheets (I’m using the rear one).  

I had intended for the cutout in my sheet to be closer to 122x122mm, but the company I bought the sheet from didn’t get the measurements exactly right.  It was nice to be able to easily resize the part in Fusion 360 and print it out to-size.

1″ Grommet

Thingiverse link.

I drilled a 1″ hole through the bottom table to feed these cables through:

  • Two Logitech C270 * USB cables
  • LCD ribbon cables
  • 24v cables from the PSU

To make the hole look nicer I “designed” a grommet to fit the crappy hole my 1″ drill made.

Birdseye Mount for Logitech C270

Thingiverse link.

The Logitech C270* is a super cheap (~$20) 720p USB webcam that works really well with Octoprint.

I have two of them in my setup.  First, the aforementioned x-axis mounted camera.  Great for making sure the print is looking good where it’s at.  Example view:

And the one placed in this mount, which gives a birds-eye view of the whole print bed.  Example view:

Modified Door Handles

Thingiverse link.

I redesigned the included door handles from scratch, mostly in order to improve my Fusion 360 design skills.

There are a few aesthetic differences, but the functional difference is that there are recesses appropriately sized for some 20x10x2mm N50 magnets* I had laying around.


I’ll share how I’m controlling the fan and lights in a future post.  Long story short, it’s an ESP8266 with some MOSFETs and ancillary circuitry.

[*] Contains affiliate link

Reusable Dash Button Case (Sun, 02 Sep 2018)

I use Dash Buttons* in quite a few places around my home — mostly as a substitute for a light switch where one is inconveniently located, or not present at all.

I prefer them to alternative options like the Flic Button* because they’re dramatically cheaper (a Dash is $5, compared to $35 for a Flic).  They’re also occasionally on sale for $0.99.

My only frustration with Dash buttons is that they’re meant to be disposable, despite being powered by a replaceable AAA battery.  The electronics are encased by two pieces of welded plastic.  It’s easy to break the weld, but difficult to reassemble in a pretty way.

Having recently started dabbling in 3D design and printing, I decided to create a reusable case.  The humble fruit of my efforts is here:

I’m happy with how this turned out — it’s easy to open the case and replace the battery without damaging anything.


Pretty straightforward.  I took apart the stock case using some channel locks to break the welds:

With a little bit of elbow grease, and a T5 screwdriver to remove the battery enclosure, it comes apart like so:

A pry tool can be used to remove the PCB if it doesn’t come off by itself.

Assembly is straightforward.  First, put the plastic button and the rubber seal in place.

Then the PCB is placed back on the pegs, battery enclosure placed on top, and T5 screws added back.  Do not over-tighten the screws!  The printed pegs are quite fragile and will break under too much pressure.

After adding the battery back, the lid can be pressed onto the body:

And that’s it!  Fully assembled Dash case.

Update: Sept 4, 2018

I’ve uploaded a slightly modified version.  The main change makes it harder to over-tighten the screws, which could make the button unpressable.

[ * ] Contains Amazon affiliate link

Security Hardening Octoprint/Octopi (Mon, 23 Jul 2018)

Octoprint is a great web frontend for 3D printers.  Octopi is a Raspbian-based image for a Raspberry Pi that comes with everything you need set up and configured.

Octoprint is an extremely convenient way to manage your 3D printer.  However, it’s capable of a lot of spooky things:

  1. Provides access to webcams showing prints (if you have them)
  2. Can set temperatures of both the tool and the heatbed
  3. Can start whatever print you feel like
  4. Can control steppers

In the best case, Octoprint gives whoever can access it the ability to see into your house and what’s going on with your printer.  In the worst case, someone with malicious intent could burn down your house, or at least wreck your printer.

The smartest approach here is probably to put Octoprint on a trusted network and refrain from poking holes in your router to allow access from the Internet.

But I’m not that smart.

In this post I’m going to outline a couple of things I did that make me feel better about exposing my Octoprint instance to the Internet.

Prior Art

First of all, Octoprint has builtin access controls.  And you should definitely use those.

I feel strongly that these are not sufficient, however:

  1. Unauthenticated users can do way too much.  Most importantly, they can view webcam feeds.  Yikes!
  2. There have been bugs with the builtin access controls.

Secondly, others have done things similar to what I’ve done.  However, there are a couple of things I’m going to do differently, and there are a few additional things I want to do.


  1. Every interaction with Octoprint should go through a reverse proxy.  It should not be possible to access any part of Octoprint except through the reverse proxy.
  2. The last requirement should apply even if you’re on my local network.  Something about unauthenticated Webcam feeds gives me the jeebies.  Even if they’re pointed at a corner.
  3. I’m not going to run a web-facing nginx instance on Octoprint.  I want to use my main server as an entry point.
  4. Use client certificates for auth (I covered this in a previous post).
  5. TLS via letsencrypt.

Close down the ports

By default, Octopi exposes the Octoprint web interface on port 80 (via haproxy), and the webcam feed via mjpeg_streamer on port 8080.

I didn’t want these ports accessible except through loopback.  This is easy enough to change.

To shut down access to the Octoprint instance, just disable haproxy:

$ sudo service haproxy stop
$ sudo update-rc.d haproxy disable

The Octoprint instance itself listens on port 5000 by default, and is bound to loopback.

To shut down access to mjpeg_streamer, we’ll have to fiddle with the launch script stored at /root/bin/webcamd:

$ diff /root/bin/webcamd /root/bin/webcamd.bkup
< camera_http_options="-n -l"
> camera_http_options="-n"

This tells mjpeg_streamer’s http plugin to bind itself to loopback.  For it to take effect, make sure to restart the webcamd service (or just reboot the pi to be safe).

To test that this worked, try accessing http://octopi.local and http://octopi.local:8080.  You should get connection refused errors for both.
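A quick way to run that check repeatedly is a small TCP probe.  This Python sketch just reports whether a port accepts connections (the hostnames in the comment are the ones from this post; adjust to your setup):

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds,
    False if it's refused or times out."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# After the changes above, both of these should be False when run from
# another machine on the LAN:
#   port_open("octopi.local", 80)
#   port_open("octopi.local", 8080)
```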

Open up the ports (on nginx server)

If you plan on running nginx on the pi, you can skip this step.  I have a different server running nginx.

In the last step, we shut down the ports to Octoprint.  Now we need to give the server running nginx a way to access them.

An easy way to accomplish this is with local SSH tunnels.  Setting this up is easy enough:

  1. Create a user on the octopi instance.  I called mine something to the effect of “ssh-proxy”
  2. Create a corresponding user on the server running nginx.  Generate an SSH key.
  3. Add the public key for ssh-proxy@nginx-server to ssh-proxy@octopi:~/.ssh/authorized_keys
  4. Set up autossh to establish a persistent SSH tunnel.  This will reestablish the tunnel when the pi reboots or connectivity is broken for any reason.  This is the command I used:
    sudo -u ssh-proxy bash -cl 'autossh -f -nNT -L 25000:localhost:5000 -L 28080:localhost:8080 -L 28081:localhost:8081 ssh-proxy@octopi'
  5. Execute the above command on boot.  I accomplished this by putting it in /etc/rc.local.

Now Octoprint should be available on the nginx server via port 25000.  Same deal for the webcam feed on 28080 (I have another webcam accessible via 28081).

Note that these ports are bound to loopback because of the way the tunnel is set up.  There’d be no point in all of this noise otherwise.

Make ’em accessible

Now we can go about this as if it were a standard reverse proxy setup.  The backends are accessible by loopback on ports local to the nginx server.

You can set up authentication however you like.  It’s probably easy and safe to use TLS, HTTP auth, and something like fail2ban.
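For example, a basic-auth setup would look something like this (a sketch only; it assumes an htpasswd file you’ve created, and it isn’t what I ended up using):

```nginx
  # Hypothetical alternative to client certificates: HTTP basic auth on the
  # proxied location (pair it with TLS, and fail2ban watching the auth log).
  location / {
    auth_basic           "Octoprint";
    auth_basic_user_file /etc/nginx/.htpasswd;  # e.g. created with htpasswd -c
    proxy_pass           http://octopi_backend;
  }
```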

I like client certificates, and already had them set up for other stuff I run, so I’m using those.

This is my config:

map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

upstream octopi_camera1 {
  server 127.0.0.1:28080;
}

upstream octopi_camera2 {
  server 127.0.0.1:28081;
}

upstream octopi_backend {
  server 127.0.0.1:25000;
}

server {
  listen 80;
  listen 81;
  return 301 https://$host$request_uri;
}

server {
  listen 443 ssl; # managed by Certbot

  error_log  /var/log/nginx/error.log info;
  access_log /var/log/nginx/access.log;

#.... bunch of SSL jazz auto-generated by certbot .....

  proxy_buffering off;
  proxy_redirect http:// https://;
  proxy_set_header        X-Real-IP $remote_addr;
  proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
  proxy_set_header        X-Forwarded-Proto $scheme;
  proxy_http_version 1.1;
  proxy_set_header Upgrade $http_upgrade;
  proxy_set_header Connection $connection_upgrade;
  proxy_set_header Host $host;

  # I found this necessary in order to be able to upload large-ish gcode
  # files.
  client_max_body_size 1G;

  location /webcam/ {
    proxy_pass  http://octopi_camera1/;
    access_by_lua_file /etc/nginx/scripts/sso.lua;
  }

  location /camera2/ {
    proxy_pass  http://octopi_camera2/;
    access_by_lua_file /etc/nginx/scripts/sso.lua;
  }

  location / {
    proxy_pass  http://octopi_backend;
    access_by_lua_file /etc/nginx/scripts/sso.lua;
  }
}

What’s this access_by_lua hocus pocus?

I covered this in a previous post.  The problem is that modern web applications don’t really play nicely with client certificates, and this seemed to include Octoprint.  There’s a bunch of wizardry with web sockets and service workers that don’t send the client cert when they’re supposed to.

The basic idea behind the solution is to instead authenticate by a couple of cookies with an HMAC.  When these cookies aren’t present, nginx redirects to a domain that requires the client certificate.  If the certificate is valid, it generates and drops the appropriate cookies, and the client is redirected to the original URL.

See the aforementioned post for more details.

Goes without saying, but…

The Raspberry Pi itself should be secured as well.  Change the default password for the pi user.

Single Sign On with client certificates (Sun, 22 Jul 2018)

In a previous post, I detailed a trick to get complicated webapps working with client certificates.

The problem this solves is that some combination of web sockets, service workers (and perhaps some demonic magic) don’t play nicely with client certificates.  Under some circumstances, the client certificate is just not sent.

The basic idea behind the solution is to instead authenticate by a couple of cookies with an HMAC.  When these cookies aren’t present, you’re required to specify a client certificate.  When a valid client certificate is presented, HMAC cookies are generated and dropped.  If the cookies are present, you’re allowed access, even if you don’t have a client certificate.

This has worked well for me, but I still occasionally ran into issues.  Basically every time I started a new session with something requiring client certs, I’d get some sort of bizarre access error.  I dug in a little, and it seemed like the request to fetch the service worker code was failing because the browser wasn’t sending client certificates.

This led me to double down on the HMAC cookies.

Coming clean

When I call this Single Sign On, please understand that I really only have the vaguest possible understanding of what that means.  If there are standards or something that are implied by this term, I’m not following them.

What I mean is that I have a centralized lua script that I can include in arbitrary nginx server configs, and it handles auth in the same way for all of them.

The nitty gritty

Rather than using HMAC cookies as a fallback auth mechanism and having “ssl_verify_client” set to “optional,” I do the following:

  1. If HMAC cookies are not present, nginx redirects to a different subdomain (it’s important that it’s on the same domain).  This server config requires the client certificate.
  2. If the certificate is valid, it generates and drops the appropriate cookies, and the client is redirected to the original URL.  The cookies are configured to be sent for all subdomains of a given domain.
  3. Now that the client has HMAC cookies, it’s allowed access.  If the cookies were present to begin with, the above is skipped.
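The steps above can be modeled in a few lines of Python (an illustration of the scheme only; the deployed version is the nginx/Lua setup shown in this post):

```python
import hashlib
import hmac
import time

HMAC_SECRET = b"hunter2"  # the same shared secret the Lua script holds

def mint_cookies(client_cert_pem: str, ttl: int = 864000) -> dict:
    """What the SSO vhost does after a successful client-cert handshake:
    hash the cert, pick an expiry, and sign (hash .. expiry) with the secret."""
    client_id = hashlib.sha256(client_cert_pem.encode()).hexdigest()
    expires = int(time.time()) + ttl
    token = hmac.new(HMAC_SECRET, f"{client_id}{expires}".encode(),
                     hashlib.sha256).hexdigest()
    return {"ClientId": client_id,
            "AccessExpires": str(expires),
            "AccessToken": token}

def cookies_valid(cookies: dict) -> bool:
    """What every other vhost checks: recompute the HMAC and make sure
    the expiry is still in the future."""
    expected = hmac.new(
        HMAC_SECRET,
        (cookies["ClientId"] + cookies["AccessExpires"]).encode(),
        hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, cookies["AccessToken"])
            and int(cookies["AccessExpires"]) > time.time())
```

Tampering with any of the three cookies invalidates the HMAC, and an expired timestamp fails the second check; signing the hash and the expiry together is what makes the pair trustworthy.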

The setup has a couple of pieces:

  1. An nginx server config for an “SSO” domain.  This is the piece responsible for dropping the HMAC cookies.
  2. A lua script which is included everywhere you want to auth using this mechanism.

This is the SSO server config:

server {
  listen 80;
  return 301 https://$host$request_uri;
}

server {
  listen 443 ssl; # managed by Certbot

  error_log  /var/log/nginx/error.log info;
  access_log /var/log/nginx/access.log;

#....bunch of stuff generated by certbot....#

  ssl_client_certificate /etc/ssl/ca/certs/ca.crt;
  ssl_crl                /etc/ssl/ca/private/ca.crl;
  ssl_verify_client      on;

  location / {
    access_by_lua_file "/etc/nginx/scripts/sso.lua";
  }
}

And the SSO lua script:

-- Make this file only readable by the nginx process, and keep it away from web roots.
local HMAC_SECRET = "hunter2"

-- Set this to your domain.  Note that you’ll only be able to use this
-- for things that have this same TLD.
local DOMAIN = ""

local COOKIE_TTL = 864000
local crypto = require "crypto"

function ComputeHmac(msg, expires)
  return crypto.hmac.digest("sha256", string.format("%s%d", msg, expires), HMAC_SECRET)
end

function formatCookie(key, value)
  return string.format(
    "%s=%s; Secure; Path=/; Expires=%s; domain=.%s",
    key,
    value,
    ngx.cookie_time(ngx.time() + COOKIE_TTL),
    DOMAIN
  )
end

if ngx.var.server_name == string.format("sso.%s", DOMAIN) then
  verify_status = ngx.var.ssl_client_verify

  if verify_status == "SUCCESS" then
    client = crypto.digest("sha256", ngx.var.ssl_client_cert)
    expires = ngx.time() + COOKIE_TTL

    ngx.header["Set-Cookie"] = {
      formatCookie("AccessToken", ComputeHmac(client, expires)),
      formatCookie("ClientId", client),
      formatCookie("AccessExpires", expires)
    }

    return ngx.redirect(ngx.unescape_uri(ngx.var.arg_r))
  end
else
  client = ngx.var.cookie_ClientId
  client_hmac = ngx.var.cookie_AccessToken
  access_expires = ngx.var.cookie_AccessExpires

  if client ~= nil and client_hmac ~= nil and access_expires ~= nil then
    hmac = ComputeHmac(client, access_expires)

    if hmac ~= "" and hmac == client_hmac and tonumber(access_expires) > ngx.time() then
      -- Valid HMAC cookies: allow the request through.
      return
    end
  end

  return ngx.redirect(string.format("https://sso.%s/?r=%s", DOMAIN, ngx.escape_uri("https://" .. ngx.var.http_host .. ngx.var.request_uri)))
end

An example of it being used:

map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

upstream myservice {
  # Replace with your service’s address.
  server 127.0.0.1:8000;
}

server {
  listen 80;
  return 301 https://$host$request_uri;
}

server {
  listen 443 ssl; # managed by Certbot

#.....bunch of stuff managed by certbot.....#

  proxy_buffering off;
  proxy_redirect http:// https://;
  proxy_set_header        X-Real-IP $remote_addr;
  proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
  proxy_set_header        X-Forwarded-Proto $scheme;
  proxy_http_version 1.1;
  proxy_set_header Upgrade $http_upgrade;
  proxy_set_header Connection $connection_upgrade;
  proxy_set_header Host $host;

  location / {
    proxy_pass  http://myservice;
    access_by_lua_file /etc/nginx/scripts/sso.lua;
  }
}


Customizable e-Paper Information Display with 3D Printed Case (Sat, 30 Jun 2018)

I recently finished a project which tied together a bunch of different tinkering skills I’ve had a lot of fun learning about over the last couple of years.

The finished product is this:

It shows me:

  • The time
  • Weather: current, weekly forecast, readings from an outdoor thermometer, and temperature in the city I work in.
  • Probably most usefully: the arrival times of the next BART trains.

Obviously the same thing could be accomplished with a cheap tablet.  And probably with way less effort involved.  However, I really like the aesthetic of e-paper, and it’s kind of nice to not have yet another glowing rectangle glued to my wall.

I’m going to go into a bit of detail on the build, but keep in mind that this is still pretty rough around the edges.  This works well for me, but I would by no means call it a finished product. 🙂


This is made up of the following components:

  1. 400×300 4.2″ Waveshare e-Paper display module *
  2. HiLetgo ESP32 Dev Board *
  3. 60x40mm prototype board *
  4. Command strips (for sticking to the wall)
  5. Some patch wires
  6. MicroUSB Cable
  7. Custom 3D Printed enclosure (details follow)

Depending on where you get the components, this will run you between $40 and $50.


The e-Paper display module connects to the ESP32 over SPI.  Check out this guide to connecting the two.

I chose to connect using headers, sockets, and soldered wires.  This makes for a more reliable connection, and it’s easier to replace either component if need be.  I cut the female jacks from the jumper bus that came with my display and soldered the wires onto header pins.  I then put a whole mess of hot glue to prevent things from moving around.


I’m using something I wrote called epaper_templates (clearly I was feeling very inspired when I named it).

The idea here is that you can input a JSON template defined in terms of variables bound to particular regions on the screen.  Values for those variables can be plumbed in via a REST API or by publishing to appropriately named MQTT topics.

The variables can either be displayed as text, or be used to dynamically choose a bitmap to display.

My vision for this project is to have an easy to use GUI to generate the JSON templates, but right now you have to input it by hand.  Here is the template used in the picture above as an example.
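To make the idea concrete, here’s a toy Python model of variable-bound regions (a hypothetical schema for illustration only, not the actual format epaper_templates uses):

```python
# Toy model: each region is bound to a named variable; pushing a new value
# (as the REST API or MQTT layer would) re-renders only the affected regions.
template = {
    "regions": [
        {"x": 10, "y": 20, "variable": "outside_temp", "format": "{} C"},
        {"x": 10, "y": 60, "variable": "next_train",   "format": "BART: {}"},
    ]
}

variables = {}

def push_variable(name, value):
    """Store the new value and return the rendered text for every region
    bound to this variable."""
    variables[name] = value
    return [r["format"].format(value)
            for r in template["regions"]
            if r["variable"] == name]

# push_variable("outside_temp", 21.5) returns ["21.5 C"]
```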


The only variable that the firmware fills in for you is a timestamp.  Everything else must be provided externally.

I found Node-RED to be a fantastic vehicle for this.  It’s super easy to pull data from a bunch of different sources, format it appropriately, and shove it into some MQTT topics.  The flow looks like this:

Here is an export of the flow (note that some URLs reference internal services).


I designed a box in Fusion 360.  I’m a complete newbie at 3D design, but I was really pleased with how easy it was to create something like this.

The display mounts onto the lid of the box with the provided screws and hex nuts.  The lid sticks to the bottom of the box with two tabs.  The bottom has a hole for the USB cable and some tabs to hold the prototype board in place.

My box is printed on a Prusa i3 MK3.

3D files:

Tips on printing:

  • The top should be printed in PETG or some other slightly flexible material.  The tabs will probably break if printed in PLA.  Material does not matter much for the bottom.  Mine is PLA.
  • Both pieces should be printed with supports.  For the top, the recessed screw holes need it.  For the bottom, the tabs, lid tab holes, and USB cable hole need them.



* Contains Amazon affiliate link

Solar-Powered Outdoor Thermometer with Multiple DS18B20 Sensors (Sun, 24 Dec 2017)

I’ve been pretty happy with how my ESP8266-powered outdoor thermometers turned out.  One of these has two sensors — one to measure ambient temperature, and one to measure the temperature of a hot tub.  It’s solar-powered (with an 18650 Li-Ion battery), uses a single GPIO pin, and never needs charging!

DS18B20 Temperature Sensor

DS18B20s* are great digital thermometers for DIY projects.  They’re about the same price as the more popular DHT11*, and while they don’t measure humidity, they have a really cool advantage: you can use multiple sensors on the same data pin!  They also support a parasitic power mode, which drops their standby power draw to close to 0.

Diagram showing how to set up DS18B20s in parasite power mode. Image from tweaking4all.
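As an aside on what the sensor actually returns over the wire: the DS18B20 datasheet specifies the temperature register as a 16-bit two’s-complement value with 1/16 °C per bit at the default 12-bit resolution.  A minimal decoder:

```python
def ds18b20_to_celsius(raw: int) -> float:
    """Convert a 16-bit two's-complement DS18B20 scratchpad reading
    (1/16 degC per LSB at 12-bit resolution) to degrees Celsius."""
    if raw & 0x8000:      # sign bit set: negative temperature
        raw -= 1 << 16
    return raw / 16.0

# Example values from the conversion table in the DS18B20 datasheet:
#   0x0191 -> +25.0625 degC, 0xFF5E -> -10.125 degC, 0xFC90 -> -55 degC
```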


I updated the ESP8266 firmware I’ve been using to support multiple sensors, and added a nice web UI:

This allows me to push readings from multiple sensors connected to the same GPIO pin to different MQTT topics.  You can also push readings to an HTTP server.

The finished setup

With deep sleep mode, this project is well suited to battery power.  My outdoor thermometer uses an 18650 cell with a 5v solar panel and battery protection circuit.  I’ve never needed to charge it.  It’s been running for months.

Tips on power efficiency:

  1. Use an efficient voltage regulator.  Many dev boards use an AMS1117, which has a very high quiescent current.  You’re probably best off with a buck converter*, but an HT7333 or similar would be a lot better too.
  2. Use parasitic power mode!  Just wire Gnd and Vin on the DS18B20 to Gnd, and add a 4.7 KΩ pullup resistor to the data line.
  3. Disable any LEDs that are on for long periods of time.  I just used a pair of pliers to break the power LED.
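To put these tips in perspective, here’s a back-of-envelope duty-cycle estimate.  All numbers below are illustrative assumptions, not measurements from my build:

```python
def battery_life_days(capacity_mah, active_ma, sleep_ma, active_s, period_s):
    """Estimate runtime by averaging current draw over one wake/sleep cycle."""
    avg_ma = (active_ma * active_s + sleep_ma * (period_s - active_s)) / period_s
    return capacity_mah / avg_ma / 24.0

# Assumed: 2500 mAh 18650; ESP8266 awake at 80 mA for 5 s every 5 minutes.
# With ~0.02 mA in deep sleep:          ~77 days per charge
# With 10 mA of regulator/LED waste:    ~9 days per charge
# (The solar panel only has to replace the small average draw.)
```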

I use a 3.5mm aux cable soldered to the probe wires to connect the DS18B20 probe to the ESP8266 circuit:

This is nice because it’s easy to add length with an extension cable or split the line with a standard aux splitter:

Here is a sloppy circuit in Fritzing.


  1. GitHub page for firmware
  2. DS18B20 datasheet

Components Used*

  1. 5V battery protection circuit for 18650 cells
  2. 5V 180mA solar panel (mine is actually 160mA, but can’t find the product link anymore)
  3. 18650 Li-Ion Battery (recommend getting Samsung or Panasonic)
  4. ESP8266-07 and Breakout Board (soldering required. Wemos D1 Mini works if you want to avoid soldering).
  5. Waterproof DS18B20 Temperature Sensor
  6. Buck Converter (not necessary if using Wemos D1 Mini)
  7. Optional: 3.5mm extension cable (cut in half, solder one end to ESP8266 board, other to DS18B20)

* Contains Amazon affiliate link

Using a Milight remote with HomeAssistant (Sat, 08 Jul 2017)

Milight sells 2.4 GHz remotes* for quite cheap (~$12) which can be used to control Milight bulbs*. They also make in-wall panels* that do the same thing. These work quite well, but if you’re using something like HASS, you’ll end up with stale state. For example, if you turn lights on with the remote, that change won’t be reflected in HASS.

I recently released v1.4.0 of esp8266_milight_hub which allows you to use Milight remotes in conjunction with HomeAssistant, or any other platform that works with MQTT. Here’s some footage of it in action:

The ESP8266 is passively listening for packets sent by the remote, and forwards the data to an MQTT topic. HASS reacts by sending commands to a different MQTT topic which instructs the ESP8266 to turn bulbs on. You could just as easily have HASS do something completely different, like control a non-Milight bulb.

If you’re interested in setting this up yourself, instructions are available on the GitHub project wiki.

* Contains Amazon affiliate link

Securing HomeAssistant with client certificates (works with Safari/iOS) (Sun, 30 Apr 2017)

I recently moved from SmartThings to HomeAssistant. One of the things I didn’t have to think about too much with SmartThings was how to authenticate all of my connected devices (laptops, phones, tablets, etc.) with my HA platform. I wanted to find a good balance between security and convenience with HomeAssistant.

HomeAssistant makes it easy to secure your install with a password. Coupled with TLS, this is pretty solid. But there’s just something about the idea of a publicly facing page that anyone on the Internet can get to, protected with nothing but a password, that made me feel uneasy.

Client certificates are a very robust authentication mechanism that involves installing a digital certificate on each device you wish to grant access to. Each certificate is signed by a certificate authority that the server trusts, which is how the server knows that the client is valid.

This feels nicer than HomeAssistant’s built-in security measures to me for a few reasons:

  1. Individual client certificates can be revoked. You don’t have to configure authentication on every device you own if someone loses their phone.
  2. While I highly doubt there are any issues with HomeAssistant, I feel more confident in nginx and openssl.
  3. Unless you add a passphrase to the client certificates (I didn’t), the whole thing is passwordless and still manages to be pretty darn secure.
  4. If I ever became truly paranoid, I could turn on HomeAssistant’s password protection and my HA dashboard would essentially need two authentication factors (the SSL cert + the password).

While I did find this approach more appealing, there are several drawbacks:

  1. It’s way harder to set up. You need to run a bunch of openssl commands, and install a certificate on each device you want to grant access to.
  2. The HomeAssistant web UI requires WebSockets, which seem to not play nicely in combination with client certificates on Safari or iOS devices. My household has iOS users, so this was something I needed to figure out.

I think I managed to get this working. The only disadvantage is that clients are granted access for an hour after successfully authenticating once. The basic approach is to tag authenticated browsers with an access token that’s good for a short period of time, long enough for them to establish a WebSocket connection. I’ll go through the steps in setting this up.

What you need

  1. Install packages: 
    sudo apt-get install nginx nginx-extras lua5.1 liblua5.1-dev
  2. openssl
  3. luacrypto module, which exposes openssl bindings in lua.

luacrypto was kind of a pain to install. Here’s what I did to get it working with my nginx install. It involved patching configure.ac (thanks to this very helpful StackOverflow post for the tip):

# Fix package names for compatibility with Ubuntu (the patch targets configure.ac)
git clone /opt/luacrypto \
  && cd /opt/luacrypto \
  && echo 'diff --git a/configure.ac b/configure.ac
index b6b9175..20ea20c 100644
--- a/configure.ac
+++ b/configure.ac
@@ -28,10 +28,10 @@ AC_CHECK_FUNCS([memset])

 # pkgconfig

 # lua libdir
-LUALIBDIR="`$PKGCONFIG --variable=libdir lua`"
+LUALIBDIR="`$PKGCONFIG --variable=libdir lua5.1`"

 # dest of headers
 CRYPTOINC="${includedir}/${PACKAGE_NAME}"' \
  | git apply \
  && autoreconf -i \
  && ./configure \
  && make \
  && sudo mkdir -p /usr/local/lib/lua/5.1 \
  && sudo cp src/.libs/ /usr/local/lib/lua/5.1/

Setting up a Certificate Authority

There are already good guides on doing this. I recommend this one. In this guide, I’m using the default_CA parameters pre-filled by openssl on my system.

Generate client certificates

I put a script on my PATH to make this easier:

$ cat `which create-client-ssl-cert`
#!/bin/bash

function usage () {
  echo "$0 [CA section name] [username]"
  exit 1
}

if [ $# -ne 2 ]; then
  usage
fi

CA_NAME="$1"
USERNAME="$2"

# Adjust these to match your CA's layout.
SSL_DIR=/etc/ssl
SSL_CERTS_DIR="${SSL_DIR}/${CA_NAME}/certs"
SSL_PRIVATE_DIR="${SSL_DIR}/${CA_NAME}/private"
USERS_DIR="${SSL_DIR}/${CA_NAME}/users"

mkdir -p ${USERS_DIR}

if [ -f "${USERS_DIR}/${USERNAME}.key" ]; then
  echo "Key for $USERNAME already exists! Delete it to continue."
  exit 1
fi

# Create the Client Key and CSR
openssl genrsa -des3 -out ${USERS_DIR}/${USERNAME}.key 1024
openssl req -new -key ${USERS_DIR}/${USERNAME}.key -out ${USERS_DIR}/${USERNAME}.csr

# Sign the client certificate with our CA cert.  Unlike signing our own server cert, this is what we want to do.
openssl x509 -req -days 1095 -in ${USERS_DIR}/${USERNAME}.csr -CA $SSL_CERTS_DIR/ca.crt -CAkey $SSL_PRIVATE_DIR/ca.key -CAserial $SSL_DIR/${CA_NAME}/serial -CAcreateserial -out ${USERS_DIR}/${USERNAME}.crt

echo "making p12 file"
# browsers need P12s (contain key and cert)
openssl pkcs12 -export -clcerts -in ${USERS_DIR}/${USERNAME}.crt -inkey ${USERS_DIR}/${USERNAME}.key -out ${USERS_DIR}/${USERNAME}.p12

echo "made ${USERS_DIR}/${USERNAME}.p12"

You then run this for each device you want to grant access to:

# create-client-ssl-cert ca chris_pixel

Make sure to supply an export password.  Generated certificate files will be placed in ${USERS_DIR}.

Get the certificates on the devices

The .p12 file is the one you want. Make sure to not compromise the certificates in the process.

I rsynced the files to my laptop and attached them to a LastPass note, which I could access on my devices. On most devices, you should be able to just open the .p12 file and it’ll do what you want.

On iOS devices, I needed to serve the certificates over HTTPS on a trusted network because they needed to be “opened” by Safari in order to be recognized.

Configure nginx

Here’s my nginx config. You’ll need to substitute your domain and SSL certificate parameters:

map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

upstream hass_backend {
  # HomeAssistant's default port
  server 127.0.0.1:8123;
}

server {
  listen 80;
  listen 81;
  return 301 https://$host$request_uri;
}

server {
  listen      443 ssl;

  error_log  /var/log/nginx/error.log;
  access_log /var/log/nginx/access.log;

  ssl_certificate     /etc/letsencrypt/live/<your-domain>/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/<your-domain>/privkey.pem;
  ssl_dhparam         /etc/nginx/ssl/dhparams.pem;
  ssl on;

  add_header Strict-Transport-Security "max-age=31536000; includeSubdomains";
  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
  ssl_prefer_server_ciphers on;
  ssl_session_cache shared:SSL:10m;

  ssl_client_certificate /etc/ssl/ca/certs/ca.crt;
  ssl_crl                /etc/ssl/ca/private/ca.crl;
  ssl_verify_client      optional;

  proxy_buffering off;
  proxy_redirect http:// https://;
  proxy_set_header        X-Real-IP $remote_addr;
  proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
  proxy_set_header        X-Forwarded-Proto $scheme;
  proxy_http_version 1.1;
  proxy_set_header Upgrade $http_upgrade;
  proxy_set_header Connection $connection_upgrade;
  proxy_set_header Host $host;

  location / {
    #access_by_lua_file /etc/nginx/scripts/hass_access.lua;
    proxy_pass  http://hass_backend;
  }
}

If this all worked, you should be able to access your HomeAssistant instance from a device with a client certificate installed, but not otherwise. Unfortunately if you’re using iOS or Safari, you’ll probably notice that the page loads, but you get a forever spinny wheel. If you look in the debugger console, you might see messages that look like this:

[Error] WebSocket connection to 'wss://' failed: Unexpected response code: 400 (x11)

This is because the browser isn’t sending the client certificate information when trying to connect to the WebSocket and is therefore failing.

Fixing compatibility with iOS/Safari

Safari does actually send client cert info along with the initial request. Nginx has a really cool module that allows you to insert all sorts of fancy logic with lua scripts. I added one that tags browsers supplying a valid client certificate with a cookie granting access for about an hour. This worked really well. Since this is all over HTTPS, and the access tokens are short-lived, I felt pretty comfortable.

The easiest way I could think of to create a cookie that was valid for a limited time was to use an HMAC. Basically I “sign” a hash of the client’s certificate along with an expiry timestamp. The certificate hash, expiration timestamp, and HMAC are all stored in cookies. Nginx can then validate that the expiration timestamp is in the future, and that the HMAC signature matches what’s expected.
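The scheme is simple enough to sketch outside of nginx. Here’s the idea in ruby; the method names and example secret are mine, just for illustration:

```ruby
require 'openssl'

# Shared secret -- must match the one used by the nginx-side script
SECRET = 'hunter2'

# Issue the three cookie values for a hash of the client's certificate
def issue_cookies(client_hash, now = Time.now.to_i)
  expires = now + 3600
  hmac = OpenSSL::HMAC.hexdigest('sha256', SECRET, "#{client_hash}#{expires}")
  { 'ClientId' => client_hash, 'AccessExpires' => expires.to_s, 'AccessToken' => hmac }
end

# Validate: recompute the HMAC from the cookies and check the expiry
def cookies_valid?(cookies, now = Time.now.to_i)
  expected = OpenSSL::HMAC.hexdigest('sha256', SECRET,
                                     "#{cookies['ClientId']}#{cookies['AccessExpires']}")
  expected == cookies['AccessToken'] && cookies['AccessExpires'].to_i > now
end
```

Tampering with any of the three cookies changes the recomputed HMAC, and an expired timestamp fails the time check.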

You’ll notice the commented-out line in the nginx config above. Uncomment it:

access_by_lua_file /etc/nginx/scripts/hass_access.lua;

And add the script:

local HMAC_SECRET = "hunter2"
local crypto = require "crypto"

function ComputeHmac(msg, expires)
  return crypto.hmac.digest("sha256", string.format("%s%d", msg, expires), HMAC_SECRET)
end

verify_status = ngx.var.ssl_client_verify

if verify_status == "SUCCESS" then
  -- Client presented a valid certificate: issue cookies granting access for an hour
  client = crypto.digest("sha256", ngx.var.ssl_client_cert)
  expires = ngx.time() + 3600

  ngx.header["Set-Cookie"] = {
    string.format("AccessToken=%s; path=/", ComputeHmac(client, expires)),
    string.format("ClientId=%s; path=/", client),
    string.format("AccessExpires=%d; path=/", expires)
  }
elseif verify_status == "NONE" then
  -- No certificate: fall back to validating the cookies
  client = ngx.var.cookie_ClientId
  client_hmac = ngx.var.cookie_AccessToken
  access_expires = ngx.var.cookie_AccessExpires

  if client ~= nil and client_hmac ~= nil and access_expires ~= nil then
    hmac = ComputeHmac(client, access_expires)

    if hmac ~= "" and hmac == client_hmac and tonumber(access_expires) > ngx.time() then
      return
    end
  end

  ngx.exit(ngx.HTTP_FORBIDDEN)
else
  -- Certificate presented but failed verification
  ngx.exit(ngx.HTTP_FORBIDDEN)
end

Reverse engineering the new Milight/LimitlessLED 2.4 GHz Protocol
Sat, 18 Mar 2017 08:20:31 +0000

My last post went over an ESP8266-based wifi gateway for Milight/LimitlessLED bulbs. This supports a few kinds of bulbs that have been around for a couple of years (e.g., this one).

About a year ago, newer bulbs and controllers started showing up that used a different 2.4 GHz protocol. This introduced some scrambling that made it difficult to emulate many devices. This was presumably done intentionally to prevent exactly the sort of thing that my last project accomplished (boo!).

The new bulbs actually have some really cool features that none of the old ones do, so there’s some incentive to figure this out. In particular, they support saturation, which allows for ~65k (2**16) colors with variable brightness instead of the 256 colors the old ones offer. They also combine RGB and CCT (adjustable white temperature) in one bulb, which is super cool.

A few others have dug into this a little, but as far as I’ve been able to tell, no one has figured out (or at least shared) how to de-scramble the protocol. I think I’ve managed to do so. I should mention that I don’t have much experience doing this kind of thing, so it’s entirely possible the structure I’m imposing is a lot more complicated than what’s actually going on. But as far as I’ve been able to tell, it does work. I’ve tested with five devices – four remotes and one wifi box.

I’m going to start by detailing the structure, and I’ll follow up with some of the methodology I used to reverse the protocol.

Differences from old protocol

From a quick glance, there are a few superficial differences between the new and old protocols:

  1. Listens on a different channelset (this was true of different bulb types among the old bulbs too). The new bulbs use channels 8, 39, and 70.
  2. Different PL1167 syncword.
  3. Packets have 9 bytes instead of 7.
  4. Packets are scrambled. The same command can look completely different.

The scrambling is the tricky part. As others who have stared at packet captures noticed, when the first byte of packets for the same command is held fixed, most of the other bytes stay fixed too. This suggests that the first byte is some kind of scramble key. Turns out this is the case.

Example packets for turning group 1 on with one of my remotes:

A7 45 E9 6F BB 99 9E CF 4C
E1 D6 4F 79 99 78 16 9A A7
23 D1 55 03 47 25 2E 5B E8
92 18 18 33 42 15 60 02 CE
CA E0 E0 EB 0A DD A1 CA 9F


For a packet p, I’ll use pi to refer to the 0-indexed ith byte in the packet. For example, p0 refers to the 0th byte.

I’ll use p’ to refer to the scrambled packet for a packet p.


The 9 bytes of the packet are:

  p0: Scramble key
  p1: 0x20 (constant)
  p2: ID1
  p3: ID2
  p4: Command
  p5: Argument
  p6: Sequence
  p7: Group
  p8: Checksum

Packet scrambling

The designer of this protocol added in quite a few things to complicate reversing it. None of them are particularly hard on their own, but with them all added together it makes it pretty tough.

The scrambling algorithm is basically:

  1. A scramble key k is computed from p0
  2. Each byte position i has a different set of four 1-byte integers A[i]. Integer A[i][j] is used when p0 ≡ j mod 4.
  3. A[i][j] is up-shifted by 0x80 when p0 is in the range [0x54, 0xD3]. This does not apply to the checksum byte.
  4. p’i = ((pi ⊕ k) + A[i][p0 mod 4]) mod 256, where ⊕ is a bitwise exclusive or.

The algorithm to compute k is as follows (in ruby):

def xor_key(p0)
  # Generate most significant nibble
  shift = (p0 & 0x0F) < 0x04 ? 0 : 1
  x = (((p0 & 0xF0) >> 4) + shift + 6) % 8
  msn = (((4 + x) ^ 1) & 0x0F) << 4

  # Generate least significant nibble
  lsn = ((((p0 & 0xF) + 4) ^ 2) & 0x0F)

  msn | lsn
end

A values (columns are indexed by p0 mod 4):

       0     1     2     3
p1   0x45  0x1F  0x14  0x5C
p2   0x2B  0xC9  0xE3  0x11
p3   0x6D  0x5F  0x8A  0x2B
p4   0xAF  0x03  0x1D  0xF3
p5   0x5A  0x22  0x30  0x11
p6   0x04  0xD8  0x71  0x42
p7   0xAF  0x04  0xDD  0x07
p8   0x61  0x13  0x38  0x64

There are probably actually several possible values for some of these. It really only matters that they line up in a particular way because of the checksum.

In addition to all of this, command arguments have different offsets from 0, and some commands (e.g., saturation and brightness) share the same p4 value with arguments spanning different ranges. For example, arguments for brightness start at 0x4F, while arguments for color start at 0x15.


The checksum byte is calculated by summing k + 2 and all bytes of the unscrambled packet except the first (the scramble key) and the last (the checksum itself).
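Putting the key derivation, the per-byte transform, and the checksum together, here’s a sketch of the whole scramble/unscramble round trip in ruby. The method names are mine, and the mod-256 wrap on the checksum is my assumption; my ruby library is the authoritative version:

```ruby
# Per-position additive constants from the A table (A[i][p0 mod 4])
A = [
  nil,                        # p0 is the scramble key itself
  [0x45, 0x1F, 0x14, 0x5C],   # p1
  [0x2B, 0xC9, 0xE3, 0x11],   # p2
  [0x6D, 0x5F, 0x8A, 0x2B],   # p3
  [0xAF, 0x03, 0x1D, 0xF3],   # p4
  [0x5A, 0x22, 0x30, 0x11],   # p5
  [0x04, 0xD8, 0x71, 0x42],   # p6
  [0xAF, 0x04, 0xDD, 0x07],   # p7
  [0x61, 0x13, 0x38, 0x64]    # p8
]

def xor_key(p0)
  shift = (p0 & 0x0F) < 0x04 ? 0 : 1
  x = (((p0 & 0xF0) >> 4) + shift + 6) % 8
  msn = (((4 + x) ^ 1) & 0x0F) << 4
  lsn = ((((p0 & 0xF) + 4) ^ 2) & 0x0F)
  msn | lsn
end

def additive_constant(i, p0)
  a = A[i][p0 % 4]
  # Up-shift by 0x80 when p0 is in [0x54, 0xD3], except for the checksum byte
  a += 0x80 if (0x54..0xD3).include?(p0) && i != 8
  a % 256
end

def scramble(packet)
  p0 = packet[0]
  k = xor_key(p0)
  [p0] + (1..8).map { |i| ((packet[i] ^ k) + additive_constant(i, p0)) % 256 }
end

def unscramble(packet)
  p0 = packet[0]
  k = xor_key(p0)
  [p0] + (1..8).map { |i| ((packet[i] - additive_constant(i, p0)) % 256) ^ k }
end

# Checksum over unscrambled bytes p1..p7, plus k + 2 (mod 256 assumed)
def checksum(packet)
  (packet[1..7].reduce(:+) + xor_key(packet[0]) + 2) % 256
end
```

Unscrambling just inverts the add-then-xor: subtract the additive constant mod 256, then xor with the key.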


Further detail is probably easier to communicate in code, so here is a ruby library that can encode/decode packets. The project on github also has a ton of packets I scraped from my devices.


I scraped a ton of packets with this script.

Figuring this out was mostly a matter of making assumptions, recognizing patterns, and trial and error. The most helpful assumption was that sequential argument values in the UDP protocol had sequential values in the 2.4 GHz protocol (this turned out to be true).

I noticed that packets that had p0 values with the same remainder mod 4 followed a nearly sequential pattern. It often looked something like 0xC, 0xD, 0xA, 0xB, etc. A sequence follows this pattern when it’s xored with a constant. I wrote some cruddy ruby methods to brute-force search for constants that yielded the sequence [0, 1, …, N]. This also allowed me to find the values for the As.

Roughly the same process was repeated for each byte. Bytes that are constants were trickier because they didn’t follow a sequence. I instead brute-forced values for A given a sequence of xor keys.
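This search is cheap enough to do exhaustively. For a fixed p0 (and therefore a fixed xor key and additive constant), something like the following finds a working pair. The method name is mine, and it returns the first match, which as noted above isn’t necessarily unique:

```ruby
# Brute-force a (xor constant, additive constant) pair such that
# ((b - a) % 256) ^ k maps the observed bytes onto 0, 1, 2, ...
def find_constants(observed)
  (0..255).each do |k|
    (0..255).each do |a|
      decoded = observed.map { |b| ((b - a) % 256) ^ k }
      return [k, a] if decoded == (0...observed.size).to_a
    end
  end
  nil
end
```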

Next Steps

I’ll be porting over the scrambling code to my ESP8266 milight hub to add support for the new bulbs.

UPDATE 2017-03-28: A few kind volunteers sent me packet captures from their devices, and the ID bytes were not staying fixed under decoding. Assuming my methodology is right, these should be the right values for all parameters, with the possible exception of p1 and the checksum byte.

UPDATE 2017-03-20: I found that the wifi box I was testing with supported older protocols, which transmits the unscrambled device ID. There were several possible values for the ID byte offsets, and I chose a few of them arbitrarily. The decoded ID in the scrambled protocol was not matching the ID in the unscrambled protocol. Updating the additive offset values fixed this.

Milight WiFi Gateway Emulator on an ESP8266
Sun, 12 Feb 2017 05:45:20 +0000

Milight bulbs* are cheap smart bulbs that are controllable with an undocumented 2.4 GHz protocol. In order to control them, you either need a remote* (~$13), which allows you to control them directly, or a WiFi gateway* (~$30), which allows you to control them with a mobile app or a UDP protocol.

A few days ago, I posted my Arduino code to emulate a Milight WiFi gateway on an ESP8266 (link). This allows you to use an NRF24L01+ 2.4 GHz transceiver module* and an ESP8266* to emulate a WiFi gateway, which provides the following benefits:

  1. Virtually unlimited groups. The OTS gateways are limited to four groups.
  2. Exposes a nice REST API as opposed to the clunky UDP protocol.
  3. Secures the gateway with a username/password (note that the 2.4 GHz protocol used by the bulbs is inherently insecure, so this only does so much good).

I wanted to follow up with a blog post that details how to use this. I’m going to cover:

  1. How to setup the hardware.
  2. How to install and configure the firmware.
  3. How to use the web UI and REST API to pair/unpair and control bulbs.

Shopping List

This should run you approximately $10, depending on where you shop and how long you’re willing to wait for shipping. Items from Chinese sellers on ebay usually come at significant discounts, but it often takes 3-4 weeks to receive items you order.

  1. An ESP8266 module that supports SPI. I highly recommend a NodeMCU v2*.
  2. An NRF24L01+ module. You can get a pack of 10* on Amazon for $11. You can also get one that supports an external antenna if range is a concern (link*).
  3. Dupont female-to-female jumper cables (at least 7). You’ll need these to connect the ESP8266 and the NRF24L01+.
  4. Micro USB cable.

If you get a bare ESP8266 module, you’ll need to figure out how to power it (you’ll likely need a voltage regulator), and you’ll probably have to be mildly handy with soldering.

Setting up the Hardware

The only thing to do here is to connect the ESP8266 to the NRF24L01+ using the jumper cables. I found this guide pretty handy, but I’ve included some primitive instructions and photos below.

NodeMCU Pinout

NRF24L01+ Pinout


NodeMCU Pin     NRF24L01+ Pin
3V (NOT Vin)    VCC

Installing drivers

There are a couple of different versions of NodeMCUs (I’m not convinced they’re all actually from the same manufacturer). Depending on which one you got, you’ll need to install the corresponding USB driver in order to flash its firmware.

The two versions I’m aware of are the v2 and the v3. The v2 is smaller and has a CP2102 USB to UART module. You can identify it as the small square chip near the micro USB port:

NodeMCU v2 with CP2102 circled

Install drivers for the v2 here.

The v3 is larger and has a CH34* UART module, which is thin and rectangular:

NodeMCU v3 with CH34* circled

The CH34* drivers seem more community-supported. This blog post goes over different options.

I’ve been able to use both the v2 and v3 with OS X Yosemite.

Installing Firmware

If you’re comfortable with PlatformIO, you can check out the source from Github. You should be able to build and upload the project from the PlatformIO editor.

Update – Mar 26, 2017: I highly recommend using PlatformIO to install the firmware. The below instructions are finicky and unless you get the arguments exactly right, the filesystem on your ESP will not work correctly. Using PlatformIO is a more robust way to get a fresh ESP set up. Further instructions are in the README.

Update – Feb 26, 2017: if you’ve used your ESP for other things before, it’s probably a good idea to clear the flash with esptool.py --port /dev/ttyUSB0 erase_flash. Thanks to Richard for pointing this out in the comments.

If not, you can download a pre-compiled firmware binary here. If you’re on Windows, the NodeMCU flasher tool is probably the easiest way to get it installed.

On OS X (maybe Linux?), following the NodeMCU guide, you should:

  1. Connect the NodeMCU to your computer using a micro USB cable.
  2. Install esptool:
    git clone \
      && cd esptool \
      && sudo python ./setup.py install
  3. Flash the firmware:
    python esptool.py --port /dev/cu.SLAB_USBtoUART \
      --baud 115200 write_flash -fm=dio -fs=4MB 0x00000 \
      <path-to-firmware.bin>

    Note that the serial device path will be different if you’re using a v3 NodeMCU (the CH34* driver creates a differently named device). Be sure to specify the real path to the firmware file.
  4. Restart the device. To be safe, just unplug it from USB and plug it back in.

Setup firmware

Note that you’ll have to do all of these things before you can use the UI, even if you used the pre-compiled firmware:

  1. Connect the device to your WiFi. Once it’s booted, you should be able to see a WiFi network named “ESPXXXXXX”, where XXXXXX is a random identifier. Connect to this network and follow the configuration wizard that should come up.  The password will be milightHub.
  2. Find the IP address of the device. There are a bunch of ways to do this. I usually just look in my router’s client list. It should be listening on port 80, so you could also scan your network for devices with port 80 open.

You should now be able to navigate to http://&lt;ip_of_esp&gt;.

Using the Web UI

The UI is useful for a couple of things.

If you have Milight bulbs already, you probably have them paired with an existing device. Rather than unpairing them and re-pairing with the ESP8266 gateway, you can just have the ESP8266 gateway spoof the ID of your existing gateway or remote. Just click on the “Start Sniffing” button near the bottom and push buttons in the app or on the remote. You should see packets start to appear:

The “Device ID” field shows the unique identifier assigned to that device. To have the ESP8266 gateway spoof it, scroll up to the top and enter it:

The controls should now work as expected. You can click on the “Save” button below if you want to save the identifier in the dropdown for next time.

The UI is also useful for pairing/unpairing bulbs. Just enter the gateway ID, click on the group corresponding to the bulb you wish to pair/unpair, screw in the bulb, and quickly (within ~3-5s) press the appropriate button. The bulb should flash on and off if it was successful.

Using the REST API

The UI is great for poking around and setting things up, but if you want to tie this into a home automation setup, you’ll probably want a programmatic interface. The API is fully documented in the Github readme, but here’s a quick example:

curl -vvv -X PUT \
  --data-binary '{"status": "on", "hue":0}' \

This will turn bulbs paired with device 0xCD86, group 2 on and set the color to red (hue = 0).
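The same request can be made from ruby. The IP and route here are placeholders of my own; check the Github readme for the actual route:

```ruby
require 'net/http'
require 'json'

# Placeholder host and route -- substitute your hub's IP and the documented route
uri = URI('http://192.168.1.50/gateways/0xCD86/rgbw/2')

# Build a PUT request with the same JSON body the curl example uses
req = Net::HTTP::Put.new(uri, 'Content-Type' => 'application/json')
req.body = JSON.generate(status: 'on', hue: 0)

# Actually send it (requires the hub to be reachable):
# res = Net::HTTP.start(uri.hostname, uri.port) { |http| http.request(req) }
```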

UPDATE – Feb 12, 2017

I realized this project would be a lot more immediately useful to people if it just supported the existing Milight UDP protocol. This would allow people to use the existing integrations others have built for OpenHab, Home Assistant, SmartThings, etc.

The Web UI has a section to manage gateway servers. Each server will need a device ID and a port.

* Amazon affiliate link.
