Security Hardening Octoprint/Octopi

Octoprint is a great web frontend for 3D printers. Octopi is a Raspbian-based image for the Raspberry Pi that comes with everything you need already set up and configured.

Octoprint is an extremely convenient way to manage your 3D printer.  However, it’s capable of a lot of spooky things:

  1. Provides access to webcams showing prints (if you have them)
  2. Sets temperatures of both the tool and the heatbed
  3. Starts whatever print you feel like
  4. Controls the stepper motors

In the best case, Octoprint gives whoever can access it the ability to see into your house and what’s going on with your printer.  In the worst case, someone with malicious intent could burn down your house, or at least wreck your printer.

The smartest approach here is probably to put Octoprint on a trusted network and refrain from poking holes in your router to allow access from the Internet.

But I’m not that smart.

In this post I’m going to outline a couple of things I did that make me feel better about exposing my Octoprint instance to the Internet.

Prior Art

First of all, Octoprint has built-in access controls.  And you should definitely use those.

I feel strongly that these are not sufficient, however:

  1. Unauthenticated users can do way too much.  Most importantly, they can view webcam feeds.  Yikes!
  2. There have been bugs with the built-in access controls.

Secondly, others have done things similar to what I’ve done.  However, there are a couple of things I’m going to do differently, and there are a few additional things I want to do.

Requirements

  1. Every interaction with Octoprint should go through a reverse proxy.  It should not be possible to access any part of Octoprint except through the reverse proxy.
  2. The previous requirement should apply even if you’re on my local network.  Something about unauthenticated webcam feeds gives me the heebie-jeebies.  Even if they’re pointed at a corner.
  3. I’m not going to run a web-facing nginx instance on Octoprint.  I want to use my main server as an entry point.
  4. Use client certificates for auth (I covered this in a previous post).
  5. TLS via letsencrypt.

Close down the ports

By default, Octopi exposes the Octoprint web interface on port 80 (via haproxy), and the webcam feed via mjpg_streamer on port 8080.

I didn’t want these ports accessible except through loopback.  This is easy enough to change.

To shut down access to the Octoprint instance, just disable haproxy:
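On Octopi this is just a matter of stopping the service and keeping it from coming back at boot:

```shell
# Stop haproxy now, and keep it from starting at boot
sudo systemctl stop haproxy
sudo systemctl disable haproxy
```

(On older sysvinit-based images, `sudo service haproxy stop` and `sudo update-rc.d haproxy disable` accomplish the same thing.)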

The Octoprint instance itself listens on port 5000 by default, and is bound to loopback.

To shut down access to mjpg_streamer, we’ll have to fiddle with the script stored at /root/bin/webcamd:
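The exact contents of the script vary between Octopi versions, but the gist is to pass a listen flag to mjpg_streamer’s output_http.so plugin.  Something along these lines (the variable name here is an assumption; look for wherever your script builds the http plugin options):

```shell
# In /root/bin/webcamd: restrict the http output plugin to loopback.
# -n disables the plugin's command interface; --listen (short form -l)
# sets the bind address.
camera_http_options="-n --listen 127.0.0.1"
```

If your build of mjpg_streamer doesn’t recognize --listen, check the plugin’s help output with `mjpg_streamer -o 'output_http.so --help'`.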

This tells mjpg_streamer’s http plugin to bind itself to loopback.  For the change to take effect, make sure to restart the webcamd service (or just reboot the pi to be safe).

To test that this worked, try accessing http://octopi.local and http://octopi.local:8080.  You should get connection refused errors for both.
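Or, from another machine on the network with curl:

```shell
# Both of these should fail with "Connection refused"
curl -v http://octopi.local/
curl -v http://octopi.local:8080/
```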

Open up the ports (on nginx server)

If you plan on running nginx on the pi, you can skip this step.  I have a different server running nginx.

In the last step, we shut down the ports to Octoprint.  Now we need to give the server running nginx a way to access them.

An easy way to accomplish this is with local SSH tunnels.  Setting this up is easy enough:

  1. Create a user on the octopi instance.  I called mine something to the effect of “ssh-proxy”
  2. Create a corresponding user on the server running nginx.  Generate an SSH key.
  3. Add the public key for ssh-proxy@nginx-server to ssh-proxy@octopi:~/.ssh/authorized_keys
  4. Set up autossh to establish a persistent SSH tunnel.  This will reestablish the tunnel when the pi reboots or connectivity is broken for any reason.  This is the command I used:
  5. Execute the above command on boot.  I accomplished this by putting it in /etc/rc.local.
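For reference, the autossh command from step 4 looked something like this (the hostname and port numbers are placeholders; substitute your own):

```shell
# Persistent reverse tunnels from the pi to the nginx server.
# Remote forwards bind to the server's loopback by default (unless
# GatewayPorts is enabled there), which is exactly what we want.
autossh -M 0 -f -N \
  -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
  -o "ExitOnForwardFailure yes" \
  -R 25000:127.0.0.1:5000 \
  -R 28080:127.0.0.1:8080 \
  ssh-proxy@nginx-server
```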

Now Octoprint should be available on the nginx server via port 25000.  Same deal for the webcam feed on 28080 (I have another webcam accessible via 28081).

Note that these ports are bound to loopback on the nginx server because of the way the tunnel is set up (SSH remote forwards bind to loopback unless GatewayPorts is enabled on the server).  There’d be no point to all of this noise otherwise.

Make ’em accessible

Now we can go about this as if it were a standard reverse proxy setup.  The backends are accessible via loopback on ports local to the nginx server.

You can set up authentication however you like.  It’s probably easy and safe enough to use TLS, HTTP basic auth, and something like fail2ban.

I like client certificates, and already had them set up for other stuff I run, so I’m using those.

This is my config:
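Roughly, something like this (domains, certificate paths, and the lua script path are placeholders, not my literal config):

```nginx
server {
  listen 443 ssl;
  server_name octoprint.example.com;

  ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

  # HMAC-cookie auth, explained below
  access_by_lua_file /etc/nginx/lua/sso_auth.lua;

  location / {
    proxy_pass http://127.0.0.1:25000;

    # Octoprint pushes state over websockets
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;
  }

  location /webcam/ {
    proxy_pass http://127.0.0.1:28080/;
  }
}
```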

What’s this access_by_lua hocus pocus?

I covered this in a previous post.  The problem is that modern web applications don’t really play nicely with client certificates, and this seemed to include Octoprint.  There’s a bunch of wizardry with web sockets and service workers, which don’t send the client cert when they’re supposed to.

The basic idea behind the solution is to instead authenticate by a couple of cookies with an HMAC.  When these cookies aren’t present, nginx redirects to a domain that requires the client certificate.  If the certificate is valid, it generates and drops the appropriate cookies, and the client is redirected to the original URL.

See the aforementioned post for more details.

Goes without saying, but…

The Raspberry Pi itself should be secured as well.  Change the default password for the pi user.
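At a minimum (assuming you already have key-based SSH access set up), something like:

```shell
# Change the default password for the pi user (run as pi)
passwd

# Then disable SSH password authentication entirely (keys only)
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo systemctl restart ssh
```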

Single Sign On with client certificates

In a previous post, I detailed a trick to get complicated webapps working with client certificates.

The problem this solves is that some combination of web sockets, service workers (and perhaps some demonic magic) don’t play nicely with client certificates.  Under some circumstances, the client certificate is just not sent.

The basic idea behind the solution is to instead authenticate by a couple of cookies with an HMAC.  When these cookies aren’t present, you’re required to specify a client certificate.  When a valid client certificate is presented, HMAC cookies are generated and dropped.  If the cookies are present, you’re allowed access, even if you don’t have a client certificate.

This has worked well for me, but I still occasionally ran into issues.  Basically every time I started a new session with something requiring client certs, I’d get some sort of bizarre access error.  I dug in a little, and it seemed like the request to fetch the service worker code was failing because the browser wasn’t sending client certificates.

This led me to double down on the HMAC cookies.

Coming clean

When I call this Single Sign On, please understand that I really only have the vaguest possible understanding of what that means.  If there are standards or something that are implied by this term, I’m not following them.

What I mean is that I have a centralized lua script that I can include in arbitrary nginx server configs, and it handles auth in the same way for all of them.

The nitty gritty

Rather than using HMAC cookies as a fallback auth mechanism and setting “ssl_verify_client” to “optional,” I do the following:

  1. If HMAC cookies are not present, nginx redirects to a different subdomain (it’s important that it’s on the same domain).  This server config requires the client certificate.
  2. If the certificate is valid, it generates and drops the appropriate cookies, and the client is redirected to the original URL.  The cookies are configured to be sent for all subdomains of a given domain.
  3. Now that the client has HMAC cookies, it’s allowed access.  If the cookies were present to begin with, the above is skipped.
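The cookies themselves don’t need to be anything fancy.  One way to picture the pair that gets dropped and later verified (the secret and the choice of SHA-1 are assumptions for illustration):

```shell
secret="not-my-real-secret"          # placeholder, obviously
expires=$(( $(date +%s) + 86400 ))   # cookie lifetime: one day

# One cookie carries the expiry timestamp; the other carries an HMAC
# over it.  The verifying side recomputes the HMAC from the timestamp
# and the shared secret, and compares before allowing access.
sig=$(printf '%s' "$expires" | openssl dgst -sha1 -hmac "$secret" -binary | base64)

echo "$sig"
```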

The setup has a couple of pieces:

  1. An nginx server for an “SSO” domain.  This is the piece responsible for dropping the HMAC cookies.
  2. A lua script which is included everywhere you want to auth using this mechanism.

This is the SSO server config:
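A sketch of what that server block can look like (domain, certificate paths, and script names are placeholders):

```nginx
server {
  listen 443 ssl;
  server_name sso.example.com;

  ssl_certificate        /etc/letsencrypt/live/example.com/fullchain.pem;
  ssl_certificate_key    /etc/letsencrypt/live/example.com/privkey.pem;

  # This is the only vhost that demands a client certificate.
  ssl_client_certificate /etc/nginx/certs/ca.crt;
  ssl_verify_client      on;

  location / {
    # If we got here, the handshake succeeded (nginx rejects invalid
    # certs outright).  Drop the HMAC cookies for *.example.com and
    # bounce back to the URL the client originally requested, which
    # was passed along as ?redirect=...
    content_by_lua_file /etc/nginx/lua/sso_issue_cookies.lua;
  }
}
```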

And the SSO lua script:
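The verification script included by the protected vhosts looks something along these lines (cookie names, the shared secret, and the SSO hostname are all assumptions):

```lua
-- sso_auth.lua: let the request through if it carries a valid
-- expiry + HMAC cookie pair; otherwise bounce to the SSO vhost.
local secret  = "not-my-real-secret"        -- placeholder
local sso_url = "https://sso.example.com/"

local expires = ngx.var.cookie_sso_expires
local sig     = ngx.var.cookie_sso_hmac

local function redirect_to_sso()
  local original = ngx.var.scheme .. "://" .. ngx.var.host .. ngx.var.request_uri
  return ngx.redirect(sso_url .. "?redirect=" .. ngx.escape_uri(original))
end

-- No cookies at all: go get some
if not expires or not sig then
  return redirect_to_sso()
end

-- Reject malformed or expired timestamps
if tonumber(expires) == nil or tonumber(expires) < ngx.time() then
  return redirect_to_sso()
end

-- Recompute the HMAC and compare (a constant-time comparison would
-- be an improvement here)
local expected = ngx.encode_base64(ngx.hmac_sha1(secret, expires))
if expected ~= sig then
  return redirect_to_sso()
end
-- Fall through: request is allowed
```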

An example of it being used:
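Protecting a vhost then comes down to a single include-style line (again, names are placeholders):

```nginx
server {
  listen 443 ssl;
  server_name octoprint.example.com;
  # ... TLS and reverse proxy configuration ...

  # All it takes to put a vhost behind the SSO mechanism:
  access_by_lua_file /etc/nginx/lua/sso_auth.lua;
}
```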

Customizable e-Paper Information Display with 3D Printed Case

I recently finished a project which tied together a bunch of different tinkering skills I’ve had a lot of fun learning about over the last couple of years.

The finished product is this:

It shows me:

  • The time
  • Weather: current, weekly forecast, readings from an outdoor thermometer, and temperature in the city I work in.
  • Probably most usefully: the times at which the next BART trains arrive.

Obviously the same thing could be accomplished with a cheap tablet.  And probably with way less effort involved.  However, I really like the aesthetic of e-paper, and it’s kind of nice to not have yet another glowing rectangle glued to my wall.

I’m going to go into a bit of detail on the build, but keep in mind that this is still pretty rough around the edges.  This works well for me, but I would by no means call it a finished product. 🙂

Hardware

This is made up of the following components:

  1. 400×300 4.2″ Waveshare e-Paper display module *
  2. HiLetgo ESP32 Dev Board *
  3. 60x40mm prototype board *
  4. Command strips (for sticking to the wall)
  5. Some patch wires
  6. MicroUSB Cable
  7. Custom 3D Printed enclosure (details follow)

Depending on where you get the components, this will run you between $40 and $50.

Hookup

The e-Paper display module connects to the ESP32 over SPI.  Check out this guide to connecting the two.

I chose to connect using headers, sockets, and soldered wires.  This makes for a more reliable connection, and it’s easier to replace either component if need be.  I cut the female jacks from the jumper bus that came with my display and soldered the wires onto header pins.  I then applied a whole mess of hot glue to keep things from moving around.

Firmware

I’m using something I wrote called epaper_templates (clearly I was feeling very inspired when I named it).

The idea here is that you can input a JSON template defined in terms of variables bound to particular regions on the screen.  Values for those variables can be plumbed in via a REST API or by publishing to appropriately named MQTT topics.

The variables can either be displayed as text, or be used to dynamically choose a bitmap to display.
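Pushing a new value into a variable is then just a publish to the matching topic.  With mosquitto’s CLI tools it might look like this (the broker hostname and topic layout are assumptions; check the epaper_templates docs for the exact scheme):

```shell
# Update the "outdoor_temp" variable shown on the display
mosquitto_pub -h mqtt-broker.local \
  -t 'epaper/variables/outdoor_temp' \
  -m '18.5'
```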

My vision for this project is to have an easy-to-use GUI for generating the JSON templates, but right now you have to write them by hand.  Here is the template used in the picture above as an example.

Configuration

The only variable that the firmware fills in for you is a timestamp.  Everything else must be provided externally.

I found Node-RED to be a fantastic vehicle for this.  It’s super easy to pull data from a bunch of different sources, format it appropriately, and shove it into some MQTT topics.  The flow looks like this:

Here is an export of the flow (note that some URLs reference internal services).

Enclosure

I designed a box in Fusion 360.  I’m a complete newbie at 3D design, but I was really pleased with how easy it was to create something like this.

The display mounts onto the lid of the box with the provided screws and hex nuts.  The lid sticks to the bottom of the box with two tabs.  The bottom has a hole for the USB cable and some tabs to hold the prototype board in place.

My box is printed on a Prusa i3 MK3.

3D files:

Tips on printing:

  • The top should be printed in PETG or some other slightly flexible material.  The tabs will probably break if printed in PLA.  Material doesn’t matter much for the bottom; mine is PLA.
  • Both pieces should be printed with supports.  For the top, the recessed screw holes need them.  For the bottom, the tabs, lid tab holes, and USB cable hole need them.

* Contains Amazon affiliate link