Security Hardening Octoprint/Octopi

Octoprint is a great web frontend for 3D printers. Octopi is a Raspbian-based image for the Raspberry Pi that comes with everything you need set up and configured.

Octoprint is an extremely convenient way to manage your 3D printer.  However, it’s capable of a lot of spooky things:

  1. If you have them, it provides access to webcams showing your prints
  2. It can set temperatures of both the tool and the heatbed
  3. It can start whatever print you feel like
  4. It can control the steppers

In the best case, Octoprint gives whoever can access it the ability to see into your house and what’s going on with your printer.  In the worst case, someone with malicious intent could burn down your house, or at least wreck your printer.

The smartest approach here is probably to put Octoprint on a trusted network and refrain from poking holes in your router to allow access from the Internet.

But I’m not that smart.

In this post I’m going to outline a couple of things I did that make me feel better about exposing my Octoprint instance to the Internet.

Prior Art

First of all, Octoprint has built-in access controls.  And you should definitely use those.

I feel strongly that these are not sufficient, however:

  1. Unauthenticated users can do way too much.  Most importantly, they can view webcam feeds.  Yikes!
  2. There have been bugs with the built-in access controls.

Secondly, others have done things similar to what I’ve done.  However, there are a couple of things I’m going to do differently, and there are a few additional things I want to do.

Requirements

  1. Every interaction with Octoprint should go through a reverse proxy.  It should not be possible to access any part of Octoprint except through the reverse proxy.
  2. The previous requirement should apply even if you’re on my local network.  Something about unauthenticated webcam feeds gives me the jeebies.  Even if they’re pointed at a corner.
  3. I’m not going to run a web-facing nginx instance on the pi running Octoprint.  I want to use my main server as the entry point.
  4. Use client certificates for auth (I covered this in a previous post).
  5. TLS via letsencrypt.

Close down the ports

By default, Octopi exposes the Octoprint web interface on port 80 (via haproxy), and the webcam feed via mjpg_streamer on port 8080.

I didn’t want these ports accessible except through loopback.  This is easy enough to change.

To shut down access to the Octoprint instance, just disable haproxy:
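On a systemd-based Octopi image, that amounts to roughly the following (service names can differ on older images):

```bash
# Stop haproxy now, and keep it from coming back on boot
sudo systemctl stop haproxy
sudo systemctl disable haproxy
```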

The Octoprint instance itself listens on port 5000 by default, and is bound to loopback.

To shut down access to mjpg_streamer, we’ll have to fiddle with the script stored at /root/bin/webcamd:
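The exact contents of that script vary between Octopi releases, but the relevant bit is the mjpg_streamer invocation.  The idea is to pass -l 127.0.0.1 to the output_http plugin so the stream is only reachable over loopback.  Something along these lines (the surrounding variables here are illustrative, not copied verbatim from the script):

```bash
# /root/bin/webcamd (excerpt, illustrative)
# Add "-l 127.0.0.1" to the output_http options so mjpg_streamer
# only listens on loopback.
./mjpg_streamer -i "input_uvc.so $camera_usb_options" \
                -o "output_http.so -w ./www-octopi -l 127.0.0.1"
```

If your mjpg_streamer build doesn’t support -l/--listen on the output_http plugin, a firewall rule restricting port 8080 to loopback accomplishes the same thing.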

This tells mjpg_streamer’s http plugin to bind itself to loopback.  For it to take effect, make sure to restart the webcamd service (or just reboot the pi to be safe).

To test that this worked, try accessing http://octopi.local and http://octopi.local:8080.  You should get connection refused errors for both.
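For example, from another machine on the network:

```bash
# Both of these should now fail with "connection refused"
curl -I http://octopi.local/
curl -I http://octopi.local:8080/
```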

Open up the ports (on the nginx server)

If you plan on running nginx on the pi, you can skip this step.  I have a different server running nginx.

In the last step, we shut down the ports to Octoprint.  Now we need to give the server running nginx a way to access them.

An easy way to accomplish this is with SSH tunnels.  Setting them up is straightforward:

  1. Create a user on the octopi instance.  I called mine something to the effect of “ssh-proxy”
  2. Create a corresponding user on the server running nginx.  Generate an SSH key.
  3. Add the public key for ssh-proxy@nginx-server to ssh-proxy@octopi:~/.ssh/authorized_keys
  4. Set up autossh to establish a persistent SSH tunnel.  This will reestablish the tunnel when the pi reboots or connectivity is broken for any reason.  The command I used is shown after this list.
  5. Execute that command on boot.  I accomplished this by putting it in /etc/rc.local.
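A sketch of that command, run on the nginx server as the ssh-proxy user (the hostname and keepalive options are placeholders; the forwarded ports match the ones mentioned below):

```bash
# -M 0 disables autossh's monitoring port and relies on SSH keepalives;
# each -L binds a loopback port on the nginx server and forwards it to
# the corresponding loopback port on the pi.
autossh -M 0 -f -N \
    -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
    -o "ExitOnForwardFailure yes" \
    -L 25000:127.0.0.1:5000 \
    -L 28080:127.0.0.1:8080 \
    ssh-proxy@octopi.local
```

Additional webcams just get their own -L lines.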

Now Octoprint should be available on the nginx server via port 25000.  Same deal for the webcam feed on 28080 (I have another webcam accessible via 28081).

Note that these forwarded ports are bound to loopback on the nginx server because of the way the tunnel is set up (SSH binds local forwards to 127.0.0.1 unless you tell it otherwise).  There’s no point in all of this noise if that’s not the case.
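You can sanity-check that on the nginx server:

```bash
# The forwarded ports should only show up on 127.0.0.1
sudo ss -tlnp | grep -E ':(25000|28080)'
```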

Make ’em accessible

Now we can go about this as if it were a standard reverse proxy setup.  The backends are accessible via loopback on ports local to the nginx server.

You can set up authentication however you like.  TLS plus HTTP basic auth plus something like fail2ban is probably easy enough and reasonably safe.

I like client certificates, and already had them set up for other stuff I run, so I’m using those.

This is my config:
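In outline, it looks something like this (hostnames, certificate paths, and the lua script path are placeholders, and it assumes nginx built with the lua module, e.g. OpenResty):

```nginx
server {
    listen 443 ssl;
    server_name octoprint.example.com;

    ssl_certificate     /etc/letsencrypt/live/octoprint.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/octoprint.example.com/privkey.pem;

    # Cookie/client-cert auth; explained below
    access_by_lua_file /etc/nginx/lua/sso.lua;

    # Octoprint itself, reached through the SSH tunnel
    location / {
        proxy_pass http://127.0.0.1:25000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Octoprint's UI uses websockets for push updates
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    # Webcam stream, also through the tunnel
    location /webcam/ {
        proxy_pass http://127.0.0.1:28080/;
    }
}
```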

What’s this access_by_lua hocus pocus?

I covered this in a previous post.  The problem is that modern web applications don’t really play nicely with client certificates, and Octoprint seems to be no exception.  There’s a bunch of wizardry with web sockets and service workers that ends up not sending the client cert when it’s supposed to.

The basic idea behind the solution is to authenticate with a couple of HMAC cookies instead.  When those cookies aren’t present, nginx redirects to a domain that requires the client certificate.  If the certificate is valid, the server generates and drops the appropriate cookies, and the client is redirected to the original URL.

See the aforementioned post for more details.

Goes without saying, but…

The Raspberry Pi itself should be secured as well.  Change the default password for the pi user.
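That much is just:

```bash
# Run as the pi user on the pi
passwd
```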

Single Sign On with client certificates

In a previous post, I detailed a trick to get complicated webapps working with client certificates.

The problem this solves is that some combination of web sockets, service workers (and perhaps some demonic magic) doesn’t play nicely with client certificates.  Under some circumstances, the client certificate just isn’t sent.

The basic idea behind the solution is to authenticate with a couple of HMAC cookies instead.  When those cookies aren’t present, you’re required to present a client certificate.  When a valid client certificate is presented, the HMAC cookies are generated and dropped.  If the cookies are present, you’re allowed access, even if you don’t have a client certificate.

This has worked well for me, but I still occasionally ran into issues.  Basically every time I started a new session with something requiring client certs, I’d get some sort of bizarre access error.  I dug in a little, and it seemed like the request to fetch the service worker code was failing because the browser wasn’t sending client certificates.

This led me to double down on the HMAC cookies.

Coming clean

When I call this Single Sign On, please understand that I really only have the vaguest possible understanding of what that means.  If there are standards or something that are implied by this term, I’m not following them.

What I mean is that I have a centralized lua script that I can include in arbitrary nginx server configs, and it handles auth in the same way for all of them.

The nitty gritty

Rather than using HMAC cookies as a fallback auth mechanism and having “ssl_verify_client” set to “optional,” I do the following:

  1. If HMAC cookies are not present, nginx redirects to a different subdomain (it’s important that it’s on the same domain).  This server config requires the client certificate.
  2. If the certificate is valid, it generates and drops the appropriate cookies, and the client is redirected to the original URL.  The cookies are configured to be sent for all subdomains of a given domain.
  3. Now that the client has HMAC cookies, it’s allowed access.  If the cookies were present to begin with, the above is skipped.

The setup has a couple of pieces:

  1. An nginx server for an “SSO” domain.  This is the piece responsible for dropping the HMAC cookies.
  2. A lua script which is included everywhere you want to auth using this mechanism.

This is the SSO server config:
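Something along these lines (domain names, certificate paths, and the secret are placeholders; cookie issuance happens inline in a content_by_lua_block):

```nginx
# "SSO" server: the only vhost with ssl_verify_client enabled.  A valid
# client certificate gets you a pair of HMAC cookies scoped to the parent
# domain, plus a redirect back to wherever you came from.
server {
    listen 443 ssl;
    server_name sso.example.com;

    ssl_certificate        /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key    /etc/letsencrypt/live/example.com/privkey.pem;

    # Client certificates are mandatory here
    ssl_client_certificate /etc/nginx/certs/client-ca.crt;
    ssl_verify_client      on;

    location / {
        content_by_lua_block {
            local secret = "change-me"               -- shared HMAC secret
            local domain = ".example.com"            -- cookies valid for all subdomains
            local user   = ngx.var.ssl_client_s_dn   -- subject DN of the verified cert
            local expiry = ngx.time() + 86400        -- 24h lifetime
            local value  = user .. "|" .. expiry
            local mac    = ngx.encode_base64(ngx.hmac_sha1(secret, value))

            ngx.header["Set-Cookie"] = {
                "sso_user=" .. ngx.escape_uri(value) ..
                    "; Domain=" .. domain .. "; Path=/; Secure; HttpOnly",
                "sso_hmac=" .. ngx.escape_uri(mac) ..
                    "; Domain=" .. domain .. "; Path=/; Secure; HttpOnly",
            }

            -- Bounce back to the URL that sent us here
            local target = ngx.var.arg_redirect
            if target then
                return ngx.redirect(ngx.unescape_uri(target))
            end
            ngx.say("authenticated")
        }
    }
}
```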

And the SSO lua script:
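A sketch of that script (again, host names and the secret are placeholders; it relies on OpenResty’s ngx.hmac_sha1 and the built-in $cookie_* variables):

```lua
-- sso.lua: shared access check, included via access_by_lua_file.
-- Allow the request if valid HMAC cookies are present; otherwise bounce
-- the client to the SSO host, which requires a client certificate.

local secret   = "change-me"                 -- must match the SSO server
local sso_host = "https://sso.example.com/"

local function redirect_to_sso()
    local original = ngx.var.scheme .. "://" .. ngx.var.host .. ngx.var.request_uri
    return ngx.redirect(sso_host .. "?redirect=" .. ngx.escape_uri(original))
end

local user = ngx.var.cookie_sso_user
local mac  = ngx.var.cookie_sso_hmac
if not user or not mac then
    return redirect_to_sso()
end

-- Cookie values were uri-escaped when they were set
user = ngx.unescape_uri(user)
mac  = ngx.unescape_uri(mac)

-- Recompute the HMAC over the signed value ("<cert DN>|<expiry>")
local expected = ngx.encode_base64(ngx.hmac_sha1(secret, user))
if expected ~= mac then
    return redirect_to_sso()
end

-- Reject expired cookies
local expiry = tonumber(user:match("|(%d+)$"))
if not expiry or expiry < ngx.time() then
    return redirect_to_sso()
end
```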

An example of it being used:
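Protecting any given service is then just a matter of including the script in its server block (names here are placeholders):

```nginx
server {
    listen 443 ssl;
    server_name someapp.example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # Same check everywhere: valid HMAC cookies, or a bounce to sso.example.com
    access_by_lua_file /etc/nginx/lua/sso.lua;

    location / {
        proxy_pass http://127.0.0.1:8000;
    }
}
```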