Security Hardening Octoprint/Octopi

Octoprint is a great web frontend for 3D printers. Octopi is a Raspbian-based image for a Raspberry Pi that comes with everything you need set up and configured.

Octoprint is an extremely convenient way to manage your 3D printer.  However, it’s capable of a lot of spooky things:

  1. Provides access to webcams showing your prints, if you have them
  2. Can set temperatures of both the tool and the heated bed
  3. Can start whatever print you feel like
  4. Can control the stepper motors

In the best case, Octoprint gives whoever can access it the ability to see into your house and what’s going on with your printer.  In the worst case, someone with malicious intent could burn down your house, or at least wreck your printer.

The smartest approach here is probably to put Octoprint on a trusted network and refrain from poking holes in your router to allow access from the Internet.

But I’m not that smart.

In this post I’m going to outline a couple of things I did that make me feel better about exposing my Octoprint instance to the Internet.

Prior Art

First of all, Octoprint has builtin access controls.  And you should definitely use those.

I feel strongly that these are not sufficient, however:

  1. Unauthenticated users can do way too much.  Most importantly, they can view webcam feeds.  Yikes!
  2. There have been bugs with the builtin access controls.

Secondly, others have done things similar to what I’ve done.  However, there are a couple of things I’m going to do differently, and there are a few additional things I want to do.

Requirements

  1. Every interaction with Octoprint should go through a reverse proxy.  It should not be possible to access any part of Octoprint except through the reverse proxy.
  2. The previous requirement should apply even if you’re on my local network.  Something about unauthenticated webcam feeds gives me the jeebies.  Even if they’re pointed at a corner.
  3. I’m not going to run a web-facing nginx instance on Octoprint.  I want to use my main server as an entry point.
  4. Use client certificates for auth (I covered this in a previous post).
  5. TLS via letsencrypt.

Close down the ports

By default, Octopi exposes the Octoprint web interface on port 80 (via haproxy), and the webcam feed via mjpg-streamer on port 8080.

I didn’t want these ports accessible except through loopback.  This is easy enough to change.

To shut down access to the Octoprint instance, just disable haproxy:
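On a stock Octopi image (which is systemd-based; adjust if yours isn’t), that amounts to stopping the service and keeping it from starting at boot:

```shell
# Stop haproxy now, and prevent it from starting on boot
sudo systemctl stop haproxy
sudo systemctl disable haproxy
```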

The Octoprint instance itself listens on port 5000 by default, and is bound to loopback.

To shut down access to mjpg-streamer, we’ll have to fiddle with the script stored at /root/bin/webcamd:
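The script varies a bit between Octopi releases, so this is a sketch: the change is to the options passed to the `output_http.so` plugin, adding a loopback listen address (the `-l` flag exists in the mjpg-streamer-experimental fork Octopi ships; verify with `mjpg_streamer -o 'output_http.so --help'` on your build):

```shell
# Before (inside /root/bin/webcamd): the HTTP plugin binds to all interfaces
#   ./mjpg_streamer -i "input_raspicam.so" -o "output_http.so -w ./www -p 8080"
# After: only accept connections on loopback
./mjpg_streamer -i "input_raspicam.so" \
    -o "output_http.so -w ./www -p 8080 -l 127.0.0.1"
```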

This tells mjpg-streamer’s HTTP output plugin to bind itself to loopback.  For it to take effect, make sure to restart the webcamd service (or just reboot the pi to be safe).

To test that this worked, try accessing http://octopi.local and http://octopi.local:8080.  You should get connection refused errors for both.
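For instance, from another machine on the local network:

```shell
# Both should fail to connect rather than serve anything
curl --max-time 5 http://octopi.local/
curl --max-time 5 http://octopi.local:8080/
```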

Open up the ports (on nginx server)

If you plan on running nginx on the pi, you can skip this step.  I have a different server running nginx.

In the last step, we shut down the ports to Octoprint.  Now we need to give the server running nginx a way to access them.

An easy way to accomplish this is with local SSH tunnels.  Setting this up is easy enough:

  1. Create a user on the octopi instance.  I called mine something to the effect of “ssh-proxy”
  2. Create a corresponding user on the server running nginx.  Generate an SSH key.
  3. Add the public key for ssh-proxy@nginx-server to ssh-proxy@octopi:~/.ssh/authorized_keys
  4. Set up autossh to establish a persistent SSH tunnel.  This will reestablish the tunnel when the pi reboots or connectivity is broken for any reason.  This is the command I used:
  5. Execute the above command on boot.  I accomplished this by putting it in /etc/rc.local.
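Given the key setup in steps (2) and (3), the tunnel is initiated from the nginx server, so step (5)’s rc.local is the one on that machine.  The command in step (4) was to this effect (the hostname and keepalive options here are examples; adjust to taste):

```shell
# Run on the nginx server as root (e.g. from /etc/rc.local).
# Local forwards bind 25000/28080 on the nginx server's loopback and
# point at the pi's Octoprint (5000) and webcam (8080) ports.
su - ssh-proxy -c "autossh -M 0 -f -N \
    -o 'ServerAliveInterval 30' -o 'ServerAliveCountMax 3' \
    -L 25000:localhost:5000 \
    -L 28080:localhost:8080 \
    ssh-proxy@octopi.local"
```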

Now Octoprint should be available on the nginx server via port 25000.  Same deal for the webcam feed on 28080 (I have another webcam accessible via 28081).

Note that these forwarded ports are bound to loopback on the nginx server, because SSH binds forwarded ports to loopback by default.  No point in all of this noise if that’s not the case.

Make ’em accessible

Now we can go about this as if it were a standard reverse proxy setup.  The backends are accessible via loopback on ports local to the nginx server.

You can set up authentication however you like.  It’s probably easy and safe to use TLS, HTTP auth, and something like fail2ban.

I like client certificates, and already had them set up for other stuff I run, so I’m using those.

This is my config:
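A sketch of the shape of it (the hostname, certificate paths, and the Lua auth script path are placeholders; the ports match the tunnels from the previous section, and the second webcam would get its own location block):

```nginx
server {
    listen 443 ssl;
    server_name octoprint.example.com;

    ssl_certificate     /etc/letsencrypt/live/octoprint.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/octoprint.example.com/privkey.pem;

    location / {
        # Check the HMAC auth cookies; redirect to the client-cert
        # domain when they're missing (see below)
        access_by_lua_file /etc/nginx/lua/auth_cookie.lua;

        proxy_pass http://127.0.0.1:25000;
        proxy_set_header Host $host;

        # Octoprint's push updates use websockets
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    location /webcam/ {
        access_by_lua_file /etc/nginx/lua/auth_cookie.lua;
        proxy_pass http://127.0.0.1:28080/;
    }
}
```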

What’s this access_by_lua hocus pocus?

I covered this in a previous post.  The problem is that modern web applications don’t really play nicely with client certificates, and Octoprint is no exception.  There’s a bunch of wizardry with websockets and service workers that don’t send the client certificate when they’re supposed to.

The basic idea behind the solution is to instead authenticate with a couple of HMAC-signed cookies.  When these cookies aren’t present, nginx redirects to a domain that requires the client certificate.  If the certificate is valid, it generates and drops the appropriate cookies, and the client is redirected to the original URL.

See the aforementioned post for more details.

Goes without saying, but…

The Raspberry Pi itself should be secured as well.  Change the default password for the pi user.

4 Comments


  1. I have a question: why do you use the tunnels?  Why not simply point the nginx backend at the octopi IP address directly?

    1. See requirements numbers (2) and (3). nginx is running on another machine. Pointing nginx directly at octopi would mean binding the octopi server to 0.0.0.0 and making it accessible to anything on my network.

      There’s an argument to be made that this is fine. But I don’t like it, and it’s not hard to change. Since setting this up a year ago, it’s not broken even once, so also not much of a maintenance burden 🙂

  2. I’m curious about your thoughts on HAProxy versus nginx.  My impression was that they were similar products.  Is the main issue for you that you want the processes on separate servers?

    1. For this use-case, I don’t think it matters. If I needed a software load balancer, I’d probably be using HAProxy. There’s so little load for anything like this that there’s not a whole lot of reason to prefer one over the other. HAProxy also has a Lua scripting component, so I’m sure this would be possible with it as well.