WordPress hosting, docker style (Part 1)

Photo by Beanca du Toit on Unsplash

Part 1: WordPress hosting, docker style
Part 2: Cron + LetsEncrypt, docker style
Part 3: Matching socks: Nginx + php = WordPress

Those segfaults I mentioned? Yeah, they proved unsolvable. Nginx Unit seems to be having a rough time, and extracting Nginx Unit back out of the install script was more difficult than expected too.

Instead, I spent even more time setting up my own cluster of docker containers. The benefit is that I can now run a whole copy locally, test changes, and then push to production. It also lets me track the changes I’ve been making to all the various .conf files. Today, I’ll talk about setting up an SSL-terminating reverse proxy, and how to host it with docker-compose.

What is a reverse proxy anyways?

Honestly, it’s a dumb name. It’s just a load balancer, handling incoming requests and forwarding them to a different backend service. There are a few benefits to this approach:

  • Since all clients connect to this tier, it’s a single source for logging (and filtering if needed)
  • SSL termination can happen at this layer, so none of your other services have to handle decrypting & encrypting your traffic
  • SSL can be expensive at scale, so it provides a scaling point (if this matters to you, then you shouldn’t be reading this blog post); e.g. it’s a bullet point in a technical coding interview
  • Redundancy point for your backend services. E.g. you can use a load balancer to upgrade your backend in place, or fail over to a different stack on failure

Load Balancer w/ nginx

The Dockerfile is pretty simple. The main work needed beyond a normal nginx install is generating dhparams (Diffie-Hellman parameters). These are large prime numbers used when initializing a TLS session. If you’re into math-y things, it’s not too hard to understand. Basically, the large primes let two parties agree on a shared secret without ever transmitting it, and without knowing each other’s secret key.

In short, we need to compute these primes once (it’s expensive). The Dockerfile generates them during the build phase and writes them to dhparams.pem:

RUN openssl dhparam -dsaparam -out /etc/ssl/certs/dhparams.pem 4096
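For context, the whole Dockerfile can be as small as the following sketch. The nginx:stable base image and the COPY line are my assumptions about the rest of the file, not taken from the post; Debian-based nginx images generally ship the openssl CLI, but if yours doesn’t, you’ll need to install it first.

```dockerfile
# Assumed base image; any recent nginx image works
FROM nginx:stable

# Generate the DH primes once, at build time. -dsaparam speeds up
# generation considerably, at the cost of DSA-style parameters
RUN openssl dhparam -dsaparam -out /etc/ssl/certs/dhparams.pem 4096

# Bake in a default config; docker-compose later mounts default.conf
# as a volume over this, so it can change without a rebuild
COPY default.conf /etc/nginx/conf.d/default.conf
```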

Next is to configure nginx. Here’s the whole config, and I’ll describe below what each section is for

server {
    server_name terminal.space;
    listen [::]:443 ssl http2 ipv6only=on;
    listen 443 ssl http2;

    ssl_certificate /etc/ssl/certs/private/terminal.space/fullchain.cer;
    ssl_certificate_key /etc/ssl/certs/private/terminal.space/terminal.space.key;
    ssl_trusted_certificate /etc/ssl/certs/private/terminal.space/fullchain.cer;
    ssl_dhparam /etc/ssl/certs/dhparams.pem;

    ssl_session_cache shared:le_nginx_SSL:10m;
    ssl_session_timeout 1440m;
    ssl_session_tickets off;
    ssl_protocols TLSv1.3;
    ssl_prefer_server_ciphers off;
    ssl_ciphers "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384";
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
    ssl_stapling on;
    ssl_stapling_verify on;
    resolver 1.1.1.1 8.8.8.8;
    resolver_timeout 5s;

    location / {
        proxy_pass http://www:8080;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_http_version 1.1;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }
}

server {
    server_name terminal.space;

    listen 80 default_server;
    listen [::]:80 default_server;
    return 301 https://$host$request_uri;
}

Setting up a server on port 443

For nginx, each server{} block describes the ports to listen on, and how to handle the various http locations on those ports. To set up an https server, we need to tell nginx what the hostname is, as well as the SSL information needed to encrypt and decrypt traffic

server_name terminal.space; # hostname this server block answers to
# Listen on all IPv6 addresses, port 443, for TLS traffic with HTTP/2 enabled.
# The ipv6only flag makes this socket accept only IPv6 connections
# (no IPv4-mapped addresses)
listen [::]:443 ssl http2 ipv6only=on;

# Listen on all IPv4 addresses, port 443, for TLS traffic with HTTP/2 enabled
listen 443 ssl http2;

# ssl_certificate points to your certificate plus the chain of intermediate
# certificates that issued it (the "fullchain")
ssl_certificate /etc/ssl/certs/private/terminal.space/fullchain.cer;
# The private key that lets your server prove it owns the certificate above.
# Never share this file
ssl_certificate_key /etc/ssl/certs/private/terminal.space/terminal.space.key;
# This is not sent to the client. nginx uses it to verify OCSP responses,
# which is what enables the SSL stapling settings below
ssl_trusted_certificate /etc/ssl/certs/private/terminal.space/fullchain.cer;
# Link to the file containing the Diffie-Hellman primes described earlier
ssl_dhparam /etc/ssl/certs/dhparams.pem;

Making it all the securez

The next bit isn’t necessary to get SSL working, but to harden it. Use reputable sources (not my blog) to configure the settings to your liking, and once you’re done, use SSL Labs to get a scorecard of your configuration. Please note that a lot of these settings have tradeoffs. For example, I have the following HSTS setting: add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always; which prevents downgrade attacks from https -> http. However, it also means that clients will _refuse_ to connect to my server over plain http for a full year, even if I remove this flag later. TL;DR: Know what you’re doing with this section and don’t copy/paste without understanding
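One common way to soften that tradeoff (a sketch, not part of my actual config) is to roll HSTS out with a short max-age first, and only crank it up once everything demonstrably works over https:

```nginx
# Testing phase: clients only commit to https for five minutes,
# so a mistake is cheap to undo
add_header Strict-Transport-Security "max-age=300" always;

# Once confident, switch to a year:
# add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
```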

Doing the actual proxy

location / {
    proxy_pass http://www:8080;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_http_version 1.1;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $host;
}

Now that SSL termination is complete, we need to actually forward the request to the backend. This is done with the proxy_pass http://your_address_here; line. In my case the address is http://www:8080, where www is the backend service’s name in docker-compose, resolved through Docker’s internal DNS
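As an aside, the same target can be given a name via an upstream block, which makes it easier to add more backend containers later. This is a hypothetical variation, not what my config uses:

```nginx
upstream wordpress_backend {
    # docker-compose service names resolve through Docker's internal DNS.
    # Extra server lines here would give simple round-robin load balancing
    server www:8080;
}
```

The location block would then use proxy_pass http://wordpress_backend; instead of the address directly.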

This is all you have to do; the rest is optimizations & bookkeeping. The first optimization is the Upgrade/Connection headers plus proxy_http_version 1.1, which let WebSocket connections pass through the proxy. WebSockets are great: they support bi-directional communication, streaming without per-request http overhead, etc. Since we control both the reverse proxy and the backend, it’s safe to allow the upgrade through. The other bit is to add headers onto the forwarded request. This is the one, and only time in this entire post that the term “reverse proxy” makes sense. With a normal (forward) proxy, you don’t want the server to know who you are, so you have the proxy talk to your illegal streaming site for you. However, for this scenario, that’s a bad thing. We _do_ want to know who is trying to talk to us, further down the pipeline. Adding the extra headers keeps track of all of the transformations done.

  • X-Forwarded-For is supposed to be a running list of all proxies used. For example, if you hopped through 2 proxies, it would be a.a.a.a,b.b.b.b (where a.a.a.a is your original IP and b.b.b.b is the IP of the first proxy)
  • X-Real-IP is supposed to be the sole client IP address, not all of the hops
  • X-Forwarded-Host lets downstream services know which host the request was originally for (e.g. terminal.space in my case)
  • X-Forwarded-Proto will always be https, since that’s the only type of connection I allow (more below)
  • Host makes the backend see the original hostname; without it, nginx sends the name from proxy_pass (www) as the Host header. Some backends only look at Host, others at X-Forwarded-Host, so setting both covers the bases
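These headers are only useful if the backend actually reads them. For an nginx-based backend, a sketch using the stock realip module would look like this (it assumes the proxy lives on the 192.168.32.0/24 docker network defined below):

```nginx
# Only trust X-Real-IP when the request arrives from the proxy's network
set_real_ip_from 192.168.32.0/24;
# Replace $remote_addr (the proxy's IP) with the original client IP
# that the reverse proxy stored in X-Real-IP
real_ip_header X-Real-IP;
```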

Red Rover Red Rover send HTTPS over

At the bottom, we have a separate server block listening to port 80 (unencrypted http). The job here is simple – to tell every client to connect over https instead

server {
    server_name terminal.space;

    listen 80 default_server;
    listen [::]:80 default_server;
    return 301 https://$host$request_uri;
}
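One caveat for later: if the SSL certs are renewed via an ACME http-01 challenge (the LetsEncrypt cron jobs I’ll cover next time), the validation request has to be answered over plain http rather than redirected. A hypothetical carve-out, with a made-up webroot path, would look like:

```nginx
# Serve ACME challenge files directly instead of redirecting them;
# /var/www/acme is a placeholder path
location /.well-known/acme-challenge/ {
    root /var/www/acme;
}

# Everything else still gets bounced to https
location / {
    return 301 https://$host$request_uri;
}
```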

Docker-compose

Lastly, to make all of these components work, I wired up a docker-compose configuration. I added the default.conf file as a volume, so it can be modified without rebuilding the container. I also configured the network so the reverse proxy always gets a 192.168.32.X ip address. This will help later in the backend to figure out if the sender is coming from within, or beyond, the network. This also leads me to my favorite link from researching this part – what the heck is ipam?

version: '3.7'

services:
  nginx_reverse_proxy:
    build:
      context: ./nginx_reverse_proxy
    restart: always
    volumes:
      - ./nginx_reverse_proxy/default.conf:/etc/nginx/conf.d/default.conf:ro
      - ./secrets/certs:/etc/ssl/certs/private/terminal.space:ro
    networks:
      - www-network
    ports:
      - '80:80'
      - '443:443'


networks:
  www-network:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 192.168.32.0/24
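The subnet above guarantees the proxy an address in 192.168.32.0/24, but so does every other container on the network. To pin the reverse proxy to one exact address, a per-service addition like this would work (the .2 is an arbitrary choice of mine):

```yaml
services:
  nginx_reverse_proxy:
    # build, volumes, and ports as above
    networks:
      www-network:
        ipv4_address: 192.168.32.2
```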

Running the service at startup

TL;DR: Follow the steps here: https://stackoverflow.com/a/48066454. The main change I made for my configuration is to always rebuild the containers on start if need be:

[Unit]
Description=Docker Compose Application Service
Requires=docker.service
After=docker.service

[Service]
WorkingDirectory=/home/anil/www_docker
ExecStart=/usr/bin/docker-compose up --build
ExecStop=/usr/bin/docker-compose down
TimeoutStartSec=0
Restart=on-failure
StartLimitIntervalSec=60
StartLimitBurst=3

[Install]
WantedBy=multi-user.target

I will completely admit to not knowing most of what I just copy/pasta’d here but it seems to work well! On that note, stay tuned until next time! I’ll chat more about setting up cron jobs to refresh the SSL certs & perform backups.