Self-Hosting Mastodon on Nomad
For a while, I’ve been running my Mastodon instance, coleman.social, on a DigitalOcean VPS. It worked fine, but as my self-hosted infrastructure matured, it made sense to move it over to my Nomad cluster. The benefits? Faster response times, better resource utilization, and lower costs. Plus, I wanted full control over my setup.
Why Self-Host?
- Cost Efficiency – Running Mastodon on DigitalOcean meant paying for a dedicated VPS. Now, it runs on my existing infrastructure with no extra monthly costs.
- Performance – With volumes served directly from my Synology SAN, everything runs more smoothly and faster.
- Full Control – Instead of relying on a managed VPS, I can tweak the stack however I need, with consistent configurations across services.
- Scalability – By moving to Nomad, I can easily scale up or down as needed without dealing with VPS limitations.
Infrastructure Overview
Nomad & Consul for Orchestration
Mastodon is now managed as a Nomad job, ensuring clean deployments and automated scheduling.
Consul Connect for Service Mesh
To enable secure service-to-service communication, I use Consul Connect as the service mesh. This ensures all traffic between Mastodon, PostgreSQL, and Redis is encrypted and authenticated without needing to expose ports unnecessarily. Consul handles service discovery and automatic TLS encryption between services, making it an essential component of my setup.
A simple example of how I enable Consul Connect in my job file:
```hcl
service {
  name = "mastodon"
  port = "http"

  connect {
    sidecar_service {
      proxy {}
    }
  }
}
```
This ensures that all connections are securely routed through the Consul service mesh.
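For the web task to actually reach PostgreSQL and Redis through the mesh, the sidecar also declares upstreams. A sketch of what that looks like (the service names and local bind ports here reflect my setup and may differ in yours):

```hcl
connect {
  sidecar_service {
    proxy {
      # Each upstream exposes a mesh service on a local port
      # inside the allocation's network namespace.
      upstreams {
        destination_name = "postgres"
        local_bind_port  = 5432
      }
      upstreams {
        destination_name = "redis"
        local_bind_port  = 6379
      }
    }
  }
}
```

With this in place, Mastodon connects to `localhost:5432` and `localhost:6379`, and the Envoy sidecars handle mTLS to the real services.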
Synology SAN for Storage
PostgreSQL, Redis, and Minio all use persistent volumes on my Synology SAN, mounted directly via iSCSI. This keeps data storage centralized and resilient.
Cloudflare Tunnels & Caddy for Access
Public access is handled through Cloudflare Tunnels, with Caddy serving as the reverse proxy. This keeps things secure while avoiding direct exposure of my home IP.
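The Caddy side is minimal. A sketch of the relevant Caddyfile entry, assuming Mastodon's web process is reachable on port 3000 (adjust the host and port for your own setup):

```caddyfile
coleman.social {
	reverse_proxy 127.0.0.1:3000
}
```

Cloudflare Tunnels terminates the public connection and forwards it to Caddy, so no inbound ports are open on my network.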
Setting Up Mastodon on Nomad
Nomad Job Configuration
Here’s a snippet of my job.nomad file that defines the Mastodon service:
```hcl
job "mastodon" {
  datacenters = ["dc1"]
  type        = "service"

  group "mastodon" {
    count = 1

    network {
      port "http" {
        to = 3000 # Mastodon's web process listens on 3000
      }
    }

    task "web" {
      driver = "docker"

      config {
        image = "tootsuite/mastodon:latest"
        ports = ["http"]
      }

      env {
        DATABASE_URL = "postgres://mastodon@db:5432/mastodon"
        REDIS_URL    = "redis://redis:6379"
      }
    }
  }
}
```
This ensures Mastodon runs as a service in Nomad, using PostgreSQL and Redis.
PostgreSQL & Redis Volumes
My PostgreSQL and Redis instances both run in Nomad, using persistent volumes stored on my Synology SAN. Here’s an example of the PostgreSQL volume definition:
```hcl
id        = "mastodon-postgres"
name      = "mastodon-postgres"
type      = "csi"
plugin_id = "synology"

capacity_min = "30GiB"
capacity_max = "30GiB"

capability {
  access_mode     = "single-node-writer"
  attachment_mode = "file-system"
}

mount_options {
  fs_type     = "ext4"
  mount_flags = ["rw", "noatime"]
}
```
With this, even if the Nomad allocation moves, the database remains persistent.
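Once the volume is registered with `nomad volume create`, the PostgreSQL group claims it with a `volume` block and the task mounts it with `volume_mount`. A minimal sketch (the group and task names, and the `postgres:16` image tag, are illustrative):

```hcl
group "postgres" {
  # Claim the registered CSI volume at the group level
  volume "db" {
    type            = "csi"
    source          = "mastodon-postgres"
    access_mode     = "single-node-writer"
    attachment_mode = "file-system"
  }

  task "postgres" {
    driver = "docker"

    config {
      image = "postgres:16"
    }

    # Mount the claimed volume into the container
    volume_mount {
      volume      = "db"
      destination = "/var/lib/postgresql/data"
    }
  }
}
```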
Minio for Media Storage
Instead of relying on S3, I’m using Minio, a self-hosted S3-compatible object store. Here’s the job definition:
```hcl
job "minio" {
  datacenters = ["dc1"]
  type        = "service"

  group "minio" {
    count = 1

    # CSI volumes are claimed at the group level, then mounted per task
    volume "data" {
      type            = "csi"
      source          = "synology"
      read_only       = false
      access_mode     = "single-node-writer"
      attachment_mode = "file-system"
    }

    task "storage" {
      driver = "docker"

      config {
        image   = "minio/minio:latest"
        command = "server"
        args    = ["/data"]
      }

      volume_mount {
        volume      = "data"
        destination = "/data"
      }
    }
  }
}
```
This allows Mastodon to store images, videos, and attachments locally instead of using a third-party service.
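Pointing Mastodon at Minio is then just a matter of environment variables in the web task. A sketch, with the bucket name, endpoint, and credentials as placeholders for my real values:

```hcl
env {
  S3_ENABLED            = "true"
  S3_BUCKET             = "mastodon-media"   # placeholder bucket name
  S3_ENDPOINT           = "http://minio:9000"
  S3_PROTOCOL           = "http"
  AWS_ACCESS_KEY_ID     = "minio-access-key" # placeholder credentials
  AWS_SECRET_ACCESS_KEY = "minio-secret-key"
}
```

Mastodon speaks plain S3, so no code changes are needed; it treats Minio like any other S3-compatible endpoint.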
Wrapping It Up
This migration has been a great exercise in moving a live service from a VPS to a self-hosted setup. Performance has improved, and my costs have dropped to nearly zero since I already had the infrastructure in place.
If you’re looking to do something similar, you can access the full, production-ready config files in the subscriber-only section of my Patreon.
By subscribing, you’ll get:
- Full Nomad job definitions
- PostgreSQL, Redis, and Minio volume configurations
Subscribers can access the files here.
Have questions? Drop a comment, or reach out on Mastodon at @[email protected].