Right, so about that AWS bill...
We’ve all been there, haven’t we? Staring at that monthly AWS invoice, watching the numbers climb, wondering if you’re really getting value for money. It starts small, a few EC2s, an RDS instance. Then it’s S3, SQS, ElastiCache, some Lambda functions, an API Gateway here, a CloudFront distribution there. Before you realise it, you’re paying a small fortune just for the privilege of running your webdev platform. It’s a familiar story, isn’t it?
That was exactly our situation a few months back. We had a pretty standard setup: a few Ruby on Rails applications handling our core business logic, some Node.js and TypeScript APIs for specific services, and a couple of Python microservices doing background work, some even dabbling in a bit of AI inference. All humming along nicely on AWS, but the costs were getting a bit much for our scale. It felt like we were using a sledgehammer to crack a nut, especially when I saw the chatter on HackerNews about folks moving to more bare-metal providers. It really got me thinking.
Look, AWS is brilliant for what it is – incredible scale, managed services, global reach. Don't get me wrong. But for a lot of us, especially startups or small to medium-sized tech companies, it can feel like overkill. You’re paying for a lot of things you don’t really need, or you’re spending too much time optimising obscure configurations to save a few quid. And let’s be honest, sometimes you just want more control, you know? To really get your hands dirty.
Why on earth Hetzner, Alex?
So, why Hetzner? Well, the short answer is cost and control. The long answer is a bit more nuanced. We looked at a few different providers, but Hetzner kept popping up. Their pricing for dedicated servers and cloud instances is incredibly aggressive. We’re talking a fraction of the cost compared to equivalent AWS instances. For me, that’s a huge win.
Beyond the price, I was drawn to the simplicity. Hetzner offers good quality hardware, reliable networking, and a relatively straightforward control panel. It’s not as feature-rich as AWS, and that’s precisely the point. You get the raw compute, storage, and networking, and you build your stack on top of it. This appealed to the open-source enthusiast in me. It’s like going from a fancy all-inclusive resort to building your own cabin in the woods – more effort, but ultimately more rewarding and tailored to your needs.
For our specific setup, which included those Ruby apps, Node.js APIs, and Python microservices, we realised we didn’t need all the managed bells and whistles of AWS. We were comfortable managing our own databases, our own queues, our own servers. We wanted to escape the AWS vendor lock-in a bit, too. Plus, the thought of drastically cutting our infrastructure spend was a powerful motivator. This migration wasn’t just about saving money; it was about reclaiming simplicity and control.
The Great Inventory and Planning Expedition
Right, so deciding to move is one thing; actually doing it is another. This is where the real work started. You can’t just pick everything up and plonk it down somewhere else. You need a plan. My first step, and honestly, the most crucial, was a thorough inventory of everything we were running on AWS. Seriously, don't skip this.
We had:
* EC2 Instances: Various sizes, running different apps.
* RDS: PostgreSQL.
* S3: For static assets, user uploads, backups.
* SQS: For background job queues.
* ElastiCache: For Redis caching.
* Route 53: Our DNS.
* Lambda/API Gateway: A few serverless functions for specific tasks.
This took us a good few weeks, I’m not kidding. We went through every Terraform file, every CloudFormation template, every Dockerfile, and every package.json or Gemfile. It was a grind. We mapped each AWS service to what it would look like on Hetzner – either a self-hosted open-source alternative or a direct equivalent. For example, RDS PostgreSQL became a self-managed PostgreSQL server on a dedicated Hetzner machine. S3 became a combination of Nginx serving static files and a Hetzner Storage Box for backups and larger user uploads.
Prioritisation was key. We decided to migrate the least critical services first to iron out the kinks, then move onto the core APIs and finally the databases. This phased approach meant less risk and more learning opportunities along the way. We also set a clear cut-over plan for each service, complete with rollback strategies. Because, let’s be honest, things will go wrong.
Getting Our Feet Wet with Hetzner
Once the plan was in place, it was time to get our hands dirty with Hetzner itself. We opted for a mix of dedicated servers for our primary databases and heavier Ruby on Rails applications, and Hetzner Cloud instances for our smaller Node.js APIs and Python microservices.
Ordering Servers: This is pretty straightforward. You pick your hardware, your operating system (we went with Ubuntu LTS), and you’re good to go. It’s not as instantaneous as spinning up an EC2, especially for dedicated servers, but it’s quick enough.
Networking: This is where it starts to feel a bit different from AWS VPCs. Hetzner provides public IPs, but you can also set up private networks between your servers. This is essential for secure database connections and inter-service communication. We configured a private network and assigned private IPs to all our servers, ensuring that our database server, for example, wasn’t directly exposed to the internet. This tripped me up at first because I was used to AWS security groups and subnets handling a lot of this automatically. Here, you’re building it up yourself.
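If you’re on Hetzner Cloud, the hcloud CLI makes this scriptable. Here’s roughly what our private network setup looked like – a sketch, with illustrative names and IP ranges rather than our real ones:
# Create a private network and a subnet within it
hcloud network create --name internal --ip-range 10.0.0.0/16
hcloud network add-subnet internal --network-zone eu-central --type cloud --ip-range 10.0.1.0/24
# Attach servers to the network with fixed private IPs
hcloud server attach-to-network app-server-1 --network internal --ip 10.0.1.10
hcloud server attach-to-network db-server-1 --network internal --ip 10.0.1.20
(Dedicated servers join private networks differently, via Hetzner’s vSwitch feature, so check the docs if you’re mixing the two.)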
Firewalls: Hetzner offers a basic firewall service at the network level, which is great for blocking common ports. But for fine-grained control, we still relied on ufw (Uncomplicated Firewall) on each server. It’s an extra layer of security and gives you granular control over what ports are open to the internet and what can communicate over the private network.
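To give a flavour, here’s roughly the ufw baseline we ended up with on the database server – the subnet and ports are illustrative, so adjust for your own layout:
# Default deny inbound, allow outbound
sudo ufw default deny incoming
sudo ufw default allow outgoing
# SSH from anywhere (better still: restrict to your VPN or office IPs)
sudo ufw allow 22/tcp
# PostgreSQL and Redis only from the private network
sudo ufw allow from 10.0.0.0/16 to any port 5432 proto tcp
sudo ufw allow from 10.0.0.0/16 to any port 6379 proto tcp
sudo ufw enable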
Initial Server Setup: Once the servers were provisioned, it was standard Linux administration: SSH key management, setting up basic users, updating packages, and installing monitoring agents. This isn’t as ‘click-and-deploy’ as AWS, but you get a lot more control over the base system, which I actually prefer.
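Our first-boot routine boiled down to a handful of commands, something like this (the deploy user is a placeholder – pick whatever convention suits you):
# Create a non-root user with sudo rights and copy the SSH keys across
adduser deploy
usermod -aG sudo deploy
rsync --archive --chown=deploy:deploy ~/.ssh /home/deploy
# Disable password auth and root login over SSH
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
sudo systemctl restart ssh
# Bring the base system up to date
sudo apt update && sudo apt upgrade -y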
The Nitty-Gritty Migration Dance
This was the bulk of the work. Let’s break it down by component.
Databases: The Heart of it All
Migrating our PostgreSQL and Redis instances from AWS RDS and ElastiCache to self-managed servers was probably the most nerve-wracking part. Data integrity, eh? For PostgreSQL, we set up a new instance on a dedicated Hetzner server, configured it for optimal performance, and then used pg_dump and pg_restore for the initial data transfer. For the final cutover, we stopped writes to the old AWS database, performed a final pg_dump, transferred it, and restored it on Hetzner.
# On the AWS instance (or a server with access):
pg_dump -h <aws_rds_hostname> -U <username> -d <database_name> -Fc > db_backup.dump
# Transfer db_backup.dump to your Hetzner server
scp db_backup.dump user@hetzner_ip:/path/to/backup/
# On the Hetzner server:
pg_restore -h localhost -U <username> -d <database_name> -Fc db_backup.dump
We also set up wal-g for continuous archiving and point-in-time recovery, sending backups to a Hetzner Storage Box. For Redis, it was simpler: a SAVE or BGSAVE command, copying the .rdb file, and loading it on the new Hetzner Redis instance.
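For the curious, the Redis hop looked roughly like this – paths assume the stock Ubuntu redis-server package, and ElastiCache users will need to export the snapshot via the AWS console rather than scp:
# On the source instance: write a snapshot to disk in the background
redis-cli BGSAVE
# Once BGSAVE finishes, copy the snapshot across
scp /var/lib/redis/dump.rdb user@hetzner_ip:/tmp/
# On the Hetzner server: stop Redis, drop the snapshot in place, restart
sudo systemctl stop redis-server
sudo cp /tmp/dump.rdb /var/lib/redis/dump.rdb
sudo chown redis:redis /var/lib/redis/dump.rdb
sudo systemctl start redis-server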
Application Servers: Bringing our APIs to Life
This was relatively straightforward for us, as our applications were already containerised with Docker. We had a mix of services: some small Node.js APIs built with TypeScript, a couple of heavy Ruby on Rails apps handling our main webdev traffic, and some newer Python services that occasionally dabble in AI inference.
For Ruby on Rails apps, we used Puma behind Nginx. For Node.js (Express/NestJS) and Python (Django/Flask) apps, we used PM2 or Gunicorn/uWSGI respectively, also behind Nginx.
Here’s a simplified Nginx configuration for one of our Ruby apps:
server {
    listen 80;
    listen [::]:80;
    server_name your_domain.com www.your_domain.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name your_domain.com www.your_domain.com;

    ssl_certificate /etc/letsencrypt/live/your_domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/your_domain.com/privkey.pem;

    root /var/www/your_app/public;
    index index.html;

    try_files $uri @puma;

    location @puma {
        proxy_pass http://unix:/var/www/your_app/shared/tmp/sockets/puma.sock;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_redirect off;
    }

    # Serve static assets directly
    location ~* \.(css|js|gif|jpe?g|png|svg|ico|webp)$ {
        expires max;
        add_header Cache-Control "public, must-revalidate, proxy-revalidate";
        access_log off;

        # Optionally, restrict direct access to some static files
        # if (!-e $request_filename) {
        #     return 404;
        # }
    }
}
Deployment involved git pull, bundle install (for Ruby, RubyGems and Bundler are essential here), npm install (for JavaScript/TypeScript apps), or pip install (for Python apps), and then restarting the application server. We used systemd to manage our application processes, ensuring they start on boot and restart if they crash. It feels a bit old school compared to deploying to Lambda, but it’s reliable and gives you full control.
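For reference, here’s a stripped-down version of the kind of systemd unit we use for Puma – the paths, user, and names are placeholders, not our production config:
# /etc/systemd/system/your_app.service
[Unit]
Description=Puma server for your_app
After=network.target

[Service]
Type=simple
User=deploy
WorkingDirectory=/var/www/your_app
ExecStart=/usr/local/bin/bundle exec puma -C config/puma.rb
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
Enable it with sudo systemctl enable --now your_app, and systemd handles boot-time startup and crash recovery for you.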
Static Assets and CDNs
We were using S3 for static assets. On Hetzner, we now serve most of our frontend static files (CSS, JavaScript bundles, images) directly from Nginx on our application servers. For user-uploaded content or larger files, we considered MinIO (self-hosted object storage) or simply using Hetzner Storage Boxes mounted via SFTP or rsync. Ultimately, we decided to keep Cloudflare in the mix. It’s a no-brainer for CDN capabilities, DNS management, and WAF (Web Application Firewall) protection. So, Cloudflare sits in front of our Hetzner servers, caching static content and routing traffic.
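Syncing uploads to a Storage Box is a one-liner suitable for a cron job. At the time of writing, Hetzner exposes Storage Boxes over SSH on port 23; the username and hostname below are placeholders in Hetzner’s usual format:
# Mirror the uploads directory to the Storage Box over rsync-over-SSH
rsync -avz -e 'ssh -p 23' /var/www/your_app/shared/uploads/ uXXXXX@uXXXXX.your-storagebox.de:uploads/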
Containerisation and Orchestration
We heavily use Docker for consistency across development and production. For orchestration, we initially stuck with Docker Compose on individual servers. Hetzner doesn’t offer a managed Kubernetes service, so if you want k8s you’re running it yourself (plenty of people do, with k3s and the like), but for our initial migration, we prioritised simplicity. I've been thinking a lot about container orchestration lately, and I even mentioned some of the complexities and considerations in What's Got My Attention in Web Dev Right Now – it's a deep rabbit hole!
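To make that concrete, here’s a trimmed-down sketch of the sort of Compose file we run per server – service names, images, ports, and the worker command are illustrative:
# docker-compose.yml
services:
  web:
    image: ghcr.io/your-org/your_app:latest  # hypothetical image name
    restart: unless-stopped
    env_file: .env
    ports:
      - "127.0.0.1:3000:3000"  # only reachable by Nginx on localhost
  worker:
    image: ghcr.io/your-org/your_app:latest
    restart: unless-stopped
    env_file: .env
    command: bundle exec sidekiq  # background jobs; swap in your own worker command
Binding to 127.0.0.1 means nothing container-side is exposed publicly; Nginx does the fronting, as shown earlier.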
CI/CD and Deployments
We continued using GitHub Actions for our CI/CD pipelines. The deployment steps were updated to SSH into our Hetzner servers and execute deployment scripts. This usually involves pulling the latest Docker images, restarting containers, and running any necessary database migrations.
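The script the workflow runs over SSH is nothing fancy – roughly this, with the path, service name, and migration command standing in for whatever your stack uses:
#!/usr/bin/env bash
set -euo pipefail

cd /var/www/your_app

# Pull the freshly built images and restart the stack
docker compose pull
docker compose up -d

# Run any outstanding migrations in a one-off container
docker compose run --rm web bundle exec rails db:migrate

# Tidy up old images so the disk doesn't slowly fill
docker image prune -f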
Monitoring and Logging
Replacing AWS CloudWatch and X-Ray meant embracing open-source alternatives. We set up Prometheus and Grafana for server and application monitoring. For centralised logging, we went with Loki and Grafana rather than a full ELK stack (Elasticsearch, Logstash, Kibana) because it felt lighter for our needs. This involved installing Promtail agents on each server to collect logs and send them to Loki.
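The Promtail side is just a small YAML file per server. Ours looks roughly like this – the Loki address, job names, and labels are placeholders:
# /etc/promtail/config.yml
server:
  http_listen_port: 9080

positions:
  filename: /var/lib/promtail/positions.yaml

clients:
  - url: http://10.0.1.30:3100/loki/api/v1/push  # Loki, over the private network

scrape_configs:
  - job_name: system
    static_configs:
      - targets: [localhost]
        labels:
          job: varlogs
          host: app-server-1
          __path__: /var/log/*.log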
DNS
As mentioned, we kept Cloudflare for DNS. It’s fast, reliable, and offers a lot of useful features that Hetzner’s basic DNS isn’t designed to provide.
Ouch, That Hurt! Lessons Learned and Mistakes Made
No migration is without its bumps, and ours was no exception. Here are a few things that caught us out:
* Networking Gotchas: I mentioned this earlier, but it’s worth reiterating. Hetzner’s networking model is simpler but requires more hands-on configuration. Setting up private networks, ensuring correct routing, and configuring ufw on each server for inter-service communication was a learning curve. I ran into an issue last month where a Python microservice couldn’t connect to Redis because I’d forgotten to open the Redis port on the private network firewall. A quick ping and telnet (and a facepalm) sorted it, but man, that was frustrating.
* Managed vs. Self-hosted: You really appreciate what RDS does for you when you’re managing PostgreSQL yourself. Backups, updates, security patches, replication – it’s all on you now. This requires a solid understanding of the open-source tools you’re using and a robust operational playbook. We invested time in automating these tasks with Ansible and cron jobs, but it’s a non-trivial amount of work.
* Backup Strategy: Beyond database dumps, we implemented full server snapshots using Hetzner’s snapshot feature and regular rsync backups of critical directories to Hetzner Storage Boxes. Seriously, don't underestimate the importance of a well-tested backup and recovery plan. Just do it, save yourself the headache later. This goes for all services, from your Java APIs to your frontend assets.
* Security: Hardening servers is paramount. Beyond firewalls, we enforced SSH key-only access, disabled password authentication, set up fail2ban for SSH brute-force protection, and regularly applied security updates. It’s a continuous process.
* Downtime: Despite meticulous planning, there was always some downtime during cutovers. We communicated this clearly to our users and scheduled it during off-peak hours. Being honest about potential disruptions is crucial.
* Disk Space: I actually ran into an issue last month where a disk filled up on one of our Python microservices because I hadn’t properly configured log rotation. A quick df -h and du -sh helped me identify the culprit (a runaway log file), and logrotate fixed it – there’s a sketch of the config after this list – but it was a good reminder that we’re more hands-on now and these things don’t just magically get handled like they might on AWS.
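As promised, the logrotate config that tamed the runaway log file – a minimal sketch, assuming your app writes its logs under a per-app directory:
# /etc/logrotate.d/your_app
/var/www/your_app/log/*.log {
    daily
    rotate 14
    compress
    delaycompress
    missingok
    notifempty
    copytruncate
}
copytruncate saves you having to signal the app to reopen its log file, at the cost of possibly losing a few lines during rotation.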
The Aftermath and The Verdict
So, after all that, was it worth it? Absolutely. The migration was a significant undertaking, but the benefits have been huge.
* Cost Savings: This was the primary driver, and we achieved massive savings. Our infrastructure bill went from a healthy four figures down to a very agreeable three figures. That’s money we can reinvest in development, new features, or even a better coffee machine!
* Performance: Often, we’ve seen better performance. Dedicated resources on Hetzner mean less noisy neighbour syndrome, and we have full control to tune the operating system and application servers for our specific workloads.
* Control: We own the whole stack now. This is great for customisation, debugging, and understanding exactly what’s going on under the hood. For someone who loves to tinker and understand systems, it’s incredibly satisfying. If you're interested in really getting down to the metal, you might even like some of the stuff I was musing about in Rust in the Kernel What's Got Me Thinking – it's all about understanding what's underneath.
* Complexity: It’s a mixed bag. In some ways, it’s simpler because there are fewer abstract AWS services to navigate. In other ways, it’s more complex because you’re responsible for more operational tasks. For us, the trade-off was worth it.
Is Hetzner for You?
This kind of migration isn’t for everyone. It makes sense if:
* You value cost control and direct server access: If your AWS bill is hurting and you want more transparency.
* You’re comfortable with server administration: Or you have a small ops team that is. You’ll be managing more yourself.
* You don’t need all the bells and whistles of AWS managed services: Or you can replace them with reliable open-source alternatives.
* You have predictable workloads: While Hetzner Cloud offers scalability, it’s not as elastic as AWS for massive, unpredictable spikes.
It might not be for you if you’re a small team with no server administration experience, or if your application heavily relies on very specific, niche AWS services that are hard to replicate. But for many webdev teams, especially those running Ruby, Python, Node.js, or Java applications, it’s a seriously compelling alternative. I’ve even seen some folks using these kinds of setups for smaller AI model training or hosting simpler frontend apps that need a beefy backend API.
Wrapping it Up
Migrating from AWS to Hetzner was a significant project for us, but it’s paid off handsomely. It forced us to really understand our infrastructure, streamline our deployments, and embrace more open-source tools. It’s not a one-size-fits-all solution, but if you’re feeling the pinch of cloud costs and want more control, I’d strongly encourage you to explore alternatives like Hetzner.
Don’t be afraid to challenge the status quo. Sometimes, taking a step back from the hyperscalers and getting closer to the metal can yield incredible benefits. What are your thoughts? Have you made a similar migration? I’d love to hear your experiences in the comments!