For many years, I’ve relied on CloudFront to provide a professional level of availability for pennies a month. However, for the last two months, I had to take this blog offline to protect my wallet.

History of my Blog’s Tech Stack

When I first started blogging about 6 years ago, I was a passionate Drupal developer, and I built my content on a simple Drupal site.

Fast forward to 2014: I learned about AWS S3 and did a fair bit of work using CloudFront for some Drupal sites I was running at work. Once I learned about Jekyll, I rebuilt my blog with it and used the aws cli to push the content to S3, where CloudFront served it over HTTPS with a custom SSL certificate. Running a production website without managing servers is what made me fall in love with cloud computing.

Eventually, I figured out how to use Travis CI to automate pushing new content to S3, migrated to Gitlab CI, and added the ability for each branch of my blog to create a new S3 bucket so I could preview the content. I added simple bash scripts to garbage-collect old buckets and handle the other chores that needed automating.

Then, last week, I gave up all of the power and control that came from managing CloudFront myself and ruefully switched to Gitlab Pages.

What Happened To My Blog on CloudFront

Monthly spending from July 2018 until March 2019 for this blog ranging from $0.20 to $1.80

Yes, the scale on this graph is from $0 to $2 per month. Up until March of 2019, I’d never spent more than $2 on my whole AWS account in a single month.

Then, a funny thing happened in April:

Monthly spending from July 2018 until April 2019 for this blog -- April is $164 whereas no previous month was more than $2

When I got this bill in May, I immediately turned off the CloudFront distribution and slowly investigated what went wrong.

Where did these costs come from?

I discovered IP addresses based in Turkey were making HTTP HEAD requests for /ping.txt (which has never existed). I’ve lost the report from CloudFront (since the reports have a 3 month retention period), but the Cost Explorer and pricing information shows:

  • $0.012 per 10,000 requests
  • $150 total costs

So, $150 * 10,000 requests / $0.012 = 125 million requests, or roughly 48 requests per second sustained for a month. I thought that there were more like 15 billion requests, but the math does not check out for that. If my memory is correct, then CloudFront serviced close to 5,000 requests per second for a month.
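That arithmetic is easy to double-check in a few lines of shell, assuming the $0.012-per-10,000-requests rate above and a 30-day month:

```shell
# Back-of-envelope check: $150 spent at $0.012 per 10,000 requests.
# Express $0.012 as 12/1000 dollars to stay in integer arithmetic.
requests=$(( 150 * 10000 * 1000 / 12 ))
echo "total requests:  $requests"                     # 125000000
seconds=$(( 30 * 24 * 3600 ))
echo "requests/second: $(( requests / seconds ))"     # 48
```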

Why Didn’t I Notice This Earlier?

I didn’t set up billing alerts with AWS. If you’re running personal infrastructure on AWS and don’t have billing alerts, stop what you’re doing and go set up some billing alerts right now.
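For reference, a basic estimated-charges alarm can be created with the aws cli. This is just a sketch: the $5 threshold, alarm name, and SNS topic ARN are placeholders to substitute with your own, billing metric alerts have to be enabled on the account first, and billing metrics only live in us-east-1:

```shell
# Alarm when the month-to-date estimated bill exceeds $5.
# Billing metrics are published only in us-east-1; the SNS topic
# ARN below is a placeholder for one you've already created.
aws cloudwatch put-metric-alarm \
  --region us-east-1 \
  --alarm-name "billing-over-5-usd" \
  --namespace "AWS/Billing" \
  --metric-name "EstimatedCharges" \
  --dimensions Name=Currency,Value=USD \
  --statistic Maximum \
  --period 21600 \
  --evaluation-periods 1 \
  --threshold 5 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions "arn:aws:sns:us-east-1:123456789012:billing-alerts"
```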

The upshot of all this is that CloudFront kept my blog online and available during a spike in traffic of many orders of magnitude. I didn’t have any downtime detected by my monitoring tools.

But the downside of this uptime is that I literally paid with my wallet. I’d gladly have traded some downtime (like weeks of downtime) to have that $150 back in my pocket.

How Can I Protect My Blog?

This is the really frustrating part for me. There really weren’t any great options that I could find. Here’s what I looked into:

Geo-blocking inside CloudFront

CloudFront can be configured to block access to content based on the presumed physical location of the visitor’s IP address. The problem is that you still pay for the geo-blocked requests. The point of geo-blocking is to (a) protect your origin and (b) limit access to your content. Geo-blocking isn’t really designed to save you from an attacker.
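For completeness, here is roughly what turning on a geo restriction looks like with the aws cli. This is a sketch, not something I ran: the distribution ID is a placeholder, and it assumes jq is available to edit the fetched config before pushing it back:

```shell
# Sketch: block requests from one country on an existing distribution.
# DIST_ID is a placeholder; the get/update dance requires the ETag.
DIST_ID="E123EXAMPLE"
aws cloudfront get-distribution-config --id "$DIST_ID" > dist.json
ETAG=$(jq -r '.ETag' dist.json)
# Set a blacklist-style GeoRestriction, then strip the wrapper object.
jq '.DistributionConfig.Restrictions.GeoRestriction =
      { "RestrictionType": "blacklist", "Quantity": 1, "Items": ["TR"] }
    | .DistributionConfig' dist.json > config.json
aws cloudfront update-distribution --id "$DIST_ID" \
  --distribution-config file://config.json --if-match "$ETAG"
```

Even with this in place, CloudFront still answers (and bills for) the blocked requests — it just answers them with a 403.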

WAF and DDOS Protection

Like CloudFront geo-blocking, all that a Web Application Firewall (WAF) and the various DDoS protection products try to do is restrict access to the origin, not prevent CDN costs.

DNS to the Rescue!

So, what we learned above is basically that if a request reaches CloudFront, you must pay for that request. Full stop. But what if we could selectively control which requests went to CloudFront? What if we could use DNS to deny the attacker access to CloudFront?

At work, we use a product called Cedexis, which is all kinds of amazing. Basically, it’s a programmable DNS resolver. You define several “platforms”, which are CNAMEs that Cedexis can resolve, and you feed analytics into Cedexis; at DNS resolution time, custom javascript applications have access to that analytics data and pick a platform based on which one is most available, performant, or cheap. Unfortunately, Cedexis is only really marketed as an enterprise product, not a hobby developer product.

Fortunately, AWS Route53 has traffic policies that can do basic geo-blocking (and probably a lot more too). Sadly, each policy costs $50/month, so that doesn’t really help me keep costs under $2.

Ditch CDNs for VMs

This is actually a pretty reasonable idea. Getting a fixed $5/month cost and variable uptime is exactly the set of tradeoffs I’m looking for. Also, it gives me considerable flexibility to build out automation for previewing different branches of my git repository for this blog.

The downsides: I’d only be willing to pay for one VM in a single location, so page speed would vary with the visitor’s distance from that VM, and I do get some percentage of my visitors from the UK and India. I’d also be committing to $5/month whether the server was idle or busy. And instead of sitting back and letting CloudFront provision and monitor individual servers, I’d have to set up monitoring and alerting for this VM myself.

DIY Cedexis

I actually want to do this. After looking at BIND and CoreDNS, I didn’t see anything quite as powerful as what Cedexis OpenMix provides, but I did find this golang library for DNS that’s a promising start.

My main concerns here are about downtime and abuse of the DNS resolver…but I plan to experiment in this area. If I get this right, I could potentially keep using CloudFront + S3 + LetsEncrypt to manage this website. The endgame would be to automate attack detection and mitigation as a sort of fail2ban at the DNS layer.

Gitlab Pages

I wound up going with Gitlab Pages for now because my time is so limited and I wanted to be back online (after weeks of downtime) as soon as possible.

The biggest things I lose here are:

  • easy way to renew LetsEncrypt certificates
  • branch previews of my markdown
  • visibility into what’s going on with the delivery of my blog
  • control over custom redirects, protocols, etc
  • surprise bills for $150

Parting Thoughts

If you know of other ways to financially mitigate even simple attacks while keeping CloudFront in my stack under my control, please do tell me! Also, billing alerts are key.