Migrating from WordPress to Hugo Part 4: Securing the Site with SSL

I originally drafted this in June 2018. Following on from Sam McGeown’s recent migration to Hugo, I thought I’d finally publish this in case it’s useful for anyone rather than sitting on it until I complete the process!

Why SSL?

It’s only a blog, and it’s only going to serve static content, so why bother with SSL?

In this article I’ll deal with those questions and go through the process of requesting an SSL certificate using AWS Certificate Manager.

Let’s Go Secure!

If you recall, our finishing point is going to be a collection of static HTML files served up by AWS. There’s nothing particularly risky about serving or requesting static files – it’s how the internet started out, after all. What’s different now is people’s perception of risk and privacy, and how that’s reflected in the technology we use.

Google, for example, promote SSL sites slightly higher in their search rankings than non-SSL sites and have been doing so since 2014. Some modern browsers have started flagging warnings about non-SSL sites and this will likely become more obvious over time. Users are becoming more picky and aware as a result, or perhaps they’re driving the changes to an extent. SSL is here to stay though and it’s worth setting it up, especially if it’s free!

AWS offers public SSL certificates for free. Let’s go set one up!

We could do this either via the AWS console or using the CLI. At this time I haven’t worked out how to do it completely via the CLI, but I’m going to start there. (Note: for CloudFront, I think that the certificate has to be in the us-east-1 region regardless.)

aws acm request-certificate --domain-name mpoore.uk --subject-alternative-names www.mpoore.uk michaelpoore.com www.michaelpoore.com --validation-method DNS --idempotency-token 201806 --region us-east-1

This command requests a new SSL certificate with a subject name of “mpoore.uk” and alternative names of “www.mpoore.uk”, “michaelpoore.com” and “www.michaelpoore.com”. Choosing DNS as the validation method means we’ll have to prove we own the domains by creating specific DNS entries.

What you get back is the ARN (Amazon Resource Name) of the certificate. I’ll need that later.
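If you want to stay on the command line, I believe the pending validation records can be retrieved with “describe-certificate”. A hedged sketch follows: the ARN is a placeholder for the one returned by the request above, and the command is echoed as a dry run – drop the echo to execute it for real.

```shell
# Hedged sketch: fetch the DNS validation CNAMEs that ACM wants created.
# The ARN below is a placeholder - substitute the one returned by
# request-certificate. Echoed as a dry run; remove the echo to execute.
CERT_ARN="arn:aws:acm:us-east-1:111111111111:certificate/example-id"
echo aws acm describe-certificate \
  --certificate-arn "$CERT_ARN" \
  --region us-east-1 \
  --query "Certificate.DomainValidationOptions[].ResourceRecord"
```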

Looking at the AWS Console though, you’ll see that the certificate is not yet issued and must be validated.

To validate each of the domains in the certificate, you need to create some DNS CNAMEs. Luckily, for mpoore.uk there’s a button for that. For michaelpoore.com I had to create them manually, as the DNS for that domain is still with 1&1 for the time being.
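The validation records all follow the same shape: a generated name under your domain, pointing at a generated target under acm-validations.aws. For illustration only (both tokens here are made up), one might look like this in a zone file:

```
_3a9f8c2e1b4d.mpoore.uk.   300   IN   CNAME   _7d4e5f6a8b9c.acm-validations.aws.
```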

Once they’re all created, validation will eventually complete and the certificate will be issued. Save the certificate ARN from earlier, as it’ll be needed later.

Sadly, this is as far as I got in the process before other things (life, eh) got in the way. I will be back to revisit and complete the process though.

Migrating from WordPress to Hugo Part 3: Hosted Zones in Route53

I originally drafted this in June 2018. Following on from Sam McGeown’s recent migration to Hugo, I thought I’d finally publish this in case it’s useful for anyone rather than sitting on it until I complete the process!

It’s Always DNS

When things go wrong in the IT world, DNS misconfiguration often sits at the root of the problem. It’s important to get it right, not only for correct functioning but also because some of the subsequent steps depend on it.

As part of my migration of this blog to Hugo, I’m placing one of the two domains I’ll be using under the control of AWS Route53 (Amazon’s DNS service). I’ll move the other one in time as well.

Creating a Hosted Zone

I tend to use separate providers for domain registration and hosting as I’ve found it easier to move my site(s) around when you can just update the domain’s nameserver (NS) records to point to the new provider rather than have to transfer the domain as well. Practically all of my domains (I host a couple of sites for local community interests too) are registered through FastHosts.

AWS caters for this sort of arrangement in Route53. From the Route53 dashboard, all I had to do was select “Hosted zones” from the menu and then click the “Create Hosted Zone” button.

All you need to enter is the domain name; leave the type at its default value (“Public Hosted Zone”).
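For reference, I believe the same zone can be created from the CLI along these lines. A hedged sketch: the caller reference just needs to be a string that’s unique per request, and the command is echoed as a dry run – drop the echo to execute it for real.

```shell
# Hedged sketch of creating the hosted zone from the CLI.
# Echoed as a dry run - remove the echo to execute for real.
ZONE_NAME="mpoore.uk"
CALLER_REF="hugo-migration-$(date +%Y%m%d%H%M%S)"
echo aws route53 create-hosted-zone \
  --name "$ZONE_NAME" \
  --caller-reference "$CALLER_REF"
```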

The zone is created for you and helpfully lists the nameservers that need to be set:

All I then had to do was apply those nameservers to the domain in FastHosts:

Once the dust settles, DNS requests for mpoore.uk will go to AWS for resolution. That’s important, because I want to set the site up with an SSL certificate (Amazon will give you one for free) and validation requires DNS.

So let’s do that next…

Migrating from WordPress to Hugo Part 2: Basic Tooling

I originally drafted this in June 2018. Following on from Sam McGeown’s recent migration to Hugo, I thought I’d finally publish this in case it’s useful for anyone rather than sitting on it until I complete the process!

Summary of Tools Used

These are the tools I’ll be using during my migration of my WordPress blog to Hugo (in AWS):

  • Github
  • SourceTree (git client)
  • Homebrew
  • AWSCLI
  • Hugo
  • Sublime (text editor)
  • AWS S3
  • AWS CloudFront
  • AWS Certificate Manager
  • AWS Route53
  • Filezilla

Building a Toolkit

I’m a Mac user. I have been for a number of years and I don’t plan to switch anytime soon. Most of the tools I’ll be using either have Windows / Linux versions or there are similar tools available for those OSs. I’ll try not to go into too much OSX-specific detail about any of them and, if you’re following this process, you might have to adapt to whatever tooling works best for you.

A good number of the tools listed above are web-based or cross-platform, so they shouldn’t present a big problem for anyone. I’ll be using the command line where I can, hence the inclusion of AWSCLI.

Probably the most OSX specific tool in that list is Homebrew (aka “Brew”). It’s a package manager for OSX and I’ll be using it to install Hugo and AWSCLI on my laptop. If you’re a Windows user, try Chocolatey instead. If you’re a Linux user, you should use whatever package manager comes with your distro.

Naturally, the use of AWS services means that you need an AWS account of your own. I’m going to assume that you have one and have got it to a point where you can consume the services above.

Installing AWSCLI and Hugo

Let’s assume that we’ve got Brew installed (it’s easy, the instructions are right there on the homepage). Installing AWSCLI and Hugo is straightforward too!

First, AWSCLI. Just type the following into a terminal window:

brew install awscli

Once installed, you’ll need to execute the following command to configure AWSCLI with your Access Key ID and Secret Access Key:

aws configure
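Running “aws configure” prompts for your Access Key ID, Secret Access Key, default region and output format, and stores the answers in two small files under ~/.aws. A redacted illustration of what they end up containing (all values here are placeholders):

```ini
# ~/.aws/credentials
[default]
aws_access_key_id     = AKIA................
aws_secret_access_key = ....................

# ~/.aws/config
[default]
region = us-east-1
output = json
```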

Now let’s do Hugo. Can you guess the command? (I still managed to mistype it!)

brew install hugo

Start Your Engines

So, we know how our journey will start and what we expect to find when we get there. We’ve just packed the car. Let’s get going!

Migrating from WordPress to Hugo Part 1: Overview

I originally drafted this in June 2018. Following on from Sam McGeown’s recent migration to Hugo, I thought I’d finally publish this in case it’s useful for anyone rather than sitting on it until I complete the process!

What, When and Why?

I’ve been blogging using WordPress for about 10 years. In recent months I’ve seen several respected bloggers make the move to Hugo and it has inspired me to do the same. You might ask “Why?”, and I have a few reasons:

  • For starters, I want to improve my skills and knowledge in certain areas of cloud technology. The LAMP (Linux / Apache / MySQL / PHP) stack that WordPress sits on isn’t exactly revolutionary.
  • Next, I want to simplify the site itself and reduce the chances of it being hacked.
  • Finally, the most important reason, because I can!

This series of posts will document my journey.

The Starting Point

WordPress is a great solution, don’t get me wrong about that. I’ve been using it since 2008 to host my blog through its various iterations. During that time WordPress has evolved into quite a mature solution, with a rich ecosystem of theme developers and plugins. It just works, and you don’t have to have ninja skills to get your ideas shared with the world. However, every time I log in to my self-hosted WordPress installation, there’s a slew of updates waiting for me to apply. From time to time incompatibilities crop up and you have to swap out one plugin for another. Also, as the database grows it can become more of a challenge to back up the site or migrate it to a new hosting provider – something I do from time to time to keep the cost of running it down.

As it stands, the starting point looks something like this when it comes to retrieving content from michaelpoore.com (not trying to teach anyone to suck eggs here, I just fancied drawing a diagram – it also helps compare with the finishing point below):

  1. Web browser requests a page from michaelpoore.com and a DNS query is triggered that results in the nameservers for the domain being queried.
  2. The nameservers for michaelpoore.com (hosted by 1&1) are queried for the website IP address.
  3. A connection is made to the 1&1 CDN (Content Delivery Network) for the requested page. That page may be served directly by the CDN or the backend Apache server may have to provide the content.
  4. Assuming at least part of the content is not cached by the CDN, the Apache webserver receives the request and various PHP scripts are executed to render the page content. Combined with other elements such as images and javascript, the content is returned back to the requesting web browser.
  5. The aforementioned PHP scripts will make numerous queries to the MySQL database.

Now, unless you’re adding lots of dynamic content (which I’m not), and unless the CDN is caching significant portions of the returned content (which I don’t know), there’s a lot going on each time a page is requested. Also, every plugin I add, and the WordPress installation in general, represents a greater attack surface. I’m not so arrogant as to believe that anyone would want to hack my blog, but you never know.

Of course, I could migrate from a self-hosted solution to a hosted WordPress site and take away some of the issues that I have, such as applying updates to WordPress and its infrastructure (PHP and MySQL, which I have to update via the 1&1 control panel from time to time). I’m typically all for using such solutions, but it seems too easy 🙂

The Finishing Point

Hugo isn’t exactly a webserver. It’s actually a static site generator. It creates a structure of flat HTML files that can be hosted somewhere. As there’s no dynamic content, the pages are very easy to cache. In terms of my finishing point, much of the process looks the same as above:

(One key difference is that I’m introducing another domain name into the mix. This is partly to help with the migration process, but also because I’ll end up redirecting one of them to the other and I wanted a domain name that matched my twitter handle.)

  1. Web browser requests a page from michaelpoore.com (or mpoore.uk) and a DNS query is triggered that results in the nameservers for the domain being queried.
  2. The nameservers for michaelpoore.com (hosted by AWS Route53) are queried for the website IP address.
  3. A connection is made to the AWS CloudFront CDN (I could also use CloudFlare) for the requested page. That page will likely be served directly by the CDN.
  4. Assuming that the page content cache has expired or perhaps has never been created, the HTML file will be served directly from the AWS S3 bucket.

That should be so much quicker. But let’s talk briefly about how the HTML files get into S3 in the first place. That’s where Hugo comes in, along with a few more pieces that I’ll cover in a later post.
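As a preview of that later post, I expect the publish step to boil down to something like the sketch below. This is hedged: the bucket name is a placeholder, and both commands are echoed as a dry run – drop the echoes to run them for real.

```shell
# Hedged sketch of publishing the generated site to S3.
# Echoed as a dry run - remove the echoes to execute for real.
SITE_BUCKET="s3://www.mpoore.uk"   # placeholder bucket name
echo hugo                          # builds the static site into ./public
echo aws s3 sync public/ "$SITE_BUCKET" --delete
```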

The Journey

So now that I’ve mapped out the starting point and the finishing point, we’ve got the makings of a journey. Let’s get started!


Improving WordPress Blog Performance

I’ve been neglecting this blog a little bit recently. So far in 2014 really. But it hadn’t escaped my notice that it was running veeerrryyyyyy sssllloooooowwwwwlllllyyyy!

Initially I tried just popping CloudFlare in front of it, but the site was running so slowly that the configuration would fail or time out each time.

In case it should be of any use, here’s what I did to speed it up a little.

Step 1: Make a Backup!

It’s a fundamental tenet of IT that you back up before you make a change that has the potential to break things, and these changes could break things. Make sure that you have backups and that you know how to use them.

I use a plugin called BackWPup to make daily backups of the database and files to a Dropbox share. One of the slow-site symptoms I’d observed recently was that backups were failing more often than not and seemed to be taking a long time to complete. A quick look at MySQLAdmin told me that the database had grown to about 78MB. For a bunch of text that seemed a bit high – more on that later though.

I use 3 backup jobs to backup the site:

  • A daily database backup – this takes a complete backup of the WordPress database and uploads it to Dropbox.
  • A weekly file backup – this captures all of the files for the site except for: Plugins, unused themes, cache and older media / upload files (e.g. anything from 2008 – 2013)
  • A monthly archive backup – this captures the plugin files and the older media items excluded above. (If this ends up running too long then I’ll create another job at a different time and split the files up.)

(The reason I separate out the file backups is that older media items don’t change, and the plugins can be downloaded fairly easily – I only tend to update them once a month anyway. Separating them just makes the weekly file backup go more quickly.)

Step 2: Look At Plugins

I looked at the list of plugins in use. Each plugin will slow down a WordPress site by a fraction (or more) of a second. Add it all up and the execution of plugin scripts can amount to quite a lot of time. I adopted some very simple rules and applied them to the list of installed plugins:

  • If they were active, did I actually use them? If not, I removed them.
  • If they were deactivated, would I need them anytime soon? If not, I removed them.

All good so far, but I hadn’t moved the needle by any noticeable amount. A bit maybe, but not so much that I noticed.

Step 3: Install New Plugins

Hang on? Why am I installing new plugins if they’re only going to slow things down?

Good question. These are diagnostic plugins however. The two in question were WP-Optimize and P3-Profiler. My intention was to do some housekeeping and analysis with these to determine if there was an issue with the whole WordPress installation or any of the plugins in use.

Step 4: Optimise

First up, I looked at WP-Optimize. When you click on WP-Optimize in the admin menu, you’re given a fairly lengthy page detailing the database optimisations that the plugin thinks can be made.

First of all, there’s a summary of potential remedial actions for your database:

The items in red are potentially dangerous and are exactly why you’ve made a backup already, right?

What showed up the first time I saw this screen was over 20,000 items listed under “transient options”. WordPress creates these automatically when required, but they can apparently be safely removed. “Optimize database tables” will run MySQL optimisation on your database’s tables. Some web hosts don’t allow this – luckily mine does.
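For the curious, the transient clean-up boils down to a SQL delete along these lines. This is a hedged sketch, not the plugin’s exact query: “wp_” is the default WordPress table prefix (yours may differ), the database name and user are placeholders, and the command is echoed as a dry run.

```shell
# Roughly what a transient clean-up does. The wp_ prefix is the
# WordPress default; credentials are placeholders. Echoed as a dry run.
# The backslashes stop MySQL treating _ as a single-character wildcard.
SQL="DELETE FROM wp_options WHERE option_name LIKE '\_transient\_%';"
echo mysql -u wp_user -p wordpress_db -e "$SQL"
```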

Below the remedial actions is a list of the tables themselves, along with their size, an assessment of their optimisation and the potential saving available. As you can see from the screenshot, the 341 transient options that have accrued since I first cleaned up the database are using about 2MB of space. The plugin also believes that the table needs optimisation.

After running this through for the first time, the database size dropped from 78MB to about 7MB (which improved the backup time considerably). Sadly, the overall performance of the site was still not great.

Step 5: Measure Plugin Performance

Enter P3-Profiler. This plugin is developed by GoDaddy (yes, the webhosting people). It measures the load and execution time of a few of your site’s pages and breaks it down by plugin. You execute the scan from the “Plugins” menu.

The scan takes a few moments to run but then displays some useful information about the various plugins that you still have installed and active.

6.5 seconds?!? Wow, allegedly that’s how long the numerous plugins that I had installed were adding to my page load time.

Step 6: Get Rid of the Slow Plugins

I won’t go into all of the analysis that can be done with P3-Profiler, but I did use the information to refine the list of plugins in use:

  • I disabled NextGEN Gallery as it seemed to take the longest of all.
  • I disabled any of the JetPack options that I wasn’t using / relying on.
  • I removed the CloudFlare plugin (as I couldn’t get CloudFlare working anyway).

The result (according to the profiler) was about an 80% reduction in the plugin load time. JetPack still takes the longest but it’s better.

The qualitative benefit to load times was impressive. The site felt pretty quick now.

Step 7: CloudFlare

For the final step, I was now able to configure CloudFlare to provide a bit of a boost and a degree of protection. What do you think, is the site quick enough?

Hope some of this helps…