Migrating from WordPress to Hugo Part 4: Securing the Site with SSL

I originally drafted this in June 2018. Following on from Sam McGeown’s recent migration to Hugo, I thought I’d finally publish this in case it’s useful for anyone rather than sitting on it until I complete the process!

Why SSL?

It’s only a blog, so why SSL? It’s going to be static content, so why SSL?

In this article I’ll deal with those questions and go through the process of requesting an SSL certificate using AWS Certificate Manager.

Let’s Go Secure!

If you recall, our finishing point is going to be a collection of static HTML files served out by AWS. There’s nothing particularly risky about serving up or requesting such static files; it’s how the internet started out, after all. What’s different now though is people’s perception of risk and privacy, and how that’s reflected in the technology we use.

Google, for example, rank SSL sites slightly higher in their search results than non-SSL sites and have done since 2014. Some modern browsers have started flagging warnings on non-SSL sites, and this will likely become more prominent over time. Users are becoming more picky and aware as a result, or perhaps they’re driving the changes to an extent. SSL is here to stay though, and it’s worth setting up, especially if it’s free!

AWS offer public SSL certificates for free. Let’s go and set one up!

We could either do this via the AWS console or using the CLI. At this time I haven’t worked out how to do the whole process via the CLI, but I’m going to start there. (Note: for CloudFront, I think that the certificate has to be in the us-east-1 region regardless.)

This command requests a new SSL certificate with a subject name of “mpoore.uk” and alternative names of “www.mpoore.uk”, “michaelpoore.com” and “www.michaelpoore.com”. The DNS validation method will require us to prove that we own the domains by creating certain DNS entries.
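The command itself didn’t survive the draft. As a sketch, assuming the AWS CLI’s ACM support and the domain names above (region chosen per the CloudFront note), it would look something like this:

```shell
# Request a public certificate with DNS validation.
# ACM certificates for use with CloudFront must live in us-east-1.
aws acm request-certificate \
    --region us-east-1 \
    --domain-name mpoore.uk \
    --subject-alternative-names www.mpoore.uk michaelpoore.com www.michaelpoore.com \
    --validation-method DNS
```

The command returns a small JSON document containing the certificate’s ARN.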

What you get back is a reference to the certificate (its ARN). I’ll need that later.

Looking at the AWS Console though, you’ll see that the certificate is not yet issued and must be validated.

To validate each of the domains in the certificate, you need to get some DNS CNAMEs created. Luckily, for mpoore.uk there’s a button for that. For michaelpoore.com though, I had to do these manually as the DNS for that is still with 1&1 for the time being.
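For the manual entries, the required CNAME names and values can also be pulled back out of ACM with the CLI. This is a sketch; the ARN below is a placeholder:

```shell
# List the CNAME name/value pairs that ACM wants for DNS validation
aws acm describe-certificate \
    --region us-east-1 \
    --certificate-arn arn:aws:acm:us-east-1:123456789012:certificate/EXAMPLE \
    --query 'Certificate.DomainValidationOptions[].ResourceRecord'
```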

Once they’re all done, the validation will eventually complete and the certificate will be issued. Just save the certificate ARN value from earlier as it’ll be needed later.

Sadly, this is as far as I got in the process before other things (life, eh) got in the way. I will be back to revisit and complete the process though.

Migrating from WordPress to Hugo Part 3: Hosted Zones in Route53

I originally drafted this in June 2018. Following on from Sam McGeown’s recent migration to Hugo, I thought I’d finally publish this in case it’s useful for anyone rather than sitting on it until I complete the process!

It’s Always DNS

When things go wrong in the IT world, DNS misconfiguration often sits at the root of the problem. It’s important to get it right, not only for correct functioning but also because some of the subsequent steps depend on it.

As part of my migration of this blog to Hugo, I’m placing one of the two domains I’ll be using under the control of AWS Route53 (Amazon’s DNS service). I’ll move the other one in time as well.

Creating a Hosted Zone

I tend to use separate providers for domain registration and hosting as I’ve found it easier to move my site(s) around when you can just update the domain’s nameserver (NS) records to point to the new provider rather than have to transfer the domain as well. Practically all of my domains (I host a couple of sites for local community interests too) are registered through FastHosts.

AWS cater for this sort of arrangement in Route53 too. From the Route53 dashboard, all I had to do was select “Hosted zones” from the menu and then click the “Create Hosted Zone” button.

All you need to do is enter the domain name and leave the type at its default value (“Public Hosted Zone”).

The zone is created for you, and AWS helpfully tells you which nameservers need to be set:
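For completeness, the same thing can be done from the CLI. A sketch, assuming my domain (the caller reference just needs to be unique per request):

```shell
# Create the hosted zone and print the nameservers to delegate to
aws route53 create-hosted-zone \
    --name mpoore.uk \
    --caller-reference "mpoore-uk-$(date +%s)" \
    --query 'DelegationSet.NameServers'
```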

All I then had to do was apply those nameservers to the domain in FastHosts:

Once the dust settles, DNS requests for mpoore.uk will go to AWS for resolution. That’s important because I want to set my site up with an SSL certificate (Amazon will give you them for free) and validation requires DNS.

So let’s do that next…

Addressing errors when using vRA 7.4 Guest Agent

Having upgraded my vRA instance in my HomeLab to 7.4 not long after it GA’d, I recently decided to create some nice, new templates as well. You know, latest patches, hardware, basic config etc.

I won’t bore you with exactly how I installed Windows, patched it or configured it. The part of the process relevant to this article was the installation of the vRA Guest Agent and the subsequent testing of it.

I already had my vRA blueprint configured; just a simple Windows Server 2012 R2 template that has the vRealize Log Insight agent installed and configured on it automatically as part of the provisioning process:

Installing the vRA Guest Agent

This is a documented process to install the software agent on a virtual machine that is subsequently turned into a template. For vRA 7.4, the documentation can be found on VMware’s site:

Install the Guest Agent on a Windows Reference Machine

Prepare a Windows Reference Machine to Support Software

This worked without issue. I shut down the machine, turned it into a template and updated my blueprint.

Testing the new template

This is where I ran into an issue.


Howto: Creating a CA template for VMware services

Having set up my lab’s PKI infrastructure previously, my next step was to create a certificate template for VMware’s products to use, as they require certain properties to be present in the certificates.

There is a KB article that covers this but I wanted to run through it and use some of the specifics for my lab.

Template for VMware SSL Certificates

This template will provide certificates for ESXi hosts, vCenter, vRA, vRO etc. To create it, we first need the Certificate Templates Console. This can be opened by running certtmpl.msc.

Per the KB article, I duplicated the “Web Server” template as a starting point. My first task was to give the template a new name and set the validity to 4 years:


On the Extensions tab, although it’s possibly not required for vSphere 6 (it is for earlier versions of vSphere), I added “Client Authentication” under the Application Policies option.


Again, it may not be universally required, but I added the “Signature is proof of origin” option under Key Usage (also on the Extensions tab).


Depending on the use case required, it might be useful to be able to export a certificate’s private key. I haven’t worked on View for some years but this option came in handy then. It’s configured under the Request Handling tab.


On the Subject Name tab, ensure that “Supply in the request” is checked.


That’s it. Just hit OK to save it.

Template for VMware VMCA

If you want to set up the VMCA as a subordinate certificate authority on a vSphere 6 Platform Services Controller, a slightly different type of certificate is required. I don’t think that I deviated from the KB article here except with the validity period.



“Publishing” the certificate templates

This is a fairly straightforward process accomplished using the Certification Authority manager. Templates are added one at a time by right-clicking “Certificate Templates” and selecting New > Certificate Template to Issue.


Once published, the templates are available via the CA’s web interface for new requests.
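If you prefer the command line, templates can also be published with certutil. The template name below is whatever you called yours; mine is an assumption:

```powershell
# Publish (issue) an existing certificate template on this CA
certutil -SetCATemplates +VMwareSSL
```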



Howto: Configuring a homelab online subordinate CA

A quick recap of where I got to. I have an offline Root CA (well, it’s still online because I’ll need it in a minute) and I’ve created a website on my online subordinate CA server to host the Root CA certificate and CRL files.

The purpose of the subordinate CA is to handle certificate signing and revocation for all services in my infrastructure that require them. It will be granted the authority to do so by the Root CA. So this post covers the remaining steps of the process, which are:

  • Installing and configuring the subordinate CA
  • Signing the subordinate CA’s certificate using the Root CA
  • Delegating control of the subordinate CA to someone other than Domain Admins

Some elements of this process are very similar to the process of setting up the Root CA in the first place but there are some differences.

Installing the subordinate CA

As with the Root CA, I’ve done a lot of this via the command line. Other methods are available.

Using a PowerShell prompt on the subordinate CA, create and edit a CAPolicy.inf file using this command:

Insert the following text, taking care to make sure that the Certificate Practice Statement URL is specific to your environment. It shouldn’t break things if it’s wrong, as it’s just the location of that file.

There are fewer options used than for the Root CA.
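The original file contents didn’t make it into this draft, so here is a minimal sketch of what a subordinate CA’s CAPolicy.inf might look like. The URL is my lab’s CPS location and the OID is a placeholder you should replace:

```ini
[Version]
Signature="$Windows NT$"

[PolicyStatementExtension]
Policies=InternalPolicy

[InternalPolicy]
OID=
Notice="Certification Practice Statement"
URL=http://pki.o11n.lab/pki/cps.txt

[Certsrv_Server]
LoadDefaultTemplates=0
```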

To install the CA using these settings (make sure that the file is named “CAPolicy.inf” exactly, no extra file extensions), the following commands were used:
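The exact commands are lost to time, but assuming the ADCSDeployment PowerShell module, they would have looked roughly like this:

```powershell
# Add the CA role and configure an enterprise subordinate CA
Install-WindowsFeature ADCS-Cert-Authority -IncludeManagementTools
Install-AdcsCertificationAuthority `
    -CAType EnterpriseSubordinateCA `
    -CACommonName "O11N-IssuingCA-01" `
    -KeyLength 2048 `
    -HashAlgorithmName SHA256 `
    -CryptoProviderName "RSA#Microsoft Software Key Storage Provider"
```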

If you’re following this, change the CA’s common name (O11N-IssuingCA-01 in my case) to suit your environment. What you’ll see at the end of it though is a big yellow warning that the installation is incomplete. This is because the CA’s certificate now needs signing and issuing by the Root CA.

Requesting and signing the subordinate CA certificate

The warning text mentioned above tells you where the certificate request has been saved. In my case, it was saved as C:\ca-01.o11n.lab_O11N-IssuingCA-01.req. This file must be copied to the Root CA so that it can sign and issue the relevant certificate for the subordinate CA.

On the Root CA, the certificate request can be submitted using the command:
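I didn’t keep a copy of the command, but assuming certreq, it would be along these lines. The -config value is your Root CA’s host\CA name; mine here is an assumption:

```powershell
# Submit the subordinate CA's request to the Root CA
certreq -submit -config "rootca.home.lab\O11NRootCA" C:\ca-01.o11n.lab_O11N-IssuingCA-01.req
```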

If it works, you’ll see that the request is now pending. Take note of the request number. (It was “2” in my case.)


To sign and issue the certificate, you need the Certification Authority tool. Find the request and Issue it.


The issued certificate can easily be exported using the following command:
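Again as a sketch, assuming certreq and request number 2 (the Root CA’s config string is an assumption):

```powershell
# Retrieve the issued certificate by its request ID
certreq -retrieve -config "rootca.home.lab\O11NRootCA" 2 C:\O11N-IssuingCA-01.crt
```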

The certificate then needs copying back to a temporary location on the subordinate CA. Once there, it can be installed and the certificate authority service started:
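Assuming the certificate landed in C:\temp, something like:

```powershell
# Install the signed CA certificate and start the CA service
certutil -installCert C:\temp\O11N-IssuingCA-01.crt
Start-Service certsvc
```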

As a final step, copy the contents of C:\Windows\System32\certsrv\CertEnroll to C:\pki. It now looks like this:


Configuring the subordinate CA CRLs

As with the Root CA, I wanted to set up the distribution points for the CRLs to suit my environment.

The first step is to remove the existing points:

Next, I created three new CRL distribution points.

  • The first point is to a local file on the CA server:

  • The second point is to a web URL that is hosted locally on the subordinate CA:

  • The third and final distribution point is to the “pki” virtual directory:
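The individual commands are missing from this draft, but one way to achieve the same result is a single certutil call, which replaces the whole list (removal included) in one go. The paths and URLs are my lab’s, and the numeric prefixes are the standard publish/include flags:

```powershell
# 65 = publish CRLs (and deltas) to this path; 6 = include this URL in issued certificates
certutil -setreg CA\CRLPublicationURLs "65:C:\Windows\System32\CertSrv\CertEnroll\%3%8%9.crl\n6:http://ca-01.o11n.lab/CertEnroll/%3%8%9.crl\n6:http://pki.o11n.lab/pki/%3%8%9.crl"
```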


Configuring the subordinate CA AIA settings

As with the Root CA, configuring the AIA settings is necessary.

As before, we’re going to clear out the original settings first:

Next, the “certutil” command is used to configure a few default settings:

Finally, we restart the CA service and re-publish the CRLs.
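For reference, the equivalent commands would look something like this, with my lab’s URL assumed:

```powershell
# 1 = publish to this location; 2 = include in the AIA extension of issued certificates
certutil -setreg CA\CACertPublicationURLs "1:C:\Windows\System32\CertSrv\CertEnroll\%1_%3%4.crt\n2:http://pki.o11n.lab/pki/%1_%3%4.crt"

# Restart the CA and publish a fresh CRL
Restart-Service certsvc
certutil -crl
```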

That’s the online subordinate CA configured.

Configuring CA permissions

In a lab environment, particularly mine, it’ll be used by very few people. All the same, it’s good practice to configure proper delegated security controls. After all, I don’t want to give every account Domain Admin access just so that they can request and issue certificates. To that end, I created a couple of Active Directory security groups that I used to govern access to the CA.


To use these groups, all that is required is to open the Certification Authority manager and open the properties for the CA. On the Security tab, I just added the groups and assigned them relevant permissions.


Next steps

Now that my lab PKI infrastructure is set up, one of the key tasks is to start using it! This means generating signed and trusted certificates for vSphere and vRA, for example. For that, I might need to set up some certificate templates. If web enrollment is to be used to do this, the subordinate CA’s IIS website needs configuring with an SSL certificate. The instructions in this Microsoft article describe how that is done.

It’s important not to forget though that there are some remedial tasks to complete. Both the Root CA and the Subordinate CA could do with being hardened. The Root CA in particular needs to be kept safe and taken offline.


Howto: Publishing offline Root CA certs and CRLs

Previously, I set up an offline Root CA in my homelab with the intention of emulating a PKI setup that many enterprises seem to run.

The second stage of this process is publishing the Root CA certificate and CRL in a place that they can be accessed when the Root CA is offline. If you recall, I configured the Root CA to publish its CRL etc to a location on pki.o11n.lab. I now need to create that.

The Server

Rather than run my lab’s online CA on a domain controller, which might be tempting but causes other issues, I have a domain-joined server set up that will eventually become my online subordinate CA.

It’s a vanilla Windows 2012 R2 server as before and a domain member.


The VM is called “ca-01”, but I need to have pki.o11n.lab pointed to it too. That is accomplished fairly easily with a quick DNS CNAME record:


Root CA Certificate and CRL

If this wasn’t a lab environment, I’d probably add a disk to my online CA VM to host its data. Instead, I’ve created a folder called C:\pki and copied over the Root CA certificate and CRL files.


I’ve also shared this folder and granted permissions as follows:

  • Domain Admins (Full Control)
  • SYSTEM (Full Control)
  • Cert Publishers (Change)


Publish to Active Directory

By publishing the Root CA certificate into Active Directory, any domain-joined Windows servers will automatically trust it. Standalone servers (and VMware appliances) will, of course, need to be instructed manually to trust the Root CA, but I’ll take care of that as and when the need arises.

To publish the certificate to AD and trust it locally, the following commands were used:
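These are standard certutil operations; assuming the Root CA certificate was copied to C:\pki (the filename is mine), they look like this:

```powershell
# Publish the Root CA certificate into AD (domain members will pick it up)
certutil -dspublish -f C:\pki\O11NRootCA.crt RootCA

# Also trust it in this server's local machine store
certutil -addstore -f Root C:\pki\O11NRootCA.crt
```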


CPS, CRL and Root CA Certificate Distribution

Before configuring HTTP distribution of the various files, I need to create the cps.txt file that was referenced in the previous article.

For me, that was simply accomplished by creating C:\pki\cps.txt and populating it with some sample text.

Next up, I installed IIS to serve up the files in C:\pki. That was completed by executing the following command on the online CA server:
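A likely form of that command, using PowerShell:

```powershell
# Install the IIS web server role plus management tools
Install-WindowsFeature Web-WebServer -IncludeManagementTools
```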

Once that is completed, I created a new Virtual Directory for the PKI files:
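A sketch using the WebAdministration module, assuming the default site:

```powershell
# Expose C:\pki as /pki under the default IIS site
Import-Module WebAdministration
New-WebVirtualDirectory -Site "Default Web Site" -Name pki -PhysicalPath C:\pki
```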

It’s probably easier to do the next bit in the IIS Manager. The permissions for the Virtual Directory need configuring as follows:

  • Add “Cert Publishers” – allow Modify permission
  • Add “IIS AppPool\DefaultAppPool” – allow default permissions (read etc)

Note that with the second addition, it’s a server local account and can’t be found by browsing the local accounts. Sorry, you have to type it in verbatim.


The final step is to allow double escaping in the request filtering settings. With the “pki” virtual directory selected, double-click “Request Filtering” in the centre pane. Then select Edit Feature Settings. Tick the checkbox for “Allow double escaping”.
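If you’d rather not click through IIS Manager, appcmd can flip the same setting (double escaping is needed because CRL filenames can contain “+”):

```powershell
# Allow double escaping for the pki virtual directory
C:\Windows\System32\inetsrv\appcmd.exe set config "Default Web Site/pki" /section:system.webServer/security/requestFiltering /allowDoubleEscaping:true
```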


There is now a website available that hosts the cps.txt file (see below), the Root CA certificate and CRL.


The final stage is configuring the online subordinate CA. I’ll cover that in the next post.


Howto: Configuring a homelab offline Root CA

Self-signed SSL certificates are all well and good, but they’re not meant for the real world. The trust issues they cause can be a headache on customer projects, and anything that’s going into production shouldn’t be using them.

For that reason, I thought it’d be better to change my homelab so that it uses a slightly more realistic PKI setup. The first phase of that is creating an offline Root CA as it’s something that a good number of customers use too.

Step 1: DNS

From a DNS perspective, my homelab is split up so that anything physical and fundamental to the lab (e.g. storage / NAS, physical hosts, switches etc) lives in its own DNS domain (home.lab). Everything else, from vCenter and AD downwards, is in one or more other DNS domains and on separate VLANs etc. Even though I’m planning to set up my Root CA as a VM, I’m going to keep it in the same DNS domain as the physical stuff because that seems to be a better fit for it.

So before doing anything else, I needed to create DNS entries for my Root CA on my NAS (which doubles as DNS master for the home.lab domain).


Of course, the idea of an offline CA is that it’s not connected to a network. For the installation and setup of it though, I want to make sure that it’s patched etc.

Step 2: Deploy a VM

This sort of thing is bread and butter really. It’s just a vanilla Windows 2012 R2 VM that is NOT domain joined, but I have set up its network identity (name, DNS name and IP details) and connected it to a portgroup that my other lab infrastructure exists on.

Step 3: Add the CA Role

Rather than include blow-by-blow screenshots from Server Manager, I performed the installation through the command line as much as possible. The first step is creating a configuration file.

Using a PowerShell prompt, create and edit a CAPolicy.inf file using this command:

Insert the following text, taking care to make sure that the Certificate Practice Statement URL is specific to your environment. It shouldn’t break things if it’s wrong, as it’s just the location of that file.

When the CA is installed using the .inf file above, its certificate will be valid for 20 years and the validity of the CRL will be 26 weeks.
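As with the subordinate CA post, the file contents are missing here, so this is a sketch consistent with those numbers (20-year certificate, 26-week CRL). The URL is my lab’s CPS location and the OID is a placeholder:

```ini
[Version]
Signature="$Windows NT$"

[PolicyStatementExtension]
Policies=InternalPolicy

[InternalPolicy]
OID=
Notice="Certification Practice Statement"
URL=http://pki.o11n.lab/pki/cps.txt

[Certsrv_Server]
RenewalKeyLength=2048
RenewalValidityPeriod=Years
RenewalValidityPeriodUnits=20
CRLPeriod=Weeks
CRLPeriodUnits=26
LoadDefaultTemplates=0
```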

To install the CA using these settings (make sure that the file is named “CAPolicy.inf” exactly, no extra file extensions), the following commands were used:
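Assuming the ADCSDeployment PowerShell module again, the commands would have been roughly:

```powershell
# Add the CA role and configure a standalone (workgroup) Root CA
Install-WindowsFeature ADCS-Cert-Authority -IncludeManagementTools
Install-AdcsCertificationAuthority `
    -CAType StandaloneRootCA `
    -CACommonName "O11NRootCA" `
    -KeyLength 2048 `
    -HashAlgorithmName SHA256 `
    -ValidityPeriod Years -ValidityPeriodUnits 20
```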

If you’re following this, change the CA’s common name (O11NRootCA in my case) to suit your environment.

Step 4: Configure CRLs

By default, this CA will publish its Certificate Revocation List (CRL) in locations that aren’t useful as the server will be powered off for most of its life. Therefore the CRL distribution points needed reconfiguring.

The first step is to remove the existing points:

Next, I created two new CRL distribution points. For both, “%3%8” will be substituted by the CA with the CA’s name and the CRL Name Suffix values:

  • The first point is to a local file on the CA server:

  • The second point is to a web URL that I’ll host on another server later:

The second distribution point will be added to the CA’s certificate so that other servers know where to get the CRL from.
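A one-shot certutil equivalent of the two distribution points, with my lab’s URL assumed (this replaces the whole list, removal included):

```powershell
# 1 = publish the CRL to this file path; 2 = include this URL in issued certificates
certutil -setreg CA\CRLPublicationURLs "1:C:\Windows\System32\CertSrv\CertEnroll\%3%8.crl\n2:http://pki.o11n.lab/pki/%3%8.crl"
```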

Step 5: Configure AIA

Authority Information Access locations are URLs that are added to a certificate in its authority information access extension. These URLs can be used by an application or service to retrieve the issuing CA certificate.

As before, we’re going to clear out the original settings first:

Next, the “certutil” command is used to configure a few default settings:

Finally, we restart the CA service and re-publish the CRLs.
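The commands themselves are missing from this draft; a plausible reconstruction, with my lab’s URL assumed:

```powershell
# Point the AIA extension at the location the CA certificate will be published to
certutil -setreg CA\CACertPublicationURLs "1:C:\Windows\System32\CertSrv\CertEnroll\%1_%3%4.crt\n2:http://pki.o11n.lab/pki/%1_%3%4.crt"

# Restart the CA and publish a fresh CRL
Restart-Service certsvc
certutil -crl
```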

That’s the offline Root CA configured. If you look at the C:\Windows\System32\CertSrv\CertEnroll folder, you’ll see the CRL and CA certificate files. It’s a good idea to copy these off the server as they’ll be needed to configure an online subordinate CA. I’ll cover that off in the next post.



Installing SQUID on a Synology NAS

I’m not going to go into exactly why (it’s a minor networking niggle following on from a change in broadband provider) but I wanted a simple HTTP proxy in my lab so that my lab VMs could get out on to the internet. Mostly for installing updates etc.

Since my NAS is ideally placed in my network I thought that I’d use that. It’s only a short-term thing anyway.

Now in order to get a proxy service on to the NAS, I needed to set up IPKG first. This allowed me to install and configure SQUID as follows:

1. Open an SSH session to the NAS

2. Download and install SQUID

[text]ipkg install squid[/text]

3. Perform a couple of configuration commands

[text]squid -k parse
squid -z
ln -s /opt/etc/init.d/S80squid /usr/syno/etc/rc.d/[/text]

4. SQUID can now be started using /opt/etc/init.d/S80squid start

Now there may be some additional changes you might want to make. By default SQUID will accept connections from a standard set of internal networks as follows:

[text]acl localnet src     # RFC1918 possible internal network
acl localnet src  # RFC1918 possible internal network
acl localnet src # RFC1918 possible internal network[/text]

However, you may want to tie this down for one reason or another. For me, I’m not too bothered. Any changes can be done by editing /opt/etc/squid/squid.conf.

Want to check that it works? I configured the proxy config in my VM as follows:
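The screenshot is gone, but on a Windows VM the system-wide (WinHTTP) proxy can be set like this. The NAS address below is an example, and 3128 is squid’s default listening port:

```shell
netsh winhttp set proxy proxy-server=""
```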


And then monitored the proxy access file using tail:

[text]tail -f /opt/var/squid/logs/access.log
1409911439.044    141 TCP_MISS/200 405 HEAD http://download.windowsupdate.com/v9/windowsupdate/redir/muv4wuredir.cab? – DIRECT/ application/octet-stream
1409911439.138     80 TCP_MISS/200 24017 GET http://download.windowsupdate.com/v9/windowsupdate/redir/muv4wuredir.cab? – DIRECT/ application/octet-stream
1409911445.219     46 TCP_MISS/200 405 HEAD http://ds.download.windowsupdate.com/v11/3/windowsupdate/selfupdate/WSUS3/x64/Win7SP1/wsus3setup.cab? – DIRECT/ application/octet-stream
1409911445.246     16 TCP_MISS/200 34418 GET http://ds.download.windowsupdate.com/v11/3/windowsupdate/selfupdate/WSUS3/x64/Win7SP1/wsus3setup.cab? – DIRECT/ application/octet-stream[/text]


How to install IPKG on a Synology NAS

Sometimes you want to install “community” or third-party packages on your Synology NAS and they require IPKG (the Itsy Package Management System) to be present. Instructions on how to go about this seem to vary and are often specific to the CPU inside your NAS. The easiest method that I’ve found for getting IPKG installed is as follows…

First job is to open an SSH session to the NAS and confirm what type of processor it has. This can be done using the following command:

[text]cat /proc/cpuinfo | grep model[/text]

For my DS1513+ it returns:

[text]model: 54
model name: Intel(R) Atom(TM) CPU D2701   @ 2.13GHz
model: 54
model name: Intel(R) Atom(TM) CPU D2701   @ 2.13GHz
model: 54
model name: Intel(R) Atom(TM) CPU D2701   @ 2.13GHz
model: 54
model name: Intel(R) Atom(TM) CPU D2701   @ 2.13GHz[/text]

Next you need to dig around the site http://ipkg.nslu2-linux.org/feeds/optware/ to find the correct bootstrap for your architecture. In my case it’s at http://ipkg.nslu2-linux.org/feeds/optware/syno-i686/cross/unstable/syno-i686-bootstrap_1.2-7_i686.xsh.

To install it, there are a couple of steps…

1. Within your SSH session, change to a temporary location (note that you will probably need to be logged in as root to do all this)

[text]cd /tmp[/text]

2. Download the bootstrap script

[text]wget http://ipkg.nslu2-linux.org/feeds/optware/syno-i686/cross/unstable/syno-i686-bootstrap_1.2-7_i686.xsh[/text]

3. Make the file executable

[text]chmod +x syno-i686-bootstrap_1.2-7_i686.xsh[/text]

4. Run the script

[text]sh syno-i686-bootstrap_1.2-7_i686.xsh[/text]

5. If it all went well, remove the script

[text]rm syno-i686-bootstrap_1.2-7_i686.xsh[/text]

6. Update the package list

[text]ipkg update[/text]

Well done, you’re now ready to install custom packages via ipkg.


Windows VM Keyboard Layout in Fusion

After I rebuilt my Windows 7 VM in Fusion some months ago, I never quite got around to sorting out the keyboard layout. Having changed job roles recently, I’m now using my Windows VM a lot more and the keyboard layout is becoming a PITA! Specifically, having the @ and ” keys the wrong way round can cause plenty of authentication and email sending problems.

It’s pretty easy to sort out though. You can either:

  1. Go download the Microsoft Keyboard Layout Creator and build your own layout to install.
  2. Install the Boot Camp drivers into your VM.

Rather than reinvent the wheel (option 1), this is how to get up and running with option 2…

  1. Assuming you’re running OSX Lion or Mountain Lion, fire up the Boot Camp Assistant. Continue through the welcome screen and select only to download the support software from Apple.
    You’ll need to save it to a USB stick / drive. Make yourself some tea or do something else whilst you wait (it seems to take a while). Alternatively, you can download a zip file direct from Apple. Either way it’s in the region of 500MB. I opted for the direct download of the zip file.
  2. Once expanded, locate the BootCamp/Drivers/Apple folder and copy it to a local Windows drive by whatever means works best.
  3. Open a command prompt as administrator (Open Start Menu, locate Command Prompt, right click it and select “Run as administrator”).
  4. Navigate to the Apple folder copied above (I put mine in C:\Apple). Run BootCamp.msi.
  5. Next, Next, Finish.
  6. Restart your Windows VM.
  7. Hey Presto!

Ok, so Windows has added the annoying keyboard thing to my system tray area but that’s fixable and it’s picked up the Apple keyboard without me having to lift another finger. My @ and my ” are back the right way round again!