Migrating from WordPress to Hugo Part 4: Securing the Site with SSL

I originally drafted this in June 2018. Following on from Sam McGeown’s recent migration to Hugo, I thought I’d finally publish this in case it’s useful for anyone rather than sitting on it until I complete the process!

Why SSL?

It’s only a blog, and it’s only going to serve static content, so why bother with SSL?

In this article I’ll deal with those questions and go through the process of requesting an SSL certificate using AWS Certificate Manager.

Let’s Go Secure!

If you recall, our finishing point is going to be a collection of static HTML files served up by AWS. There’s nothing particularly risky about serving or requesting such static files; it’s how the internet started out, after all. What’s different now, though, are people’s perceptions of risk and privacy and how those are reflected in the technology we use.

Google, for example, have ranked SSL sites slightly higher in their search results than non-SSL sites since 2014. Some modern browsers have started flagging warnings about non-SSL sites, and this will likely become more prominent over time. Users are becoming more aware and more picky as a result, or perhaps they’re driving the changes to an extent. Either way, SSL is here to stay and it’s worth setting up, especially if it’s free!

AWS offer public SSL certificates for free. Let’s go set one up!

We could do this either via the AWS console or using the CLI. At this time I haven’t worked out how to do it completely via the CLI, but I’m going to start there. (Note: for use with CloudFront, I think the certificate has to be in the us-east-1 region regardless.)

The following command requests a new SSL certificate with a subject name of “mpoore.uk” and alternative names of “www.mpoore.uk”, “michaelpoore.com” and “www.michaelpoore.com”. The DNS validation method requires us to prove that we own each domain by creating certain DNS entries.
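Using the AWS CLI, the request looks something like this (a sketch reconstructed from the description above; the domain names are mine):

```shell
# Request a free public certificate from ACM. For use with CloudFront the
# certificate needs to live in us-east-1. DNS validation is requested so
# that ownership of each domain can be proven with CNAME records.
aws acm request-certificate \
  --domain-name mpoore.uk \
  --subject-alternative-names www.mpoore.uk michaelpoore.com www.michaelpoore.com \
  --validation-method DNS \
  --region us-east-1
```

The response is a small JSON document containing a CertificateArn value.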

What you get back is a reference to the certificate (its ARN). I’ll need that later.

Looking at the AWS Console though, you’ll see that the certificate is not yet issued and must be validated.

To validate each of the domains in the certificate, you need to create some DNS CNAME records. Luckily, for mpoore.uk there’s a button for that. For michaelpoore.com, though, I had to create them manually as the DNS for that domain is still with 1&1 for the time being.
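If you need the record names and values for the manual entries, they can also be pulled out with the CLI (the certificate ARN shown here is a placeholder):

```shell
# List the DNS validation CNAMEs for every domain on the certificate.
# Substitute the ARN returned by the earlier request-certificate call.
aws acm describe-certificate \
  --certificate-arn arn:aws:acm:us-east-1:123456789012:certificate/example-id \
  --region us-east-1 \
  --query 'Certificate.DomainValidationOptions[].ResourceRecord'
```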

Once they’re all done, the validation will eventually complete and the certificate will be issued. Just save the certificate ARN value from earlier as it’ll be needed later.

Sadly, this is as far as I got in the process before other things (life, eh?) got in the way. I will be back to revisit and complete the process though.

Migrating from WordPress to Hugo Part 3: Hosted Zones in Route53

I originally drafted this in June 2018. Following on from Sam McGeown’s recent migration to Hugo, I thought I’d finally publish this in case it’s useful for anyone rather than sitting on it until I complete the process!

It’s Always DNS

When things go wrong in the IT world, DNS misconfiguration often sits at the root of the problem. It’s important to get it right, not only for correct functioning but also because some of the subsequent steps depend on it.

As part of my migration of this blog to Hugo, I’m placing one of the two domains I’ll be using under the control of AWS Route53 (Amazon’s DNS service). I’ll move the other one in time as well.

Creating a Hosted Zone

I tend to use separate providers for domain registration and hosting as I’ve found it easier to move my site(s) around when you can just update the domain’s nameserver (NS) records to point to the new provider rather than have to transfer the domain as well. Practically all of my domains (I host a couple of sites for local community interests too) are registered through FastHosts.

AWS cater for this sort of arrangement in Route53 too. From the Route53 dashboard, all I had to do was select “Hosted zones” from the menu and then click the “Create Hosted Zone” button.

All you need to enter is the domain name; leave the type at its default value (“Public Hosted Zone”).

The zone is created for you, and AWS helpfully tells you which nameservers need to be set:
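For the record, the CLI equivalent is a one-liner too (a sketch; the caller reference just has to be a unique string):

```shell
# Create a public hosted zone for the domain. The nameservers to set at the
# registrar are returned in the DelegationSet section of the response.
aws route53 create-hosted-zone \
  --name mpoore.uk \
  --caller-reference "hugo-migration-$(date +%s)"
```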

All I then had to do was apply those nameservers to the domain in FastHosts:

Once the dust settles, DNS requests for mpoore.uk will go to AWS for resolution. That’s important because I want to set my site up with an SSL certificate (Amazon will give you one for free), and validation requires DNS.

So let’s do that next…

Migrating from WordPress to Hugo Part 2: Basic Tooling

I originally drafted this in June 2018. Following on from Sam McGeown’s recent migration to Hugo, I thought I’d finally publish this in case it’s useful for anyone rather than sitting on it until I complete the process!

Summary of Tools Used

These are the tools I’ll be using during my migration of my WordPress blog to Hugo (in AWS):

  • Github
  • SourceTree (git client)
  • Homebrew
  • Hugo
  • AWSCLI
  • Sublime (text editor)
  • AWS S3
  • AWS CloudFront
  • AWS Certificate Manager
  • AWS Route53
  • Filezilla

Building a Toolkit

I’m a Mac user. I have been for a number of years and I don’t plan to switch anytime soon. Most of the tools that I’ll be using either have Windows / Linux versions or there are similar tools available for those OSs. I’ll try not to go into too much OSX-specific detail about any of them, and, if you’re following this process, you might have to adapt to whatever tooling works best for you.

A good number of tools listed above are web-based or cross-platform so shouldn’t present a big problem for anyone. I will be using the command line when I can, hence the inclusion of AWSCLI.

Probably the most OSX-specific tool in that list is Homebrew (aka “Brew”). It’s a package manager for OSX, and I’ll be using it to install Hugo and AWSCLI on my laptop. If you’re a Windows user, try Chocolatey instead. If you’re a Linux user, use whatever package manager comes with your distro.

Naturally, the use of AWS services means that you need an AWS account of your own. I’m going to assume that you have one and have got it to a point where you can consume the services above.

Installing AWSCLI and Hugo

Let’s assume that we’ve got Brew installed (it’s easy, the instructions are right there on the homepage). Installing AWSCLI and Hugo is straightforward too!

First, AWSCLI. Just type the following into a terminal window:
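Assuming a standard Brew setup, that’s simply:

```shell
brew install awscli
```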

Once installed, you’ll need to execute the following command to configure AWSCLI with your Access Key ID and Secret Access Key:
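That’s the interactive configure command, which prompts for the two keys plus a default region and output format:

```shell
# Prompts for the Access Key ID, Secret Access Key, default region
# (e.g. us-east-1) and default output format (e.g. json)
aws configure
```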

Now let’s do Hugo. Can you guess the command? (I still managed to mistype it!)
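Yes, it’s exactly what you’d expect:

```shell
brew install hugo
```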

Start Your Engines

So, we know how our journey will start and what we expect to find when we get there. We’ve just packed the car. Let’s get going!

Migrating from WordPress to Hugo Part 1: Overview

I originally drafted this in June 2018. Following on from Sam McGeown’s recent migration to Hugo, I thought I’d finally publish this in case it’s useful for anyone rather than sitting on it until I complete the process!

What, When and Why?

I’ve been blogging using WordPress for about 10 years. In recent months I’ve seen several respected bloggers make the move to Hugo and it has inspired me to do the same. You might ask “Why?”, and I have a few reasons:

  • For starters, I want to improve my skills and knowledge in certain areas of cloud technology. The LAMP (Linux / Apache / MySQL / PHP) stack that WordPress sits on isn’t exactly revolutionary.
  • Next, I want to simplify the site itself and reduce the chances of it being hacked.
  • Finally, the most important reason, because I can!

This series of posts will document my journey.

The Starting Point

WordPress is a great solution, don’t get me wrong about that. I’ve been using it since 2008 to host my blog through its various iterations. During that time WordPress has evolved into quite a mature solution, with a rich ecosystem of theme developers and plugins. It just works, and you don’t need ninja skills to get your ideas shared with the world. However, every time I log in to my self-hosted WordPress installation, there’s a raft of updates waiting for me to apply. From time to time incompatibilities come up and you have to swap out one plugin for another. Also, as the database grows, it becomes more of a challenge to back up the site or migrate it to a new hosting provider – something I do from time to time to keep the cost of running it down.

As it stands, the starting point looks something like this when it comes to retrieving content from michaelpoore.com (not trying to teach anyone to suck eggs here, I just fancied drawing a diagram – it also helps compare with the finishing point below):

  1. Web browser requests a page from michaelpoore.com and a DNS query is triggered that results in the nameservers for the domain being queried.
  2. The nameservers for michaelpoore.com (hosted by 1&1) are queried for the website IP address.
  3. A connection is made to the 1&1 CDN (Content Delivery Network) for the requested page. That page may be served directly by the CDN or the backend Apache server may have to provide the content.
  4. Assuming at least part of the content is not cached by the CDN, the Apache webserver receives the request and various PHP scripts are executed to render the page content. Combined with other elements such as images and javascript, the content is returned back to the requesting web browser.
  5. The aforementioned PHP scripts will make numerous queries to the MySQL database.

Now, unless you’re adding lots of dynamic content (which I’m not), and unless the CDN is caching significant portions of the returned content (which I don’t know), there’s a lot going on each time a page is requested. Also, each plugin I add, and the WordPress installation in general, represents a greater attack surface. I’m not so arrogant as to believe that anyone would want to hack my blog, but you never know.

Of course, I could migrate from a self-hosted solution to a hosted WordPress site and take away some of the issues that I have (such as applying updates to WordPress and the underlying infrastructure, PHP and MySQL, which I have to update via the 1&1 control panel from time to time). I’m all for using such solutions typically, but it seems too easy 🙂

The Finishing Point

Hugo isn’t a webserver at all; it’s a static site generator. It creates a structure of flat HTML files that can be hosted anywhere. As there’s no dynamic content, the pages are very easy to cache. In terms of my finishing point, much of the process looks the same as above:

(One key difference is that I’m introducing another domain name into the mix. This is partly to help with the migration process, but also because I’ll end up redirecting one of them to the other and I wanted a domain name that matched my Twitter handle.)

  1. Web browser requests a page from michaelpoore.com (or mpoore.uk) and a DNS query is triggered that results in the nameservers for the domain being queried.
  2. The nameservers for michaelpoore.com (hosted by AWS Route53) are queried for the website IP address.
  3. A connection is made to the AWS CloudFront CDN (I could also use CloudFlare) for the requested page. That page will likely be served directly by the CDN.
  4. Assuming that the page content cache has expired or perhaps has never been created, the HTML file will be served directly from the AWS S3 bucket.

That should be so much quicker. But let’s talk briefly about how the HTML files get into S3 in the first place. That’s where Hugo comes in, along with a few more pieces that I’ll cover in a later post.
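As a preview of that later post, the publish step amounts to something like this (the bucket name and CloudFront distribution ID here are placeholders):

```shell
# Generate the static site into Hugo's default output directory, ./public
hugo

# Push the generated files to the S3 bucket backing the site, removing
# anything in the bucket that no longer exists locally
aws s3 sync public/ s3://mpoore.uk --delete

# Tell CloudFront to drop its cached copies so the new content is served
aws cloudfront create-invalidation --distribution-id EXAMPLE1234567 --paths "/*"
```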

The Journey

So now that I’ve mapped out the starting point and the finishing point, we’ve got the makings of a journey. Let’s get started!


vRO 7.6 XaaS form support clarification

When vRealize Orchestrator 7.6 became generally available last week (alongside vRealize Automation 7.6), an eagle-eyed VMware Partner spotted a statement in the release notes under the Feature and Support Notice section.

One of the statements in that section initially read:

vRealize Automation XAAS forms support only workflows created in the Orchestrator Legacy Client.

I asked a colleague in the Cloud Management Business Unit (CMBU) about this and was eventually put in touch with the right person in R&D who could expand on the meaning further.

The statement has now been adjusted to read:

The input parameter constraints of workflows created or edited in the vRealize Orchestrator Client do not automatically transfer to the XaaS blueprint request form in vRealize Automation. When using these workflows in XaaS operations, you must manually define the input parameters constraints in the XaaS blueprint request form. This limitation does not impact workflows created and edited exclusively in the Orchestrator Legacy Client.

It is important to note that in vRO 7.6, the term “vRealize Orchestrator Client” refers to the new HTML5 web client, whereas the term “vRealize Orchestrator Legacy Client” is used to describe the Java client that anyone used to vRO knows very well! In case you haven’t yet seen the new HTML5 client for vRO, I have a screenshot of it in my posting on the availability of vRA 7.6, and I’ll paste it here too.

Workflow schema open for editing in vRO 7.6

Going back to the clarification, it means that workflows created or edited in the new client interface will not automatically have their presentation settings imported when the workflow is added as an XaaS blueprint or when the form design is refreshed. Instead, a custom form has to be created. The functionality is unaffected when using the legacy (Java) client.


vRA installation failure caused by .NET trust levels in IIS

I’ve been meaning to post this for about a year. I’m doing it now just in case it’s still relevant and useful for anyone.

I was working with a customer getting a vRA (7.3.1) PoC environment deployed for them so that they could run some penetration tests against it. For those tests to be fully representative, they understandably wanted to use their standard VM templates for the Windows-based IaaS components of vRA.

One of these templates configured IIS as part of the standard installation. Going through the vRA deployment process (manually, not with LCM), a problem was encountered that took some time to get past. The pre-requisite checker ran through OK and the configuration (including certificates) was correct, but the actual installation failed at the Model Manager stage.


VMware Exams Retiring August 30th 2019

A number of VMware exams will be retired on August 30th 2019. The table below shows the exams to be retired and the exams taking their place:

| Retiring Exam | Replacement Exam |
| --- | --- |
| 2V0-620 / vSphere 6 Foundations Exam | 2V0-01.19 / vSphere 6.7 Foundations Exam |
| 2V0-602 / vSphere 6.5 Foundations Exam | 2V0-01.19 / vSphere 6.7 Foundations Exam |
| 2V0-631 / VCP6-CMA Exam | 2V0-31.19 / Professional vRealize Automation 7.6 Exam 2019 |
| 2V0-731 / VCP7-CMA Exam | 2V0-31.19 / Professional vRealize Automation 7.6 Exam 2019 |
| 3V0-633 / VCAP6-CMA Deploy Exam | 3V0-31.18 / Advanced Deploy vRealize Automation 7.3 Exam 2018 |
| 2V0-621 / VCP6-DCV Exam | 2V0-21.19 / Professional vSphere 6.7 Exam 2019 |
| 2V0-621D / VCP6-DCV Delta Exam | 2V0-21.19 / Professional vSphere 6.7 Delta Exam 2019 |
| 2V0-622 / VCP6.5-DCV Exam | 2V0-21.19 / Professional vSphere 6.7 Exam 2019 |
| 2V0-622D / VCP6.5-DCV Delta Exam | 2V0-21.19 / Professional vSphere 6.7 Delta Exam 2019 |
| 3V0-623 / VCAP6-DCV Deploy Exam | 3V0-21.18 / Advanced Deploy vSphere 6.5 Exam 2018 |
| 3V0-625 / VCAP6.5-DCV Deploy Exam | 3V0-21.18 / Advanced Deploy vSphere 6.5 Exam 2018 |
| 2V0-51.18 / Professional Horizon 7.5 Exam 2018 | 2V0-51.19 / Professional Horizon 7.7 Exam 2019 |
| 2V0-651 / VCP6-DTM Exam | 2V0-51.19 / Professional Horizon 7.7 Exam 2019 |
| 2V0-751 / VCP7-DTM Exam | 2V0-51.19 / Professional Horizon 7.7 Exam 2019 |
| 2V0-761 / VCP Digital Workspace 2018 Exam | 2V0-61.19 / Professional WorkspaceOne Exam 2019 |

These retirements are due partly to the end-of-life of some products (vRA 6.x for example), but also to the alignment of certifications with year branding.


Introducing VMware Cloud Automation Services (CAS)

My focus on a day-to-day basis for most of the last five years has been on cloud automation and orchestration, more specifically with VMware vRealize Automation (vRA) and VMware vRealize Orchestrator (vRO). I’ve worked with a variety of customers in different verticals (government, finance, service provider) to help them design and deploy an automation platform and create services to automate many use-cases, both common and unique.

So naturally, my interest in a software-as-a-service (SaaS) platform that does the same job was always going to manifest itself. That day has now arrived: yesterday, January 15th 2019, VMware Cloud Automation Services became generally available.


My vSphere 6.7 homelab upgrade experience

vSphere 6.7 was released several months ago, and I’ve been meaning to upgrade my homelab for a while now. vSphere 6.5 has been pretty rock-solid, but it’s time for me to keep up with the Joneses. This post covers my upgrade process and experiences.

vCenter Migration

My original vCenter server was built straight on to version 6.5 when that first launched, way back when. In theory there was nothing wrong with it, except for a deployment decision that I was no longer happy with. When I deployed my vCenter previously, I configured it with an external Platform Services Controller (PSC) as I wanted to mess about with load balancing PSCs at the time. The messing around didn’t take long and I moved on to other things. The problem is, you cannot (currently) go from an external PSC to an embedded one, and the external PSC was an extra piece of complexity that I just didn’t need anymore.

That pretty much left me with one option: migrate to a new vCenter.

Deploying vCenter is a doddle, and I won’t cover how that works here. What I will mention, though, is how I moved my hosts and VMs across. The first step was to liberate one of my 6.5 ESXi hosts from the original cluster and add it to the new vCenter. At this point I didn’t upgrade the host itself to 6.7, for reasons that will become apparent in a minute or two.

Secondly, I went through the VMs registered in my original vCenter and weighed up whether or not I still needed them. Things like old distributed vRA deployments took a quick trip to the virtual bin; other things like AD, jumphosts and remote access solutions were powered down and removed from the inventory before being re-added to the new vCenter.

Before long there wasn’t much left, and what little remains will probably sit idle for a couple of weeks before I bin it completely.

So far, so good.