
vRealize Suite Lifecycle Manager license error “Exception while loading DLF”

I was just trying to add a new license to my vRSLCM (vRealize Suite Lifecycle Manager) v8.0.1 instance when I hit the following error:

Not the most helpful of messages! Of course, the first rule of technology is “turn it off and on again”. After rebooting the vRSLCM appliance I tried again and it worked without issue.


Trusted SSL certificates in OSX 10.15+ and iOS 13+

I deployed a new vSphere VCSA for my homelab in December 2019 (last month). By default these come with a self-signed SSL certificate that’s valid for 10 years. Of course I typically replace these with a signed certificate but it’s not always the first thing that I do.

What I found this time, however, is that on my Mac neither Chrome nor Brave would allow me to reach the web UI. Only Firefox would. I expect security warnings for self-signed (and hence untrusted) certificates, but in the former two browsers the message suggests that the certificate is invalid in some other way:

What’s actually happening is that as of macOS 10.15 and iOS 13, SSL certificates have to meet certain criteria to be deemed valid. These are documented here: https://support.apple.com/en-us/HT210176.

In the case of the vCenter VCSA, the validity period (10 years) is well over 825 days. Hence no dice. It would be better if Chrome were clearer about that.
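If you want to check a certificate’s validity period for yourself, an openssl one-liner along these lines will print the issue and expiry dates (the VCSA hostname below is just a placeholder):

    # show the certificate validity dates for a remote host (replace the placeholder hostname)
    echo | openssl s_client -connect vcsa.lab.local:443 2>/dev/null | openssl x509 -noout -startdate -enddate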


Homelab vSphere 6.7u3 experience

About 18 months ago I wrote a post on my experience upgrading my homelab from vSphere 6.5 to vSphere 6.7. Since that time it has had a few rebuilds and reconfigurations, been off for several months, been idle, been busy and generally just worked. The one constant though was the build of ESXi that I was using. It was always 6.7 GA.

Having been switched off for some time, I brought it back up over the Christmas break after a long lunch with homelab hero @virtualhobbit. The point of this post is not the reasons why my lab has come back to life (I’ll cover that at some point), but about how I got it updated to the latest version of vSphere.

As a recap, and to save anyone having to read my previous post, my lab is constructed from Dell workstations that don’t appear on VMware’s HCL. As any homelabber will attest, sometimes compromises have to be made to run your own lab. In my case, updates aren’t as straightforward as they should be, but that’s offset by the cheaper cost and age of the hardware. That said, I will most likely not be able to run the next version of vSphere owing to the deprecation of the vmklinux driver stack.

Coming back to the topic in hand, vSphere Update Manager tends to complain when I use it owing to my unsupported processors. Updating my hosts via esxcli, however, still works. My first attempt at an update was not successful though. I had downloaded the offline update bundle from VMware’s site and placed it on a shared VMFS datastore.

But I was met with the following error when I tried to apply it:

Maybe my esxcli is rusty. Luckily there’s another way if you have ESXi hosts that can connect to the internet. The first step is to open up the host’s firewall so it can connect out:
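    # allow the host to make outbound HTTP/HTTPS connections to VMware's online depot
    esxcli network firewall ruleset set -e true -r httpClient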

Next, to prove connectivity and get the correct update name, you can list the available software profiles.
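Something along these lines does it (the depot URL is VMware’s public online depot, and the grep filter is just an example):

    # list the image profiles available in the online depot, filtered to 6.7 builds
    esxcli software sources profile list \
      -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml \
      | grep -i ESXi-6.7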

The “grep” command at the end can be adjusted to filter the results. You don’t want to sift through EVERY update available.

In my case that returned me four results:

The second entry is the one that I wanted. To apply it, just pop the name of the profile into the update command.
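A sketch of that step, using an example profile name (substitute the one from your own listing), looks like this:

    # apply the chosen image profile straight from the online depot
    esxcli software profile update -p ESXi-6.7.0-20191204001-standard \
      -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml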

If the update is successful (it can take a few minutes, be patient), you should see something like this (I’ve truncated the output as it’s a bit long):

All that remains in theory is to close the firewall access off and reboot the host:
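    # shut outbound HTTP/HTTPS access off again, then restart the host
    esxcli network firewall ruleset set -e false -r httpClient
    reboot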

That worked for two of my hosts. But the third refused to play ball. When I tried to apply the update I got an error.

This was pretty simple to solve by adjusting the system swap settings for the host through vCenter. Head to the Configure tab for the host and locate “System Swap”. Edit the settings, and enable the host to use a specific datastore (the first option below).
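The same change can also be made from the command line if you prefer; a minimal sketch with esxcli (the datastore name is just a placeholder) would be:

    # point system swap at a specific datastore (replace the placeholder datastore name)
    esxcli sched swap system set --datastore-enabled=true --datastore-name=datastore1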

I retried the update and it went through. vCenter updates were applied via the normal mechanisms (https://vcenter-fqdn:5480).

Migrating from WordPress to Hugo Part 4: Securing the Site with SSL

I originally drafted this in June 2018. Following on from Sam McGeown’s recent migration to Hugo, I thought I’d finally publish this in case it’s useful for anyone rather than sitting on it until I complete the process!

Why SSL?

It’s only a blog, so why SSL? It’s going to be static content, so why SSL?

In this article I’ll deal with those questions and go through the process of requesting an SSL certificate using AWS Certificate Manager.

Let’s Go Secure!

If you recall, our finishing point is going to end up being a collection of static HTML files served out by AWS. There’s nothing particularly risky about serving up or requesting such static files, it’s how the internet started out after all. What’s different now though is people’s perception of risk and privacy and how that’s reflected in the technology we use.

Google, for example, promote SSL sites slightly higher in their search rankings than non-SSL sites and have been doing so since 2014. Some modern browsers have started flagging warnings about non-SSL sites and this will likely become more obvious over time. Users are becoming more picky and aware as a result, or perhaps they’re driving the changes to an extent. SSL is here to stay though and it’s worth setting it up, especially if it’s free!

AWS offer public SSL certificates for free. Let’s go set one up!

We could do this either via the AWS console or using the CLI. At this time I haven’t worked out how to do it completely via the CLI, but I’m going to start there. (Note: for CloudFront, I think that the certificate has to be in the us-east-1 region regardless.)
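A request along these lines does it (the option names are from the ACM CLI, but treat this as a sketch rather than gospel):

    # request a public, DNS-validated certificate in us-east-1
    aws acm request-certificate \
      --domain-name mpoore.uk \
      --subject-alternative-names www.mpoore.uk michaelpoore.com www.michaelpoore.com \
      --validation-method DNS \
      --region us-east-1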

This command will request a new SSL certificate with a subject name of “mpoore.uk” and alternative names of “www.mpoore.uk”, “michaelpoore.com” and “www.michaelpoore.com”. The validation method of DNS means that we will have to prove we own the domains by creating certain DNS entries.

What you get back is a reference to the certificate (its ARN). I’ll need that later.

Looking at the AWS Console though, you’ll see that the certificate is not yet issued and must be validated.

To validate each of the domains in the certificate, you need to get some DNS CNAME records created. Luckily, for mpoore.uk there’s a button for that. For michaelpoore.com though, I had to do them manually as the DNS for that domain is still with 1&1 for the time being.
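For reference, each validation record looks roughly like this (the names and values below are made up; ACM gives you the real ones to copy):

    # illustrative only - use the record name and value that ACM displays for each domain
    _3c1f2a7b9d.mpoore.uk.    CNAME    _8e4d6b2a1c9f.acm-validations.aws.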

Once they’re all done, the validation will eventually complete and the certificate will be issued. Just save the certificate ARN value from earlier as it’ll be needed later.

Sadly, this is as far as I got in the process before other things (life, eh) got in the way. I will be back to revisit and complete the process though.

Migrating from WordPress to Hugo Part 3: Hosted Zones in Route53

I originally drafted this in June 2018. Following on from Sam McGeown’s recent migration to Hugo, I thought I’d finally publish this in case it’s useful for anyone rather than sitting on it until I complete the process!

It’s Always DNS

When things go wrong in the IT world, DNS misconfiguration is often the thing sitting at the root of your problems. It’s important to get it right, not only for correct functioning but also because some of the subsequent steps depend on it.

As part of my migration of this blog to Hugo, I’m placing one of the two domains I’ll be using under the control of the AWS Route53 (Amazon’s DNS service). I’ll move the other one in time as well.

Creating a Hosted Zone

I tend to use separate providers for domain registration and hosting, as I’ve found it easier to move my site(s) around when I can just update the domain’s nameserver (NS) records to point to the new provider rather than having to transfer the domain as well. Practically all of my domains (I host a couple of sites for local community interests too) are registered through FastHosts.

AWS cater for this sort of arrangement too in Route53. From the Route53 dashboard, all I had to do was select “Hosted zones” from the menu and then click the “Create Hosted Zone” button.

All you need to enter is the domain name and leave the type at its default value (“Public Hosted Zone”).
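For what it’s worth, the same thing can be done from the CLI; a minimal sketch (the caller reference just needs to be any unique string) looks like this:

    # create a public hosted zone for the domain
    aws route53 create-hosted-zone --name mpoore.uk --caller-reference "$(date +%s)"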

The zone is created for you and the console helpfully tells you what nameservers need to be set:

All I then had to do was apply those nameservers to the domain in FastHosts:

Once the dust settles, DNS requests for mpoore.uk will go to AWS for resolution. That’s important because I want to set my site up with an SSL certificate (Amazon will give you them for free) and validation requires DNS.

So let’s do that next…

Migrating from WordPress to Hugo Part 2: Basic Tooling

I originally drafted this in June 2018. Following on from Sam McGeown’s recent migration to Hugo, I thought I’d finally publish this in case it’s useful for anyone rather than sitting on it until I complete the process!

Summary of Tools Used

These are the tools I’ll be using during my migration of my WordPress blog to Hugo (in AWS):

  • Github
  • SourceTree (git client)
  • Homebrew
  • Hugo
  • AWSCLI
  • Sublime (text editor)
  • AWS S3
  • AWS CloudFront
  • AWS Certificate Manager
  • AWS Route53
  • Filezilla

Building a Toolkit

I’m a Mac user. I have been for a number of years and I don’t plan to switch anytime soon. Most of the tools that I’ll be using either have Windows / Linux versions or there are similar tools available for those OSs. I’ll try not to go into too much OSX-specific detail about any of them, and, if you’re following this process, you might have to adapt to whatever tooling works best for you.

A good number of tools listed above are web-based or cross-platform so shouldn’t present a big problem for anyone. I will be using the command line when I can, hence the inclusion of AWSCLI.

Probably the most OSX specific tool in that list is Homebrew (aka “Brew”). It’s a package manager for OSX and I’ll be using it to install Hugo and AWSCLI on my laptop. If you’re a Windows user, try Chocolatey instead. If you’re a Linux user, you should use whatever package manager comes with your distro.

Naturally, the use of AWS services means that you need an AWS account of your own. I’m going to assume that you have one and have got it to a point where you can consume the services above.

Installing AWSCLI and Hugo

Let’s assume that we’ve got Brew installed (it’s easy, the instructions are right there on the homepage). Installing AWSCLI and Hugo is straightforward too!

First, AWSCLI. Just type the following into a terminal window:
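    # install the AWS CLI via Homebrew
    brew install awscli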

Once installed, you’ll need to execute the following command to configure AWSCLI with your Access Key ID and Secret Access Key:
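    # prompts for your Access Key ID, Secret Access Key, default region and output format
    aws configure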

Now let’s do Hugo. Can you guess the command? (I still managed to mistype it!)
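    # yes, it's just the same pattern again
    brew install hugo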

Start Your Engines

So, we know how our journey will start and what we expect to find when we get there. We’ve just packed the car. Let’s get going!

Migrating from WordPress to Hugo Part 1: Overview

I originally drafted this in June 2018. Following on from Sam McGeown’s recent migration to Hugo, I thought I’d finally publish this in case it’s useful for anyone rather than sitting on it until I complete the process!

What, When and Why?

I’ve been blogging using WordPress for about 10 years. In recent months I’ve seen several respected bloggers make the move to Hugo and it has inspired me to do the same. You might ask “Why?”, and I have a few reasons:

  • For starters, I want to improve my skills and knowledge in certain areas of cloud technology. The LAMP (Linux / Apache / MySQL / PHP) stack that WordPress sits on isn’t exactly revolutionary.
  • Next, I want to simplify the site itself and reduce the chances of it being hacked.
  • Finally, the most important reason, because I can!

This series of posts will document my journey.

The Starting Point

WordPress is a great solution, don’t get me wrong about that. I’ve been using it since 2008 to host my blog through its various iterations. During that time WordPress has evolved into quite a mature solution, with a rich ecosystem of theme developers and plugins. It just works and you don’t have to have ninja skills to get your ideas shared with the world. However, every time I log in to my self-hosted WordPress installation, there’s a raft of updates waiting for me to apply. From time-to-time incompatibilities come up and you have to swap out one plugin for another. Also, as the database grows it can become more of a challenge to back up the site or migrate it to a new hosting provider – something I do from time to time to keep the cost of running it down.

As it stands, the starting point looks something like this when it comes to retrieving content from michaelpoore.com (not trying to teach anyone to suck eggs here, I just fancied drawing a diagram – it also helps compare with the finishing point below):

  1. Web browser requests a page from michaelpoore.com and a DNS query is triggered that results in the nameservers for the domain being queried.
  2. The nameservers for michaelpoore.com (hosted by 1&1) are queried for the website IP address.
  3. A connection is made to the 1&1 CDN (Content Delivery Network) for the requested page. That page may be served directly by the CDN or the backend Apache server may have to provide the content.
  4. Assuming at least part of the content is not cached by the CDN, the Apache webserver receives the request and various PHP scripts are executed to render the page content. Combined with other elements such as images and javascript, the content is returned back to the requesting web browser.
  5. The aforementioned PHP scripts will make numerous queries to the MySQL database.

Now, unless you’re adding lots of dynamic content (which I’m not), and unless the CDN is caching significant portions of the returned content (which I don’t know), there’s a lot going on each time a page is requested. Also, each plugin I add, and the WordPress installation in general, just represents a greater attack surface. I’m not so arrogant as to believe that anyone would want to hack my blog, but you never know.

Of course, I could migrate from a self-hosted solution to a hosted WordPress site and take away some of the issues that I have (such as applying updates to WordPress and the infrastructure – PHP and MySQL – which I have to update via the 1&1 control panel from time-to-time). I’m all for using such solutions typically, but it seems too easy 🙂

The Finishing Point

Hugo isn’t exactly a webserver. It’s actually a static site generator. It creates a structure of flat HTML files that can be hosted somewhere. As there’s no dynamic content, the pages are very easy to cache. In terms of my finishing point, much of the process looks the same as above:

(One key difference is that I’m introducing another domain name in to the mix. This is partly to help with the migration process, but also because I’ll end up redirecting one of them to the other and I wanted a domain name that matched my twitter handle.)

  1. Web browser requests a page from michaelpoore.com (or mpoore.uk) and a DNS query is triggered that results in the nameservers for the domain being queried.
  2. The nameservers for michaelpoore.com (hosted by AWS Route53) are queried for the website IP address.
  3. A connection is made to the AWS CloudFront CDN (I could also use CloudFlare) for the requested page. That page will likely be served directly by the CDN.
  4. Assuming that the page content cache has expired or perhaps has never been created, the HTML file will be served directly from the AWS S3 bucket.

That should be so much quicker. But let’s talk briefly about how the HTML files get into S3 in the first place. That is where Hugo comes in, as well as a few more pieces that I’ll cover in a later post.
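As a taster, the heart of that publishing step is likely to be little more than a Hugo build followed by a sync to the bucket; a rough sketch (the bucket name is a placeholder) might look like this:

    # generate the static site into ./public and push it to the S3 bucket
    hugo
    aws s3 sync public/ s3://example-bucket --delete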

The Journey

So now that I’ve mapped out the starting point and the finishing point, we’ve got the makings of a journey. Let’s get started!


vRO 7.6 XaaS form support clarification

When vRealize Orchestrator 7.6 became generally available last week (and related to the simultaneous availability of vRealize Automation 7.6), an eagle-eyed VMware Partner spotted a statement in the release notes under the Feature and Support Notice section.

One of the statements in that section initially read:

vRealize Automation XAAS forms support only workflows created in the Orchestrator Legacy Client.

I asked a colleague in the Cloud Management Business Unit (CMBU) about this and was eventually put in touch with the right person in R&D who could expand on the meaning further.

The statement has now been adjusted to read:

The input parameter constraints of workflows created or edited in the vRealize Orchestrator Client do not automatically transfer to the XaaS blueprint request form in vRealize Automation. When using these workflows in XaaS operations, you must manually define the input parameters constraints in the XaaS blueprint request form. This limitation does not impact workflows created and edited exclusively in the Orchestrator Legacy Client.

It is important to note that in vRO 7.6, the term “vRealize Orchestrator Client” refers to the new HTML5 web client, whereas the term “vRealize Orchestrator Legacy Client” is used to describe the Java client that anyone used to vRO knows very well! In case you haven’t yet seen the new HTML5 client for vRO, I have a screenshot of it in my posting on the availability of vRA 7.6, and I’ll paste it here too.

Workflow schema open for editing in vRO 7.6

Going back to the clarification, it means that workflows that are created or edited in the new client interface will not automatically have their presentation settings imported when adding the workflow as an XaaS blueprint or when refreshing the form design. Instead, a custom form would have to be created. The functionality is unaffected when using the legacy (Java) client.


vRA installation failure caused by .NET trust levels in IIS

I’ve been meaning to post this for about a year. I’m doing it now just in case it’s still relevant and useful for anyone.

I was working with a customer getting a vRA (7.3.1) PoC environment deployed for them so that they could run some penetration tests against it. For those tests to be fully representative, they understandably wanted to use their standard VM templates for the Windows-based IaaS components of vRA.

One of these templates configured IIS as part of the standard installation. Going through the vRA deployment process (not with LCM, but manually), a problem was encountered that took some time to get past. The pre-requisite checker ran through OK and the configuration (including certificates) was correct, but when it came to the actual installation it failed at the Model Manager.
