When vRealize Orchestrator 7.6 became generally available last week (alongside the simultaneous release of vRealize Automation 7.6), an eagle-eyed VMware Partner spotted a statement in the release notes under the Feature and Support Notice section.
One of the statements in that section initially read:
vRealize Automation XAAS forms support only workflows created in the Orchestrator Legacy Client.
I asked a colleague in the Cloud Management Business Unit (CMBU) about this and was eventually put in touch with the right person in R&D who could expand on the meaning further.
The statement has now been adjusted to read:
The input parameter constraints of workflows created or edited in the vRealize Orchestrator Client do not automatically transfer to the XaaS blueprint request form in vRealize Automation. When using these workflows in XaaS operations, you must manually define the input parameters constraints in the XaaS blueprint request form. This limitation does not impact workflows created and edited exclusively in the Orchestrator Legacy Client.
It is important to note that in vRO 7.6, the term “vRealize Orchestrator Client” refers to the new HTML5 web client, whereas the term “vRealize Orchestrator Legacy Client” is used to describe the Java client that anyone used to vRO knows very well! In case you haven’t yet seen the new HTML5 client for vRO, I have a screenshot of it in my posting on the availability of vRA 7.6, and I’ll paste it here too.
Going back to the clarification: it means that workflows created or edited in the new client interface will not automatically have their presentation settings imported when the workflow is added as a XaaS blueprint or when the form design is refreshed. Instead, a custom form has to be created. The functionality is unaffected when using the legacy (Java) client.
I’ve been meaning to post this for about a year. I’m doing it now just in case it’s still relevant and useful for anyone.
I was working with a customer getting a vRA (7.3.1) PoC environment deployed for them so that they could run some penetration tests against it. For those tests to be fully representative, they understandably wanted to use their standard VM templates for the Windows-based IaaS components of vRA.
One of these templates configured IIS as part of the standard installation. Going through the vRA deployment process (manually, not with LCM), a problem was encountered that took some time to get past. The pre-requisite checker ran through OK and the configuration (including certificates) was correct, but when it came to the actual installation it failed at the Model Manager.
My focus on a day-to-day basis for most of the last five years has been on cloud automation and orchestration, more specifically with VMware vRealize Automation (vRA) and VMware vRealize Orchestrator (vRO). I’ve worked with a variety of customers in different verticals (government, finance, service provider) to help them design and deploy an automation platform and create services to automate many use-cases, both common and unique.
So naturally, my interest in a software-as-a-service (SaaS) platform that does the same job was always going to manifest itself. The day has arrived, though, when VMware are officially launching that service: yesterday, January 15th 2019, VMware Cloud Automation Services became generally available.
vSphere 6.7 was released several months ago, and I’ve been meaning to upgrade my homelab for a while now. vSphere 6.5 has been pretty rock-solid, but it’s time for me to keep up with the Joneses. This post covers my upgrade process and experiences.
My original vCenter server was built straight on to version 6.5 when that first launched, way back when. In theory there was nothing wrong with it, except for a deployment decision that I was no longer happy with. When I deployed my vCenter previously, I configured it with an external Platform Services Controller (PSC) as I wanted to mess about with load balancing PSCs at the time. The messing around didn't take long and I moved on to other things. Problem is, you cannot (currently) go from an external PSC to an embedded one, and the external PSC was an extra piece of complexity that I just didn't need anymore.
That pretty much left me with one option: migrate to a new vCenter.
Deploying vCenter is a doddle, and I won’t cover how that works. What I will mention though is how I moved my hosts and VMs across. The first step was to liberate one of my 6.5 ESXi hosts from the original cluster and add it to the new vCenter. At this time, I didn’t upgrade the host itself to 6.7 for reasons that will be apparent in a minute or two.
Secondly, I went through the VMs registered in my original vCenter and weighed up whether or not I still needed them. Things like old distributed vRA deployments took a quick trip to the virtual bin; other things like AD, jumphosts, remote access solutions etc. were powered down and removed from the inventory before being re-added to the new vCenter.
Before long, there wasn't much left, and what little remains will probably sit idle for a couple of weeks before I bin it completely.
Having upgraded my vRA instance in my HomeLab to 7.4 not long after it GA’d, I recently decided to create some nice, new templates as well. You know, latest patches, hardware, basic config etc.
I won't bore you with exactly how I installed Windows, patched it or configured it. The part of the process relevant to this article was the installation of the vRA Guest Agent and its subsequent testing.
I already had my vRA blueprint configured; just a simple Windows Server 2012 R2 template that has the vRealize Log Insight agent installed and configured on it automatically as part of the provisioning process:
Installing the vRA Guest Agent
This is a documented process to install the software agent on a Virtual Machine that is subsequently turned into a template. For vRA 7.4, the documentation can be found on VMware's site:
If you're working for an Enterprise with your workloads based purely on VMware vSphere, then there's a new launch from Unitrends that I have learned about, and that you may be interested in looking at for your virtual backup / business continuity solution.
vBE (short for VM Backup Essentials) converges enterprise-grade backup software, ransomware detection, and cloud continuity into a powerful, easy-to-use, all-in-one platform boasting the following features:
Total Protection – No limits on the number of virtual machines that can be protected on a host
No License Tiering – No tiering of licenses based on the number of cores in the CPU socket.
Only License what you need! – Only occupied sockets require a license, but ALL occupied sockets of the host must be licensed to protect its virtual machines.
Infinite Retention! – Retention is directly proportional to the amount of storage that can be provided by the customer for backup. The license has no limits on retention.
Replication to the Cloud – Site-to-site replication is not supported at this time. vBE does support replication to the cloud – both hyperscale clouds such as AWS, Google and Rackspace as well as clouds purpose-built for DRaaS services.
Advanced Ransomware Protection – New ransomware variants are emerging every day and your ransomware protection needs to evolve to keep up.
Unitrends are billing vBE as an "all-in-one solution" that takes a disruptive approach to backup. It offers complete vertical integration (including the cloud), fast time to value, and industry-leading customer service from a single vendor. vBE includes all the software and features you would find in an enterprise-level data protection and recovery solution: operating system, security, backup software, WAN acceleration, replication, cloud integration, and archiving.
A colleague of mine was working with a customer recently on some changes to their automated VM provisioning process (they’re not vRA customers… yet). He got stuck trying to get around a particular challenge with the automatic naming of network interfaces in certain Linux distributions.
The customer in question is using vRealize Orchestrator (vRO) to create (not clone) their Virtual Machines from a JSON structure supplied by an external system. That structure contains definitions for the hardware, the OS network identity (name, IP, etc.) and the OS installation sources (an ISO file for the installation and a floppy image for a ks.cfg (Kickstart) file).
Once the JSON object is provided to the vRO workflow, the VM is created, booted and automatically starts to install and configure itself.
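As a rough illustration of the idea (the field names and helper below are my own invention, not the customer's actual schema or vRO workflow code), such a payload and the step that translates it into a VM configuration might look something like this:

```python
import json

# Hypothetical payload in the spirit of the customer's structure:
# hardware, OS network identity, and installation sources.
payload = json.loads("""
{
  "hardware": {"cpu": 2, "memoryMB": 4096,
               "disks": [{"sizeGB": 20}, {"sizeGB": 10}]},
  "network":  {"hostname": "oradb01", "ip": "192.168.10.50"},
  "install":  {"iso": "[datastore1] iso/rhel7.iso",
               "floppy": "[datastore1] floppy/ks.img"}
}
""")

def build_vm_config(spec):
    """Translate the external JSON spec into a flat config dict
    that a provisioning workflow could consume."""
    for key in ("hardware", "network", "install"):
        if key not in spec:
            raise ValueError(f"missing section: {key}")
    return {
        "name": spec["network"]["hostname"],
        "numCPUs": spec["hardware"]["cpu"],
        "memoryMB": spec["hardware"]["memoryMB"],
        "disks": [d["sizeGB"] for d in spec["hardware"]["disks"]],
        "isoPath": spec["install"]["iso"],
        "floppyPath": spec["install"]["floppy"],
    }

config = build_vm_config(payload)
print(config["name"], config["disks"])
```

In the real environment this translation happens inside a vRO workflow (which scripts in JavaScript); the sketch just shows the shape of the hand-off from the external system to the provisioning logic.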
Customer’s Simple VM
Simple VMs have a number of disks defined (for root, opt, var, swap etc partitions) that are attached to a single ParaVirtual SCSI adapter. The VM is also equipped with a single VMXNET3 network adapter.
In this configuration, there is no problem. The installation of the OS runs through to completion and the VM is handed off to Puppet and eventually goes in to service.
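To make the simple layout concrete, here is a minimal sketch (the helper and labels are mine, not the customer's code) of how the devices line up: every disk hangs off the single ParaVirtual SCSI controller, plus one VMXNET3 NIC, so the guest OS enumerates everything predictably.

```python
# Sketch of the simple VM's device layout: one ParaVirtual SCSI
# controller carrying all disks, plus a single VMXNET3 NIC.
def layout_devices(disk_labels):
    """Assign each disk a unit number on SCSI controller 0,
    mirroring the simple-VM configuration described above."""
    devices = [{"type": "pvscsi", "bus": 0}]
    for unit, label in enumerate(disk_labels):
        # SCSI unit 7 is reserved for the controller itself,
        # so disks beyond the seventh skip over it.
        real_unit = unit if unit < 7 else unit + 1
        devices.append({"type": "disk", "bus": 0,
                        "unit": real_unit, "label": label})
    devices.append({"type": "vmxnet3", "nic": 0})
    return devices

devs = layout_devices(["root", "opt", "var", "swap"])
print([d["unit"] for d in devs if d["type"] == "disk"])
```

With only one controller and one NIC there is nothing for the PCI enumeration to shuffle, which is why this configuration installs cleanly.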
Customer’s Complex VM
For the provision of Linux-based Oracle servers however, the customer wanted to be able to specify not only extra disks and partitions, but extra SCSI controllers too.