VMware’s Office of the CTO (OCTO) runs an annual programme internally to appoint a limited number of outstanding individuals as CTO Ambassadors (CTOA). Broadly, the role of an ambassador is to help ensure a tight collaboration between VMware’s R&D and VMware’s customers.
CTO Ambassadors come from customer-facing roles within VMware (such as SEs, PSO, TAMs and GSS). A good number of them are involved with, or are usually present at, VMware’s customer-focussed events (such as vForum, VMUGs and VMworld). With the new year comes a number of newly minted CTOAs. Whilst I can’t count myself amongst them, I’d just like to say a big, public “congratulations” to everyone who has been selected as a 2018 ambassador.
A long time ago, on a long-since-completed project, the team that I was part of was given a requirement to zero the disks of VMs before they were deleted by vRA / vRO (or vCAC and vCO, as they were called back then). One of my colleagues on the project, Jonathan Medd, devised an approach for doing this using an “experimental” PowerCLI feature and wrote it up on his blog.
Fast forward nearly two and a half years and I’m looking at an upgrade for this platform, wondering if there’s a way to accomplish the same task natively in vRO rather than by breaking out to PowerCLI. Don’t get me wrong, I love PowerCLI. But fewer moving parts means there’s less to go wrong. How to do it then…
Disk zeroing in PowerCLI
This is still listed as an experimental feature in the PowerCLI documentation for vSphere 6.5. The Set-HardDisk cmdlet has a -ZeroOut parameter, and it would still be used exactly as Jonathan describes in his article.
PowerCLI documentation for Set-HardDisk
Disk zeroing in vRO
I’m not sure when it was added (i.e. which version), but back in 2014 we couldn’t find equivalent functionality in vRO. I did a quick search of the vCenter plugin methods in my v7.2 appliance and couldn’t see it there either. It turns out though that I was having a bad typing day. Burke Azbill pointed me to the right place (thank you):
vRO API help for zeroFillVirtualDisk_Task
So, “zeroFillVirtualDisk_Task” is a method of the VcVirtualDiskManager object. All we need to give it is a datastore path and a VcDatacenter object, and it’ll do the rest.
Getting a datastore path is relatively straightforward. Using vSphere’s Managed Object Browser (MOB), I can pick a VM object and navigate down to its config (1) and hardware (2), get the disk devices (3) and look at their backing (4). The fileName attribute gives me the datastore path that I need.
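The fileName value follows vSphere’s standard “[datastore] folder/file.vmdk” form. As a quick illustration (plain JavaScript rather than anything vRO-specific), splitting such a path into its datastore and file components might look like this:

```javascript
// Split a vSphere datastore path such as "[datastore1] myVM/myVM.vmdk"
// into its datastore name and relative file path.
function parseDatastorePath(fileName) {
    var match = /^\[([^\]]+)\]\s*(.+)$/.exec(fileName);
    if (!match) {
        throw new Error("Not a datastore path: " + fileName);
    }
    return { datastore: match[1], path: match[2] };
}

var parsed = parseDatastorePath("[datastore1] myVM/myVM.vmdk");
// parsed.datastore is "datastore1"; parsed.path is "myVM/myVM.vmdk"
```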
Example VM configuration information in the vSphere MOB
Obtaining a VcDatacenter object is a doddle too, if I already have the VM object. There’s a vCenter plugin action that will do it for me, based on the information available in the MOB above. Taking the datastore attribute, which is a reference to a datastore object, I can pass it to the action below and get the VcDatacenter back.
Library vRO action getDatacenterForDatastore
Putting it together
Starting then with a vCenter VM object in vRO (vm in the script below), zeroing all of the VM’s disks can be achieved as follows:
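The walkthrough below refers to the script by line number. Here is a hedged sketch of what that script looks like, assuming vm is a VC:VirtualMachine input; the sdkConnection manager property and the module path of the library getDatacenterForDatastore action may differ between vRO versions:

```javascript
var sdkConnection = vm.sdkConnection;
var virtualDiskManager = sdkConnection.virtualDiskManager;
var config = vm.config;
var hardware = config.hardware;
var devices = hardware.device;

// Loop over the VM's devices, zeroing each virtual disk found
var datacenter;
for each (var device in devices) {
    if (device instanceof VcVirtualDisk) {
        var fileName = device.backing.fileName;
        datacenter = System.getModule("com.vmware.library.vc.datastore").getDatacenterForDatastore(device.backing.datastore);

        virtualDiskManager.zeroFillVirtualDisk_Task(fileName, datacenter);
    }
}
```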
Line 1 – Gets the vCenter plugin connection for the vCenter that owns the VM called vm.
Line 2 – Gets the VcVirtualDiskManager object that the zeroFillVirtualDisk_Task method is a member of.
Lines 3 to 5 – Gets the config, hardware and devices for the VM. This could be done in one line.
Line 9 – Starts a loop to run the following code for each device.
Line 10 – Checks to see if the current device is a Virtual Disk.
Line 11 – Gets the Virtual Disk filename.
Line 12 – Gets the VcDataCenter object using the Virtual Disk datastore as an input.
Line 14 – Instructs vCenter to zero the Virtual Disk.
Considerations for vRA
Aside from some minor tweaks to make the code more efficient and robust, there are a few considerations before we look at adding this as a workflow subscription in vRA.
Firstly, on some storage devices (my home lab included), zeroing the disks causes a thin-provisioned disk to expand to its full size. If the disks are large, you can expect the task to take some time and / or consume a lot of storage space.
Secondly, if there are snapshots running on the VM, they must be removed first.
Thirdly, the VM must be powered off before this will run. However, if the workflow subscription fires at the right point in the lifecycle, this shouldn’t be a problem.
Finally, as I’ve already mentioned, this process could take some time to complete. The zeroFillVirtualDisk_Task method returns a vCenter task reference. Ideally that should be monitored for completion rather than just firing and forgetting. If the VM has multiple disks, that’s multiple tasks.
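One hedged way to handle those returned tasks in vRO is to collect them during the loop and then wait on each one, using the library’s vim3WaitTaskEnd action (whose module path may vary between versions):

```javascript
// Collect the zeroing tasks as they are started...
var tasks = [];
// ...inside the device loop:
tasks.push(virtualDiskManager.zeroFillVirtualDisk_Task(fileName, datacenter));

// After the loop, block until every task completes (poll every 2 seconds)
for each (var task in tasks) {
    System.getModule("com.vmware.library.vc.basic").vim3WaitTaskEnd(task, true, 2);
}
```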
Adding it to the vRA Event Broker
Taking the original script above and factoring in the considerations too, it’s possible to create a fairly simple workflow that can be added as an Event Broker subscription in vRA. I’ll post it up here soon.
This post was inspired by Ryan Kelly’s recent post of a similar title in which he revealed how to change the “All Services” icon in the vRA 7.2 Catalog tab. Shortly after seeing that, I noticed that Ricky El-Qasem had also posted on the subject and created a small utility to accomplish the same task.
Great posts, both of them. But I wanted to take a slightly different route and accomplish the same task in vRO which, as I’m sure you know, comes bundled with vRA. I must admit that I’m not totally satisfied with the result (maybe about 95% satisfied), but I’ll explain why later.
Selecting a source image
For my initial testing, I thought I’d use the image from Ryan’s post as it clearly worked for him. However, the image in his post is JPEG formatted and contains a border. That would clearly look a bit naff, so I converted it to a PNG file and removed the border and the white background.
For guidance, I’d recommend a PNG file for these icons so that, if they’re not square, the image background can be set to transparent, allowing the vRA theme to show through. It just looks better.
As vRO allows images to be imported and stored within it, I thought I would try this approach. I have previously used such Resource Elements to attach images to HTML emails sent by vRO (as well as to store templates for text / emails, JSON objects and XML).
From that point, it should simply have been a case of converting the image to Base64 text within vRO and executing a REST update to vRA to replace the image. However, the methods available for extracting data from a Resource Element convert the image to a string, which munges it a bit. The resulting Base64 string is therefore corrupt and you end up with a broken image displaying in vRA:
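The corruption is easy to demonstrate outside of vRO. Binary data such as a PNG isn’t valid text, so any bytes that can’t be decoded get silently substituted during the string conversion, and the Base64 of the result no longer matches the original. A quick Node.js illustration (not vRO code):

```javascript
// The first bytes of every PNG file; 0x89 is not valid UTF-8
var pngHeader = Buffer.from([0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a]);

// Base64 of the raw bytes - this is what vRA needs
var correct = pngHeader.toString("base64");

// Round-tripping through a string first, as a Resource Element read does,
// replaces the invalid bytes with replacement characters...
var viaString = Buffer.from(pngHeader.toString("utf8"), "utf8").toString("base64");

// ...so the two Base64 strings no longer match
console.log(correct === viaString); // false
```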
I tried various ways to get around the issue but so far I haven’t been able to, hence my minor dissatisfaction. What this means is that, like the method that Ryan outlined, you still have to convert the image to Base64 outside of vRO. To finish off the workflow that I wanted to create, I did just that using an online tool and stored the Base64 string in a different resource element in vRO.
In the RAW output for the image shown, only the actual Base64 text is required; it can be placed into a text file and added to vRO:
This changed the process in the workflow slightly. In some ways it simplified it. Let’s look at that now…
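The script that the walkthrough below refers to takes roughly this shape. This is a hedged sketch only: the host lookup helper, REST service ID, client method and the input variable names (iconId, base64ImageData, mimeType) are assumptions about the vCACCAFE plugin API rather than confirmed code, and may need adjusting for your vRA version:

```javascript
// Hedged sketch - service ID, icon ID source and client method are assumptions
var host = vCACCAFEHostManager.getDefaultHostForTenant("vsphere.local", true);

var client = host.createRestClient("com.vmware.csp.core.cafe.icons.service.api");

// Reference to the default tenant organisation
var org = new vCACCAFECatalogOrganizationReference();
org.setTenantRef("vsphere.local");

// Build the replacement icon, keeping the ID of the icon being replaced
var icon = {
    id: iconId,
    organization: org,
    contentType: "image/" + mimeType,
    image: base64ImageData
};

// Execute!
var response = client.postJsonString("icons", JSON.stringify(icon));
```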
Looks fairly simple, doesn’t it? Actually it is. No messing around with tokens!
Line 2 – This returns the vCACCAFE:VCACHost connection for the default tenant in vRA (vsphere.local)
Line 4 – Creates a REST connection to the vRA Catalog Icon Service
Lines 7 & 8 – Creates a reference to the default tenant organisation in vRA
Lines 11 to 16 – Constructs a new icon object using the Base64 image data as well as the ID of the image we’re replacing
Line 19 – Execute!
In the workflow, I’ve added three inputs:
Select a resource element where the base64 image data is stored (using this overrides the field below).
Paste in the base64 image data yourself.
Set the MIME type of the image (e.g. PNG or JPG).
Running it against Ryan’s icon and the Union flag image from above, I got the change applied across all tenants in vRA as expected:
Want to try the workflow yourself? I’ve popped it on to GitHub:
I just wanted to take this opportunity to share a few vRO actions from my library that I’ve recently tidied up. Some started life as scriptable tasks in other workflows, but it made sense to strip those bits out and put them into discrete actions to enable better re-use.
Several of these functions came from a single project. The IPAM system in use only returned an IP address and a subnet mask for the vRA-provisioned VM being worked on; the gateway address had to be calculated. In another project, similar constraints existed, but with the added complication that some networks had the gateway as the first address in the subnet and some had it as the last!
A couple of these functions also came in handy for some NSX automation, making sure that a new IP address was added to the correct interface of an NSX Edge Services Gateway (ESG) device.
There are 7 actions in total. I added a couple to complete some extra functionality that was missing from the original use case requirements.
areIPsInSameSubnet – Takes two IP addresses and one subnet mask as inputs and returns “true” if they’re in the same subnet or “false” if not.
areIPsInSameSubnetUsingCIDR – Takes two IP addresses and one subnet CIDR value as inputs and returns “true” if they’re in the same subnet or “false” if not.
convertCIDRToIPMask – Converts a subnet CIDR value (e.g. 24) to a subnet mask (e.g. 255.255.255.0).
convertIPMaskToCIDR – Converts a subnet mask (e.g. 255.255.254.0) to a subnet CIDR value (e.g. 23).
getIPBroadcastAddress – Takes an IP address and a subnet mask as inputs and returns the network broadcast address (e.g. 10.10.36.12 & 255.255.255.0 returns 10.10.36.255).
getIPHostAddress – Takes a network / subnet address, a subnet mask and an address index as inputs and returns the specified IP address. The address index can be either a string or a number. If it’s a number, the corresponding address in the subnet is returned (e.g. 10.10.36.0 & 255.255.255.0 & 8 returns the address 10.10.36.8). If the address index is a string, it can be one of the values “first”, “last” or “broadcast”. The correct address from the range is then returned (e.g. 10.10.36.0 & 255.255.255.0 & “last” returns the address 10.10.36.254).
getIPNetworkAddress – Takes an IP address and a subnet mask as inputs and returns the network / subnet address (e.g. 10.10.36.12 & 255.255.255.0 returns 10.10.36.0).
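To give a flavour of the logic behind these actions, here’s how a few of them can be implemented in plain JavaScript using 32-bit integer arithmetic. These are standalone equivalents written for illustration, not the action code itself:

```javascript
// Convert a dotted-quad IP string to an unsigned 32-bit integer, and back
function ipToInt(ip) {
    return ip.split(".").reduce(function (acc, octet) {
        return (acc << 8) + parseInt(octet, 10);
    }, 0) >>> 0;
}

function intToIp(n) {
    return [(n >>> 24) & 255, (n >>> 16) & 255, (n >>> 8) & 255, n & 255].join(".");
}

// convertCIDRToIPMask: e.g. 24 -> "255.255.255.0"
function convertCIDRToIPMask(cidr) {
    return intToIp(cidr === 0 ? 0 : (~0 << (32 - cidr)) >>> 0);
}

// convertIPMaskToCIDR: e.g. "255.255.254.0" -> 23 (count the set bits)
function convertIPMaskToCIDR(mask) {
    return ipToInt(mask).toString(2).split("1").length - 1;
}

// getIPNetworkAddress: AND the address with the mask
function getIPNetworkAddress(ip, mask) {
    return intToIp((ipToInt(ip) & ipToInt(mask)) >>> 0);
}

// getIPBroadcastAddress: OR the address with the inverted mask
function getIPBroadcastAddress(ip, mask) {
    return intToIp((ipToInt(ip) | (~ipToInt(mask) >>> 0)) >>> 0);
}

// areIPsInSameSubnet: same network address means same subnet
function areIPsInSameSubnet(ip1, ip2, mask) {
    return getIPNetworkAddress(ip1, mask) === getIPNetworkAddress(ip2, mask);
}

// Examples from the list above:
// getIPNetworkAddress("10.10.36.12", "255.255.255.0")   -> "10.10.36.0"
// getIPBroadcastAddress("10.10.36.12", "255.255.255.0") -> "10.10.36.255"
```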
I’ve put all 7 actions individually in a GitHub repository along with a package that contains them all. (Note: some of the actions have dependencies on the others.) They’re free to use, so help yourself.
I was absent from the VMUG scene for the entirety of 2016 for a number of reasons. I’m determined to make amends this year and I’m jumping in at the deep end!
A week from today is the first South West UK VMUG of 2017, and I’m going to be presenting a session. Registration for the event is open now, so if you’re in the area, drop by. As usual, it’s at M Shed in Bristol.
At the recent vRetreat at Silverstone, I sat through three technical presentations / Q&A sessions from the event sponsors. One of these sponsors, Cohesity, I was charged with writing a little more about. Up until that point, my experience and knowledge of Cohesity’s solutions was very limited, as I’ve had my head buried in several large projects over recent months. Ezat Dayeh’s presentation at the vRetreat was therefore a great introduction for me to Cohesity’s mission and value proposition.
Cohesity was founded in 2013 by Mohit Aron, former co-founder of Nutanix and a Google File System lead developer. With this DNA, it’s no real surprise that Cohesity’s solutions have a storage focus. The difference with Cohesity is that its focus is not around primary storage (production virtual machines, databases etc), but secondary storage (file shares, backups, archives etc). Their mission is to redefine that secondary storage market.
What is secondary storage?
Cohesity estimate that around 80% of an enterprise’s storage needs are for secondary data, and that the majority of the storage market incumbents are focussed on primary storage. Obviously the picture will differ from customer to customer but, in many cases, this secondary storage will be distributed across various platforms and may even be stored more than once. This could lead to problems with regulatory compliance, operational costs, and even just maintaining a view of what data is being retained.
The Cohesity solution
Cohesity’s solution is based on a hyper-converged infrastructure platform built from commodity hardware. Of course the hardware isn’t the whole story, not even close to it. But we’ll come on to the software part of it in a minute.
The C2000 series chassis offers 4 HA nodes in 2U of rack space and there are no stated limits when it comes to scalability. The obvious advantage to this over some of the more “traditional” storage solutions is of course that you can start small and grow it. This is a model that many newer solutions are opting for and it seems to work well for them, so why not Cohesity too 🙂
Cohesity’s special sauce, its software, is where the clever stuff happens. One of the primary use cases for Cohesity is as a backup target, or to provide an alternative backup solution. Cohesity can be a backup target for your existing backup software (Veeam, another of the vRetreat’s sponsors, being one of the cited examples). Alternatively, Cohesity can pull in the inventory from vCenter so that VMs can be backed up on a schedule using snapshots. Protected virtual machines can be restored swiftly and even used for test and development workloads. Restored workloads are placed on the Cohesity platform initially and then Storage vMotioned back to the correct location later.
Cohesity’s CloudArchive solution opens up the option of archiving cold data to public cloud services like Amazon S3, or to NFS-based services. Once enabled, it’s all automated.
CloudReplicate is a version of Cohesity that runs in the public cloud and enables a number of interesting use cases. One is DR in the cloud: Azure is supported, with AWS coming soon. Another is using such cloud services for test and development environments, particularly for geographically dispersed teams.
Another area that Cohesity are actively working on is data analytics. They predict that in 3 to 4 years’ time it’ll be a huge use case. Add deduplication, an “API first” development approach and built-in HA to the mix, and you have an interesting solution emerging.
My thoughts on Cohesity? Based on Ezat’s presentation, Cohesity looks to have found an area that isn’t fully exploited yet. Most other vendors so far have been focussed on the cream at the top of the bottle (I had a manager once who raved about gold top milk) and, in some cases, happy to drink the rest too. Cohesity almost seem to be saying “You have the cream, we’ll have the rest of the bottle.” Will they be successful? I think they will. Ezat shared with us that their EMEA sales operation was doing well in its first four months of operating. But I’d wager that their successes will draw other players into the space they’re trying to carve out.
I’d like to hear from some of Cohesity’s customers at some point to understand how it’s helped them. There’s nothing better than a good customer use case! Of course, some potential customers are going to be wedded to other vendors and some may be doing just fine managing their data with their primary storage. But it’s a big marketplace out there if the USP is right.
The day started early with breakfast at the Porsche experience centre. After a few minutes drooling over looking at the cars inside the experience centre, we all sat down to hear from each of the sponsors in turn. Each of us was assigned a sponsor to cover in more detail. I’ll be covering Cohesity in a subsequent post.
First up was Zerto’s Darren Swift. After going over the company’s history, Darren launched into two detailed use cases for Zerto’s products. One of these, admittedly a “corner case”, dealt with recovery from a ransomware attack using Zerto. Both use cases were interesting and Darren presented them well. He completed his session by going into some more detail about the product architecture and its scalability. He also took our questions at the end and throughout.
I’ve not had much opportunity to work with Zerto in the recent past. Their API is of great interest to me though. Maybe later this year I’ll get some time to explore automating Zerto through vRO. There are quite a few PowerShell examples out there already, but I’d like to get it working with vRO.
Fellow vExpert Michael Cade was on hand next to represent Veeam. Their current focus is on availability for the enterprise. The most recent version (9.5) of the Veeam Availability Suite includes a number of enhancements including direct restore to Microsoft Azure with the restored VM being pre-converted to the correct format. Veeam are also introducing agents to offer better interaction with public clouds and physical servers / workstations.
Veeam have been in business for a decade now. Some of their products have their origins in VI3 and vSphere 4 and have evolved from there. But they’re gradually transforming to adopt an API-driven product approach. Combined with their wider coverage of the enterprise, they’re worth keeping an eye on, in my opinion.
The newest of the day’s three sponsors, with their EMEA sales operation only starting up last September, was represented by Ezat Dayeh. Cohesity’s goal is to address what they see as the gap in the secondary storage market; no, to redefine the secondary storage market.
Following lunch in the experience centre’s restaurant and a quick safety briefing, we were introduced to our Porsche Driving Consultants (I think that was their title). They would be our guides to the various tracks and features and help us get the best from the selection of Caymans and 911s available to us. My PDC, Ben, spent a while talking to me first, trying to gauge my comfort level and experience before we headed out in this 718 Cayman.
For those who don’t know their Porsches (certainly not my nephew, who’s a certifiable Porsche nut at the age of 6), the Cayman is a mid-engined, 2 litre car that puts out 300 BHP. I spent a few laps on the main track, building up some speed around the various corners and getting a feel for the car, until Ben introduced me to the kick plate. This is a section of very wet track that throws the back end of the car to the left or the right at random to teach you how to control a skid. Great fun!
I also got to try out the car’s launch control, which was a bit brutal.
My second car of the afternoon was a 911 Carrera. Thankfully Joe Baguley hadn’t broken it 😉
As a rear-engined, 3 litre flat six, the 911 was a bit of a different beast from the Cayman. I did quite a few laps in it and also revisited the kick plate. Also on the menu were a number of laps on the low-friction circuit. By the time the light was fading and the rain started to fall, I was wiped out. But it was great fun! I’d totally recommend the experience to anyone.
Final Event Thoughts
I’m a big fan of VMUGs and other IT community events. This one was very different from any I’ve attended in the past, though. Patrick put together a very informative and enjoyable event, and I’m very grateful for the opportunity to attend. My thanks to him for inviting me, but especially to Cohesity, Veeam and Zerto for making it possible.
I must confess that I’d never driven a Porsche before. Growing up, I had loads of Matchbox cars and a number of them were Porsches. (I actually still have one upstairs somewhere.) Ben, my PDC, and the cars themselves did a pretty good job of making me want one. The day had one final surprise in store for me though: a quick trip as a passenger in Joe Baguley’s Tesla. I want one! Ludicrous mode is well named, but felt much more controlled than the Porsche launch control experience earlier.
I digress though. I think that the event was a success and it all ran very smoothly. I hope that Patrick is successful in planning more events like this; I would certainly accept an invitation without hesitation. For now though, here’s the event video that Patrick commissioned:
And here’s how not to do it. The whole Porsche team were professional: they let you have fun and challenge yourself, but they knew what they were doing. Thank you, Porsche!
This year, VMworld US and VMworld Europe are closer together. Only two weeks separate the two events. I’m interested to see how that changes the makeup of the sessions and announcements. Of course, to make the best assessment I’d have to be at both conferences. Time to talk to my manager and get some session ideas submitted!
If you’ve been using vRO (formerly vCO) for any length of time, you might have encountered a third-party source code repository service called “FlowGrab” in your travels. It was a great idea and had great potential, but its owners weren’t able to maintain and develop it as they wanted to. Sadly, the service was shut down at the end of October.
Dear FlowGrab user and supporter
We notified in spring 2016 that FlowGrab is looking for a new, good owner. Today we can state that after all the search and negotiations we have not found it. Therefore we have to execute the scenario that we hoped to avoid – to shut down and unplug FlowGrab.com.
If you are an active user – please export your content from FlowGrab.com before 31st October as this is the last day for FlowGrab.com to be available.
If you consider to be a new and good owner for FlowGrab and continue what we started – continue development of a source code repository for VMware cloud automation portfolio, supporting VMware vRA and vRO customers with a solution that increases the productivity, visibility and compliance, please let us know. We will provide you all the information you need to make the decision to purchase FlowGrab in its entirety.
Regardless of further discussions 31st October 2016 FlowGrab.com will be unplugged.