For some, the Solutions Exchange at VMworld is the place to get free pens and stress balls in between conference sessions. For others, myself included, it’s an opportunity to learn about products related to, or supporting, the virtualisation industry. I spent several hours in there talking to representatives of vendors whose products I have used in the past, might use on upcoming projects, or am simply curious about.
What follows is a summary of some of the vendors (besides the VMware stand, of course) that I visited.
I’ve included QNAP, and visited them, for one fairly simple reason – I own one of their devices! OK, my QNAP NAS is fairly small (two drives in a RAID 1 configuration) and I mostly use it as a file server, media server and NFS datastore, but it’s very versatile.
Historically, most of their models seemed aimed at small offices, homes and small businesses, which made their appearance at VMworld a little surprising in some respects. QNAP, it seems, have grander plans than the SOHO market. Last month they launched two new, larger devices (the x79 series) that appear to be aimed at the SME market.
QNAP’s devices have been VMware certified for a couple of years now, and mine has worked well (albeit a little slowly) as an NFS / iSCSI storage location. That certification has been maintained on each new model since, and the x79 series is no different. A single device with up to 12 drive spindles must surely be aimed at the SME market – indeed, my conversation with the QNAP representative backed that thought up.
Ever since Duncan Epping first blogged about Tintri I have kept half an eye on them. Tintri’s primary object of concern is the virtual disk. That is to say, rather than being a block storage device that offers LUNs, or a datastore that stores and manages VMs as a collection of files, a Tintri presents a single NFS datastore that manages each VM as a deployment of one or more virtual disks.
Inside the Tintri are a mere 8 disk spindles and 2.5TB of SSD. In total, 13.5TB of usable space is presented to connecting ESXi hosts, and the device and its storage are managed through a web interface – not that much management is normally necessary. Once the device is configured it can more or less be left to handle things itself.
The design aim of the Tintri is that all hosted VMs are served from the onboard SSDs. Tintri’s software (the clever bit) employs a mixture of techniques, including in-line de-duplication, to achieve this aim. The management interface then shows two rather useful gauges for virtual infrastructure administrators: one shows available capacity, the other available performance. The latter gauge is the important one. Whilst it reads less than 100%, there’s enough performance headroom to deploy more VMs. Use it all up, though, and performance will drop off significantly.
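As a toy illustration of the idea – this is not Tintri’s implementation, and all the figures and names below are invented – the two gauges amount to tracking fractional headroom on two separate axes:

```python
def headroom(used_iops, max_iops, used_gb, max_gb):
    """Return (performance, capacity) headroom as fractions between 0 and 1.

    Purely illustrative: a real appliance would derive its performance
    figure from latency and flash hit-rate measurements, not a raw
    IOPS ratio.
    """
    return 1 - used_iops / max_iops, 1 - used_gb / max_gb

# Invented example: 45k of 60k IOPS and 9TB of 13.5TB in use.
perf, cap = headroom(45_000, 60_000, 9_000, 13_500)

# The "less than 100% used" rule of thumb from the performance gauge.
can_deploy_more = perf > 0
```

The point the gauges make is that the two can diverge: here the box still has a third of its capacity free but only a quarter of its performance, so performance, not space, is what limits further VM deployments.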
Tintri appear to have made a very logical step forward in the evolution of storage for virtual infrastructures. They still have a little way to go, though. Although they recently introduced dual controllers, which can be upgraded independently of each other, they still lack any form of replication and are not yet VAAI capable.
The future looks bright for Tintri if they can add those two features (rumoured to be arriving early next year). There are certainly some compelling use cases, but Tintri will find themselves battling against the more established storage vendors: bigger marketing and sales budgets vs new but exciting technology. It will be an interesting competition.
Mike Laverick also had a recent vendorwag with Tintri that’s worth a listen.
I recently spent 12 months working for a Xsigo customer. Xsigo are enjoying fairly healthy interest (and presumably sales too) in their products. Having a single, high-bandwidth connection to an ESXi host for all of its I/O may worry some and confuse others, but there are advantages to it – simpler cabling being an obvious one.
Xsigo works by presenting virtual NICs to the host over a Converged Network Adapter (CNA). These NICs can be assigned to different Standard or Distributed vSwitches as required. On the Xsigo Director, the virtual NICs are connected to local uplinks (Fibre Channel, 1Gb Ethernet or 10Gb Ethernet). These uplinks can be shared by more than one virtual NIC, and the various configuration options let you segregate traffic quite neatly and apply differing QoS policies.
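Conceptually – the names and QoS figures below are invented for illustration and bear no relation to Xsigo’s actual configuration syntax – the mapping looks something like this:

```python
from dataclasses import dataclass

@dataclass
class VirtualNIC:
    name: str
    uplink: str          # Director uplink this vNIC is mapped to
    qos_limit_mbps: int  # per-vNIC rate cap, purely illustrative

# Two vNICs sharing one 10GbE uplink, each with its own QoS policy.
vnics = [
    VirtualNIC("mgmt0", uplink="10GbE-1", qos_limit_mbps=1_000),
    VirtualNIC("vmotion0", uplink="10GbE-1", qos_limit_mbps=4_000),
]

# Group vNICs by the uplink they share on the Director.
by_uplink: dict[str, list[str]] = {}
for v in vnics:
    by_uplink.setdefault(v.uplink, []).append(v.name)
```

The design point is that the host only ever sees ordinary-looking NICs; the many-to-one mapping onto physical uplinks, and the QoS applied per virtual NIC, all live in the Director.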
Later versions of the software running on the Directors offer greater configuration and reporting potential than was previously possible, and in previous years Xsigo kit has been used to drive VMworld’s Hands-on Labs.
Competition for Xsigo comes from the established network vendors (e.g. Cisco). At present, though, the 20 / 40Gb bandwidth available between hosts using Xsigo is a better headline figure than other vendors can offer. It won’t remain that way for long, in my opinion.
I’ve been meaning to try out vKernel’s vOperations Suite 4 for a few weeks now. In light of VMware’s product launches at VMworld, I couldn’t help but pop by and have a chat.
I’m reasonably familiar with what vKernel offer, and whilst I don’t have an immediate need in that area, I was keen to find out how they felt about the upcoming release of vCenter Operations Suite 5 (interesting how similar the names are). The impression I was given (not that I was expecting panic) was that they are fairly unconcerned. They don’t see VMware as their big competition in this area: their product is priced differently, and they see that as a big differentiator. In fact, they were more interested in promoting their new free tool, vScope Explorer, which launches this week.
Here, like a few other vendors, vKernel are giving away a limited subset of the functionality of their full products for free. Without wishing to teach anyone to suck marketing eggs, the purpose of this is to get people used to their tools, provide them with useful and valuable information, and then be there to pick up some business afterwards. Nothing wrong with that at all, and the demo I saw whilst Eric Sloof was interviewing Jonathan Klick of vKernel was good.
Nimbula Director (coincidentally at version 1.5, like vCD) was on my list as a possible product for an upcoming project. Like vCD it is a cloud management system, but it uses KVM as its hypervisor of choice. Nimbula was formed by some of the team that developed Amazon EC2 (Amazon’s public cloud offering), and integration with EC2 is one of the key features. Indeed, you can use Nimbula to manage EC2 instances, to deploy and manage local hosts, or a combination of the two. It also works with VMware’s Cloud Foundry. Small, presumably relatively unsupported, Nimbula deployments are free.
Nimbula have a great-looking product with plenty of automation and scalability built in. There are features aplenty, and a RESTful HTTP API if you want or need it. The demo I was shown was fairly slick at any rate. My only concern: if you already have one hypervisor, why would you want another? Whether it proves useful for my project is another matter.
My primary reason for visiting Riverbed (who now own the Zeus Traffic Manager – a virtual appliance offering highly configurable load balancing) was to talk to them about their APIs for a project that I’m working on. I was told that they do have a RESTful web services API that allows any configuration possible through the management interface to be completed programmatically. Job accomplished, thanks to the nice people on the stand.
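To give a feel for what configuring an appliance over a RESTful API looks like in general – the host, port and resource path below are invented placeholders, not Zeus’s actual API schema, so check Riverbed’s documentation for the real endpoints and authentication scheme:

```python
from urllib import request

# Hypothetical management address and resource path for illustration only.
BASE = "https://zeus.example.com:9070/api/config"

# Build a GET for a (made-up) collection of virtual server configurations.
req = request.Request(f"{BASE}/virtual_servers/", method="GET")
req.add_header("Accept", "application/json")

# The request is built but deliberately not sent here; request.urlopen(req)
# would perform the call, given valid credentials and TLS verification.
```

The attraction of this style of API is exactly what the stand staff described: anything you can click through in the management interface maps to an HTTP resource you can script against.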
Something I’d like to look into in the future, however, is some sort of performance comparison between virtual and physical load balancers. F5, who I also visited, maintain that physical is the only way to go and that virtual appliances put too much strain on virtual resources to be effective. It’d be interesting to see the two pitted against each other in some sort of objective test. I’ll have to google and see if anyone has done that already.