
vRetreat London 2018 and Zerto Virtual Replication 6.0

I was lucky enough last year to be invited to the inaugural UK vRetreat, organised by fellow vExpert Patrick Redknap. If you’ve not encountered a vRetreat before, it’s an event where a small delegation of bloggers is invited to pick apart presentations from the event’s sponsors.

Following on from Silverstone in 2017, I had assumed that I’d had my shot and that other bloggers would get their chance at the next event. Fast-forward to February 2018, and I again found myself sitting down with a number of quality vCommunity members to exchange stories and, most importantly for vRetreat, listen to some detailed presentations from a select delegation of IT vendors.

The Crypt - yes, it's in a church!

One key difference between the two events (I think that Barry Coombs and I are the only two attendees of both) was the venue and the “extra-curricular activity”. Instead of the Porsche driving experience from last year, we would all be entering the Crystal Maze. (Great fun, especially if you remember the TV game show, although the team I was on had two people carrying injuries and, to be honest, we sucked!)

The venue for the daytime, technical part of the event was also familiar to me from the numerous times that I’ve been to CloudCamp in London. Ominously named “The Crypt”, it is in fact part of a church near Farringdon.

Back to the purpose of the vRetreat. Although I mentioned presentations before, the idea is that each session starts out that way but, with a smaller audience, becomes more interactive as attendees ask the kind of detailed questions that you might not get in a larger setting. On this particular occasion, we had the pleasure of hearing from Zerto and Cohesity. With the room divided between the two vendors, it fell to me to cover Zerto.


IT Disaster Recovery Preparedness Benchmark

Disaster Recovery (and Business Continuity) were sometimes an afterthought even as recently as a few years ago. When I started out in IT, the attitude was usually similar to that of an ostrich burying its head in the sand. Thankfully, times have clearly moved on.

Yesterday a press release was brought to my attention that I’d like to share. It concerns a new research advisory council that has been created to help provide IT professionals (and, by extension, businesses) with a reflective measure of how prepared they are to handle Disaster Recovery situations. The DRP Council, as it’s known, have launched an online survey that takes just a few minutes to complete:

As recent cyber-attacks and natural disaster events have shown, the need for IT disaster recovery preparedness has never been greater. However, research indicates that less than half of all companies have a disaster recovery plan in place, and even fewer have actually tested their plans to see if they will work as expected.

This need to uncover the value of disaster recovery planning and testing, as well as to gain a better understanding of DR best practices that make preparedness more cost-effective and efficient, was the driving force behind the recently created Disaster Recovery Preparedness (DRP) Council. Formed by IT, business, government and academic leaders to address these issues, its mission is to increase DR preparedness awareness and improve DR practices.

The DRP Council has developed an online Disaster Recovery Preparedness Benchmark (DRPB) Survey.  The survey is designed to give business continuity, disaster recovery, compliance audit and risk management professionals a measure of their own preparedness in recovering critical IT systems running in virtual environments.

Founding members of the DRP Council include:

  • Steve Kahan, Council Chairman, PHD Virtual
  • Dave Simpson, Sr. Analyst, 451 Group
  • Bilal Hashmi, Sr. Systems Engineer, Verizon
  • Michael Sink, Director Data Center Technologies, University of South Florida
  • Steve Lambropoulos, University of South Florida
  • Darren Hirons, Principal Systems Engineer, UK Health & Social Information Centre
  • Trystan Trenberth, CEO and Managing Director, Trenberth LTD
  • Riaan Hamman, CTO, Puleng Technologies
  • Carlos Escapa, Council Research Director, PHD Virtual
  • Anita DuBose, Council Research Director, PHD Virtual

“Users can now benchmark their own disaster recovery preparedness and find out real answers on how they would be able to get their IT systems up and running within a realistic time-frame to meet stringent business requirements,” said Steve Kahan, Chairman of the DRP Council. “Just 10 minutes of their time will provide them with some immediate feedback and a benchmark score that rates their DR preparedness against other companies that have participated.”

“I am unsure if our current best practices are the best or most efficient ways to deliver our SLA,” said Darren Hirons, Principal Systems Engineer, UK Health & Social Information Centre. “Learning about best practices through the Disaster Recovery Preparedness Benchmark could help us learn new ways to shorten the SLAs and deliver better service to our businesses.”

The DRPB survey provides a benchmarking score from 0-100 that measures the implementation of IT disaster recovery best practices. DRPB benchmarking scores parallel the grading system familiar to most students in North America, whereby a score of 90-100 is an “A” or superior grade; 80-89 is a “B” or above average grade; 70-79 is a “C” or average grade; and 60-69 is a “D” or unsatisfactory grade. Below 60 rates as an “F”, or failing grade.
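For the curious, that banding is simple enough to express directly. Here’s a quick illustrative translation into Python (my own sketch of the scale described above, not anything published by the Council):

```python
def drpb_grade(score):
    """Map a 0-100 DRPB benchmark score to its letter grade, per the bands above."""
    if score >= 90:
        return 'A'  # superior
    if score >= 80:
        return 'B'  # above average
    if score >= 70:
        return 'C'  # average
    if score >= 60:
        return 'D'  # unsatisfactory
    return 'F'      # failing
```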

Supporting Resources

Disaster Recovery Preparedness Council: http://drbenchmark.org/about-us/our-council/

Disaster Recovery Benchmark Test:  http://drbenchmark.org/benchmark-survey/survey-overview/

About the Disaster Recovery Preparedness Council

The DRPC is an independent research group engaged in IT disaster recovery management, research, and benchmarking in order to deliver practical guidance for how to improve Business Continuity and Disaster Recovery. www.drbenchmark.org

As a consultant, I don’t have anything but lab environments of my own that I can base responses on. If you manage a production environment though, I’d urge you to take a few minutes to complete the survey.

Cheers!


Synology DS1513+ Released

The Synology DS1512+ has been a popular choice for many home labs in recent years. I hoped that the company’s raft of recent product updates would reach this model eventually. Well, my wish was granted: Synology have announced the DS1513+.

There are a few modifications to note. The one that stands out most at first glance is the doubling of LAN capability: the DS1513+ boasts no fewer than 4 RJ45 ports. That does seem like quite a lot, but it does open up some interesting possibilities, such as link aggregation for extra throughput or resilience…

The full specifications for the DS1513+ can be found here.


VMTurbo Make Monitoring Free

Today, VMTurbo have launched their Virtual Health Monitor tool and they are letting it loose on the world for the whopping figure of… wait for it…

$0 – That’s right, free.

The tool is an updated and evolved version of the Community Edition of VMTurbo’s Operations Manager product and comes without restrictions on where you deploy it, how often you deploy it, or what it monitors. OK, that needs a little unpacking.

The tool is downloaded as an appliance from VMTurbo’s website in a format optimised for one of the following platforms:

  • VMware vSphere
  • Microsoft Hyper-V
  • Red Hat Enterprise Virtualization (RHEV)
  • Citrix XenServer

The format of the appliance is the only difference you should find between the versions though, as each is capable of monitoring all of these platforms at the same time. You simply download the format that matches the virtual infrastructure where you want to host the tool.

The features that VMTurbo offer with the tool include:

  • Instant visibility to health and performance;
  • Unlimited use across virtual data centers of any size;
  • Free monitoring and reporting for any hypervisor;
  • Lowest total cost of ownership (TCO) due to innovative product architecture;
  • Weekly analysis of utilisation rates and areas to improve efficiency and reduce risk.

As I’m waiting for the 428MB appliance to download over the wet bit of string that is my broadband tonight, I can’t yet speak to the experience of deploying it or what it looks like, but I hope to have the time to kick the tyres on it tomorrow.

Download Virtual Health Monitor from VMTurbo’s website.


vOpenData – Shared Virtual Infrastructure Statistics

Whether you love or loathe VMware and their products, one area that you can’t fault is the community that’s built up around them. In that community, blood, sweat, tears and a dash of brilliance have produced many amazing things. vOpenData looks like it could be one of them.

vOpenData is the brainchild of Ben Thomas and was built with William Lam and the assistance of several other VMware community members. Essentially, it is a public database of VMware virtual infrastructure statistics and configurations. Users download a script that collects some anonymous data about their infrastructure. Once uploaded and added to the database, the data contributes to a plethora of publicly available statistics.

At the time of writing there are over 50,000 VMs in the database. The average VMDK size is just over 70GB. For me, as a techie / evangelist / consultant, this is useful information, and there’s so much more there besides. Here’s a quick grab from the public dashboard:

[Screenshot: vOpenData public dashboard]
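To give a flavour of the kind of anonymised data involved, here’s a rough sketch of how a collection script might count VMs and average VMDK sizes using pyVmomi, VMware’s Python SDK. To be clear, the connection details are made up and this is my own illustration, not the actual vOpenData script:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Hypothetical vCenter connection details - adjust for your environment
si = SmartConnect(host='vcenter.example.com', user='readonly', pwd='password',
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Walk every VM in the inventory
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)

vm_count = 0
disk_sizes_gb = []
for vm in view.view:
    if vm.config is None:  # skip VMs whose configuration isn't available
        continue
    vm_count += 1
    for device in vm.config.hardware.device:
        if isinstance(device, vim.vm.device.VirtualDisk):
            disk_sizes_gb.append(device.capacityInKB / (1024.0 * 1024.0))

view.Destroy()
Disconnect(si)

print('VMs: %d' % vm_count)
if disk_sizes_gb:
    print('Average VMDK size: %.1f GB' % (sum(disk_sizes_gb) / len(disk_sizes_gb)))
```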

As a community project, its value is huge and will only grow as more people contribute data to it. Head over to the vOpenData website and find out more.


The End of the VMTN Saga?

If you don’t know what VMTN is, you might be new to VMware virtualisation or the IT industry. Either way, I have an older post that covers it a bit. I posted it in November 2011, just as the campaign to get the VMTN subscription reinstated by VMware was kicking off.

Here we are though, nearly 18 months later, and it looks like it’s not going to happen. Mike Laverick, one of VMTN’s biggest proponents, posted as much today on the VMware Communities thread related to VMTN. In his words:

The prevailing view appears to be that other projects will be sufficient… Such as Project Nee…

Project NEE is VMware’s online learning resource that’s currently being put through its paces. If you read around what it does, you can see why VMware might view the resurrection of VMTN as unnecessary. Whilst it’s a disappointment to people who run home lab setups, want to run legitimate workplace labs and prototypes, etc., I don’t think that it’s necessarily the end of the world. The level of automation / orchestration possible in VMware’s suite of products means that re-installs don’t have to take an age to complete. In fact, I want to rip and rebuild my lab regularly because those are exactly the sorts of tasks and skills that I want to hone. I don’t want my lab to sit and age like some legacy infrastructure. I appreciate though that others may not share my views or enthusiasm.
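To illustrate the point about automation, creating the shell of a lab VM programmatically takes only a handful of lines with pyVmomi, VMware’s Python SDK. This is a minimal sketch with invented inventory names and the simplest possible ConfigSpec, not a complete rebuild script:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Hypothetical home lab vCenter details
si = SmartConnect(host='vcenter.lab.local', user='administrator@vsphere.local',
                  pwd='password', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
datacenter = content.rootFolder.childEntity[0]

# Minimal VM shell: name, CPU, memory and a home on a datastore
config = vim.vm.ConfigSpec(
    name='lab-dc01',
    numCPUs=2,
    memoryMB=2048,
    guestId='windows8Server64Guest',
    files=vim.vm.FileInfo(vmPathName='[datastore1]'))

pool = datacenter.hostFolder.childEntity[0].resourcePool
task = datacenter.vmFolder.CreateVM_Task(config=config, pool=pool)

Disconnect(si)
```

Wrap a few calls like that in a script and a lab rebuild becomes a coffee-break job rather than an evening’s work.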

Either way, my advice is not to hold your breath in the hope of a change of heart. If it’s true that VMTN is going to stay dead, VMware have made this decision with their heads and not their hearts. My head says, keep calm and roll with it*.

* (@h0bbel, another one for your collection?)


Get Your Homelab in the Clouds with AutoLab

Since we have a small but significant following of people who run home labs here on vSpecialist, I thought I’d mention a limited offer that may be of interest.

If you’re not familiar with AutoLab, it’s designed to produce a nested vSphere 5.1, 5.0 or 4.1 lab environment with minimum effort. Prebuilt open-source VMs and the shells of other VMs are used, along with automation, to install operating systems and applications into those VMs; the end result is a useful home lab that you can stand up from scratch in a short amount of time.

Anyway, it’s possible to get an AutoLab set up and running in the cloud, and BareMetalCloud actually offer it as a service. Mike Laverick has discount codes available (use MAGICMIKE100) for the first 100 people to take up the service. Check out his post on the topic for more details and help on getting started.


First Impressions: PHD Virtual Backup 6.2 (with Cloud Hook)

As I mentioned in my recent Cloud Backups post, I’m trying out a few virtualisation backup products to help me out with a prototype infrastructure that I’ve been working on. I want to store a backup of the various VMs that I’ve set up outside of that infrastructure – effectively offsite.

By happy circumstance, PHD Virtual had a beta running for version 6.2 of their backup product that includes “CloudHook”. It’s a module that enables integration with cloud storage providers for the purposes of backup, archiving and disaster recovery. The 6.2 release covers the backup aspect, and future releases will add in archiving and DR functionality. Thanks to Patrick Redknap, I managed to hop onto the beta and try it out. (Note that the screenshots below come from a beta release and may have changed for GA.)

PHD’s Virtual Backup product is delivered as a Virtual Backup Appliance. A few years ago I was wary of running production services on dedicated virtual appliances, but I’ve changed my view over time and I now really like using them. (That’s probably a subject for a different post though.) I won’t go through the mechanics of the installation in nauseating detail, but basically it breaks down to the following high-level steps:

  1. Download and unzip the virtual appliance
  2. Use the vSphere Client to import and deploy the appliance (requires 8GB disk space, 1 vCPU, 1GB memory and a connection to 1 port group in its default configuration) – see the sketch after this list for a scripted alternative
  3. Open the VM’s console and enter some network information
  4. Reboot the appliance
  5. Install the PHD Virtual Backup Client
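Incidentally, step 2 needn’t be a point-and-click exercise. Here’s a rough sketch of the same import driven from Python via VMware’s ovftool; the appliance filename, credentials and inventory path are placeholders of my own, so adjust them to suit your environment:

```python
import subprocess

# Hypothetical names and inventory locations - adjust for your environment
subprocess.check_call([
    'ovftool',
    '--acceptAllEulas',
    '--name=PHDVBA01',                 # name for the deployed appliance
    '--datastore=datastore1',          # target datastore
    '--diskMode=thin',                 # thin provision the 8GB disk
    '--network=VM Network',            # port group for the single vNIC
    'PHDVirtualBackup.ovf',            # unzipped appliance from step 1
    'vi://administrator@vcenter.example.com/Datacenter/host/Cluster',
])
```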

Configuring the appliance for use is pretty straightforward, although if, as I did, you have to make multiple hops to get to your data center (RDP over RDP over VPN, for complicated reasons that I can’t go into), you might find that the PHD Virtual client doesn’t play too nicely with a lack of screen space. I could only just get to the “Save” button. (Granted, it’s an unlikely situation to be in.) The minimum required is to connect the appliance to vCenter (see the General tab of the Configuration section):

[Screenshot: connecting the appliance to vCenter from the Configuration section’s General tab]

Normally at this point you’d expect to have to configure some disk space local to the backup appliance (or network storage space). Well, you still do really, but you now have a choice to make: where do you want to back up to?


“Cloud” Backups

An increasing number of vendors are beginning to offer backup solutions where your data ends up being stored on some cloud storage platform or other (e.g. Amazon S3). As with any new technology, some people will lap it up, some will keep a curious eye on it and others will eschew it completely. Which are you: likely to adopt it or not?

I think the answer to that is not cut and dried. Think for a minute about why you’d want your backups to end up on a cloud storage platform. In years past, backups ended up on tape cartridges. Most sensible organisations would then store those tapes offsite and hopefully not need them again until the data expired. Of course, if you did need to perform a restore, it meant getting the tape back, etc. I’ve been in this industry long enough to have had to do that.

The point anyway is that backup data conventionally got stored offsite so that it was available if the worst happened. That is the concept behind cloud backups too; only the medium has changed. So instead of your backups ending up on tape, they effectively end up on someone else’s server. You don’t know exactly where, but you rely on the resilience of your chosen cloud storage provider to safeguard that data.

Is It a Good Idea?

In my view, it’s a neat solution to something that used to take up a good deal of time for me or one of my colleagues a few years ago. The whole process is automated once set up. Of course, it may not be the right solution for everyone, for one or more of the following reasons:

  • Available Bandwidth – If you’re sitting on the end of a slow link to the internet, then trying to push many GBs or even TBs of data to a cloud storage provider every day is going to be a non-starter.
  • Volume of Data – Related to the above: how much data do you back up, how often, and how often does it change? The first backup will typically take the longest to complete, but subsequent ones will be quicker. Partly, though, this will depend on the mechanisms the backup vendor uses to minimise the volume of data being transmitted (a simple sketch of one such mechanism follows this list), and different vendors are likely to have different approaches here.
  • Legal / Compliance / Security – If you’re storing your data on someone else’s infrastructure you naturally want it to be secure. I’m not saying that the cloud isn’t secure but is it the right place for exceedingly valuable or sensitive data? You wouldn’t keep the Crown Jewels in a Big Yellow storage facility.
  • Existing DR Facility – You may even have a Disaster Recovery facility and back up directly to that.
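On the volume-of-data point above, the sketch below illustrates one common bandwidth-saving mechanism: split the backup file into fixed-size chunks, hash each chunk, and only transmit the chunks that have changed since the last run. It’s a deliberate simplification with invented details; real products vary considerably:

```python
import hashlib

CHUNK_SIZE = 4 * 1024 * 1024  # 4MB chunks; real products vary

def plan_incremental(path, previous_hashes):
    """Return (changed_chunk_indices, new_hashes) for a chunked backup file."""
    new_hashes = []
    changed = []
    with open(path, 'rb') as f:
        index = 0
        while True:
            data = f.read(CHUNK_SIZE)
            if not data:
                break
            digest = hashlib.sha256(data).hexdigest()
            new_hashes.append(digest)
            # Transmit this chunk if it's new or differs from the last run
            if index >= len(previous_hashes) or previous_hashes[index] != digest:
                changed.append(index)
            index += 1
    return changed, new_hashes
```

Note that on a first run every chunk counts as changed, which is why that initial backup takes the longest.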

As with everything in IT, the answer is that it depends. I suspect that the majority of takers for cloud backups will be small and mid-sized businesses, although I’m always happy to be proved wrong about such predictions. I doubt that cloud backups are going to be a rapidly passing fad, but it remains to be seen whether they will see massive adoption. Still, cool technology all the same.

So, what’s my interest? Well, I’ve been working on a project recently to create and support the infrastructure elements of a software prototype. This modest infrastructure is sitting away in a data center that I’ve never been to and could not easily access. It’s quite a simple setup, it’s documented, and we have all of the installation files and source code secured offsite. The infrastructure itself, though, represents many hours of effort, and the application server configurations are not completely automated. If we were to lose the infrastructure or the data center…

Of course we’re running backups locally, but the backup destination is just a VMDK on the same datastore as all of the VMs – not very resilient. On a semi-regular basis I have transferred the VMDK to a cloud storage provider, but it’s been a manual process, so I thought I’d take this opportunity to try out a couple of different backup solutions and see how they help. Over the next few weeks I’ll post a couple of reviews.
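As a taste of what those products will be replacing, here’s a rough sketch of that manual VMDK transfer scripted with boto3, the AWS SDK for Python; the bucket and file names are placeholders of my own, and upload_file conveniently handles multipart uploads of large files:

```python
import boto3

# Placeholder names - swap in your own bucket and backup file
s3 = boto3.client('s3')
s3.upload_file(
    'backups/prototype.vmdk',        # local backup VMDK
    'my-offsite-backups',            # destination S3 bucket
    'prototype/prototype.vmdk',      # object key in the bucket
)
```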