Unitrends Free – Review: Part 1

I was asked to give a new, free backup tool a quick road-test recently.

Unitrends have had an Enterprise version of their backup software for some time and, having used it a bit in my lab with an NFR license in the past, I was only too happy to give Unitrends Free a go.

Features

As a free edition, you expect a basic set of features. The goal of such offerings is normally to get you hooked but leave you wanting more.

Unitrends Free offers the following features:

  • Backup from VMware vSphere or Microsoft Hyper-V
  • Unlimited VMs and host CPU sockets supported
  • Instant VM recovery (allows you to run a VM directly from the backup files) – this feature also allows for recovery verification testing and use of backups for test and development purposes
  • Unlimited incremental backups (subject to storage space of course)
  • Free forum support

There are limits, however. For instance, backups are scheduled daily: you can choose the time and the days, but they run no more than once per day. Storage space is also limited: up to 1TB of data is supported. These limitations position the product as ideal for PoCs, labs, smaller deployments (such as for small businesses) and the like. For more features and dedicated support, of course, there’s the Enterprise version.

Download

To download Unitrends Free, a simple registration form needs completing on the Unitrends site. The software is offered only as a pre-built appliance (there’s one download for VMware and one for Hyper-V), comprising a single file. User guides and release notes are also available.

Installation

As you’d expect with a solution that’s based on a virtual appliance, there aren’t many steps involved in getting it deployed and running. In keeping with a growing number of products that provide some form of installer to deploy their solution, Unitrends Free is packaged in such a way as to make deployment straightforward. The supplied single executable (.EXE file) can be run from a Windows desktop or server, as long as you can reach your virtual infrastructure from it.

1. Once the installer starts, you’re presented with a prompt for login credentials to vCenter or an ESXi server.

2. I pointed the installer at my vCenter server and was next asked to choose a host and a datastore and supply IP address details (note that it’s sensible to have a DNS entry created prior to deployment).
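
Before kicking the deployment off, it’s worth confirming that the name you intend to give the appliance actually resolves. A quick check from the machine you’re running the installer on (the hostname here is just a made-up example):

[text]nslookup unitrends01.lab.local[/text]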

3. You’re given the option to create some storage for backups to reside on during deployment. It’s turned on by default, but I upped the default 128GB to something more sensible.

4. That’s it for now and deployment commences.

5. A quick check in vCenter reveals the created appliance.

That’s all that’s required to install the appliance. However, it does require some basic configuration before it can be used.

Initial Configuration

1. Clicking Finish in the installer fires up a web browser pointed at the new appliance where you’re greeted by a License Agreement.

2. Next comes a configuration wizard. The first stage is setting the date and time. I chose to use my local NTP server, although this later turned out to be an issue.

3. The second stage is setting the hostname (note that it’s set to VMware_CE_UEB on deployment) and password for the root account.

4. Finally, the SMTP configuration is required.

Once these configurations are saved, the appliance should be all set to go. Except that we still need to define what needs protecting and to set up some backup jobs.

Backup Protection

What use is a backup appliance without any backup jobs? When you first hit the appliance’s dashboard, there’s a popup displayed containing a couple of tasks that help you to get started. The first of these is registering a host (to protect).

Since we already know that I’m using vCenter, let’s protect that and all of its VMs by clicking on “Register a Host”.

The details required are fairly straightforward. As part of the process of adding the host, a quick inventory is performed. Now we’re ready to create a backup job.

This is accomplished either from the same popup or via the “jobs” option on the left of the dashboard.

Step 1 of creating a backup job is choosing what you’re going to back up. I selected my vCenter server and then excluded the Unitrends appliance itself; it’d be interesting to find out later whether it’s intelligent enough to do that by itself.

Step 2 is defining the schedule etc. This is all fairly simple to accomplish. In theory, that’s my lab VMs protected.

Fast forward to Part 2 to find out how I got on with the backups and my thoughts on the solution as a whole.

Updating Crashplan on an ARM-based QNAP

Last month I wrote a quick overview of the process to install Crashplan on an ARM-based QNAP NAS. Six weeks (ish) later and it’s still going OK. Or, at least, I thought it was…

Periodically, the makers of Crashplan update their software. Nothing surprising about that, except that as ARM-based QNAPs aren’t officially supported devices (that I’m aware of), the automatic update doesn’t work and the backup engine on my QNAP kept stopping.

The upgrade itself is pretty straightforward, however.

On the QNAP NAS

To get the software on the NAS updated, you first have to download the QPKG file from this thread on the QNAP forum.

Once you’ve saved it onto your QNAP device (I put it in the “Public” share but the location isn’t too important), the update can be performed very simply via an SSH session to the NAS:

[text][/share/Public] # sh ./CrashPlan_3.6.4_31_arm-x19.qpkg
Install QNAP package on TS-NAS…
./
./qinstall.sh
./qpkg.cfg
./package_routines
913+1 records in
913+1 records out
19203+1 records in
19203+1 records out
CrashPlan 3.6.3_30 is already installed. Setup will now perform package upgrading.
Stopping CrashPlan…
kill: Could not kill pid ‘14842’: No such process
Link service start/stop script: crashplan.sh
Set QPKG information in /etc/config/qpkg.conf
Cleaning /tmp/*.jna files…
Cleaning /share/MD0_DATA/.qpkg/CrashPlan/tmp/ files…
Starting CrashPlan…
Using interface: eth0 (192.168.1.5) – This can be changed in the web interface!
CrashPlan 3.6.4_31 has been installed in /share/MD0_DATA/.qpkg/CrashPlan.[/text]

That’s literally all you need to do to get your backups running again.
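
Incidentally, should the engine stop again, you shouldn’t need to re-run the whole package. The install output above mentions a crashplan.sh start/stop script and reports the install location, so something like the following ought to work (the exact invocation is an assumption on my part, based on that output):

[text]# Stop and start the Crashplan engine by hand (path taken from the install output)
sh /share/MD0_DATA/.qpkg/CrashPlan/crashplan.sh stop
sh /share/MD0_DATA/.qpkg/CrashPlan/crashplan.sh start[/text]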

Client Software

Of course, to configure and manage backups on your headless QNAP NAS, you need the client installed somewhere, and that will need updating too.

In my case, the client software is on my MacBook Pro. Downloading and installing the updated client software is a doddle, so I’m not going to go into any further detail there. However, the update process does wipe out the setting that points the client at your QNAP NAS.

To remake that setting, I had to edit the file /Applications/CrashPlan.app/Contents/Resources/Java/conf/ui.properties and uncomment the line that read:

[text]#serviceHost=127.0.0.1[/text]

and replace the IP address with the one from my NAS.
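
In my case, using the address reported in the upgrade output earlier, the finished line looked like this (your NAS’s IP will differ, of course):

[text]serviceHost=192.168.1.5[/text]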

Job done!

Reporting-as-a-Service – Call for beta testers

One of the engagements that we’ve done quite a few times in the past, and do on a fairly regular basis at Xtravirt, is a virtual infrastructure healthcheck, whether as a one-off, a regular service or part of a larger project.

Born partly from a need for better tooling and also because of a perceived gap in the market, Xtravirt have developed Sonar – Reporting-as-a-Service for virtual datacenters. We’re in need of UK-based beta testers to provide feedback in return for free healthchecks for the duration of the beta.

For more information about Sonar or to sign up for the beta, please visit the Sonar website.

TechTalks Schedule at VMworld EMEA 2014

The provisional schedule for vBrownbag TechTalks at VMworld EMEA 2014 has been posted. If you don’t know what they are, the TechTalks are 10-minute presentations (hopefully broadcast live on the internet, but also recorded) on any technical subject, given by assorted attendees of VMworld.

Anyone can propose a TechTalk (there are a few spaces left: sign up using this form) and anyone can watch (find the Community Hangspace, or watch online via the vBrownbag live page).

Among the sessions already scheduled are a couple of great-sounding ones from Joerg Lew on orchestration and automation; another from Kaido Kibin about vCO collaboration; two sessions on VVols from Cormac Hogan and Nick Dyer; and a primer on the hyper-converged ecosystem from Gabriel Chapman.

Installing SQUID on a Synology NAS

I’m not going to go into exactly why (it’s a minor networking niggle following a change of broadband provider), but I wanted a simple HTTP proxy in my lab so that my lab VMs could get out on to the internet, mostly for installing updates and the like.

Since my NAS is ideally placed in my network I thought that I’d use that. It’s only a short-term thing anyway.

Now, in order to get a proxy service on to the NAS, I needed to set up IPKG first. This allowed me to install and configure SQUID as follows:

1. Open an SSH session to the NAS

2. Download and install SQUID

[text]ipkg install squid[/text]

3. Perform a couple of configuration commands

[text]# Check squid.conf for syntax errors
squid -k parse
# Create the cache (swap) directories
squid -z
# Link the init script into Synology's rc.d directory so SQUID starts on boot
ln -s /opt/etc/init.d/S80squid /usr/syno/etc/rc.d/[/text]

4. SQUID can now be started using /opt/etc/init.d/S80squid start

Now, there may be some additional changes you want to make. By default, SQUID will accept connections from a standard set of internal networks:

[text]acl localnet src 10.0.0.0/8     # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network[/text]

However, you may want to tie this down for one reason or another. For me, I’m not too bothered. Any changes can be done by editing /opt/etc/squid/squid.conf.
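
If you do want to lock it down, a minimal sketch would be to swap the three localnet lines in /opt/etc/squid/squid.conf for your actual subnet (192.168.100.0/24 here is a guess based on the client address in the access log below) and make sure only that network is allowed:

[text]# Only allow clients from the lab subnet (adjust to suit your network)
acl localnet src 192.168.100.0/24
http_access allow localnet
http_access deny all[/text]

A squid -k reconfigure afterwards will pick up the change.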

Want to check that it works? I pointed a test VM’s proxy settings at the NAS and then monitored the proxy’s access log using tail:

[text]tail -f /opt/var/squid/logs/access.log
1409911439.044    141 192.168.100.151 TCP_MISS/200 405 HEAD http://download.windowsupdate.com/v9/windowsupdate/redir/muv4wuredir.cab? – DIRECT/80.239.217.24 application/octet-stream
1409911439.138     80 192.168.100.151 TCP_MISS/200 24017 GET http://download.windowsupdate.com/v9/windowsupdate/redir/muv4wuredir.cab? – DIRECT/80.239.217.24 application/octet-stream
1409911445.219     46 192.168.100.151 TCP_MISS/200 405 HEAD http://ds.download.windowsupdate.com/v11/3/windowsupdate/selfupdate/WSUS3/x64/Win7SP1/wsus3setup.cab? – DIRECT/213.120.161.243 application/octet-stream
1409911445.246     16 192.168.100.151 TCP_MISS/200 34418 GET http://ds.download.windowsupdate.com/v11/3/windowsupdate/selfupdate/WSUS3/x64/Win7SP1/wsus3setup.cab? – DIRECT/213.120.161.243 application/octet-stream[/text]
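
Alternatively, a quick test with curl from any machine on the network will confirm that the proxy is answering. SQUID listens on port 3128 by default, and the NAS address below is just a placeholder:

[text]curl -x http://<nas-ip>:3128 -I http://www.bbc.co.uk/[/text]

If it’s working, the request will show up in access.log just like the Windows Update traffic above.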

How to install IPKG on a Synology NAS

Sometimes you want to install “community” or third-party packages on your Synology NAS, and they require IPKG (the Itsy Package Management System) to be present. Instructions on how to go about this seem to vary and are often specific to the CPU inside your NAS. The easiest method that I’ve found for getting IPKG installed is as follows…

The first job is to open an SSH session to the NAS and confirm what type of processor it has. This can be done using the following command:

[text]cat /proc/cpuinfo | grep model[/text]

For my DS1513+ it returns:

[text]model: 54
model name: Intel(R) Atom(TM) CPU D2701   @ 2.13GHz
model: 54
model name: Intel(R) Atom(TM) CPU D2701   @ 2.13GHz
model: 54
model name: Intel(R) Atom(TM) CPU D2701   @ 2.13GHz
model: 54
model name: Intel(R) Atom(TM) CPU D2701   @ 2.13GHz[/text]

Next you need to dig around the site http://ipkg.nslu2-linux.org/feeds/optware/ to find the correct bootstrap for your architecture. In my case it’s at http://ipkg.nslu2-linux.org/feeds/optware/syno-i686/cross/unstable/syno-i686-bootstrap_1.2-7_i686.xsh.

To install it, there are a couple of steps…

1. Within your SSH session, change to a temporary location (note that you will probably need to be logged in as root to do all this)

[text]cd /tmp[/text]

2. Download the bootstrap script

[text]wget http://ipkg.nslu2-linux.org/feeds/optware/syno-i686/cross/unstable/syno-i686-bootstrap_1.2-7_i686.xsh[/text]

3. Make the file executable

[text]chmod +x syno-i686-bootstrap_1.2-7_i686.xsh[/text]

4. Run the script

[text]sh syno-i686-bootstrap_1.2-7_i686.xsh[/text]

5. If it all went well, remove the script

[text]rm syno-i686-bootstrap_1.2-7_i686.xsh[/text]

6. Update the package list

[text]ipkg update[/text]

Well done, you’re now ready to install custom packages via ipkg.
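
As a quick sanity check, you can now search the package feed and install something straight away. SQUID, for example, which I’ve installed this way myself:

[text]# Search the package list
ipkg list | grep -i squid

# Install the package
ipkg install squid[/text]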

Pluralsight launch their first vCAC course

If you’re lucky enough to have a Pluralsight subscription, then you already have access to this course. If not, maybe this could be the incentive to get one, if you have an interest in vCAC (vCloud Automation Center).

Yesterday, online training provider Pluralsight launched their first course aimed at vCAC, entitled “Introduction to VMware vCloud Automation Center (vCAC)”. The course is authored by Brian Tobia, who has produced a number of other courses for Pluralsight as well.

As the name of the course suggests, it’s intended as an introduction to vCAC. If you’re at all familiar with vCAC, you’ll know it’s not the simplest of products to get to grips with. There are a lot of components to it, and it’s undergoing a period of intensive development and change at present. That might make you wonder how long this course will stay current. Without having sat through it all, I couldn’t answer that, but the table of contents suggests that it deals mostly with the concepts and entities that make up vCAC rather than digging into the nuts and bolts too much. Presumably, that will come with more advanced courses.

Starting "Introduction to VMware vCloud Automation Center (vCAC)"on my iPad

Starting “Introduction to VMware vCloud Automation Center (vCAC)”on my iPad

Installing Crashplan on an ARM-based QNAP

The downside of living in the boonies in the UK is that the broadband speeds can be a little on the rubbish side. When I moved to my current house 4 years ago, I knew I’d be giving up good internet for something a bit lacking. I just didn’t think it would take so long to get something better going. Yesterday’s speedtest made for grim reading.

Since a nice man from BT is due to visit tomorrow to sort me out with that “something better”, I can finally take advantage of some of the cloud offerings that really haven’t been practical until now. Foremost amongst these offerings for me is the ability to back up all of the photos and files that my family and I have created or acquired over the years. They are all stored on a 4-year-old QNAP NAS.

Super! I thought that I’d slip Crashplan on to the NAS and set it to back up overnight. Give it a few weeks and the backups would be up to date. A weight off my mind.

Except that getting it all running wasn’t totally straightforward…

JRE Needed

Crashplan has been packaged as a QNAP QPKG file and is available to download from the QNAP forum here. It has a dependency on Java, however, so a supported JRE must be installed and enabled before Crashplan itself can be installed. There is one available for ARM-based QNAPs in the AppCenter. However, after installing it I couldn’t confirm that it was running for some reason; connecting via SSH and executing “java -version” didn’t have the desired results.

It seems that I wasn’t the only one to hit this issue. Instead of installing the package directly through the AppCenter, though, it’s possible to download the package and then install it manually. Simply select the package in AppCenter and click the download link.

Once downloaded, use the “Install Manually” option in AppCenter and select the unzipped file you just downloaded.

With that done, checking the java version again yielded the desired result:

[text][~] # java -version
java version "1.8.0" Java(TM) SE Embedded Runtime Environment (build 1.8.0-b132, headless)
Java HotSpot(TM) Embedded Client VM (build 25.0-b70, mixed mode)[/text]

Wrong JRE?

With the JRE installed, I thought that this would be fairly easy. Again using “Install Manually” in the AppCenter, I gave my NAS the Crashplan QPKG. It failed.

Just in case it would give me some more information, I copied the QPKG file to a share on the NAS and tried via the command line:

[text][/share/Public] # sh ./CrashPlan_3.6.3_30_arm-x19.qpkg
Install QNAP package on TS-NAS…
./
./qinstall.sh
./qpkg.cfg
./package_routines
917+1 records in
917+1 records out
19288+1 records in
19288+1 records out
CrashPlan 3.6.3_30 installation failed. The following QPKG must be installed and enabled: JRE >= 1.6.
Installation Abort.[/text]

Hmm. But we have a JRE. What’s up? It’s tenuous, but the error message suggests that the JRE QPKG can’t be found; it’s not complaining that Java itself isn’t present.

Tenuous but true, as it turns out. I did a little experimentation. There is a file on the NAS, /etc/config/qpkg.conf, that contains details about the installed packages.

I used vi to edit the file and replaced this section:

[text][JRE_ARM]
Name = JRE_ARM
Version = 8.0.1
Author = Optimus
QPKG_File = JRE_ARM.qpkg
Date = 2014-08-30
Shell = /share/MD0_DATA/.qpkg/JRE_ARM/jre.sh
Install_Path = /share/MD0_DATA/.qpkg/JRE_ARM
RC_Number = 101
Enable = TRUE
Status = complete[/text]

With the following:

[text][JRE]
Name = JRE
Version = 8.0.1
Author = Optimus
QPKG_File = JRE_ARM.qpkg
Date = 2014-08-30
Shell = /share/MD0_DATA/.qpkg/JRE_ARM/jre.sh
Install_Path = /share/MD0_DATA/.qpkg/JRE_ARM
RC_Number = 101
Enable = TRUE
Status = complete[/text]

Bazinga!

Once the AppCenter page was refreshed, “JRE_ARM” had been replaced by “JRE”. And this time, Crashplan installed correctly:

[text][/etc/config] # sh /share/Public/CrashPlan_3.6.3_30_arm-x19.qpkg
Install QNAP package on TS-NAS…
./
./qinstall.sh
./qpkg.cfg
./package_routines
917+1 records in
917+1 records out
19288+1 records in
19288+1 records out
Starting CrashPlan once to generate config files…
CrashPlan is disabled.
Forcing startup…
Cleaning /tmp/*.jna files…
Cleaning /share/MD0_DATA/.qpkg/CrashPlan/tmp/ files…
Starting CrashPlan…
Link service start/stop script: crashplan.sh
Set QPKG information in /etc/config/qpkg.conf
CrashPlan 3.6.3_30 has been installed in /share/MD0_DATA/.qpkg/CrashPlan.[/text]

All that remained was to set the correct IP address on my NAS for Crashplan to listen on and then I could connect and configure it via the client software on my laptop.

The only thing now is to wait for my broadband to be upgraded tomorrow…
Improving WordPress Blog Performance

I’ve been neglecting this blog a little bit recently. So far in 2014, really. But it hadn’t escaped my notice that it was running veeerrryyyyyy sssllloooooowwwwwlllllyyyy!

Initially I tried just popping CloudFlare in front of it, but this site was running so slowly that the configuration would fail or time out each time.

In case it should be of any use, here’s what I did to speed it up a little.

Step 1: Make a Backup!

It’s a fundamental tenet of IT that you back up before you make a change that has the potential to break things, and these changes could break things. Make sure that you have backups and you know how to use them.

I use a plugin called BackWPup to make daily backups of the database and files to a Dropbox share. One of the slow site symptoms that I’ve observed recently was that backups were failing more often than not and seemed to be taking a long time to complete. A quick look at MySQLAdmin told me that the database had grown in size to about 78MB. For a bunch of text that seemed a bit high – more on that later though.
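
If you’d rather check the size from an SSH session than via MySQLAdmin, a query along these lines (run via the mysql client, with your own credentials substituted) reports the size of each database:

[text]mysql -u USER -p -e "SELECT table_schema AS db, ROUND(SUM(data_length + index_length)/1024/1024, 1) AS size_mb FROM information_schema.tables GROUP BY table_schema;"[/text]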

I use 3 backup jobs to back up the site:

  • A daily database backup – this takes a complete backup of the WordPress database and uploads it to Dropbox.
  • A weekly file backup – this captures all of the files for the site except for plugins, unused themes, cache and older media/upload files (e.g. anything from 2008 – 2013)
  • A monthly archive backup – this captures the plugin files and older media items excluded above. (If this ends up running too long then I’ll create another job at a different time and split the files up.)

(The reason that I separate out the file backups is that older media items don’t change, the plugins can be downloaded fairly easily, and I only tend to update them once a month anyway. This just makes the weekly file backup go more quickly.)

Step 2: Look At Plugins

I looked at the list of plugins in use. Each plugin will slow down a WordPress site by a fraction (or more) of a second. Add it all up and the execution of plugin scripts can amount to quite a lot of time. I adopted some very simple rules and applied them to the list of installed plugins:

  • If a plugin was active, did I actually use it? If not, I removed it.
  • If a plugin was deactivated, would I need it anytime soon? If not, I removed it.

All good so far, but I hadn’t moved the needle by any noticeable amount. A bit, maybe, but nothing dramatic.

Step 3: Install New Plugins

Hang on. Why am I installing new plugins if they’re only going to slow things down?

Good question. These are diagnostic plugins, however. The two in question were WP-Optimize and P3-Profiler. My intention was to do some housekeeping and analysis with these to determine if there was an issue with the whole WordPress installation or any of the plugins in use.

Step 4: Optimise

First up, I looked at WP-Optimize. When you click on WP-Optimize in the admin menu, you’re given a fairly lengthy page detailing any database optimisations that the plugin thinks can be made.

First of all, there’s a summary of potential remedial actions for your database:

The items in red are potentially dangerous and are exactly why you’ve made a backup already, right?

What showed up the first time I saw this screen was over 20,000 items listed under “transient options”. WordPress creates these automatically when required, but they can apparently be safely removed. “Optimize database tables” will run MySQL optimisation on your database’s tables. Some web hosts don’t allow this; luckily, mine does.
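
If you’d rather do that particular bit of housekeeping by hand, or just want to see roughly what the plugin’s buttons do, the equivalent is something like the following. This is a sketch: it assumes the default wp_ table prefix and that you substitute your own credentials, and you should have that backup from Step 1 before running it:

[text]# Delete all transient options (and their timeout rows)
mysql -u USER -p WORDPRESS_DB -e "DELETE FROM wp_options WHERE option_name LIKE '%\_transient\_%';"

# Optimise the table afterwards to reclaim the space
mysql -u USER -p WORDPRESS_DB -e "OPTIMIZE TABLE wp_options;"[/text]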

Below the remedial actions is a list of the tables themselves, along with their size, an assessment of their optimisation and the potential saving available. As you can see from the screenshot, the 341 transient options that have accrued since I first cleaned up the database are using about 2MB of space. The plugin also believes that the table needs optimisation.

After running this through for the first time, the database size dropped from 78MB to about 7MB (which improved the backup time considerably). Sadly, the overall performance of the site was still not great.

Step 5: Measure Plugin Performance

Enter P3-Profiler. This plugin is developed by GoDaddy (yes, the webhosting people). It measures the load and execution time of a few of your site’s pages and breaks it down by plugin. You execute the scan from the “Plugins” menu.

The scan takes a few moments to run but then displays some useful information about the various plugins that you still have installed and active.

6.5 seconds?!? Wow, allegedly that’s how long the numerous plugins that I had installed were adding to my page load time.

Step 6: Get Rid of the Slow Plugins

I won’t go into all of the analysis that can be done with P3-Profiler, but I did use the information to refine the list of plugins in use:

  • I disabled NextGEN Gallery as it seemed to take the longest of all.
  • I disabled any of the JetPack options that I wasn’t using or relying on.
  • I removed the CloudFlare plugin (as I couldn’t get CloudFlare working anyway).

The result (according to the profiler) was about an 80% reduction in the plugin load time. JetPack still takes the longest but it’s better.

The qualitative benefit to load times was impressive. The site felt pretty quick now.

Step 7: CloudFlare

For the final step, I was now able to configure CloudFlare to provide a bit of a boost and a degree of protection. What do you think, is the site quick enough?

I hope some of this helps…