
Installing SQUID on a Synology NAS

I’m not going to go into exactly why (it’s a minor networking niggle following on from a change in broadband provider), but I wanted a simple HTTP proxy in my lab so that my lab VMs could get out on to the internet, mostly for installing updates and the like.

Since my NAS is ideally placed in my network I thought that I’d use that. It’s only a short-term thing anyway.

Now in order to get a proxy service on to the NAS, I needed to set up IPKG first. This allowed me to install and configure SQUID as follows:

1. Open an SSH session to the NAS

2. Download and install SQUID

[text]ipkg install squid[/text]

3. Perform a couple of configuration steps: check the config, create the cache directories and link the init script so that SQUID starts at boot

[text]# Check the configuration file for syntax errors
squid -k parse
# Create the cache directories
squid -z
# Link the init script so that SQUID starts when the NAS boots
ln -s /opt/etc/init.d/S80squid /usr/syno/etc/rc.d/[/text]

4. SQUID can now be started:
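
[text]/opt/etc/init.d/S80squid start[/text]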

There are some additional changes you may want to make. By default, SQUID will accept connections from a standard set of internal networks:

[text]acl localnet src 10.0.0.0/8     # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network[/text]

However, you may want to tie this down for one reason or another. For me, I’m not too bothered. Any changes can be made by editing /opt/etc/squid/squid.conf.
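
For example, if you wanted to restrict the proxy to a single lab subnet, something like the following in squid.conf would do it. This is just a sketch – the 192.168.100.0/24 subnet is an assumption based on my lab addressing, so adjust it to suit yours:

[text]# Replace the broad RFC1918 entries with a single lab subnet (example value)
acl localnet src 192.168.100.0/24

# Allow the lab subnet, deny everything else
http_access allow localnet
http_access deny all[/text]

After editing, running squid -k reconfigure picks up the changes without restarting the service.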

Want to check that it works? I configured the proxy settings in my VM as follows:

[Screenshot: proxy settings configured in the VM]

And then monitored the proxy’s access log using tail:

[text]tail -f /opt/var/squid/logs/access.log
1409911439.044    141 192.168.100.151 TCP_MISS/200 405 HEAD http://download.windowsupdate.com/v9/windowsupdate/redir/muv4wuredir.cab? - DIRECT/80.239.217.24 application/octet-stream
1409911439.138     80 192.168.100.151 TCP_MISS/200 24017 GET http://download.windowsupdate.com/v9/windowsupdate/redir/muv4wuredir.cab? - DIRECT/80.239.217.24 application/octet-stream
1409911445.219     46 192.168.100.151 TCP_MISS/200 405 HEAD http://ds.download.windowsupdate.com/v11/3/windowsupdate/selfupdate/WSUS3/x64/Win7SP1/wsus3setup.cab? - DIRECT/213.120.161.243 application/octet-stream
1409911445.246     16 192.168.100.151 TCP_MISS/200 34418 GET http://ds.download.windowsupdate.com/v11/3/windowsupdate/selfupdate/WSUS3/x64/Win7SP1/wsus3setup.cab? - DIRECT/213.120.161.243 application/octet-stream[/text]
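
If you prefer a quick command-line check, requesting a page through the proxy from any lab machine works too. A sketch, assuming SQUID is listening on its default port of 3128 – substitute your NAS’s IP address for the placeholder:

[text]# Fetch just the response headers via the proxy
curl -x http://<nas-ip>:3128 -I http://www.example.com/[/text]

A corresponding TCP_MISS (or TCP_HIT) line should appear in access.log.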


How to install IPKG on a Synology NAS

Sometimes you want to install “community” or third-party packages on your Synology NAS and they require IPKG (Itsy Package Management System) to be present. Instructions about how to go about this seem to vary and are often specific to the CPU inside your NAS. The easiest method that I’ve found for getting IPKG installed is as follows…

The first job is to open an SSH session to the NAS and confirm what type of processor it has. This can be done using the following command:

[text]cat /proc/cpuinfo | grep model[/text]

For my DS1513+ it returns:

[text]model: 54
model name: Intel(R) Atom(TM) CPU D2701   @ 2.13GHz
model: 54
model name: Intel(R) Atom(TM) CPU D2701   @ 2.13GHz
model: 54
model name: Intel(R) Atom(TM) CPU D2701   @ 2.13GHz
model: 54
model name: Intel(R) Atom(TM) CPU D2701   @ 2.13GHz[/text]
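
As a cross-check, uname -m prints the machine’s architecture string directly (the exact output varies by model, but it will confirm whether you’re dealing with an ARM or Intel unit):

[text]uname -m[/text]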

Next you need to dig around the site http://ipkg.nslu2-linux.org/feeds/optware/ to find the correct bootstrap for your architecture. In my case it’s at http://ipkg.nslu2-linux.org/feeds/optware/syno-i686/cross/unstable/syno-i686-bootstrap_1.2-7_i686.xsh.

To install it, there are a couple of steps…

1. Within your SSH session, change to a temporary location (note that you will probably need to be logged in as root to do all this)

[text]cd /tmp[/text]

2. Download the bootstrap script

[text]wget http://ipkg.nslu2-linux.org/feeds/optware/syno-i686/cross/unstable/syno-i686-bootstrap_1.2-7_i686.xsh[/text]

3. Make the file executable

[text]chmod +x syno-i686-bootstrap_1.2-7_i686.xsh[/text]

4. Run the script

[text]sh syno-i686-bootstrap_1.2-7_i686.xsh[/text]

5. If it all went well, remove the script

[text]rm syno-i686-bootstrap_1.2-7_i686.xsh[/text]

6. Update the package list

[text]ipkg update[/text]

Well done, you’re now ready to install custom packages via ipkg.
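
A few standard ipkg commands to get you started – for example, finding and installing the SQUID package from the previous post:

[text]# Search the package list
ipkg list | grep squid

# Install a package
ipkg install squid

# Show what's already installed
ipkg list_installed[/text]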


Pluralsight launch their first vCAC course

If you’re lucky enough to have a Pluralsight subscription, then you already have access to this course. If not, maybe it could be an incentive to get one if you have an interest in vCAC (vCloud Automation Center).

Yesterday, online training provider Pluralsight launched their first course aimed at vCAC, entitled “Introduction to VMware vCloud Automation Center (vCAC)”. The course is authored by Brian Tobia, who has produced a number of other courses for Pluralsight.

As the name of the course suggests, it’s intended as an introduction to vCAC. If you’re at all familiar with vCAC, you’ll know it’s not the simplest of products to get to grips with. There are a lot of components to it and it’s undergoing a period of intensive development and change at present. That might make you wonder how long this course will stay current. Without having sat through it all, I couldn’t answer that, but the table of contents suggests that it deals a lot with the concepts and entities that make up vCAC rather than digging into the nuts and bolts too much. Presumably, that will come with more advanced courses.

Starting "Introduction to VMware vCloud Automation Center (vCAC)"on my iPad

Starting “Introduction to VMware vCloud Automation Center (vCAC)”on my iPad


Installing Crashplan on ARM based QNAP

The downside of living in the boonies in the UK is that the broadband speeds can be a little on the rubbish side. When I moved to my current house 4 years ago, I knew I’d be giving up good internet for something a bit lacking. I just didn’t think it would take so long to get something better going. This is yesterday’s speedtest:

[Screenshot: speedtest result showing a very slow connection]

Since a nice man from BT is due to visit tomorrow to sort me out with that “something better”, I can finally take advantage of some of the cloud offerings that really haven’t been practical until now. Foremost amongst these offerings for me is the ability to backup all of the photos and files that my family and I have created or acquired over the years. They are all stored on a 4-year old QNAP NAS.

Super! I thought that I’d slip Crashplan on to the NAS and set it to backup overnight. Give it a few weeks and the backups would be up to date. A weight off my mind.

Except that getting it all running wasn’t totally straightforward…

JRE Needed

Crashplan has been packaged as a QNAP QPKG file and is available to download from the QNAP forum here. It has a dependency on Java, however, and so a supported JRE must be installed and enabled before it can even be installed. There is one available for ARM-based QNAPs in the AppCenter. However, after installing it I couldn’t confirm that it was actually working. Connecting via SSH and executing “java -version” didn’t produce the desired result.

It seems that I wasn’t the only one to hit this issue. Instead of installing the package directly through the AppCenter, though, it is possible to download the package and then install it manually. Simply select the package in AppCenter and click the download link.

[Screenshot: the download link for the JRE package in AppCenter]

Once downloaded, use the “Install Manually” option in AppCenter and select the unzipped file you just downloaded.

[Screenshot: installing the JRE package manually through AppCenter]

With that done, checking the java version again yielded the desired result:

[text][~] # java -version
java version "1.8.0"
Java(TM) SE Embedded Runtime Environment (build 1.8.0-b132, headless)
Java HotSpot(TM) Embedded Client VM (build 25.0-b70, mixed mode)[/text]

Wrong JRE?

With the JRE installed, I thought that the rest would be fairly easy. Again using “Install Manually” in the AppCenter, I gave my NAS the Crashplan QPKG. It failed.

Just in case it would give me some more information, I copied the QPKG file to a share on the NAS and tried via the command line:

[text][/share/Public] # sh ./CrashPlan_3.6.3_30_arm-x19.qpkg
Install QNAP package on TS-NAS…
./
./qinstall.sh
./qpkg.cfg
./package_routines
917+1 records in
917+1 records out
19288+1 records in
19288+1 records out
CrashPlan 3.6.3_30 installation failed. The following QPKG must be installed and enabled: JRE >= 1.6.
Installation Abort.[/text]

Hmm. But we have a JRE. What’s up? It’s tenuous, but the error message suggests that the JRE QPKG can’t be found; it’s not complaining that Java itself isn’t present.

Tenuous but true, as it turns out. I did a little experimentation. There is a file on the NAS that contains details of the packages installed. It can be found at /etc/config/qpkg.conf.

I used vi to edit the file and replaced this section:

[text][JRE_ARM]
Name = JRE_ARM
Version = 8.0.1
Author = Optimus
QPKG_File = JRE_ARM.qpkg
Date = 2014-08-30
Shell = /share/MD0_DATA/.qpkg/JRE_ARM/jre.sh
Install_Path = /share/MD0_DATA/.qpkg/JRE_ARM
RC_Number = 101
Enable = TRUE
Status = complete[/text]

With the following:

[text][JRE]
Name = JRE
Version = 8.0.1
Author = Optimus
QPKG_File = JRE_ARM.qpkg
Date = 2014-08-30
Shell = /share/MD0_DATA/.qpkg/JRE_ARM/jre.sh
Install_Path = /share/MD0_DATA/.qpkg/JRE_ARM
RC_Number = 101
Enable = TRUE
Status = complete[/text]
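
If you’d rather not hand-edit with vi, the same rename could be scripted. This is an untested sketch of the idea – take a copy of qpkg.conf first:

[text]# Back up the package registry before touching it
cp /etc/config/qpkg.conf /etc/config/qpkg.conf.bak

# Rename the section header and Name entry from JRE_ARM to JRE
sed -i -e 's/^\[JRE_ARM\]/[JRE]/' -e 's/^Name = JRE_ARM$/Name = JRE/' /etc/config/qpkg.conf[/text]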

Bazinga!

Once the AppCenter page was refreshed, “JRE_ARM” had been replaced by “JRE”. And this time, Crashplan installed correctly:

[text][/etc/config] # sh /share/Public/CrashPlan_3.6.3_30_arm-x19.qpkg
Install QNAP package on TS-NAS…
./
./qinstall.sh
./qpkg.cfg
./package_routines
917+1 records in
917+1 records out
19288+1 records in
19288+1 records out
Starting CrashPlan once to generate config files…
CrashPlan is disabled.
Forcing startup…
Cleaning /tmp/*.jna files…
Cleaning /share/MD0_DATA/.qpkg/CrashPlan/tmp/ files…
Starting CrashPlan…
Link service start/stop script: crashplan.sh
Set QPKG information in /etc/config/qpkg.conf
CrashPlan 3.6.3_30 has been installed in /share/MD0_DATA/.qpkg/CrashPlan.[/text]


All that remained was to set the correct IP address on my NAS for Crashplan to listen on and then I could connect and configure it via the client software on my laptop.
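
For reference, in a headless CrashPlan 3.x setup the listening address normally lives in my.service.xml on the NAS, and the desktop client is pointed at the NAS via its ui.properties file. The exact paths below are assumptions based on where the QPKG installed and on a default client install, so treat this as a sketch:

[text]# On the NAS, in /share/MD0_DATA/.qpkg/CrashPlan/conf/my.service.xml:
<serviceHost>0.0.0.0</serviceHost>

# On the laptop, in the CrashPlan client's conf/ui.properties:
serviceHost=<nas-ip>[/text]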

[Screenshot: the CrashPlan client connected to the NAS]

The only thing now is to wait for my broadband to be upgraded tomorrow…



Improving WordPress Blog Performance

I’ve been neglecting this blog a little bit recently – so far in 2014, really. But it hadn’t escaped my notice that it was running veeerrryyyyyy sssllloooooowwwwwlllllyyyy!

Initially I tried just popping CloudFlare in front of it, but the site was running so slowly that the configuration would fail or time out each time.

In case it should be of any use, here’s what I did to speed it up a little.

Step 1: Make a Backup!

It’s a fundamental tenet of IT that you back up before you make a change that has the potential to break things, and these changes could break things. Make sure that you have backups and that you know how to use them.

I use a plugin called BackWPup to make daily backups of the database and files to a Dropbox share. One of the slow-site symptoms that I’d observed recently was that backups were failing more often than not and seemed to be taking a long time to complete. A quick look at MySQLAdmin told me that the database had grown to about 78MB. For a bunch of text, that seemed a bit high – more on that later though.

I use 3 backup jobs to backup the site:

  • A daily database backup – this takes a complete backup of the WordPress database and uploads it to Dropbox.
  • A weekly file backup – this captures all of the files for the site except for plugins, unused themes, cache and older media/upload files (e.g. anything from 2008–2013)
  • A monthly archive backup – this captures the plugin files and older media items excluded above. (If this ends up running too long then I’ll create another job at a different time and split the files up.)

(The reason that I separate out the file backups is that older media items don’t change, the plugins can be downloaded fairly easily, and I only tend to update them once a month anyway. This just makes the weekly file backup complete more quickly.)

Step 2: Look At Plugins

I looked at the list of plugins in use. Each plugin will slow down a WordPress site by a fraction (or more) of a second. Add it all up and the execution of plugin scripts can amount to quite a lot of time. I adopted some very simple rules and applied them to the list of installed plugins:

  • If a plugin was active, did I actually use it? If not, I removed it.
  • If it was deactivated, would I need it anytime soon? If not, I removed it.

All good so far, but I hadn’t moved the needle by any noticeable amount. A bit maybe, but not so much that I noticed.

Step 3: Install New Plugins

Hang on – why am I installing new plugins if they’re only going to slow things down?

Good question. These are diagnostic plugins however. The two in question were WP-Optimize and P3-Profiler. My intention was to do some housekeeping and analysis with these to determine if there was an issue with the whole WordPress installation or any of the plugins in use.

Step 4: Optimise

First up, I looked at WP-Optimize. When you click on WP-Optimize in the admin menu, you’re given a fairly lengthy page detailing any database optimisations that the plugin thinks can be made.

First of all, there’s a summary of potential remedial actions for your database:

[Screenshot: WP-Optimize summary of potential remedial actions]

The items in red are potentially dangerous and are exactly why you’ve made a backup already, right?

What showed up the first time that I saw this screen was over 20,000 items listed under “transient options”. WordPress creates these automatically when required, but they can apparently be safely removed. “Optimize database tables” will run MySQL optimisation on your database’s tables. Some web hosts don’t allow this – luckily mine does.
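
For the curious, those transients live in the wp_options table and could also be cleared by hand. A sketch using the mysql command-line client, assuming the default wp_ table prefix (the dbuser and wordpress_db names are placeholders) – and only after taking that backup:

[text]# Clear stored transients and reclaim the space
mysql -u dbuser -p wordpress_db <<'SQL'
DELETE FROM wp_options WHERE option_name LIKE '\_transient\_%';
DELETE FROM wp_options WHERE option_name LIKE '\_site\_transient\_%';
OPTIMIZE TABLE wp_options;
SQL[/text]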

Below the remedial actions is a list of the tables themselves, along with their size, an assessment of their optimisation and the potential saving available. As you can see from the screenshot, the 341 transient options that have accrued since I first cleaned up the database are using about 2MB of space. The plugin also believes that the table needs optimisation.

[Screenshot: WP-Optimize table listing with sizes and potential savings]

After running this through for the first time, the database size dropped from 78MB to about 7MB (which improved the backup time considerably). Sadly, the overall performance of the site was still not great.

Step 5: Measure Plugin Performance

Enter P3-Profiler. This plugin is developed by GoDaddy (yes, the web hosting people). It measures the load and execution time of a few of your site’s pages and breaks it down by plugin. You execute the scan from the “Plugins” menu.

[Screenshot: running the P3-Profiler scan]

The scan takes a few moments to run but then displays some useful information about the various plugins that you still have installed and active.

[Screenshot: P3-Profiler results showing total plugin load time]

6.5 seconds?!? Wow – allegedly that’s how long the numerous plugins that I had installed were adding to my page load time.

Step 6: Get Rid of the Slow Plugins

I won’t go into all of the analysis that can be done with P3-Profiler, but I did use the information to refine the list of plugins in use:

  • I disabled NextGEN Gallery as it seemed to take the longest of all.
  • I disabled any of the JetPack options that I wasn’t using / relying on.
  • I removed the CloudFlare plugin (as I couldn’t get CloudFlare working anyway)

The result (according to the profiler) was about an 80% reduction in plugin load time. JetPack still takes the longest, but it’s better.

The qualitative benefit to load times was impressive. The site felt pretty quick now.

Step 7: CloudFlare

For the final step, I was now able to configure CloudFlare to provide a bit of a boost and a degree of protection. What do you think, is the site quick enough?

Hope some of this helps…


First Impressions: PHD Virtual Backup 6.2 (with Cloud Hook)

As I mentioned in my recent Cloud Backups post, I’m trying out a few virtualisation backup products to help me out with a prototype infrastructure that I’ve been working on. I want to store a backup of the various VMs outside of the infrastructure itself – effectively offsite.

By happy circumstance, PHD Virtual had a beta running for version 6.2 of their backup product that includes “CloudHook”. It’s a module that enables integration with cloud storage providers for the purposes of backup, archiving and disaster recovery. The 6.2 release covers the backup aspect, and future releases will add in archiving and DR functionality. Thanks to Patrick Redknap, I managed to hop onto the beta and try it out. (Note that the screenshots below come from a beta release and may have changed for GA.)

PHD’s Virtual Backup product is delivered as a Virtual Backup Appliance. I was initially wary of production services running on dedicated virtual appliances a few years ago but I’ve changed my view over time and I now really like using them. (That’s probably a subject for a different post though.) I won’t go through the mechanics of the installation in nauseating detail, but basically it breaks down to the following high-level steps:

  1. Download and unzip the virtual appliance
  2. Use the vSphere Client to import and deploy the appliance (requires 8GB of disk space, 1 vCPU, 1GB of memory and a connection to 1 port group in its default configuration)
  3. Open the VM’s console and enter some network information
  4. Reboot the appliance
  5. Install the PHD Virtual Backup Client

Configuring the appliance for use is pretty straightforward, although if, like me, you have to make multiple hops to get to your data center (RDP over RDP over VPN, for complicated reasons that I can’t go into), you might find that the PHD Virtual client doesn’t play too nicely with a lack of screen space. I could only just get to the “Save” button. (Granted, it’s an unlikely situation to be in.) The minimum required is to connect the appliance to vCenter (see the General tab of the Configuration section):

[Screenshot: connecting the appliance to vCenter]

Normally at this point you’d expect to have to configure some disk space local to the backup appliance (or network storage space). Well, you still do really, but you actually have a choice to make: where do you want to back up to? Continue Reading