Thursday 13 September 2007

Enslaving a bind DNS server on CentOS

One thing I have been trying to accomplish ever since I commissioned our server in Manchester, almost nine months ago now, is getting it set up as a secondary DNS server. I have finally managed it, and it is an experience worth sharing. There are many how-tos on the Internet telling you how to set up a DNS server on Windows or Linux, but what I was after was setting up BIND on Linux as a failover for our main Microsoft DNS server, and that is a far less well documented scenario. Also, contrary to what you might read, it's actually really easy, just don't step off the path!

For those of you who don't know, DNS is the system which marshals traffic around the Internet; in its absence you would have to type 72.21.206.5 instead of amazon.com to get to your favourite e-commerce merchant :o). So DNS is important for the Internet, but as local networks are now very much modelled on the Internet scheme, without DNS it becomes very difficult to manage your network in a user-friendly manner. Which, as usual, is great until it breaks!

So, as your office network becomes more central to the workings of your business, it is natural to want a secondary system in case the first one breaks, especially when you are using a VPN as we are: the loss of our central DNS server would render our remote systems unusable as well. That's the background; here is the solution to setting up a secondary DNS server using BIND on Linux as a slave to an Active Directory DNS server. Bear in mind this is for CentOS 4.5 (the RHEL 4 equivalent) using the command line; if you are using a GUI just use the GUI tool!

1. On the Microsoft box open DNS and right-click on the forward lookup zone you wish to replicate, e.g. somebiz.local. Under 'Name Servers' add the IP address of your Linux box.
2. If you have already been playing, completely remove your existing BIND installation (yum remove bind), and trash any files in /var/named/chroot/var/named.
3. Run yum install bind to install a fresh one.
4. Paste the following into /var/named/chroot/etc/named.conf


// Red Hat BIND Configuration Tool
// Default initial "Caching Only" name server configuration

options { directory "/var/named"; };

zone "mydomain.local" IN {
        type slave;
        file "slaves/mydomain.local";
        masters { xxx.xxx.xxx.xxx port 53; };
};

include "/etc/rndc.key";

5. Substitute your own domain for mydomain.local and put your Active Directory server's IP address in the xxx.xxx.xxx.xxx space.

6. Run service named start and make a cuppa cos you're done!
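If you want the reverse lookup zone replicated as well, a second stanza in the same pattern does the job. A hedged sketch: the zone name below assumes a 192.168.1.x network, so substitute your own, and xxx.xxx.xxx.xxx is again your Active Directory server's address.

```
// Slave the reverse zone too (zone name assumes 192.168.1.x - change to suit)
zone "1.168.192.in-addr.arpa" IN {
        type slave;
        file "slaves/1.168.192.in-addr.arpa";
        masters { xxx.xxx.xxx.xxx port 53; };
};
```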

Obviously this is not a comprehensive look at the subject; there is an awful lot more to play with in BIND, but that really is all you need to do to get going. Hope it helps..

Thursday 2 August 2007

New lines vs Carriage returns

There really is not much in this post for the layman, so if you are not a dyed-in-the-wool techie you're going to find this dull, for which I apologise.

I came across an interesting problem this afternoon while quietly building some PDF files using PHP (as is my wont) which I would like to share. It concerns the carriage return, an invisible character usually used to move the currently printing text onto the next line. It turns out that FPDF, the PHP module I have been using to generate PDF files, just doesn't like them. Instead it uses the alternative invisible character, the 'new line', which is great :o) hooray!

The really ticklish bit of all this is of course that these characters are both invisible, so when one is having a struggle with them it's really a bit like wandering around in the dark. The first step when trying to sort a problem of this nature is to turn the characters into their representative codes, for which one requires the ord() function. One can then isolate the pesky invisible characters and decide what to turn them into.
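As a minimal sketch of that first step, a little helper like this (my own throwaway, not part of FPDF) uses ord() to list each character of a string alongside its ASCII code, so the invisible ones give themselves away:

```php
<?php
// Walk a string and list the ASCII code of every character, so that
// invisible ones (13 = carriage return, 10 = new line) become visible.
function show_chars($text)
{
    $codes = array();
    for ($i = 0; $i < strlen($text); $i++) {
        $codes[] = ord($text[$i]);
    }
    return implode(' ', $codes);
}

echo show_chars("a\r\nb"); // prints 97 13 10 98
```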

So to cut a long story short, if you are trying to print a block of text using FPDF and your line breaks are not appearing, use the following little function on your text:

$text_var=str_replace(chr(13),chr(10),$text_var);

where $text_var is the variable containing your text. The function will magically strip out the ASCII character 13s (carriage returns) and replace them with shiny new ASCII character 10s (new lines).
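One caveat worth hedging: Windows text usually ends its lines with the pair CRLF (character 13 followed by 10), so swapping every 13 for a 10 leaves two new lines per break. A small variation (my own wrapper, not an FPDF function) normalises the pair first:

```php
<?php
// Collapse CRLF pairs to a single LF first, then convert any stray
// lone carriage returns, so line breaks are never doubled up.
function normalise_newlines($text_var)
{
    $text_var = str_replace(chr(13).chr(10), chr(10), $text_var);
    return str_replace(chr(13), chr(10), $text_var);
}

$clean = normalise_newlines("one\r\ntwo\rthree"); // "one\ntwo\nthree"
```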

Thursday 12 July 2007

Pdf Trials

As soon as a programmer creates a program which stores information, at some point someone is going to want to get at it (the information, that is), print it, send it to someone else or just take a copy. In other words, one requires a portable document.

In our little in-house programs we have been using Excel a lot; it is ideal for presenting raw data to IT-savvy managers, as quite often the first thing they want to do is chop the data around, turn it upside down, rattle it and maybe analyse it against another spreadsheet. Excel begins to struggle, though, when it comes to printing and presenting a better-looking sort of document, so we have begun to look at creating PDF documents for some of our reporting and printing requirements.

The first thing you find when looking for a program to create PDF files is a library called PDFLIB, made by a company called ... PDFLIB. They have products for programmers who use C, Java, Delphi and our own favourite, PHP. The upside is that it is quite comprehensive, offering the vast majority of the elements defined in the PDF format; the downside is that it costs about £1000. Given the price tag I decided to dig a little deeper and turned up a product called FPDF, where the F stands for Free, a much more palatable entry in the IT budget.

So having installed the library effortlessly and tried a few of the examples I was impressed; the functions are limited but actually it seemed to cover everything I needed. The next step was to dive into probably the most complicated PDF file we have, and up until the very last element it was all going so well. Attempting to write vertical text knocked the wheels off the wagon, however. Was vertical text going to cost a grand!?
Back to Google with a fresh cuppa. Given that the PDF format is freely published, I thought maybe I should try my hand at extending the FPDF lib and adding the function for rotated text myself. The problem is that it has been 15 years since my A-level maths teacher force-fed me matrix transformations, and while Adobe might have published the format and described the matrix transformations, they don't really 'throw you a bone' when it comes to programming them.

More tea and some inspiration from a Tunnocks tea cake turned up a PEAR module, File_PDF, which is in a slightly fragile beta 0.2 state but essentially builds on FPDF and includes a rotate_text function. Being a PEAR module, installation was easy, but could I get it to work? Alas no. However, could I lift the bonnet on both these PHP projects and force them into an unholy union? Why yes sir! So this is my contribution to the open source community: if you are using FPDF and need the ability to rotate some text, paste the following code into the fpdf.php file and call it as in the last line of code.


function writeRotie($x, $y, $txt, $text_angle, $font_angle = 0)
{
    // Negative coordinates are measured from the opposite edge of the page
    if ($x < 0) {
        $x += $this->w;
    }
    if ($y < 0) {
        $y += $this->h;
    }

    /* Escape text. */
    $text = $this->_escape($txt);

    // Convert the angles to radians; the font angle is relative to the text
    $font_angle += 90 + $text_angle;
    $text_angle *= M_PI / 180;
    $font_angle *= M_PI / 180;

    // The four matrix entries for the PDF text matrix (Tm operator)
    $text_dx = cos($text_angle);
    $text_dy = sin($text_angle);
    $font_dx = cos($font_angle);
    $font_dy = sin($font_angle);

    // Tm sets the text matrix, Tj writes the (escaped) string
    $s = sprintf('BT %.2f %.2f %.2f %.2f %.2f %.2f Tm (%s) Tj ET',
        $text_dx, $text_dy, $font_dx, $font_dy,
        $x * $this->k, ($this->h - $y) * $this->k, $text);
    if ($this->underline && $txt != '') {
        $s .= ' ' . $this->_dounderline($x, $y, $txt);
    }
    if ($this->ColorFlag) {
        $s = 'q ' . $this->TextColor . ' ' . $s . ' Q';
    }
    $this->_out($s);
}


// Write 'Soup' at grid reference 50,50, rotated 90 degrees
$pdf->writeRotie(50,50,"Soup",90,0);



As ever, one hour of the day accomplished 95% of the job; the rest of the day was spent chasing around after a seemingly straightforward will-o'-the-wisp of a function.

Friday 22 June 2007

Fiddling around and the simple storage solution

I was invited onto the Amazon Computing Cloud yesterday, a seminal moment as I have been desperate to have a go on it for weeks now. But disaster! I was passed over because I had foolishly forgotten to open an S3 (Simple Storage Service) account, which is a prerequisite; despite my grovelling and the immediate opening of an S3 account I appear to have missed my window of opportunity.

That said, it has given me a little time to investigate the Simple Storage Service, and in itself this has opened a few doors in my mind. Storage in an S3 bucket is very cheap indeed at 15 cents per GB per month; this means, for example, that I could store my largest SQL database, at 2 GB and rising, on the ultimate storage system for 14p per copy per month, with transfer costs of 10p. Doing a little simple maths therefore means that backup scenarios would cost:
  • 24p per month per copy
  • £1.78 per copy per month
  • £50 per month for a rolling 30 day backup
  • £650 pa. for a daily backup rolling on for 12 months (almost 800Gb of data)
The real magic here is not that I can get someone to host almost a terabyte of data for £650 pa; it is the infrastructure behind it, storage spanning multiple data centres in multiple countries, which is fantastic. There are only two slight problems: there is a file size limit of 5 GB at present, which hopefully will have increased by the time my bloaty database has got anywhere near it, and my backup software, Red Gate, does not at present support S3.

So I'll have to write a little program one of these days to bounce my off-site backup from our Manchester server over to the S3 cloud on a nightly basis; then I really, really will have an off-site backup I can be satisfied with. Alternatively, our friendly Red Gate developers might like to take the hint, get an S3 account and crack their programming-type knuckles. Utility computing has to be the future of off-site backup, so get in there. I won't charge for this revolutionary piece of advice, but I have broken my promotional Red Gate pen, so if someone wants to send me another one we'll call it quits :o)

Whilst I am on the subject of S3, there is a fantastic little add-on for Firefox which I found very useful for getting things moving, the S3 organizer; it's a good job the programming is better than the spelling of organiser :o)

And speaking of really useful little pieces of software, anyone who does any programming of web apps should get themselves a copy of Fiddler. It's a really rather sweet little program written by Microsoft (tun tun tun!) and distributed free! Maybe Bill finally has enough.

Anyhow, before I get on my soap box: Fiddler is a very simple HTTP proxy server which gives you a real-time readout of what traffic is moving through your Internet connection. As we are using Flash coupled with AMFPHP we constantly struggle to debug in an efficient way, but not anymore! I won't spoil compadre Rob's review right now, so just get it and watch out for the bug-eyed review.

Wednesday 6 June 2007

In the queue for the ECC

There have been lots of things going on over here for the past month but nothing really blog-worthy apart from this, and having a baby has somewhat curtailed my late-night blogging sessions, so this is the first post since May 10th!

We have signed up for the ECC. This is not another odd-ball European thingy but in fact Amazon's new web service, christened the Elastic Compute Cloud. The concept is what is known as utility computing: you create a new virtual machine and only pay for the storage and processor time you use. This means that, within reason, your online application can scale wonderfully from being a dusty corner of the web which no-one ever uses to the latest craze with millions of users in minutes, and your server wouldn't crash (as long as your code is well written, of course).

We have a little online application which we will very shortly be polishing for general release, and this seems like a great opportunity to keep our initial installation costs very low while having the ability to scale quickly to meet the needs of our new users :o) Then of course, when things have settled down and we know what sort of power we are going to require long term, we can make a more informed decision about buying our own hosting kit without having to wade through goat intestines with the help of a good soothsayer.

The spanner in the works, of course, is that everyone else wants to make use of this wonderful new service as well, and I have found myself in the queue. This seems to be a rather annoying trend in fact: I queued for Joost, I queued for Google Applications for Domains and now I am in another queue. I suppose it allows companies to test their systems without having a big embarrassing launch followed by teething trouble, but I want it now :o(

I have looked for alternatives but it seems no one else is offering such a simple, well-supported and dare I say cheap service within the means of the average web applications developer. Until now utility computing was really the preserve of the scientist wanting to test his quantum theories or analyse what happens in the middle of a cosmic jam doughnut (think SETI), so it seems Amazon are possibly on the cusp of a runaway success. Every applications developer who doesn't want to ask his boss for a new server, and equally doesn't want potentially unstable code on one of his precious live servers, will want an account for testing stuff.

I am only surprised that Amazon beat Google to it as it's just their sort of thing. I look forward to using the Google utility computing cloud in about 2 months (probably for free) and inevitably the Microsoft computing cloud in about 2 years, which will be compliant with the utility computing standards the IEEE will have created and ratified by then, but with, of course.... Microsoft extensions.

Thursday 10 May 2007

Finding extra space on a VMWare Virtual Machine

Now our Red Gate SQL Server backup system is running nicely it has exposed a slight deficiency in my server setup over in Manchester: when I created virtual machines for all our little applications running on the VMware system I only gave them 4 GB disks :o)

For the Nagios system, the intranet applications, the source code repository and the knowledge base these disks are perfectly adequate, but for database backup obviously a bit more space is required. Having utilised a spare folder on the knowledge base virtual server for the SQL backup, it would have been very inconvenient to erase the machine and create a new one, and given that the host server is not overburdened with RAM I didn't feel it would be wise to put more than 4 machines on the system.

So we come to a less well documented feature of a VMware virtual machine: the hard disk size you set when creating the machine cannot be increased in the future. So you have 2 choices, the Lego approach (smash it up and start again) or the clever approach (add on a new virtual disk). I chose to be clever and in a nice twist of fate got away with it :o)

So I think a little how-to is in order.

Begin by adding a new disk in VMware; it was recommended that I choose a SCSI disk so I complied. Bear in mind this has to be done with the virtual machine powered down. When you have added the disk just power up again and you are there; all you have to do now is get your Linux install to make use of it.

As I have blogged before, our flavour of choice when it comes to Linux is CentOS, but RHEL 4 or Fedora would probably work in exactly the same way because it's pretty basic stuff really.

First use fdisk to create a new partition; as this is our second disk it appears as /dev/sdb (SCSI disk b),
so:
fdisk /dev/sdb
and then just follow the instructions to create a new partition (which will appear as /dev/sdb1).

Next you need to format the new partition, so:
mkfs.ext2 /dev/sdb1

At this point you need to check the space on the disk so:
df
and you should see your new disk listed

Finally you need to mount the disk, so:
mount -t ext2 /dev/sdb1 /home/samba/sql

As I was using samba to share the /home/samba folder, all I needed to do was update the permissions on the folder and restart the service, and the job was done.
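One hedged footnote: a mount made by hand like this does not survive a reboot. Assuming the partition really is /dev/sdb1 and the mount point is as above, a line along these lines in /etc/fstab makes it permanent:

```
# new virtual disk, mounted for the samba SQL backup share
/dev/sdb1    /home/samba/sql    ext2    defaults    0 2
```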

Now I have a bit more space on the share, the backup has worked a treat. It is one of the real strengths of the Red Gate system that you can see immediately how your backup routine is performing using the timeline GUI. As you can see from the picture here, the first couple of databases backed up nicely, but given that they are larger databases they are a bit snug, so I might just ease them apart a little so we don't get a clash as they grow.

Tuesday 1 May 2007

SQL backup - a can of worms!

It seems that the SQL backup marketplace is far busier and more competitive than I had imagined. No sooner had I arrived in the office this morning than a nice person from Quest Software, makers of LiteSpeed, finally caught up with me and set me right on the technology and the price. It seems they have a very interesting suite of SQL Server products, all of which make the standard Microsoft tools easier to use, in some cases adding functionality; the version I had seen yesterday happened to be the developer version at $45, with a full version for our setup being £700, a bit of a jump. In the end I spoke to 3 people there and came away with a whole heap of good advice and technical background to the product.

No sooner had I finished on the phone than I had a very pleasant representative from Red Gate, makers of SQL Backup, on the phone wanting to discuss my recent blog post (gulp!). Actually it turns out that instead of wanting to sue me for mentioning their software on my rather random blog, he wanted to give me a full pitch for the product and was very happy to offer a cheeky discount, a good deal on support and some very good advice about which product was right for us. It also turns out that I was already out of date, as version 5 had come out overnight and I am just in the process of kicking that around. The benchmark looks about the same, maybe a slight improvement in performance, but it has a rather nifty new GUI which plots your backup activity on a timeline graphic so you can see visually whether your backup schedules are in danger of overlapping and going into a shame spiral. (Cue a dip into Google images for 'shame spiral' and the discovery of a picture which is also fitting for the genius which is the timeline GUI.)

My next step was to download a trial of HyperBac to evaluate their approach, which shuns the extended stored procedures of the other products in favour of a totally standalone setup for SQL backup. This in turn invited another phone conversation, as I actually gave my real phone number when downloading :o) It seems the nice people who created HyperBac were originally involved in creating LiteSpeed and decided a while ago to create a new company with a new approach. Again their sales dude was a wealth of information about the strengths and weaknesses of the various approaches taken and was very helpful indeed. The cost for our setup was $499 irrespective of the number of processors we wanted to throw at it, and as such it sits directly between the other 2 products. Having played with the system it performs well, but I would say that both Xceleon and Quest will be interested to have a good squint at SQL Backup version 5 as they are both slightly quicker; overall, though, I think Red Gate have it.

I have set our demo rig up for hourly backups overnight so I will be interested to see what is waiting for me by the time I get in tomorrow morning! Hopefully a useful archive of backups which all restore perfectly.... we'll see :o)

Monday 30 April 2007

Remote SQL backup onto SAMBA Shares

Backup is a subject which comes up a lot on the rack, as regular readers will already have noticed, and I have already detailed several of the strategies we employ to make sure we have a myriad of copies of our essential data spread across the network, preferably as far apart as possible! Our latest investigation concerned how to get a usable copy of our most precious and bloaty database from our SQL Server 2000 installation across the VPN (and therefore Cheshire) on a regular basis.

Time for a topical and amusing dip into Google images for the word of the day: bloated, as in a very large database. This little marmot, who obviously has a small pie problem, represents our database for this afternoon.

The data to be moved is 1.3 GB, over a 2 Mb line, from a Windows 2003 Server to a Linux partition on our CentOS virtual machine. Given all the other scheduled adminnie jobs we have going on overnight I cannot afford for the whole process to take more than about 30 minutes; I sometimes think the network is busier out of office hours!

The solution, of course, is to compress the backup before sending it, and it turns out after a little investigation that there are a couple of products on the market which do this for you. After only a brief search I found SQL Backup by Red Gate Software and LiteSpeed for SQL Server by Quest, which both offer on-the-fly compression, and encryption to boot. One might have expected Microsoft to include a compression option in their rather expensive server system, but one would be wrong as usual (thanks Bill). A bit less time writing the EULA and more on the software next time, eh?

Moving swiftly on, the products mentioned above are relatively simple in their approach in that they add some system stored procedures to your SQL Server install which can be scheduled to run, adding compression and/or encryption to a standard full or differential backup. The cost is quite manageable as well at $45 to $399, which is a lot less than the time would cost to build our own script! (Incidentally, if anyone is interested, here is a start.) Installing the programs on our test bench was easy and initially everything was very straightforward, until we tried to send the backup to our Linux machine....

In order to share a folder on Linux one requires the cooperation of a service called samba, which is really quite powerful and therefore complicated. Sharing a folder with any Tom, Dick or Harry is very well documented and quite easy; authenticating against our Active Directory and sharing with our Windows machines, however, requires the services of a good soothsayer and is somewhat sparsely documented!

Having finally got a shared folder onto the Windows network I thought the job was done, but unfortunately I hadn't counted on a couple of less well documented features of SQL Server.

1. SQL Server does not like backing up to network shares that are not in the same workgroup.
2. SQL Server will not offer to re-authenticate; the user account SQL Server logs in with must have explicit and full access to the network share.
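For what it is worth, the share side of things boils down to a stanza in /etc/samba/smb.conf along these lines. This is a sketch only: the path matches the folder used above, but the domain and account name are placeholders for whatever account your SQL Server service logs in with, and the exact DOMAIN\user form depends on your winbind separator setting.

```
[sql]
        ; share for the compressed SQL Server backups
        path = /home/samba/sql
        valid users = MYDOMAIN\sqlservice
        writeable = yes
        browseable = no
```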

I would love to give a blow-by-blow account of how I got the network share going, but I have been chipping away at this problem sporadically and unfortunately I have rather lost track of how I got where we are.

Having taken a while to sort this out, I have finally got some comparative data and I am very impressed indeed! Given that a 2 Mb line is really quite modest, SQL Backup managed to compress, encrypt and squeeze a 1.4 GB database down to 250 MB and ferry it across Cheshire in 20 minutes and 10 seconds! It took me a while longer to get LiteSpeed up and running, but at only $45 it ripped through the compression and transfer in a mere 10 minutes and shaved 30 MB off the storage requirements at 220 MB! It's my new best friend and I would recommend it to anyone looking to get their database backups as far away from them as possible. The only thing left to sort out is that I cannot afford to change the logon account for the main server as I did on the test machine, so hopefully I can get samba to cooperate with the existing setup this time :o)

Wednesday 18 April 2007

Nagios - The Final Word

Having posted about installing Nagios on CentOS I have finally had a few comments on BeerBytes (hooray). I have also featured in 2 little news posts on other Linux sites; click here for the latest and check out the blogroll on the right for a proper link. Given the obvious appetite for this subject (no, not me: network monitoring) I suppose it's only fair that I should continue dispensing nuggets of information, given that it's the only subject which has raised a flicker of interest.

Having got the basics set up, I spent some time last week getting the network diagram straight; as we have quite a busy network with 60 nodes I wanted to monitor, the automatic function did not do it justice. Moving on from this, I fell foul of having installed the incorrect plugin archive: for some reason check_ping worked fine, but I wanted to start monitoring DNS, MySQL and HTTP on a couple of servers and these plugins would not run. To test your plugins you can simply move to the plugins directory and type ./pluginname; for example, ./check_ping --help will tell you what command line parameters the ping plugin requires, so then try it again with these supplied. To continue the example, ./check_ping -H www.yahoo.com -w 1000,10% -c 1000,10%
returns PING OK - Packet loss = 0%, RTA = 84.63 ms. The commands are already set up for the standard plugins in commands.cfg if your install was OK.


So to sort this problem out I ran off to try my old mate DAG's archive :o) I found the appropriate rpm and bingo, everything works well. If you are relatively new to Linux, as I am, and you are struggling to connect to a repository with yum or rpm, there is a quick and dirty workaround. Simply find the link to the rpm in your web browser, copy the link, get back to your shell, type wget and paste in the link; this will download the rpm to your machine. Next type rpm -i and the name of the downloaded file and this will install everything. Apologies if that was embarrassingly basic, but yum can be a bit hit and miss for me.

The documentation for installing Nagios is quite good, but the documentation for actually using some of the many and various standard plugins is really quite sketchy, so look outside the nagios.org site for this information. The Nagios plugins page on SourceForge is your starting point, but it's not obvious.

In order to start monitoring services, a little editing of services.cfg is required, followed by creating a couple of new hostgroups for similar machines. For example, I wanted to monitor MySQL on 2 machines, so I declared the service in services.cfg as follows:


define service{
        use                     generic-service
        name                    mysql-service
        is_volatile             0
        check_period            24x7
        max_check_attempts      5
        normal_check_interval   1
        retry_check_interval    1
        notification_interval   20
        notification_period     24x7
        notification_options    w,u,c,r
        check_command           check-mysql-alive
        service_description     MYSQL
        contact_groups          nerds
        hostgroup_name          mysql_servers
        }


This will check the hostgroup mysql_servers every minute, 24 hours a day, and email the nerds group if there is a problem for 5 successive checks. It presumes a standard install, which predefines the 24x7 time period and the check-mysql-alive command in the relevant files.
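For completeness, the matching hostgroup is declared in your host configuration along these lines; the member names here are placeholders for hosts you have already defined elsewhere:

```
define hostgroup{
        hostgroup_name  mysql_servers
        alias           MySQL Servers
        members         dbserver1,dbserver2
        }
```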

Now that I have these extra services being monitored, some really quite useful information is being generated, for example the number of connections to the MySQL servers and the response times of the HTTP servers. Another area being fine-tuned, via the timeperiods.cfg file, is when I want to be alerted about certain things; for example, I quite like getting an email if a router goes down overnight, but as some equipment is turned off overnight I don't particularly need to know about that. So, in short, just getting Nagios installed is the tip of the iceberg; the more you think about things, the more instances where good network monitoring is useful become apparent. The good news is that this fine-tuning is very quick and easy once you have the thing up and running.

One final point on Nagios before everyone gets bored: on Windows you can actually use Active Desktop to embed your live network map into your desktop, making sure you never miss a trick :o) Simply go to desktop properties -> customise and paste your Nagios address, followed by /cgi-bin/statusmap.cgi?host=all, in as a new web address to embed into the desktop.

Some other things happening in our little team include a spontaneous upgrade to Adobe CS3. I haven't even got it installed yet, so you will all have to wait for some views and opinion, but one thing to remember is that you need bags of hard disk space: the download is about 1.5 GB, it unpacks to nearer 2 GB, and then it needs 5.6 GB for the programs, so unless you want to be cleaning up after every stage you need about 10 GB! Also, the knowledge base is filling nicely and the more I use the product the more I like it; just one little point is that you have to keep going to different URLs to do different things, as it does not check your security and give you all the options you are entitled to.

Friday 13 April 2007

A two pronged approach to version control

We had a little IT retreat earlier this week to discuss how we can work more effectively as a team. Already this has spawned the knowledge base, which we are diligently filling with guff, but another thing which became apparent is that we need tighter version control on the source code for the applications we are developing, the aim being to make it easier for several people to work on the applications together without constantly tripping over each other trying to edit the same files.

In a previous project, compadre Rob and I created a sports club management tool as a team, and it very noticeably benefited from the contrasting styles and knowledge which were brought to bear on the task. We used Subversion for source control on this project, running on Windows, and although it was very useful it never quite delivered on all sides.

The reasons for this were mainly due to SVN being focused on text-based source code and the merge principle of team work. For example, if two people work on the same text file simultaneously, SVN can very cleverly merge the separate changes together, and 9 times out of 10 they will not conflict. Where the SVN system begins to come unstuck is when using non-text source files like Flash FLA files: as these use a proprietary format you cannot merge them if 2 people have made simultaneous changes, so you immediately end up in conflict. In this scenario you need to lock a file on the server while you are editing so that no one else can open it; the problem is that SVN is not very good at this, unless, as can happen, I have missed something.

Given that our new systems are being developed in Flash, with support from PHP files and a little MySQL thrown in, I decided to look more closely at the version control on offer in Flash and Dreamweaver. It turns out that although the current system is very good at locking files, using its 'Check In'/'Check Out' philosophy, it is not quite so good at keeping an entire repository synchronised unless Dreamweaver is your weapon of choice. As each of us in the team prefers a different HTML editor, this will work very well for the Flash files but not for the project as a whole. According to the Adobe site, the new CS3 version has been greatly improved in this respect.

So the solution which seems to present itself is in fact to use both systems in tandem, with Flash taking care of its proprietary source files and Subversion (via TortoiseSVN in my case) taking care of the text-based files and having overall responsibility for the repository. Touch wood, this seems to be working nicely, but we have yet to get the whole team working on the project simultaneously.

If anyone else fancies having a go at this, installing Subversion on a new virtual server is very straightforward and there are lots of good tutorials on the subject; click here for the definitive guide for CentOS.

The Adobe site, or the online help for Dreamweaver or Flash, is the best place for information on how to use the current simple Macromedia version control.

And finally there is a short article here about fine tuning the setup when using both of these systems concurrently.

Wednesday 11 April 2007

The IT Brain Dump

One thing we have always struggled with in our little IT department is the sharing of important information. If I set up a new system I might note the details in a book, or even on a shared document, but we have never quite found a system which works for us all, and as a result we cannot always put our hands on other people's knowledge quickly and easily. Last week we decided to have another go at organising our information, and whilst wading through the available knowledge management tools we came across a couple of gems.

One system which came top of the list on Google was an open source system called TWiki, which I must say was my first choice for a while. They have some very big companies using the software and it looks like a very simple system which, in the tradition of the wiki, allows all users to contribute towards a knowledge base. I think my main gripe was that I wanted something which looked more easily organised and more like an application than a simple website, although in TWiki's defence I did only look through the demo for a few minutes.

The product I found in the end was a very nice PHP application called PHPKB, as in PHP Knowledge Base. Although this would not be to everyone's taste, the fact that the application is available as a simple and very reasonably priced series of PHP pages suits our setup here perfectly. You have to have access to, or know how to set up, a web server and a MySQL database to serve this application, but I did note that the company offer a free setup service and they do have a hosted option. We simply added a new virtual server to our main virtual host and had the system running in about an hour. The installation is quite simple and managing the system once it's running is very straightforward; I have already started posting bits of information about our systems, and it's surprising once you get started how many very important nuggets are stashed in emails and even in your head.
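For anyone wondering what "adding a new virtual server" amounts to, it is just another VirtualHost block in the Apache config. A sketch, with every name hypothetical (server name, paths and so on will depend entirely on your own setup):

```apache
# Hypothetical names throughout; adjust ServerName and the paths to suit.
# PHPKB itself just needs PHP and a reachable MySQL database.
<VirtualHost *:80>
    ServerName kb.example.local
    DocumentRoot /var/www/phpkb
    <Directory /var/www/phpkb>
        AllowOverride All
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>
```

Point an internal DNS entry at the box, reload Apache, run the PHPKB installer and you are away.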

One of the other nice things about this particular system is that some categories of information can be public and some can be protected, so that in-depth technical information can be cordoned off while some of the less technical information and tips can be made available to everyone in the organisation. As with all such systems the usefulness is only going to become apparent when we have been using it for some time, but I would say that with a little perseverance it has the potential to save an awful lot of time and stress.

On another point, regular readers of 'AVFTR' will have noticed that someone actually commented on a post yesterday :o) In fact the nice gentleman concerned even blogged about the blog! I must add it to the blogroll. I am now braced for a massive increase in traffic; I might call Blogspot to make sure they have capacity, because in the last 4 hours of yesterday I had 30 visitors.

Thursday 5 April 2007

Nagios on Centos - a grudging union

CentOS is one of the great network operating systems. It was developed by a group of people who saw that Red Hat Enterprise Linux 4 had become super reliable but slightly bloaty, so they exercised their rights under the GNU General Public License, took the source of RHEL4, put it on a stairmaster and gave it to the people.

Likewise Nagios is a great open source network monitoring system; if you are a Linux user and run a network, chances are you will have come across Nagios, as short of forking out about £1000 it is in fact pretty much your only option. About 12 months ago I installed Nagios on Fedora and it was a breeze; even though Nagios is a very comprehensive system requiring lots of fiddly configuration, on Fedora if you follow the instructions you will succeed in getting going in about an hour.

Unfortunately, even though CentOS and Fedora have a common ancestry and are very similar, trying to install Nagios on CentOS will drive you up the wall. Unless I have done something stupid without realising it, installing the system from RHEL4 RPMs seems to scatter the files from one end of the disk to the other, and it takes lots of patience to track them all down and link everything up. My advice would be to follow the instructions to the letter, but don't be surprised if the files are not where you expect them. Click here for the main Nagios site; this post is not a guide to installing Nagios on CentOS, just an amendment to the install guide based upon my rather frustrating experience.

Just in case I forget or anyone else trips over this, the locations are as follows:

Config CFG files - /etc/nagios
Web interface files - /usr/share/nagios
Log files - /var/log/
CGI files - /usr/lib/nagios/cgi
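If your install has scattered things somewhere else again, the quickest way to track the files down is to ask the RPM database or sweep the likely directories; a sketch, assuming the package is simply called `nagios`:

```shell
# Two ways to find where an RPM install scattered its files.
# 'nagios' is the assumed package name; adjust to whatever your RPM is called.

# 1. Ask the RPM database directly for everything the package installed
#    (exits non-zero if the package isn't installed, hence the || true):
rpm -ql nagios || true

# 2. Or sweep the usual suspects for anything nagios-flavoured:
find /etc /usr /var -name '*nagios*' 2>/dev/null
```

The first form is definitive when the package manager did the install; the second catches anything a source build or a stray script left behind.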

A guy called Dag (??) has done some CentOS RPMs, but I couldn't subscribe to his repository; if you can, it is quite possible that he has reworked the install to follow the instructions. Couldn't resist doing my 'Google Images' thing for Dag; it turns out this Swedish guy is also comfortable going by the name Dag. There are some great translations for Dag on Wikipedia: in Swedish it means 'Day' and in Turkish it refers to a 'Mountain'.




So now the dust has settled after our mammoth network rewire last week, and with Nagios running sweetly I feel quite satisfied with everything. As expected we have had a few static routes crawl out of the woodwork, and we have renewed our efforts to use DNS rather than IP addresses for routing around the network. It turns out reversing the VPN connections was not all that it promised and we have moved them all back again; it also seems that having Nagios running is actually very good for the stability of the VPN, as the frequent pinging seems to keep the routers awake and the tunnels in good repair.

One job left to complete is to define a custom status map for Nagios; as we have over a hundred nodes on the network being monitored, the auto-generated map is a bit of a mess, so I have to define the map by hand, which is a bit of a pain. That said, it will look very nice, as we have purchased an icon library for our software development and their networking set is very sweet. See left for a sneak peek; note however that our main managed switches are not down, it is just that Netgear have issued a firmware upgrade they are short of. One day I would love to do a more comprehensive Flash front-end to Nagios, but frankly right now I have better things to do.

Another job I think would pay dividends would be to set up a secondary DNS server at Manchester; it is probably quite straightforward, but I think I will let the dust settle before attempting this one.

Wednesday 4 April 2007

Some Excellent Manipulation

An interesting little job came up yesterday which involved formatting data on an Excel spreadsheet. We have some lists which have to look pretty but are edited frequently, and we were having to spend a lot of time ensuring that these lists had a reliable and consistent format. Lots of ideas spring to mind for a job like this, and the temptation is to go for yet another little database application, but in this case it really felt like it would be overkill.

The solution which appears to have legs is to create a rather nifty Excel parser using a couple of useful PHP add-ons. For those of you who don't know what a parser is, the definition on Wikipedia is "the process of analyzing a sequence of tokens to determine its grammatical structure"; in layman's terms, think of it as a digester of documents. You push one in one end and it reads it, digests it and magically supplies a result, or in this case a completely reformatted spreadsheet. This will allow us to keep our data in very simple unformatted spreadsheets, but by running them through our new system we can have a nicely formatted, consistent look ready to print in a click. It's all summed up nicely by another of my random dips into Google Images, this time for the word "parse"; see image right.

So if anyone ever has a need of such a beast or, more likely, if in 6 months' time I have forgotten what I did and need a reference, the two places to go are PEAR for the Excel Spreadsheet Writer add-on and SourceForge for the Excel Reader add-on. When these are installed and working individually there is no reason why they cannot be used in the same PHP script in a push-me-pull-you sort of fashion. It only took a couple of hours to get a basic system running, and you can even allow the user to specify some parameters with their spreadsheet. So, for example, you pass in a raw sheet of data, a title, a font and a relative font size, and the parser, running through the writer, will apply sizings and fonts in defined ways to different columns of data; it will even specify margins and printing areas so the document is completely ready to roll.
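A rough setup sketch for the note-to-future-self file; the package and file names here are as I found them at the time, so double-check against the PEAR and SourceForge pages before relying on them:

```shell
# Hypothetical setup sketch; verify package/class names against the current
# PEAR and SourceForge pages before use.

# The writer comes from PEAR (skipped here if pear isn't on the box):
command -v pear >/dev/null 2>&1 && pear install Spreadsheet_Excel_Writer || true

# The reader is the standalone 'Spreadsheet_Excel_Reader' PHP class from
# SourceForge: download it and drop reader.php somewhere on your include path.
# One script can then read with one and write with the other, e.g.:
#   require_once 'Spreadsheet/Excel/Writer.php';   // PEAR writer
#   require_once 'reader.php';                     // SourceForge reader
```

With both loaded, the push-me-pull-you loop is just: read each cell out of the raw sheet, then write it back into a new workbook with the fonts, sizings and print areas applied.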

Keep tuning in for the definitive guide to installing Nagios on CentOS without 'going postal' later this week.

Tuesday 27 March 2007

Completing the VPN

For the past couple of years our VPN has been something of a work in progress; although it has been providing a useful service it has never been quite finished, one of those 95% projects. Well, I am sure you will all be excited to learn that tomorrow it looks like we will finally be able to stand back and say that the network is one project that is 100% complete in every way (for now).

All sites and key personnel are connected, our co location facility is connected, we have finally finished putting all the head office connections onto the Draytek 3300V and we have the RADIUS server running for mobile workers. Added to this we have upgraded the head office switches to managed Gigabit switches and tomorrow we should be activating the load balancing system so that if our leased line goes off the ADSL line will try to pick up the pieces and vice versa.

One of the major improvements has been the recently discovered need to reverse all the connections, so that now both the co-location router and the head office router are in charge of the connections to the remote locations. Quite why this never occurred to anyone before will remain a mystery, but hey ho :o) Of course the big question was which way round the connection between Manchester and head office should be configured, now we have two very clever VPN brains, one at each end; well, the odd coincidence is that you can set them up to dial each other! It is not immediately obvious whether this will cause them to get in a knot at some point, but we can always toss a coin and switch off one link if it does.

The main visible improvement has been the state of our comms cabinet, or spaghetti junction as it was known. When we have disposed of our old ADSL router and installed the super VPN pass-through modem we should finally have a perfectly organised rack of kit with no dangly accoutrements for once. I will take a piccie tomorrow to illustrate, but unfortunately I don't have a before picture to give the true contrast.

I watched an interesting webinar last week about Steelhead WAN link optimisers from Riverbed, which can allow network applications to be run from the datacentre, so maybe this could be the way forward for the network, phase 2 if you will. I just need to find a couple on eBay at a bit of a discount, as they are rather expensive. Maybe terminal services will be a better bet, but if anyone wants to bin a couple of Steelheads do call :o)

Tuesday 20 March 2007

Extracting the 2950 from the dog house

Our upgrade to the Manchester server looked like being a bit of a damp squib this morning. After a good start, with our new Draytek 2950 firewall router dropping straight in as a replacement for the 2900, things were not turning out to be as reliable as I had hoped: twice the previous day the link from my house to Manchester had locked up, and then Nigel had a similar problem when his tunnel locked up. The outlook did not look good.

However, after having a pootle around on the Draytek forums and not really coming up with a lot, I had a moment of inspiration. The problem is that if one of the pub routers drops its connection it is too stupid to realise, and doesn't try to dial back into the main host. So the solution is surely to remove the responsibility for dialling from the slightly less clever routers and put the main super-efficient router in control; in a nutshell, get Manchester to dial the pubs instead of the other way round. This way Manchester should know when a connection has dropped (using the power of its dedicated VPN processor) and simply dial it up again. One other thing to be aware of if anyone is trying this is that the default timeout on Draytek 2600s is a lowly 300 seconds and should be reduced to zero :o(

So after reprogramming all of our routers and having a major rejig of the Manchester end of things it all looks a lot happier. Of course it's not as simple as checking they are all connected, and I won't really know until a few days have passed. Tomorrow I will rejig our Nagios server to test these connections, and I should start getting an idea of how good this configuration is going to be quite soon. As an aside, when searching for 'damp squib' on Google Images this is what comes up; call me a fool, but that dead cow don't look too damp to me :o)

Thursday 15 March 2007

Instant gratification

No sooner had I posted about my declining traffic due to lack of blog fodder than I got 30 visitors in the same day. I am wondering whether this means that people have subscribed using RSS readers, and therefore I can generate as much traffic as I want simply by bombarding you all with posts!

Our 2950 arrived last week and was immediately dispatched to Manchester, where it has now been installed. It's early days yet to get an idea of whether it is going to provide a better and more reliable service, but so far so good. Although the 2950 is quite different to the 2900 it is replacing, and as such you cannot simply export the configuration and move it across, the setup screens are pretty similar so it was easy to copy the configuration page by page. We have about 20 VPN tunnels hard coded into the config, and one thing which would be great in a further revision of Draytek's new operating system would be the ability to duplicate an entry in the profiles page, because setting each one up by hand can be a bit of a drag, just in case anyone from Draytek ever trips over this.

One other little happening which I found very interesting is that Google have created a little program for Pocket PCs which runs Google Maps. I must say that my HTC PDA is becoming more and more useful now that I can get my email; it is a phone, it syncs with my Gcal over the air thanks to GooSync, and now Google Maps as well! I feel quite spoilt.







With all this in mind I found it very interesting to note that HTC have got a new version of my favourite toy coming out. I would guess that they are watching the iPhone with some trepidation, because although Apple will have a lot of work to do to create something as functional as the HTC S710, I think we can all be confident that OS X on the iPhone will be slicker than Windows Mobile 6. So, like the technology lemming that I am, when the contracts on the phones are up for renewal I shall be torn between these two new and probably slightly flaky but 'oh so cool' toys, and once again spend the duration of the contract getting it just right in time for the next one.

Tuesday 13 March 2007

Google Apps cntd.

I have been progressing with lots of odds and ends this week and last, which has meant no blogging; you can't force it after all. Unfortunately this means my traffic has nosedived, so it's a good job I am not dependent upon the income from my blog like Rob is ;o) (see previous post)

Which illustrates a point I wanted to make: should anyone read this and think 'Hey, that looks like a cool thing to do, that blogging lark', the only advice I would give is to get a Google Analytics account, because otherwise it is pretty dull. As you can see I have visitors from all over the world when I can actually work up a little blog fodder, but otherwise I have had no feedback at all, so praise Google for keeping beerbytes online!

So I am sure you have all been waiting with bated breath for an update on the Google Calendar situation; well, we have had some movement: Spanning Sync is out of beta and ready for deployment. Hooray! I really must congratulate Spanning Sync for producing just about the only calendar syncing software which works. I have been running it now for about 4 weeks, and whilst SyncMyCal was merrily trashingmycal, Spanning Sync has just worked, which is all we ask.

Another piece of software which also seems to be working is a little system called GooSync, which syncs my Google calendar over-the-air with my HTC Pocket PC. This means the Mac side of things is now perfect: iCal syncs my calendar both ways with Google, my PDA does likewise, and iCal also displays the rest of the office's calendars via .ics addresses.

This last point brings me onto a bit of a gripe with Google Apps. On paper Google Apps is great, but I have steadily been whittling away at the features which are actually useful to an organisation like ours. Two weeks ago I closed the premier account because it was not bringing anything to the table over and above a standard Google Apps for Domains account, and this afternoon I have had to walk away from even this, because the calendaring system on these accounts does not allow private iCal feeds! Big mistake. I have had to spend some time this afternoon manually setting up standard Google accounts, because SyncMyCal on the PC is not very happy with the Google Apps calendaring feeds, which seem to be slightly different from standard Google feeds, and Outlook 2007 cannot plug into the iCal feeds on Google Apps.

So I am short of only two things now: GooSync is struggling to work on our Nokia 6233s, but I am sure we can sort this, and I need to get Outlook publishing to Google, which SyncMyCal can allegedly do as long as you're not on Apps.

In other news, Joost is still very cool, but the content has been suspiciously stagnant since I first went on, possibly indicating an Achilles heel: is it difficult to get the established networks to part with their premium content? And/or is it difficult to circulate new content around the Joost network? Maybe they need to take a look at the YouTube way of doing things and get the public to contribute. I am not talking about the 240x180 endless rips of Simpsons funnies, but seriously good quality video from Joe Public; something along the lines of a BitTorrent network where people can submit content recorded on proper video cameras to make good quality but amateur videos which will play full screen. These could be vetted and categorised by Joost, thereby avoiding the lawyers making a fat buck or two and allowing a huge variety of content to be available.

Friday 2 March 2007

I've seen the future - and it's Joost

I am going off topic, as in this has nothing to do with IT and business, but for anyone else who thinks that TV in the UK is a bit poor at present: I have just seen the future, and it's truly exciting and scary in equal measure...

A perfect Friday night: I got a pass from the missus for an hour at the local, they were in the middle of a barrel of Jennings Mountain Man, which is nectar, and I got home to find out that I have been admitted to a rather exclusive club. I am a Joost beta tester!

Now, for those of you who don't know (I can hardly remember quite how I ended up signing up for Joost myself), Joost is a new online TV station. Online TV is one of those technologies which has promised so much in the past but never lived up to the hype; I am sure we have all tried some of those poor RealPlayer feeds in the past, and frankly if you have RealNetworks shares take my advice and sell, sell, sell! Joost is a quick download, and I was genuinely very pleased to find a Mac version for a change, so I installed it on the ol' MacBook, logged in and just watched half an episode of Fifth Gear full screen with no buffering. Seriously.

Vicky Butler-Henderson never looked so good, although I suppose she never sat on my knee whilst presenting Fifth Gear before :o) I am such a nerd sometimes...

The choice of programming, even in beta, is far superior to HDTV even if the quality isn't. It's full screen though, it's on demand, they have programmes from National Geographic, Channel 5 and MTV, I even heard tell of a Viacom deal yesterday, and you can tell that the choice is going to be truly staggering on an international scale. I cannot emphasise this enough: sign up now, because in 6 months' time your digibox is going to look pretty dull, and that is the scary part. I dislike TV, I like gardening, I am a frustrated smallholder (in that I haven't got one), I think people should be tilling the soil rather than vegging on the sofa, and I quite like having an excuse to turn off the TV when nothing is on. In a Joost world there is always going to be something on you would like to watch; maybe land will become cheaper because all the smallholders will be watching Joost, and then I can make my move... sweet.

Thursday 1 March 2007

Delving further into Google Apps

Last week I set up a Google Apps account to investigate the potential for using it as a replacement for Microsoft Exchange, given that we only really rely on Exchange to share calendars. Today I finally got the chance to look properly into the fixtures and fittings of Google Apps, and I must say that my initial impression is very positive.

Although I had used Google Calendar before, and found the user experience to be a very positive one, especially for a browser-based application, I had not really investigated the other aspects of the Apps suite, and that was the aim of this afternoon. Google Apps is actually a very simple system, and that is one of the things that appeals to me about it; the manual for Exchange is about a foot thick, and I would say that it's a typical case of Microsoft over-engineering. It is quite unfair in some ways to compare these two products, because Exchange does lots of things that Apps does not; however, for a small office like ours I would say Google is closer to fulfilling our requirement for information sharing.

So to the details. There are only three elements of Apps which really interest me at present, and these are, in order of importance, calendars, email and finally the personalised start page. Of course each of these elements has been available on an individual basis for some time, but the new Apps for Domains approach allows one person to administer these systems for an entire office. The first thing you have to do is provide the domain; it would have been possible to sign over our existing brunningandprice domain, then link back to the website and use Gmail as our primary email system, but I am not quite ready to take a jump like that so I got a new one for a mighty £2 pa :o)

One of the really nice features becomes apparent at this point: if you want to create user accounts for 20-odd people you can simply fill in an Excel spreadsheet and Google will quite happily auto-generate your user accounts. Easy. You can even set a global setting to ask all your new users to change their password when they first log on, and preset their sharing options to allow everyone access to each other's calendars. At this point, if we were happy to use online systems, the job is done: everyone has an account, email, calendar and even instant messaging.

The next thing I wanted to look at was the personalised start page. Everyone in our office, almost without exception, uses Google as their start page anyway, so the opportunity to add their calendar and email to this was quite appealing. It turns out that this system is again very simple to use, and I simply popped our logo on the top of the page and dropped in each user's calendar, email preview, to-do list and even a link to our internal applications. I think we will get compadre Rob to use a bit of his design magic on this page if the idea takes off.


So far so good. The task for tomorrow is to make a decision about how best to approach the email: should we simply have one POP3 account, or route the existing mail via Gmail? My initial reaction is no; however, Gmail is a rather nifty webmail system compared to our existing service... tempting. Also it's time to start throwing some larger calendars up and seeing what happens to performance. I'll keep you posted.

Wednesday 28 February 2007

A really slick backup idea!

And it wasn't even mine.....

It's another not very exciting post about backup. I know it's not the most fun subject, even for fellow nerds, but it's a fact that backup is very important (see the last but one post). One issue we have as a company with distributed sites, and therefore computers, is trying to make sure we get copies of the data from harassed managers. It's no use giving them a cake of CDs and a printout which requires them to do complicated computing tasks; it simply doesn't work, so you have to think of something really slick, and this is what our main Mac man Nigel has done.

At the same time that I was playing with rsync to back up my Manchester installation over the VPN, Nigel was using the same program, which comes as standard on the Mac, to do some simple backup tasks on his computer. Having both recently installed a nice RSS reader called Vienna, we have access to a useful little blog called Mac OS X Hints, which pretty much does what it says on the tin. One hint which Nigel picked up on was a rather nifty little application called "Do Something When", which very simply runs a script when a USB Flash drive is plugged into a Mac. Using the power of his brain, Nigel then put these two ideas together, resulting in "A really slick backup idea". Luckily Nigel doesn't blog, so I get to write all about it and bask in the glory of the idea without doing a thing - sweet :o)

So after a small amount of programming, setting up your Flash drive and rsync script, every time you plug the Flash drive into the Mac it automatically does an incremental backup of your predefined files! This will allow us to present our managers with a backup solution so slick they can't fail to use it...
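For the curious, the script "Do Something When" fires can be as small as this sketch; the mount point `/Volumes/BACKUPKEY` and the source folder are hypothetical, so adjust both to your own drive and data:

```shell
#!/bin/sh
# Sketch of the rsync script "Do Something When" could fire when the Flash
# drive mounts. Mount point and source folder are hypothetical placeholders.
SRC="$HOME/Documents/"
DEST="/Volumes/BACKUPKEY/Documents/"

if [ -d "$DEST" ]; then
    # -a preserves permissions/timestamps and only copies what has changed,
    # which is what makes repeat runs an incremental backup; --delete mirrors
    # removals so the stick stays an exact copy.
    rsync -av --delete "$SRC" "$DEST"
else
    echo "backup drive not mounted; skipping"
fi
```

The trailing slashes on both paths matter to rsync: with them, the contents of `SRC` land directly inside `DEST` rather than in a nested `Documents/Documents` folder.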

The rest of this week has been programming for me: a mix of finishing our new job applicant database and moving our existing data into the new database, from Microsoft SQL Server to MySQL, which has been a bit painful. A thing to watch out for if you are trying to use DTS to move data between these databases is that text data will not copy across using an ODBC connection, and I couldn't tell you why. Varchar data will move, and I have checked it's not a collation issue; it just won't go, answers on a postcard please. The solution I used in the end was moving the data to Excel spreadsheets and then pushing it into MySQL from there, not very elegant really.

A view from the rack is the personal blog of an IT manager who works for a pub company - hence beer