Friday 9 December 2011

A farewell to SWFs and hello to gestures

Flex development for iOS is a new adventure for me. It has taken about 3 years to gain a reasonable proficiency with Flex but, as with all things, nothing stands still and the pressure to keep moving with the times is ever present. It seems to me that the announcement halting Flash development for mobile devices represents a step forward rather than a retreat. The plugin architecture has plainly had its day and device manufacturers are demanding control of the programs which run on their hardware via the App stores which are now ubiquitous.

The problem is that my BI platform is entirely crafted in Flex and therefore relies on the Flash plugin, as I had never felt the urge to move over to AIR before now. That has all changed, and it has actually been a very positive prod to investigate the new capabilities of Flash Builder and to begin to think about designing in a new way. Cross-platform RIA used to mean Flash and I suppose it still does, but as I understand it you are now looking at packaged Flash rather than SWF files ready for the plugin, and a series of GUIs crafted for each of the expected deployment platforms.

Anyhow, enough of my view; it's hardly an original train of thought and it's well documented elsewhere, so on to the small nugget of code I wish to preserve for my cluttered memory. Along with the new platforms which Flex supports there are also some new mobile-specific features, in particular the ability to handle gestures or "finger swipes" on mobile touch screens. It's actually beautifully simple, and the code follows for an empty view where I wanted to back up the back button in the action bar with a swipe feature for moving back to the previous view.

In the View tag you need the following attribute:

gestureSwipe="handleSwipe(event)"

and then the following in the script block:

import flash.events.TransformGestureEvent;
import spark.transitions.SlideViewTransition;
import spark.transitions.ViewTransitionDirection;

private function handleSwipe(event:TransformGestureEvent):void {
    var slideViewTransition:SlideViewTransition = new SlideViewTransition();
    // offsetX is 1 for a left-to-right swipe, -1 for right-to-left
    if (event.offsetX == 1) {
        slideViewTransition.direction = ViewTransitionDirection.RIGHT;
        navigator.popView(slideViewTransition);
    }
}

Now if you deploy your app to a touch screen device you can move to the previous view simply by swiping your finger; it takes the app into the brave new world!

A breath of fresh AIR

After 3 years away I have decided to reopen the blog, but with a slightly changed focus. In the past my blog was a useful journal of the progress of the IT department, concerning infrastructure, programming and Linux. Going forward, although there may be the occasional Linux post, I will shift focus to Flex, AIR and iOS development using Flash Builder. The IT department still exists, although I am less inclined to share the infrastructure and Linux progress, but there are so many new and interesting features coming with Flex development that I need a journal just to keep track, so I might as well share it!

Thursday 13 September 2007

Enslaving a BIND DNS server on CentOS

One thing I have been trying to accomplish ever since I commissioned our server in Manchester, almost 9 months ago now, is getting it set up as a secondary DNS server. This I have finally accomplished and it is an experience worth sharing. There are many how-tos on the Internet which tell you how to set up a DNS server on Windows or Linux, but what I was after was to set up BIND on Linux as a failover for our main Microsoft DNS server, and this is a far less well documented scenario. Also, contrary to what you might read, it's actually really easy; just don't step off the path!

For those of you who don't know, DNS is the system which marshals traffic around the Internet; for example, in the absence of DNS you would have to type 72.21.206.5 instead of amazon.com to get to your favourite e-commerce merchant :o). So DNS is important for the Internet, but as local networks are now very much modelled on the Internet scheme, without DNS it also becomes very difficult to manage your network in a user-friendly manner. Which as usual is great until it breaks!

So, as your office network becomes more central to the workings of your business, it is natural to want a secondary system in case the first one breaks, especially when you are using a VPN as we are, since the loss of our central DNS server would render our remote systems unusable as well. So that's the background; here is the solution to setting up a secondary DNS server using BIND on Linux as a slave to an Active Directory DNS server. Bear in mind this is for CentOS 4.5 (RHEL 4 equivalent) using the command line; if you are using a GUI just use the GUI tool!

1. On the Microsoft box open DNS and right click on the forward lookup zone you wish to replicate, eg, somebiz.local. Under 'Name Servers' add the IP address of your Linux box.
2. If you have already been playing, completely remove your existing BIND installation (yum remove bind), and trash any files in /var/named/chroot/var/named.
3. Run yum install bind to install a fresh one.
4. Paste the following into /var/named/chroot/etc/named.conf


// Red Hat BIND Configuration Tool
// Default initial "Caching Only" name server configuration

options {
    directory "/var/named";
};

zone "mydomain.local" IN {
    type slave;
    file "slaves/mydomain.local";
    masters { xxx.xxx.xxx.xxx port 53; };
};

include "/etc/rndc.key";

5. Substitute your own domain for mydomain.local and your Active Directory server's IP address for xxx.xxx.xxx.xxx.

6. Run service named start and make a cuppa cos you're done!

Obviously this is not a comprehensive look at the subject; there is an awful lot more to play with in BIND, but that really is all you need to do to get going. Hope it helps..

Thursday 2 August 2007

New lines vs Carriage returns

There really is not much in this post for the layman, so if you are not a dyed-in-the-wool techie you're going to find this dull, for which I apologise.

I came across an interesting problem this afternoon while quietly building some PDF files using PHP (as is my wont) which I would like to share. It concerns the carriage return, an invisible character usually used to move the currently printing text onto the next line. It turns out that FPDF, the PHP module I have been using to generate PDF files, just doesn't like them. Instead it uses the alternative invisible character, the 'new line', which is great :o) hooray!

The really ticklish bit of all this is of course that these characters are both invisible, so when one is having a struggle with them it's really a bit like wandering around in the dark. The first step when trying to sort a problem of this nature is to turn the characters into their representative codes, for which one requires the ord() function. One can then isolate these pesky invisible characters and decide what to turn them into.
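To show the idea (in Python rather than the post's PHP, purely for illustration; the function name is mine), mapping each character to its code with ord() makes the invisible characters visible, and the eventual fix is the same swap-CR-for-LF trick as the PHP below:

```python
def reveal_control_chars(text):
    # Map every control character to its ASCII code so that
    # CR (13) and LF (10) stop being invisible
    return [(repr(ch), ord(ch)) for ch in text if ord(ch) < 32]

sample = "line one\rline two"
print(reveal_control_chars(sample))   # [("'\\r'", 13)]

# The FPDF fix in the post is the same idea in PHP: swap CR for LF
fixed = sample.replace("\r", "\n")
print(reveal_control_chars(fixed))    # [("'\\n'", 10)]
```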

So to cut a long story short if you are trying to print a block of text using FPDF and your line breaks are not appearing use the following little function on your text:

$text_var=str_replace(chr(13),chr(10),$text_var);

where $text_var is the variable containing your text. The function will magically replace the ASCII character 13s (carriage returns) with shiny new ASCII character 10s (new lines).

Thursday 12 July 2007

Pdf Trials

As soon as a programmer creates a program which stores information, at some point someone is going to want to get at it (the information, that is), print it, send it to someone else or just take a copy. In other words, one requires a portable document.

In our little in-house programs we have been using Excel a lot, it is ideal for presenting raw data to IT savvy managers as quite often the first thing they want to do is chop the data around, turn it upside down, rattle it and maybe analyse it against another spreadsheet. Excel begins to struggle when it comes to printing and presenting a better looking sort of document so we have begun to look at creating PDF documents for some of our reporting and printing requirements.

The first thing you find when you are looking for a program to create PDF files is a library called PDFLIB made by a company called ... PDFLIB. They have products for programmers who use C, Java, Delphi and our own favourite, PHP. The upside is that it is quite comprehensive, offering the vast majority of the elements defined in the PDF format; the downside is that it costs about £1000. Given the price tag I decided to dig a little deeper and turned up a product called FPDF, where the F stands for Free - a much more palatable entry in the IT budget.

So having installed the library effortlessly and tried a few of the examples I was impressed; the functions are limited but actually it seemed to cover everything I needed. The next step was to dive into probably the most complicated PDF file we have, and up until the very last element it was all going so well; attempting to write vertical text knocked the wheels off the wagon, however. Was vertical text going to cost a grand!?

Back to Google with a fresh cuppa. Given that the PDF format is freely published I thought maybe I should try my hand at extending the FPDF library and adding a function for rotated text myself. The problem is it has been 15 years since my A level maths teacher force fed me matrix transformations, and while Adobe might have published the format and described the matrix transformations, they don't really 'throw you a bone' when it comes to programming them.

More tea and some inspiration from a Tunnock's tea cake turned up a PEAR module, File_PDF, which is in a slightly fragile beta 0.2 state but essentially builds on FPDF and includes a rotated text function. Being a PEAR module, installation was easy, but could I get it to work? Alas, no. However, could I lift the bonnet on both these PHP projects and force them into an unholy union? Why yes sir! So this is my contribution to the open source community: if you are using FPDF and need the ability to rotate some text, paste the following code into the fpdf.php file and call it as shown in the last line of code.


function writeRotie($x, $y, $txt, $text_angle, $font_angle = 0)
{
    // Negative coordinates are measured from the opposite edge of the page
    if ($x < 0) {
        $x += $this->w;
    }
    if ($y < 0) {
        $y += $this->h;
    }

    /* Escape text. */
    $text = $this->_escape($txt);

    // The font axis sits at 90 degrees to the text baseline
    $font_angle += 90 + $text_angle;
    $text_angle *= M_PI / 180;
    $font_angle *= M_PI / 180;

    $text_dx = cos($text_angle);
    $text_dy = sin($text_angle);
    $font_dx = cos($font_angle);
    $font_dy = sin($font_angle);

    // Emit a text object with a rotation matrix via the Tm operator
    $s = sprintf('BT %.2f %.2f %.2f %.2f %.2f %.2f Tm (%s) Tj ET',
        $text_dx, $text_dy, $font_dx, $font_dy,
        $x * $this->k, ($this->h - $y) * $this->k, $text);
    if ($this->underline && $txt != '')
        $s .= ' ' . $this->_dounderline($x, $y, $txt);
    if ($this->ColorFlag)
        $s = 'q ' . $this->TextColor . ' ' . $s . ' Q';
    $this->_out($s);
}


// Write 'Soup' at grid ref 50x50 at a 90 degree rotation
$pdf->writeRotie(50, 50, "Soup", 90, 0);
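For the curious, the +90 degree offset in writeRotie() is exactly what turns the four cos/sin values into a standard 2-D rotation matrix for the PDF Tm operator. A quick numeric check of the same arithmetic (sketched in Python just for the sums; the function name is mine):

```python
import math

def tm_entries(text_angle_deg, font_angle_deg=0):
    # Mirror the writeRotie() arithmetic: the font axis is offset
    # 90 degrees from the text baseline
    font = math.radians(font_angle_deg + 90 + text_angle_deg)
    text = math.radians(text_angle_deg)
    return (math.cos(text), math.sin(text), math.cos(font), math.sin(font))

# For 90 degrees this gives (cos t, sin t, -sin t, cos t) = (0, 1, -1, 0),
# the textbook rotation matrix
print([round(v, 6) for v in tm_entries(90)])
```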



As ever, one hour of the day accomplished 95% of the job; the rest of the day was spent chasing around after a seemingly straightforward will-o'-the-wisp of a function.

Friday 22 June 2007

Fiddling around and the simple storage solution

I was invited onto the Amazon Computing Cloud yesterday - a seminal moment, as I have been desperate to have a go on it for weeks now. But disaster! I was passed over because I had foolishly forgotten to open an S3 (Simple Storage Service) account, which is a prerequisite, and despite my grovelling and the immediate opening of an S3 account I appear to have missed my window of opportunity.

That said, it has given me a little time to investigate the Simple Storage Service, and in itself this has opened a few doors in my mind. Storage in an S3 bucket is very cheap indeed at 15 cents per GB per month; this means, for example, that I could store my largest SQL database, at 2 GB and rising, on the ultimate storage system for 14p per copy per month with transfer costs of 10p. A little simple maths therefore suggests that backup scenarios would cost:
  • 24p per month per copy
  • £1.78 per copy per month
  • £50 per month for a rolling 30 day backup
  • £650 pa. for a daily backup rolling on for 12 months (almost 800Gb of data)
The real magic here is not that I can get someone to host almost a terabyte of data for £650 pa; it is the infrastructure behind it - storage spanning multiple data centres in multiple countries - which is fantastic. There are only 2 slight problems: there is a file size limit of 5 GB at present, which hopefully will have increased by the time my bloaty database has got anywhere near it, and my backup software, Red Gate, does not at present support S3.
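The arithmetic behind those figures is just size times the published rates. A rough sketch, in US dollars (the 15 cents storage rate is from the post; the 10 cents transfer rate and the dollars-to-pence conversion are my assumptions, and request charges are ignored):

```python
STORAGE_PER_GB_MONTH = 0.15   # S3 storage price at the time, USD per GB-month
TRANSFER_PER_GB = 0.10        # assumed transfer price, USD per GB

def monthly_cost_usd(gb_stored, gb_transferred):
    # One month's storage plus the one-off cost of shipping the data up
    return gb_stored * STORAGE_PER_GB_MONTH + gb_transferred * TRANSFER_PER_GB

# One 2 GB database copy held for a month, uploaded once
print(round(monthly_cost_usd(2, 2), 2))   # 0.5, i.e. about 50 cents
```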

So I'll have to write a little program one of these days to bounce my off-site backup from our Manchester server over to the S3 cloud on a nightly basis; then I really, really will have an off-site backup I can be satisfied with. Alternatively our friendly Red Gate developers might like to take the hint, get an S3 account and crack their programming-type knuckles. Utility computing has to be the future of off-site backup, so get in there. I won't charge for this revolutionary piece of advice, but I have broken my promotional Red Gate pen so if someone wants to send me another one we'll call it quits :o)

Whilst I am on the subject of S3, there is a fantastic little add-on for Firefox which I found very useful for getting things moving, the S3 Organizer; it's a good job the programming is better than the spelling of organiser :o)

And speaking of really useful little pieces of software, anyone who does any programming of web apps should get themselves a copy of Fiddler. It's a really rather sweet little program written by Microsoft (tun tun tun!) and distributed free! Maybe Bill finally has enough.

Anyhow, before I get on my soap box: Fiddler is a very simple HTTP proxy server which gives you a real-time readout of the traffic moving through your Internet connection. As we are using Flash coupled with AMFPHP we constantly struggle to debug in an efficient way - but not any more! I won't spoil compadre Rob's review right now, but just get it and watch out for the bug-eyed review.

Wednesday 6 June 2007

In the queue for the ECC

There have been lots of things going on over here for the past month but nothing really blog-worthy. Aside from this, having a baby has somewhat curtailed my late night blogging sessions, so this is the first post since May 10th!

We have signed up for the ECC; this is not another odd-ball European thingy but in fact Amazon's new web service, christened the Elastic Compute Cloud. The concept is what is known as utility computing: you create a new virtual machine and only pay for the storage and processor time you use. This means that, within reason, your online application can scale wonderfully from being a dusty corner of the web which no-one ever uses to the latest craze with millions of users in minutes, and your server wouldn't crash (as long as your code is well written, of course).

We have a little online application which we will very shortly be polishing for general release and this seems like a great opportunity to keep our initial installation costs very low but have the ability to scale quickly to meet the needs of our new users :o) Then of course, when things have settled down, and we know what sort of power we are going to require long term we can make a more informed decision about buying our own hosting kit without having to wade through goat intestines with the help of a good soothsayer.

The spanner in the works of course is that everyone else wants to make use of this wonderful new service as well, and I have found myself in the queue. This seems to be a rather annoying trend in fact: I queued for Joost, I queued for Google Applications for Domains and now I am in another queue. I suppose it allows companies to test their systems without having a big embarrassing launch followed by teething trouble, but I want it now :o(

I have looked for alternatives but it seems no one else is offering such a simple, well supported and dare I say cheap service within the means of the average web applications developer. Until now utility computing was the preserve of the scientist wanting to test his quantum theories or analyse what happens in the middle of a cosmic jam doughnut (think SETI), so it seems Amazon are possibly on the cusp of a runaway success. Every applications developer who doesn't want to ask his boss for a new server, and equally doesn't want potentially unstable code on one of his precious live servers, will want an account for testing stuff.

I am only surprised that Amazon beat Google to it, as it's just their sort of thing. I look forward to using the gutility computing cloud in about 2 months (probably for free) and inevitably the Microsoft computing cloud in about 2 years, which will be compliant with the utility computing standards the IEEE will have created and ratified by then, but with of course.... Microsoft extensions.

Thursday 10 May 2007

Finding extra space on a VMWare Virtual Machine

Now our Red Gate SQL Server backup system is running nicely it has exposed a slight deficiency in my server setup over in Manchester: when I created virtual machines for all our little applications running on the VMware system I only gave them 4GB disks :o)

For the Nagios system, the Intranet applications, the source code repository and the knowledge base these disks are perfectly adequate, but for database backup obviously a bit more space is required. Having utilised a spare folder on the knowledge base virtual server for the SQL backup, it would have been very inconvenient to erase the machine and create a new one; also, given that the host server is not overburdened with RAM, I didn't feel it would be wise to put more than 4 machines on the system.

So we come to a less well documented feature of a VMware virtual machine: the hard disk size you set when creating the machine cannot be increased later. So you have 2 choices: the Lego approach - smash it up and start again - or the clever approach: add a new virtual disk. I chose to be clever and in a nice twist of fate got away with it :o)

So I think a little how-to is in order.

Begin by adding a new disk in VMware; it was recommended that I choose a SCSI disk so I complied. Bear in mind this has to be done with the virtual machine powered down. When you have added the disk just power up again and you are there; all you have to do now is get your Linux install to make use of it.

As I have blogged before our flavour of choice when it comes to Linux is CentOS but RHEL4 or Fedora would probably work in exactly the same way because its pretty basic stuff really.

First use fdisk to create a new partition; as this is our second disk it is sdb (SCSI Disk B),
so:
fdisk /dev/sdb
and then just follow the instructions to create a new partition, which will be /dev/sdb1.

Next you need to format the new partition, so:
mkfs.ext2 /dev/sdb1

Then you need to mount the partition, so:
mount -t ext2 /dev/sdb1 /home/samba/sql

Finally check the space on the disk, so:
df
and you should see your new disk listed with its free space.

As I was using samba to share the /home/samba folder all I needed to do was update the permissions on the folder and restart the service - job done.

Now I have a bit more space on the share, the backup has worked a treat. It is one of the real strengths of the Red Gate system that you can see immediately how your backup routine is performing using the timeline GUI. As you can see from the picture here, the first couple of databases backed up nicely, but given that they are larger databases they are a bit snug, so I might just ease them apart a little so we don't get a clash as they grow.

Tuesday 1 May 2007

SQL backup - a can of worms!

It seems that the SQL backup market place is far busier and more competitive than I had imagined. No sooner had I arrived in the office this morning than a nice person from Quest Software, makers of Lite Speed, finally caught up with me and set me right on the technology and the price. It seems they have a very interesting suite of SQL Server products, all of which make the standard Microsoft tools easier to use, in some cases adding functionality; the version I had seen yesterday happened to be the developer version at $45, a full version for our setup being £700 - a bit of a jump. In the end I spoke to 3 people there and came away with a whole heap of good advice and technical background to the product.

No sooner had I finished on the phone than I had a very pleasant representative from Red Gate, makers of SQL Backup, on the phone wanting to discuss my recent blog post (gulp!). Actually, it turns out that instead of wanting to sue me for mentioning their software on my rather random blog, he wanted to give me a full pitch for the product and was very happy to offer a cheeky discount, a good deal on support and some very good advice about which product was right for us. It also turns out that I was already out of date, as version 5 had come out overnight and I am just in the process of kicking that around. The benchmark looks about the same, maybe a slight improvement in performance, but it has a rather nifty new GUI which plots your backup activity on a timeline graphic so you can see visually whether your backup schedules are in danger of overlapping and going into a shame spiral. (Cue a dip into Google images for 'shame spiral' and the discovery of a picture which is also fitting for the genius which is the timeline GUI.)

My next step was to download a trial of HyperBac to evaluate their approach, which shuns the extended stored procedures of the other products in favour of a totally standalone setup for SQL backup. This in turn invited another phone conversation, as I actually gave my real phone number when downloading :o) It seems the nice people who created HyperBac were originally involved with creating Lite Speed and decided a while ago to form a new company with a new approach. Again their sales dude was a wealth of information about the strengths and weaknesses of the various approaches taken and was very helpful indeed. The cost for our setup was $499 irrespective of the number of processors we wanted to throw at it, and as such it sits directly between the other 2 products. Having played with the system it performs well, but I would say that both Xceleon and Quest will be interested to have a good squint at SQL Backup version 5, as they are both slightly quicker, but overall I think Red Gate have it.

I have set our demo rig up for hourly backups overnight so I will be interested to see what is waiting for me by the time I get in tomorrow morning! Hopefully a useful archive of backups which all restore perfectly.... we'll see :o)

Monday 30 April 2007

Remote SQL backup onto SAMBA Shares

Backup is a subject which comes up a lot on the rack, as regular readers will already have noticed, and I have already detailed several of the strategies we employ to make sure we have a myriad of copies of our essential data spread across the network, preferably as far apart as possible! Our latest investigation concerned how to get a usable copy of our most precious and bloatie database from our SQL Server 2000 installation across the VPN (and therefore Cheshire) on a regular basis.

Time for a topical and amusing dip into Google images for the word of the day - bloated, as in a very large database. This little marmot, who obviously has a small pie problem, represents our database for this afternoon.

The data to be moved is 1.3 GB over a 2Mb line from a Windows 2003 Server to a Linux partition on our CentOS virtual machine. Given all the other scheduled adminnie jobs we have going on overnight I cannot afford for the whole process to take more than about 30 minutes; I sometimes think the network is busier out of office hours!

The solution of course is to compress the backup before sending it, and it turns out after a little investigation that there are a couple of products on the market which do this for you. After only a brief search I found SQL Backup by Red Gate Software and Lite Speed for SQL Server by Quest, which both offer on-the-fly compression, and encryption to boot. One might have expected Microsoft to include a compression option in their rather expensive server system, but one would be wrong as usual (thanks Bill). Bit less time writing the EULA and more on the software next time, eh?

Moving swiftly on, the products mentioned above are relatively simple in their approach in that they add some system stored procedures to your SQL Server install which can be scheduled to run, adding compression and/or encryption to a standard full or differential backup. The cost is quite manageable as well at $45 to $399, which is a lot less than the time would cost to build our own script! (Incidentally, if anyone is interested, here is a start.) Installing the programs on our test bench was easy and initially everything was very straightforward, until we tried to send the backup to our Linux machine....

In order to share a folder on Linux one requires the cooperation of a service called Samba, which is really quite powerful and therefore complicated. Sharing a folder to any Tom, Dick or Harry is very well documented and quite easy; authenticating against our Active Directory and sharing with our Windows machines, however, requires the services of a good soothsayer and is somewhat sparsely documented!

Having finally got a shared folder onto the windows network I thought the job was done but unfortunately I hadn't counted on a couple of less well documented features of SQL server.

1. SQL Server does not like backing up to network shares that are not in the same workgroup.
2. SQL Server will not offer to re-authenticate; the account SQL Server logs in with must have explicit and full access to the network share.

I would love to give a blow by blow account of how I got the network share going but I have been chipping away at this problem sporadically and unfortunately I have sort of lost track of how I got where we are.

Having taken a while to sort this out I have finally got some comparative data, and I am very impressed indeed! Given that a 2MB line is really quite modest, SQL Backup managed to compress, encrypt and squeeze a 1.4 GB database down to 250 MB and ferry it across Cheshire in 20 minutes and 10 seconds! It took me a while longer to get Lite Speed up and running, but at only $45 it ripped through the compression and transfer in a mere 10 minutes and shaved 30 MB off the storage requirements at 220 MB! It's my new best friend and I would recommend it to anyone looking to get their database backups as far away from them as possible. The only thing left to sort out is that I cannot afford to change the logon account for the main server as I did on the test machine, so hopefully I can get Samba to cooperate with the existing setup this time :o)
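Those timings are roughly what the line speed alone would predict. As a back-of-envelope check (sketched in Python just for the arithmetic, treating the line as perfectly efficient, which it never is):

```python
def transfer_minutes(size_mb, line_mbit=2.0):
    # size in megabytes * 8 bits per byte, divided by line speed
    # in megabits per second, converted to minutes
    return size_mb * 8 / line_mbit / 60

print(round(transfer_minutes(250), 1))  # the 250 MB SQL Backup file
print(round(transfer_minutes(220), 1))  # the 220 MB Lite Speed file
```

That gives roughly 16.7 and 14.7 minutes respectively, so the observed 20 minutes for the SQL Backup file is believable once protocol overhead and other night-time traffic are thrown in.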

Wednesday 18 April 2007

Nagios - The Final Word

Having posted about installing Nagios on CentOS I have finally had a few comments on BeerBytes (hooray!). I have also featured in 2 little news posts on other Linux sites; click here for the latest and check out the blogroll on the right for a proper link. Given the obvious appetite for this subject (no, not me - network monitoring) I suppose it's only fair that I should continue dispensing nuggets of information, given that it's the only subject which has raised a flicker of interest.

Having got the basics set up, I spent some time last week getting the network diagram straight; as we have quite a busy network with 60 nodes I wanted to monitor, the automatic layout did not do it justice. Moving on from this I fell foul of having installed the incorrect plugin archive; for some reason check_ping worked fine, but I wanted to start monitoring DNS, MySQL and HTTP on a couple of servers and these plugins would not run. To test your plugins you can simply move to the plugins directory and run ./pluginname; for example ./check_ping --help will tell you what command line parameters the ping plugin requires, then try it again with these supplied. To continue the example:

./check_ping -H www.yahoo.com -w 1000,10% -c 1000,10%

returns PING OK - Packet loss = 0%, RTA = 84.63 ms. The commands are already set up for the standard plugins in commands.cfg if your install went OK.


So to sort this problem out I ran off to my old mate DAG's archive :o) I found the appropriate RPM and bingo, everything works well. If you are relatively new to Linux, as I am, and you are struggling to connect to a repository with yum or rpm, there is a quick and dirty workaround. Simply find the link to the RPM in your web browser, copy the link, get back to your shell, type wget and paste the link; this will download it to your machine. Next type rpm -i and the name of the downloaded file and this will install everything. Apologies if that was embarrassingly basic, but yum can be a bit hit and miss for me.

The documentation for installing Nagios is quite good, but the documentation for actually using some of the many and various standard plugins is really quite sketchy, so look outside the nagios.org site for this information. The Nagios plugins page on SourceForge is your starting point, but it's not obvious.

In order to start monitoring services a little editing of services.cfg is required, followed by creating a couple of new hostgroups for similar machines. For example, I wanted to monitor MySQL on 2 machines so I declared the service in services.cfg as follows:


define service{
        use                     generic-service
        name                    mysql-service
        is_volatile             0
        check_period            24x7
        max_check_attempts      5
        normal_check_interval   1
        retry_check_interval    1
        notification_interval   20
        notification_period     24x7
        notification_options    n
        check_command           check-mysql-alive
        service_description     MYSQL
        contact_groups          nerds
        hostgroup_name          mysql_servers
        }


This will check the mysql_servers hostgroup every minute, 24 hours a day, and email the nerds contact group if a check fails 5 successive times. It presumes a standard install, which predefines the 24x7 time period and the check-mysql-alive command in the relevant files.
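The delay before an alert implied by those settings is simple arithmetic: after the first failed check Nagios retries at retry_check_interval until max_check_attempts is used up. A sketch of that schedule (in Python, my own simplification rather than Nagios's actual internals):

```python
def minutes_until_alert(max_check_attempts, retry_check_interval):
    # The first failed check counts as attempt 1; the remaining
    # attempts are retries spaced retry_check_interval minutes apart
    return (max_check_attempts - 1) * retry_check_interval

# With the values in the service definition above: 4 more minutes
# of retries after the first failure before the notification fires
print(minutes_until_alert(5, 1))
```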

Now that I have these extra services being monitored there is some really quite useful information being generated, for example the number of connections to the MySQL servers and the response times of the HTTP servers. Another area which is being fine-tuned, via the timeperiods.cfg file, is when I want to be alerted about certain things; for example I quite like getting an email if a router goes down overnight, but as some equipment is turned off overnight I don't particularly need to know about that. So in short, just getting Nagios installed is the tip of the iceberg; the more you think about things, the more instances where good network monitoring is useful become apparent. The good news is that this fine tuning is very quick and easy once you have the thing up and running.

One final point on Nagios before everyone gets bored: on Windows you can actually use Active Desktop to embed your live network map into the desktop, making sure you never miss a trick :o) Simply go to Desktop Properties -> Customise and paste in your Nagios address followed by /cgi-bin/statusmap.cgi?host=all as a new web address to embed into the desktop.

Some other things happening in our little team include a spontaneous upgrade to Adobe CS3. I haven't even got it installed yet so you will all have to wait for some views and opinion, but one thing to remember is that you need bags of hard disk space: the download is about 1.5 GB, it unpacks to nearer 2 GB and then needs 5.6 GB for the programs, so unless you want to be cleaning up after every stage you need about 10 GB! Also the knowledge base is filling nicely and the more I use the product the more I like it; just one little niggle is that you have to keep going to different URLs to do different things - it does not check your security and give you all the options you are entitled to.

Friday 13 April 2007

A two pronged approach to version control

We had a little IT retreat earlier this week to discuss how we can work more effectively as a team. Already this has spawned the knowledge base, which we are diligently filling with guff, but another thing which became apparent is that we need tighter version control on the source code for the applications we are developing. The aim is to make it easier for several people to work on the applications together without constantly tripping over each other trying to edit the same files.

In a previous project my compadre Rob and I created a sports club management tool as a team, and it very noticeably benefited from the contrasting styles and knowledge which were brought to bear on the task. We used Subversion for source control on this project, running on Windows, and although it was very useful it never quite delivered on all sides.

The reasons for this were mainly due to SVN being focused on text based source code and the merge principle of team work. For example, if two people work on the same text file simultaneously, SVN can very cleverly merge together the separate changes, and 9 times out of 10 they will not conflict. Where the SVN system begins to come unstuck is when using non text based source files like Flash FLA files. As these use a proprietary format you cannot merge the files if two people have simultaneously changed them; you immediately end up in conflict. In this scenario you need to lock a file on the server while you are editing it so that no one else can open it, and the problem is that SVN is not very good at this, unless, as can happen, I have missed something.
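For what it's worth, SVN does have a locking feature built on the svn:needs-lock property, which makes a file read-only in working copies until someone takes the lock with svn lock; this may be the something I have missed. A sketch of switching it on automatically for new FLA files via the client config, where the file pattern is just an example:

```ini
; In ~/.subversion/config (or the Subversion config folder on Windows)
[miscellany]
enable-auto-props = yes

[auto-props]
; newly added FLA files get the needs-lock property automatically
*.fla = svn:needs-lock=*
```

Files already in the repository need the property set by hand with svn propset svn:needs-lock '*' somefile.fla; how well this works day to day with a whole team on Flash files is exactly the open question.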

Given that our new systems are being developed in Flash, with support from PHP files and a little MySQL thrown in, I decided to look more closely at the version control on offer in Flash and Dreamweaver. It turns out that although the current system is very good at locking files using its 'Check In'/'Check Out' philosophy, it is not quite so good at keeping an entire repository synchronised unless Dreamweaver is your weapon of choice. As each of us in the team prefers a different HTML editor this will work very well for the Flash files but not for the project as a whole. According to the Adobe site the new CS3 version has been greatly improved in this respect.

So the solution which seems to present itself here is in fact to use both systems in tandem, with Flash taking care of its proprietary source files and Subversion (via TortoiseSVN in my case) taking care of the text based files and having overall responsibility for the repository. Touch wood this seems to be working nicely, but we have yet to get the whole team working on the project simultaneously.

If anyone else fancies having a go at this, installing Subversion on a new virtual server is very straightforward and there are lots of good tutorials on the subject; click here for the definitive guide for CentOS.

The Adobe site or the online help for Dreamweaver or Flash is the best place for information on how to use the current simple Macromedia version control.

And finally there is a short article here about fine tuning the setup when using both of these systems concurrently.

Wednesday 11 April 2007

The IT Brain Dump

One thing we have always struggled with in our little IT department is the sharing of important information. If I set up a new system I might note the details in a book or even on a shared document, but we have never quite found a system which works for us all, and as a result we cannot always put our hands on other people's knowledge quickly and easily. Last week we decided to have another go at organising our information, and whilst wading through the available knowledge management tools came across a couple of gems.

One system which came top of the list on Google was an open source system called TWiki, which I must say was my first choice for a while. They have some very big companies using the software and it looks like a very simple system which, in the tradition of the wiki, allows all users to contribute towards a knowledge base. I think my main gripe was that I wanted something which looked a bit more easily organised and more like an application than a simple website, although in TWiki's defence I did only look through the demo for a few minutes.

The product I found in the end was a very nice PHP application called PHPKB, as in PHP Knowledge Base. Although this would not be to everyone's taste, the fact that the application is available as a simple and very reasonable series of PHP pages suits our setup here perfectly. You have to have access to, or know how to set up, a web server and a MySQL database to serve this application, but I did note that the company offer a free setup service and they do have a hosted option. We simply added a new virtual server to our main virtual host and had the system running in about an hour. The installation is quite simple and managing the system once it's running is very straightforward; I have already started posting bits of information about systems, and it's surprising once you get started how many very important nuggets are stashed in emails and even in your head.
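For anyone doing the same, the 'new virtual server' step amounts to a name-based virtual host; this fragment is only indicative (Apache 2 era syntax), as the server name and paths here are made up and will obviously differ:

```apache
# Hypothetical Apache virtual host for the knowledge base
<VirtualHost *:80>
    ServerName   kb.example.com
    DocumentRoot /var/www/phpkb
    <Directory /var/www/phpkb>
        AllowOverride All
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>
```

Point a DNS entry (or hosts file entry) for the chosen name at the web server, drop the PHPKB pages into the document root, and the installer does the rest.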

One of the other nice things about this particular system is that some categories of information can be public and some can be protected, so that in-depth technical information can be cordoned off while the less technical information and tips can be made available to everyone in the organisation. As with all systems the usefulness is only going to become apparent when we have been using it for some time, but I would say that with a little perseverance it has the potential to save an awful lot of time and stress.

On another point regular readers of 'AVFTR' will have noticed that someone actually commented on a post yesterday :o) In fact the nice gentleman concerned even Blogged about the Blog! I must add it to the Blogroll. I am now braced for a massive increase in traffic, I might call Blogspot to make sure they have capacity because in the last 4 hours of yesterday I had 30 visitors.

Thursday 5 April 2007

Nagios on Centos - a grudging union

Centos is one of the great network operating systems. It was developed by a group of people who saw that Red Hat Enterprise Linux version 4 had become super reliable but slightly bloaty; they exercised their rights under the GNU public license, got the source of RHEL4, put it on a stairmaster and gave it to the people.

Likewise Nagios is a great open source network monitoring system; if you are a Linux user and run a network, chances are you will have come across Nagios, as short of forking out about £1000 it is in fact pretty much your only option. About 12 months ago I installed Nagios on Fedora and it was a breeze; even though Nagios is a very comprehensive system requiring lots of fiddly configuration, on Fedora if you follow the instructions you will succeed in getting going in about an hour.

Unfortunately, given that Centos and Fedora have a common ancestry and are very similar, trying to install Nagios on Centos will drive you up the wall. Unless I have done something stupid without realising it, installing the system from RHEL4 RPMs seems to scatter the files from one end of the disk to the other, and it takes lots of patience to track them all down and link everything up. My advice would be to follow the instructions to the letter, but if you don't find the files you are looking for don't be surprised. Click here for the main Nagios site; this post is not a guide to installing Nagios on Centos, just an amendment to the install guide based upon my rather frustrating experience.

Just in case I forget or anyone else trips over this, the locations are as follows:

Config CFG files - /etc/nagios
Web interface files - /usr/share/nagios
Log files - /var/log/
CGI files - /usr/lib/nagios/cgi

A guy called Dag (??) has done some Centos RPMs but I couldn't subscribe to his repository; if you can, it is quite possible that he has reworked the install to follow the instructions. I couldn't resist doing my 'Google Images' thing for Dag; it turns out this Swedish guy is also comfortable going by the name Dag. There are some great translations for Dag on Wikipedia: in Swedish it means 'Day' and in Turkish it refers to a 'Mountain'.

So now the dust has settled after our mammoth network rewire last week and Nagios is running sweetly, I feel quite satisfied with everything. As expected we have had a few static routes crawl out of the woodwork, and we have renewed our efforts to use DNS rather than IP addresses for routing around the network. It turns out reversing the VPN connections was not all that it promised and we have moved them all back again. It also seems that having Nagios running is actually very good for the stability of the VPN, as the frequent pinging seems to keep the routers awake and the tunnels in good repair.

One job left to complete is to define a custom status map for Nagios; as we have over a hundred nodes on the network being monitored, the auto generated map is a bit of a mess, so I have to define the map by hand, which is a bit of a pain. That said it will look very nice, as we have purchased an icon library for our software development and their networking set is very sweet. See left for a sneak peek; note however that our main managed switches are not down, it is just that Netgear have issued a firmware upgrade they are short of. One day I would love to do a more comprehensive Flash front-end to Nagios but frankly right now I have better things to do.

Another job I think would pay dividends would be to set up a secondary DNS server at Manchester, it is probably quite straight forward but I think I will let the dust settle before attempting this one.

Wednesday 4 April 2007

Some Excellent Manipulation

An interesting little job came up yesterday which involved formatting data on an Excel spreadsheet. We have some lists which have to look pretty but are edited frequently, and we were having to spend a lot of time ensuring that these lists had a reliable and consistent format. Lots of ideas spring to mind for a job like this, and the temptation is to go for yet another little database application, but in this case it really felt like it would be overkill.

The solution which appears to have legs is to create a rather nifty Excel parser using a couple of useful PHP add-ons. For those of you who don't know what a parser is, the definition of parsing on Wikipedia is "the process of analyzing a sequence of tokens to determine its grammatical structure"; in layman's terms, think of it as a digester of documents. You push a document in one end and it reads it, digests it and magically supplies a result, or in this case a completely reformatted spreadsheet. This will allow us to keep our data in very simple unformatted spreadsheets, but by running them through our new system we can have a nicely formatted, consistent look ready to print in a click. It's all summed up nicely by another of my random dips into Google Images, this time for the word "parse", see image right.

So if anyone ever has need of such a beast or, more likely, if in 6 months time I have forgotten what I did and need a reference, the 2 places to go are PEAR for the Excel Spreadsheet Writer add-on and SourceForge for the Excel Reader add-on. When these are installed and working individually there is no reason why they cannot be used in the same PHP script in a push-me pull-you sort of fashion. It only took a couple of hours to get a basic system running, and you can even allow the user to specify some parameters with their spreadsheet. So for example you pass in a raw sheet of data, a title, a font and a relative font size, and the parser running through the writer will apply sizes and fonts in defined ways to different columns of data; it will even specify margins and printing areas so the document is completely ready to roll.
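To give a flavour of the push-me pull-you arrangement, here is a minimal sketch of reading a raw sheet with the SourceForge reader and writing a formatted copy with the PEAR writer. The file names and formatting choices are invented for the example, and the reader's cell layout is from memory, so treat the details as assumptions rather than gospel:

```php
<?php
// Hypothetical sketch: digest raw.xls and emit a formatted copy
require_once 'Spreadsheet/Excel/Writer.php'; // PEAR Spreadsheet Writer
require_once 'Excel/reader.php';             // SourceForge Excel Reader

$reader = new Spreadsheet_Excel_Reader();
$reader->read('raw.xls');

$workbook = new Spreadsheet_Excel_Writer('formatted.xls');
$sheet    =& $workbook->addWorksheet('Formatted');

// One bold, larger format for the title row
$titleFormat =& $workbook->addFormat();
$titleFormat->setBold();
$titleFormat->setSize(14);

// The reader indexes rows and columns from 1, the writer from 0
foreach ($reader->sheets[0]['cells'] as $row => $cols) {
    foreach ($cols as $col => $value) {
        $format = ($row == 1) ? $titleFormat : 0;
        $sheet->write($row - 1, $col - 1, $value, $format);
    }
}

$workbook->close(); // writes formatted.xls to disk
?>
```

From there it is a small step to pass the title, font and sizes in as parameters and apply different formats per column, as described above.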

Keep tuning in for the definitive guide to installing Nagios on Centos without 'going postal' later this week.

A view from the rack

A view from the rack is the personal blog of an IT manager who works for a pub company - hence beer