Wednesday 31 January 2007

So you edited the slave.......

When synchronising 2 databases via replication, one thing you do not want to do is touch the slave in any way. The system is clever you see: it knows where it's up to and really doesn't expect a helping hand. Unfortunately this afternoon I went to edit my little utility meter reading database, a very simple and efficient little system. I just needed to add a column, but I chose the slave by mistake. In the immortal words of Homer Simpson - D'oh.

At this point everything falls apart: the slave process has a fit because it is trying to add a column which is already there, and stops taking orders from the master. So if you are also a bit of a dunce with 2 similarly named databases, follow these instructions to get your little donkeys back on the straight and narrow.

1. Stop your slave with 'STOP SLAVE'
2. Back up your master
3. Restore it onto your slave
4. On the master run the 'SHOW MASTER STATUS' command to get your log file and position
5. On the slave run the following rather longer command.
mysql> CHANGE MASTER TO
-> MASTER_HOST='master_host_name',
-> MASTER_USER='replication_user_name',
-> MASTER_PASSWORD='replication_password',
-> MASTER_LOG_FILE='recorded_log_file_name',
-> MASTER_LOG_POS=recorded_log_position;
6. On the slave restart the slave process with 'START SLAVE'
7. Check it's all OK again with 'SHOW SLAVE STATUS\G'
8. Make a nice cup of tea and congratulate yourself on getting your codlings out of the fire :o)
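For anyone scripting this rather than typing at the prompt, the fiddly part is carrying the log file name and position from step 4 into step 5. Here is a rough sketch in Python of just that bit - the sample SHOW MASTER STATUS output and the connection details are invented for illustration:

```python
# Sketch: pull the binlog file and position out of a \G-style
# SHOW MASTER STATUS result and compose the CHANGE MASTER TO statement.
# All names and values below are made up for illustration.

def parse_master_status(raw):
    """Extract (File, Position) from vertical-format output."""
    info = {}
    for line in raw.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            info[key.strip()] = value.strip()
    return info["File"], int(info["Position"])

def change_master_sql(host, user, password, log_file, log_pos):
    return (
        "CHANGE MASTER TO "
        f"MASTER_HOST='{host}', "
        f"MASTER_USER='{user}', "
        f"MASTER_PASSWORD='{password}', "
        f"MASTER_LOG_FILE='{log_file}', "
        f"MASTER_LOG_POS={log_pos};"
    )

# Example output as it might appear from SHOW MASTER STATUS\G (invented values):
raw_status = """
*************************** 1. row ***************************
            File: mysql-bin.000042
        Position: 10537
    Binlog_Do_DB:
Binlog_Ignore_DB:
"""

log_file, log_pos = parse_master_status(raw_status)
print(change_master_sql("master_host_name", "repl_user", "secret", log_file, log_pos))
```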

More from the rack next week as I am getting educated tomorrow and Friday, brushing up on my SQL skills and seeing if Microsoft's SQL Server 2005 has anything to offer. Without being cynical, given that Microsoft's licensing model is getting ever more complicated and expensive, they had better have something really good up their sleeves to regain my attention, particularly given how positive my recent MySQL experiences have been.


Friday 26 January 2007

Using AMF in Anger

So no sooner had I found a slinky new piece of software to try than an ideal opportunity to test it out came along! How lucky am I :o)

A request for a quick application to share a list of tasks landed in my ear this morning: something simple to basically mimic an Excel spreadsheet which needed to be shared between several people. Bear in mind we are cross-platform, so stop shouting "use a shared task list in Exchange", because Entourage on the Mac is pants. Also this might eventually be shared all over the company, and we don't let just anyone onto Exchange (not because we don't love everyone, it's just too blinkin' slow). So I added a new module to our internal application suite (codenamed Fattie) and set to work creating a new service in AMFPHP.

Now it has to be said that of the 2 main benefits to this new approach, the gain in development speed over XML is huge: I literally needed to declare the table of methods, of which there are only 4, and write a very short function for each. So the application broke down to -
1. Fetch a list of all records
2. Fetch a full Entry
3. Fetch a blank entry
4. Save Entry

In all, 80 commented lines (which I am sure could be halved with a bit of inspiration) that would have taken about an hour of unbroken typing, if I were ever to get one :o) Add to this 2 tables in the database, for the tasks and the contractors, and the server side is done.

One thing I am still getting used to in Flash is the RecordSet class. If you are using AMF you really have to use the RecordSet, and this has required a change in my approach to interfaces.


Up to now I have been a big fan of Flashloaded's BIT components for Flash, but unfortunately they do not support the RecordSet as a data provider, so I have had to dust off the standard Flash version 2 components and bone up on the RecordSet. That said, throwing a quick interface together took about 20 minutes:

[screenshot: the task list interface]
Add to this another 20 minutes to create the only dialog box required in the application, which is used to add or edit an entry.

[screenshot: the add/edit entry dialog]
And all that was left was a bit of programming to string it all together. As I mentioned before, the main difference between how I used to use the BIT components and the Flash components is how the data provider is organised. With a standard array-based data provider you can access data using something like component.dataProvider[0].data, whereas with RecordSets you have to use component.dataProvider.getItemAt(0).columnName. Seems simple, but it probably wasted an hour today one way or another. Of course, now I have dutifully blogged it I shan't forget.
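To make the difference concrete, here is a little Python stand-in for the two access styles. The class and data are purely illustrative - this is not the real Flash API, just the shape of it:

```python
# A Python analog of the two ActionScript data provider access styles.
# Names ("task", "contractor") and the RecordSet class are invented here.

class RecordSet:
    """Minimal stand-in for the Flash RecordSet: rows are reached
    through getItemAt(index) and then by column name."""
    def __init__(self, rows):
        self._rows = rows

    def getItemAt(self, index):
        return self._rows[index]

rows = [{"task": "Fix pumps", "contractor": "ACME"},
        {"task": "Paint bar", "contractor": "Bodgit & Co"}]

# Array-style provider: plain indexing, then field access.
array_provider = rows
print(array_provider[0]["task"])

# RecordSet-style provider: same data, different access pattern.
recordset_provider = RecordSet(rows)
print(recordset_provider.getItemAt(0)["task"])
```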

The only thing left to finish off this little app was to take the existing Excel worksheet and use the previously blogged technique to force the data into the new MySQL table - sweet :o)

Once finished, the other main benefit of AMF kicks in: I can just tell it's faster than an XML-based equivalent would have been. Can't tell you how, but I just know.

One improvement which I would have loved to try, but will save for another day, is to finally get my head around the data binding functions within Flash and use delta packets to remove the need for the dialog box. It would have required some groovy cell renderers to add the combo boxes to an editable cell, but I am sure it could be done.

Wednesday 24 January 2007

Revisiting AMFPHP

I had a pleasant surprise this morning after I decided to pop in and see how AMFPHP was getting on. It turns out very nicely thank you, and I wish I had popped in 8 months ago when it came out of beta!

For those of you that don't know, AMFPHP is an open source project very like Flash Remoting Server, in that it allows you to bind a Flash application more tightly to an application server. There are several reasons why this is a good thing, but there are also several reasons why we have never used it in the past even though, as Flash users, it is a powerful tool to have around.

In the beginning there was Flash Remoting Server: it cost a lot of money (-1), it tied you to ColdFusion as your application server (-2) and it tied you to Flash as a front end (-3). Now I like Flash as a web app programming tool for several reasons: it's cross-platform, it's very readily available, it's quite quick and the runtime is free, so as a means of creating small web based or intranet based apps it's very nice. The problem I have always had with Flash is that it may not be around for ever, and I don't want to get tied to it just in case it is superseded; also the server side kit from Macromedia (as was) was very expensive. So when we started developing these applications we used Flash for the front end but PHP on the server, and always used XML to communicate data backwards and forwards, leaving the door open to AJAX, Java or anything else we might want to use as a front end.

The only downside to using XML is speed. One very nice thing which I have been using more recently is the XML formatting functions on Microsoft SQL Server, which mean there is no need to assemble an XML document from the query data using PHP: simply run the query and send the result off to the client. However, once the XML arrives at the client, extracting the data and displaying it often requires quite complicated functions, which Flash was sometimes not so hot at.
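For illustration, this is roughly the kind of server side assembly work that the database's XML output saves you - sketched here in Python with the standard library rather than PHP, with invented column names and rows:

```python
# Sketch: turn query result rows into an XML document by hand,
# i.e. the step that SQL Server's XML formatting output makes unnecessary.
# Tag names, column names and data below are all invented.

import xml.etree.ElementTree as ET

def rows_to_xml(rows, root_tag="rows", row_tag="row"):
    """Build <rows><row><col>val</col>...</row>...</rows> from dicts."""
    root = ET.Element(root_tag)
    for row in rows:
        node = ET.SubElement(root, row_tag)
        for column, value in row.items():
            ET.SubElement(node, column).text = str(value)
    return ET.tostring(root, encoding="unicode")

rows = [{"id": 1, "task": "Fix pumps"},
        {"id": 2, "task": "Paint bar"}]
print(rows_to_xml(rows))
```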

So, long story short, I found that AMFPHP is on release 1.2 with a 1.9 in beta, installed it in about 5 minutes and had a basic application running in 10. The real change from my point of view is that it is now compatible, to a greater or lesser extent, with AJAX and JavaScript, which is great news. So I have spent the rest of the day looking at how much more efficiently this approach can send data backwards and forwards between client applications and the server than XML.

The answer is that it is much quicker in 2 respects. Firstly, the data is serialised at both ends and transmitted as binary data, so the latency is much reduced; also, as the data arrives at the client side as native data types, processing is quicker and much more straightforward. Suddenly the data binding in the Flash standard components makes sense! Writing the code is much more straightforward once you get your head around the way that server side classes are constructed and how the information arrives, so much so that a page of programming 250 lines long becomes a far nicer 40 lines, with another 30 odd on the server side. If you program in Flash you need this! I have taken the liberty of linking to the showcase page, which lists some great sites apparently using AMFPHP in the wild :o)
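As a back-of-envelope illustration of the binary point (this is not AMF's actual wire format, which is much richer - just a size comparison), here are the same three fields serialised as XML and packed as raw bytes in Python:

```python
# Rough size comparison: three numeric fields as an XML fragment
# versus the same values packed as raw little-endian bytes.
# The record and tag names are invented for illustration.

import struct

record = {"id": 7, "qty": 12, "price": 2.95}

as_xml = ("<record><id>%d</id><qty>%d</qty><price>%.2f</price></record>"
          % (record["id"], record["qty"], record["price"]))

# Two 4-byte ints and one 8-byte double: 16 bytes in total.
as_binary = struct.pack("<iid", record["id"], record["qty"], record["price"])

print(len(as_xml), "bytes as XML vs", len(as_binary), "bytes packed")
```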

Tuesday 23 January 2007

Bulk Import to MySQL

2 posts in a day! Ring the bells.

Just a quick post, because I have found a nice solution to a problem I have encountered a lot: when you move from MS SQL Server to MySQL you tend to miss some of the nice quick data import systems in DTS. A typical scenario is: how do I bulk import into MySQL from an Excel spreadsheet which someone has given me?

The solution is scary but quick, and described nicely here. Essentially: save your spreadsheet as a CSV file, FTP it up to your MySQL server and start an SSH session. Log into MySQL on the command line and switch to your intended target database with "USE database_name", then use the command "load data local infile 'path to csv' into table table_name fields terminated by ',' lines terminated by '\n' (field_name,field_name,...);". Hey presto, it's in, as long as your fields are in the correct order; one nice aspect is it even allows you to import into auto increment fields.
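One way to keep that column list in step with the file is to generate the statement from the CSV's own header row. A quick Python sketch, with placeholder table name and path:

```python
# Sketch: read the header row of the CSV export and generate the
# LOAD DATA statement described above, so the column list always
# matches the file. Table name, path and fields are placeholders.

import csv
import io

def load_data_sql(csv_text, table, path):
    """Build a LOAD DATA LOCAL INFILE statement from the CSV header."""
    header = next(csv.reader(io.StringIO(csv_text)))
    columns = ",".join(header)
    return ("load data local infile '%s' into table %s "
            "fields terminated by ',' lines terminated by '\\n' (%s);"
            % (path, table, columns))

sample = "meter_id,reading,read_date\n1,30502,2007-01-31\n"
print(load_data_sql(sample, "meter_readings", "/tmp/readings.csv"))
```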

Sweet.

RIP Problems and the Legendary Second Tunnel

No posts for a few days but that's just because there has not been anything to say, not because I have gone off blogging :o)

It's taken a few days to move one of our applications to Manchester, and it has taken a while to get over a sticky routing problem: my router at home, which is IPSEC tunnelled to the office, would not allow me to access the server at Manchester even though RIP was enabled. So in a nutshell, from home I could get onto anything on the head office subnet but not access any further subnets, of which we have one each for the pubs and one for Manchester. After looking into static routes, which didn't help, and trying to investigate RIP, I realised that this was a waste of time: the grand plan is, of course, to have Manchester as a self-reliant application server in its own right, and by routing through the head office I was therefore reliant upon head office to get to Manchester.

So the quest now became setting up a second IPSEC tunnel to Manchester alongside the existing tunnel to head office, a model which would eventually be rolled out to all the remote locations. Now on paper setting up a second tunnel should be really straightforward, but if anyone else has tried it you will know that sometimes they work and sometimes they don't. The most frustrating part is that unless you go the whole hog and set up an event log server you get no feedback, so when it doesn't connect you just have to keep trying. At 10pm last night I was beginning to lose my rag; I deleted the tunnel profile from my router, set up another identical profile and bingo - worked first time.

So now the model is as illustrated below, which means that when the applications are mirrored properly and all the data is replicating we will have a nice failover system without breaking the bank. The only trade-off is that if Manchester went offline, moving users to head office would be simple, but going back means a backup/restore of the data and a bit of a faff, due to the choice of replicating data rather than clustering.

[diagram: VPN topology with tunnels to head office and Manchester]
Monday 15 January 2007

More MySQL replication, and dare I mention the iPhone

I was not surprised to log into the MySQL slave this morning and find that it had completely failed to replicate over the weekend, but what was disappointing was how easy it was to fix! I just found another how-to, ran through the same information in a slightly different order and hey presto! So if anyone is unfortunate enough to have ended up on this blog looking for MySQL replication info, click here, because the last link I posted didn't work.

Surfing around over the weekend, I must confess I am getting a bit bored of the iPhone hype. Obviously I would be foolish to comment too negatively lest I be abused by the faithful, as happened after a rather frank article on The Register which gave a few things you could do with a dead iPhone. I also won't mention the fact that I am on my second HTC smartphone, both of which have had touch screens; although I am sure the iPhone's is better, the general concept is hardly revolutionary.

No, the thing I would like to mention is the rather cool Linux based smartphone from FIC. Obviously, being Linux based and open source, it will not be quite as polished as the iPhone, but when we are talking in terms of a revolutionary product, having a phone which is so flexible and 'open' seems a truly new approach. It may only be a toy for techies at the moment, but I will certainly be getting one when they are available :o)

Friday 12 January 2007

Muses on Replication

So the Manchester server is working nicely 2 weeks on, the reporting system is beginning to be useful and it's time to develop the infrastructure a little more. The ideal scenario with this new off site system would be that if it were to fail for whatever reason we could bring on another to replace it very quickly. In this way, if we begin to move the pubs' everyday applications over, we know that we could always turn on an up-to-date backup within a few minutes, thereby increasing the peace as Chuck D might say.

The best approach for this, in my opinion, is not to go all out and create an automatic failover system with database clustering etc., as it's just not that much of a nightmare if the systems are unavailable for 5 minutes. Instead I am going to use replication, which creates a live backup of the data but sends changes over periodically rather than trying to keep total synchronisation. MySQL has a very nice replication system which operates in a master-slave relationship - Manchester being the master and head office being the slave.

There are lots of benefits to this approach, but ironically backup is not one of them. Having a database to back up from on the local network is obviously a benefit, but one of the things with replication is that if you dismember your master it will automatically dismember your slave right on time :o)

So you still have to back up in the traditional sense, but we can do it here instead of in Manchester, and if we need to take one of the databases down for maintenance we have a way to do this in a controlled manner rather than getting up at 2am and hoping no one wants to use the systems!

I will not go into a 'how to' about replication on MySQL because I just used someone else's blog to accomplish it, so here is the link. Also, having set it up, I now have to go home and see if my changes filter through by Monday morning. Speaking of which, did anyone else see the BBC thing with the tiny camera last night? Eugh!

Scheduled for next week - I really must get round to playing with NetBeans. I found it last week and it really looks like somebody finally made a nice usable web app development tool for Java, which must be worth a couple of hours.

Monday 8 January 2007

Virtual Jiggery Pokery

Now that our Manchester server is up and running, the main thrust of the work this week has been on moving our company wide reporting system over to this new facility. After tests last week the performance of the system running over the VPN has been very pleasing, and even large queries over the accounts database for invoice reports are perfectly acceptable.

A little background on the reporting system would be useful at this point I suppose or this won't mean a whole lot!

Basically the system is a PHP web application with a Flash front end; the key aspect of this system is the Excel spreadsheet writer PEAR module for PHP. The reason for this approach is that most reporting systems - Crystal, Jasper etc. - can create very nice reports, but the emphasis is on each report being a presentable document. Although you can export or generate reports in Excel, the approach we have taken is to allow a shopping basket of reports to generate a whole workbook full of sub reports, which can then be analysed in a much more convenient manner without having to amalgamate reports or remove excess formatting.

One problem we found today was an odd quirk with our virtual servers. We currently have 4 virtual CentOS machines running on a VMware Server installation, and the original intention was to have a separate machine for each function, i.e. one for MySQL, one for the web application, one for Nagios (after the problems with our thin client) and one for stuff :o)

The problem we found, which is quite probably my fault, is that accessing the MySQL database on one virtual host from the web application server on another virtual host was sooooo slow it was useless. Therefore we have now had to install MySQL on the same host, and all is well, but I am not sure that this is how we will want our system long term.

Thursday 4 January 2007

Note to self - stop forgetting things!

One problem I find with skipping around from one system to another is that one forgets things. I have just been setting up our new virtual server to house our cross company reporting system, and half way through wading around in RPM packages I got a real sense of deja vu. When you install Linux it sets itself up very nicely with a web server, a scripting language (PHP) and even its own database server; however, one thing that Linux does not like is Microsoft SQL Server, so you would think adding an extension to query this sort of database would be simple.

How wrong can you be!

So just in case anyone else comes across this problem follow these steps.

1: Install the FreeTDS support library
rpm -i ftp://fr2.rpmfind.net/linux/conectiva/snapshot/i386/RPMS.database/libsybdb4-0.62.4-65381cl.i386.rpm

2: Install FreeTDS
rpm -i http://belnet.dl.sourceforge.net/sourceforge/phprpms/freetds-0.63-2.i386.rpm

3: Install the PHP MSSQL extension
rpm -i http://mesh.dl.sourceforge.net/sourceforge/phprpms/php-mssql-4.3.9-3.2.1.i386.rpm

Seems simple but it just took an hour to find all those bits and pieces - enjoy.

As an aside, the virtual servers are performing well. Ping times are obviously higher than querying the servers on the local network, but 20ms for the round trip should mean that the reporting system still works relatively quickly. The server is now querying one of our MySQL servers and the master MSSQL server, which is nice; however, for some reason the Auth package in PHP has decided to play hardball - 2 more hours left in the day, so here we go.....

Wednesday 3 January 2007

A mancky extension to the VPN

Our VPN now extends all the way from Tunbridge Wells to Manchester thanks to our new node which appeared on the network this afternoon!

Sounds easier than it was, mainly because in setting up the hardware to go to Manchester I was so focused on making sure all the little numbers were correctly set, to allow it to take the place of the server, that I missed the little button which allows me to actually talk to it once it is in place. Not to worry, the nice technical people sorted it out and it is now ready to form the basis of our company reporting system. One other interesting thing which we have done with this new setup is to put a toe in the virtualisation pool. We have one physical server in the rack, but our man Rob has managed to convince it that it is in fact 2 separate servers, and we may make a 3rd if we decide we need another separate system. The bonus of this approach, for those of you who don't know, is that if one 'virtual machine' has a fit and falls over, the others continue on as if nothing happened. That is of course as long as the hardware doesn't fail, in which case you lose the lot :o)

It will be fine I am sure.

So the next job is to begin to move our base reporting system over from our test server to Manchester.

Tuesday 2 January 2007

Detox

We're back! After having failed miserably to get the thin client running before the holidays, it was a relief that the holiday itself was rather uneventful in terms of IT failure. I have moved on to trying a microdrive instead of the flash card as the main disk, but I don't really want to spend much more time on it as we can use it for something else, and maybe a Mini-ITX based PC would be a better option :o)

We recently commissioned a server in a data centre in Manchester, and that should be plugged into our VPN this week, so I am looking forward to finally getting some time to investigate the possibilities that will bring. Eventually the plan is to completely split the computing tasks between head office and a rack in Manchester so that wherever possible the hardware can be taken off site. There are lots of obvious benefits to this, but the big question is speed.

If the setup proves to be quick enough we may be able to host all of our databases in Manchester, but I have a feeling that a more likely scenario is that all of the pub applications and data will be held in Manchester and all of the head office applications will stay here. The big question is reporting, as we are building a system at present which is capable of generating reports on the fly from any of the databases we have, be it accounts, HR or pub stock etc. The burning question is: if we have to split the data between sites, can we still report seamlessly across them?

Watch this space.

A view from the rack is the personal blog of an IT manager who works for a pub company - hence beer