RBS glitch revisited

Following the NatWest/RBS glitch balls-up, various people expressed their opinions, some of them informed. The sentiments below are from an anonymous correspondent in the game:

Yes, it is remarkable that the bank can’t find at least one partly intelligent person to comment … First of all, patches – especially of this magnitude – are tested on a development machine before being applied to the live box. So, if there was a fault, the patch should never have made it to the live systems.

The queue is held in memory. It is loaded from disk, the back-ups for which are typically held (a) on tape and (b) at a remote disaster recovery site. Recovery time should have been 30 minutes max … [a toy sketch of this follows the quote]

Finally, when the bank is in the middle of a f***ing crisis, they use – to quote – an ‘inexperienced team’?

In the UK everything and everyone bows and pledges allegiance to ITIL and its ‘best practice’. Competence, judgement and common sense are career suicide. Boxes must be ticked, and asses must be covered. No-one has authority to do anything except follow a process.

They are absolutely process- and box-ticking mad, to the extent that blindly and slavishly following the process is more important than using common sense.

In their deranged world, this allows you to use cheap outsourced labour, as there is no such thing as domain knowledge or expertise, only a process to be followed.

In my years as a contractor I saw it every time something went wrong. They had absolutely no idea how to debug or fault-find a problem. Their knowledge was completely theoretical, and ‘It doesn’t work’ was the limit of their ability.

Repeat until it does, or call upstairs. In this case, there was no upstairs.

And, as I’m sure you’ve seen, no charra on this planet can work under pressure. They become quivering wrecks in 5 seconds, especially when it looks serious and there’s no-one else to blame.
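The correspondent’s point about queue recovery is the crux of the timing claim, so it is worth spelling out. Below is a toy sketch of the idea only: an in-memory job queue that is persisted to disk and reloaded from whichever copy survives. Every name in it (the file paths, the JSON format, the use of Python) is invented for illustration and says nothing about how RBS’s actual scheduler works.

```python
# Toy illustration only: an in-memory "job queue" persisted to disk,
# with a second copy standing in for the tape / DR-site backups.
# All file names and formats here are invented for this sketch.
import json
import shutil
from collections import deque
from pathlib import Path

QUEUE_FILE = Path("jobs.json")        # hypothetical live on-disk copy
BACKUP_DIR = Path("backups")          # stand-in for tape / remote DR copies


def save_queue(queue: deque) -> None:
    """Write the in-memory queue to disk and keep a backup copy."""
    QUEUE_FILE.write_text(json.dumps(list(queue)))
    BACKUP_DIR.mkdir(exist_ok=True)
    shutil.copy2(QUEUE_FILE, BACKUP_DIR / QUEUE_FILE.name)


def restore_queue() -> deque:
    """Rebuild the in-memory queue from whichever copy still exists."""
    for candidate in (QUEUE_FILE, BACKUP_DIR / QUEUE_FILE.name):
        if candidate.exists():
            return deque(json.loads(candidate.read_text()))
    raise RuntimeError("no on-disk or backup copy of the queue found")


if __name__ == "__main__":
    save_queue(deque(["overnight-batch-1", "overnight-batch-2"]))
    QUEUE_FILE.unlink()               # simulate losing the live copy
    print(restore_queue())            # recovered from the backup directory
```

If copies of the queue really do exist on disk, on tape and at a DR site, reloading one of them is routine work, which is exactly why an outage running to days invites the questions the correspondent asks.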

Your thoughts?

3 Responses to “RBS glitch revisited”

  1. ivan July 7, 2012 at 23:00

    As I have said before, when you give P45s to your staff who know the software and the quirks of the hardware, and then send their work to a place of cheap labour, you are asking for trouble.

    Another question that should be asked is: when a bank is in a deep financial hole, how much money did they make during the time the systems were down?

  2. Wolfie July 8, 2012 at 16:04

    All the banks have spent the last ten years rabidly sending as much of their IT function as possible to remote outsourcing locations, mostly India.

    The reputational damage has certainly eclipsed any savings made in this period, and the saying that comes to mind is simply “penny wise and pound foolish”.

    As an insider I predict there is far worse to come, but what is saddest is that, come the day they decide it’s time to onshore the workforce once more, they will find a once world-leading industry so utterly withered that the option will no longer be available to them.

  3. Churchmouse July 8, 2012 at 23:48

    Agree with all the comments. Have been peripherally associated with software upgrades and changes, which always required carefully monitored parallel runs over a period of several weeks or months, depending on how extensive they were.

    Testing, testing, testing, as well as many late hours, weekends and continuous checking by managers: ‘Are you sure?’ ‘What happens if …?’ ‘Did you run this scenario?’ Etc., etc.

    This NatWest/RBS scenario never should have happened. This is why IT people did it the way they did in the ‘old’ days (i.e. ten years ago). The ‘old hands’ knew what could happen.
