Old projects and new mainframes

A long time ago (about 35 years), I was the project leader and main designer of what was probably the first true distributed solution. It started with one of the largest banks in Europe, which went through a one-month strike at its data center. In what was probably the Jurassic period of IT (which makes me a dinosaur), the centralized mainframe reigned supreme, and of course the whole commercial part of the bank ground to a halt, and with it millions of customers who could not get to their money. The CIO (the title did not exist at the time, but the function did) came up with the brilliant idea of putting a server in each branch, connected to the central mainframe through a network. Each local server had to be able to process locally, on a local "database," all the typical operations of the branch. This would guarantee that, in case of a repeat strike, the basic banking needs of customers would be covered. So, armed with the latest minicomputer from Honeywell and several million dollars in project money, we set about developing everything in sight: network protocols, transactional languages and supervisors, local file structures, and even intelligent virtual terminals.

After the first pilot installations, the whole thing ground to a halt again. Even though we had automated our distributed systems, someone had to power them up and down. That someone, of course, had to be a banker. But being a banker, he or she could not do this: union rules required that information systems be touched only by specialized personnel. Our second snag was that branch managers did not want to give us the square footage our minicomputers needed. Then we had air conditioning issues. In the end, hundreds of minicomputers were grouped in data centers, and we were back to square one.

The reason for this story is that it seems to preview what is happening to IT in the near future. We all embraced distributed systems because they were cheap and open and could be placed where computing was needed. Then we put them back in the data centers and dedicated them to individual applications. Then we started to realize how complex and costly the administration of all these machines would be. I think that, on this journey, we forgot that "distributed systems" actually means two things: 1) cheap hardware based on commodity technology, and 2) an open system that can run applications written to certain "open" standards (languages, interfaces, protocols, etc.). The former is interesting, but not critical. A combination of the two is what we have; an optimized system based on the latter is what we need.

So we are reinventing the mainframe: Cisco's UCS and HP's BladeSystem Matrix are mainframes built on commodity technology. I could also argue that IBM's z10 is a series of open-system machines with a proprietary co-processor. The fundamental reasons for using a mainframe are the same ones we now use to justify this new generation of systems: flexibility in resource allocation, ease of administration, automation, floor space, energy efficiency, etc.

As it was 35 years ago, it seems terribly inefficient to put hundreds of servers in a data center where a couple of mainframes could do the job. The difference between then and now is that we are no longer limited by competing proprietary technologies. Let's take advantage of it.

Regards,

JP Garbani