Imagine it is 2008 and you wake up one day to find yourself in the middle of a global financial crisis.

The problem is, you work in a bank which owns an aging, dispersed and inefficient data center estate and you know that crisis or not, it needs modernizing.

You have too many data centers – 16 in all – and to deliver on your objectives you will have to close almost all of these facilities. While you build your new physical infrastructure you will have to migrate applications to new IT platforms, transform the way departments inside your organization consume IT and IT services, and engage with a regulator who needs to be educated in the latest data center, IT and cloud technologies.

The bank is Dutch giant ING, for which the crisis was bad but not terrible. It was bailed out to the tune of €10bn by the Dutch government.

The 2008 crisis gave the bank all of the stimulus it needed to reconsider its original consolidation plans and weigh some radical alternatives. Its board of governors accepted that a program of infrastructure change was necessary, and the decision was taken to achieve as much as possible. The original idea was to consolidate from 16 data centers to two, using two traditional build-outs; land for both sites was chosen. (Having two live-live data centers was obviously the way to go.) But why build capital-intensive monolithic mirrored data centers to carry each of your existing 4,000 applications when any migration will inevitably involve a major application rationalization program? And if you are replacing many of your 16,000 old servers, why over-provision an entire data center (or two) when you could deploy capacity matched to workload?

If you can attune your capital outlay to your need and avoid locking up capital in traditional build outs, this looks sensible. It is, after all, the middle of a crisis.

Why not consider the dynamism of the market? Cloud technologies and programs are promising savings. Hybrid strategies point to new possibilities in terms of service provision. There are many unknowns, but there are many opportunities. The infrastructure is changing, the locations are changing, the organization is restructuring. In a banking crisis, it is best to look prudent.

Migration not stagnation

“We had 16 data centers carrying legacy or ageing technology. The majority were ING owned, with a few services based at third-party colocation providers,” says Danny O’Connor, head of technology services at ING. Efficient use of capacity resources was high on the agenda, but risk mitigation, availability and performance would not be sacrificed. Migrating a legacy environment to new data centers meant that risk management was vital. Experience taught ING that data center construction was a multi-year process, so the bank wanted to be ready to move the right applications to the right platforms.

“We wanted to be ready on that. Part of the data center consolidation project meant a refresh of the technology, deployment of virtualization where necessary and the preparation of applications for virtualized and cloud environments. We were maintaining the old world while preparing for the new one,” O’Connor says. The old world also included lease expiration dates at some third-party facilities. So a streamed approach was defined. The first stream was deciding the physical platform (see Box 1); the second was ‘cloudifying’ the apps in preparation for the new data centers; the third was transforming the existing estate; the fourth was undertaking the migration; and the fifth – equally important – was educating users to accept a whole new approach to how they consume, and are charged for, services within the bank.

As part of the migration, ING is retiring as many applications as possible, but a significant number will always be owned and managed by the bank. Even as the application migration and rationalization continues, the strategy is to decouple applications from the underlying technology in order to standardize provisioning and deliver agility.

“Being an internal service provider and owning and operating our own assets is an interim step. Over time, the infrastructure team will play more of a broker role. We will have internally hosted and externally hosted systems,” O’Connor says. “Today we have our own stack inside our data centers. When we get into a natural lifecycle of compute – say the typical three- to five-year timeframe – I keep asking the question: ‘Do I want to be buying more technology to add to my footprint?’ Looking out to the next two to three years, I would look to stop buying technology whenever possible,” he says.

That next migration step will be no less challenging than the current one, but as well as being an internal provider, the bank also sees the possibility of becoming an external one. “There are many unanswered questions, even though we are witnessing a rapid maturity curve from the service providers. At the moment it is about educating ourselves,” O’Connor says. The bank is exploring pushing its test and development workloads to a hosted environment, and a working party is examining the security and associated risk factors.

From servers to service

ING is building an operations model that is focused on commoditized service offerings, most of which will be standardized. As they get used, they are picked up by others in the bank, and as the model matures, it opens up the possibility of providing those services to others in the financial-services industry.

Regulation

As in any regulated environment there will always be a need for internal technology capability – at ING there is no talk of shifting core systems to third-party providers. The service providers do not have the ability to match regulatory needs, and the bank would not underwrite the risk, so do not expect any wholesale handover of technology to the market. Internally, however, the plan is to become more market-like. One of the big challenges was rewriting the IT service catalogue, which involved moving users from fixed pricing to pricing by consumption. “Ultimately, we do have to provide capacity on demand, and with good predictive forecasting it can be delivered without having to invest in over-capacity,” O’Connor says.
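The shift from fixed pricing to pricing by consumption can be sketched in a few lines. This is a hypothetical illustration, not ING's actual chargeback model: the rates, usage figures and the `consumption_charge` helper are all invented for the example.

```python
# Hypothetical sketch of moving internal users from a flat fee to
# consumption-based charging. All rates and usage figures are illustrative.

FIXED_MONTHLY_FEE = 10_000.0    # flat fee per department (old model, assumed)
RATE_PER_VCPU_HOUR = 0.04       # assumed consumption rates (new model)
RATE_PER_GB_MONTH = 0.10

def consumption_charge(vcpu_hours: float, storage_gb: float) -> float:
    """Charge a department only for what it actually consumed."""
    return vcpu_hours * RATE_PER_VCPU_HOUR + storage_gb * RATE_PER_GB_MONTH

# A light user now pays far less than the old flat fee...
light = consumption_charge(vcpu_hours=20_000, storage_gb=5_000)    # 1,300
# ...while a heavy user pays more, creating an incentive to rationalize.
heavy = consumption_charge(vcpu_hours=300_000, storage_gb=50_000)  # 17,000

print(f"light user: {light:,.2f} vs fixed {FIXED_MONTHLY_FEE:,.2f}")
print(f"heavy user: {heavy:,.2f} vs fixed {FIXED_MONTHLY_FEE:,.2f}")
```

The point of such a model is the one O'Connor makes: once charges follow actual usage, demand signals become visible and capacity can be forecast rather than over-provisioned.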

“We like to think that everything we do is about being customer-centric and operationally excellent. Ensuring that the appropriate technology is deployed and available with clear service level agreements is our objective, and ultimately we felt that was at risk with the existing environment.”

O’Connor says ING felt it could improve reliability and the program was accepted.

“The board readily accepted there were some unknowns,” he says. “But the greater risk was in doing nothing or in maintaining existing systems. It was a strategic technology decision to align the bank with technology for the future.”

“We’re closing the doors on a lot of data centers,” O’Connor says. “When we drive up to the north of the Netherlands and take a sign off the door of a building that was [once] a data center, we know we are providing applications for the future.”

The shift is one that data center and IT providers within large corporates will recognize. The commonly heard maxim that the most efficient data center is one that does not exist rings true today more than ever.


Box 1: Modularity for flexibility

ING has struck a deal with Colt for a 500 sq m modular data center. Why modular? First, flexibility: capacity can be tuned to demand. Deploying capacity in smaller tranches also keeps utilization high – closer to 100% – because deployed capacity tracks demand more closely.

“We actually looked at our demand projection over time and analyzed what the optimum step value was. Having looked at various module sizes, we looked at what the optimum increases of capacity are for our projected demand curve. We landed in a range of the 750kW, 500 sq m module from Colt,” O’Connor says.

Colt did not land the contract without competition. The biggest competitor was a firm with a 20-year heritage of building modules that had recently entered the data center market. So one supplier had a proven product but was new to the data center market, while Colt’s modular data center was untested (at the time) outside the company’s own Welwyn facility just north of London.
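The "optimum step value" analysis O'Connor describes can be sketched as a simple comparison: for each candidate module size, deploy modules just-in-time against a projected demand curve and measure average utilization. The demand figures below are invented for illustration; only the 750 kW module size comes from the article.

```python
import math

# Hypothetical projected demand per year, in kW (illustrative, not ING's data).
projected_demand_kw = [600, 1100, 1700, 2300, 2800, 3200]

def avg_utilization(module_kw: float, demand_curve) -> float:
    """Average demand / deployed-capacity ratio when capacity is added
    one module at a time, just ahead of demand."""
    utilizations = []
    for demand in demand_curve:
        modules = math.ceil(demand / module_kw)   # modules needed to cover demand
        utilizations.append(demand / (modules * module_kw))
    return sum(utilizations) / len(utilizations)

# Smaller tranches track the demand curve more closely, so utilization rises;
# one giant monolithic step leaves most capacity idle for years.
for size_kw in (500, 750, 1500, 3000):
    print(f"{size_kw} kW modules: avg utilization "
          f"{avg_utilization(size_kw, projected_demand_kw):.0%}")
```

In practice the trade-off also includes the fixed cost per deployment step, which is why the optimum is a mid-sized module rather than the smallest possible one.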

O’Connor says he recognized the distinct risks and benefits of each option. “With Colt we visited Welwyn and liked the approach,” he says. “They were very open about things they would and could do better with the product. We liked this.”