In the cloud

[Image gallery – six images, supplied]
1. The centre is located on a greenfields site in Takanini in Auckland’s south.
2. The $60 million 400-rack facility is owned by Spark and is the first stage of a tiered development based on scalability and flexibility, enabling easy expansion.
3–4. The building is designed solely to house server racks and as such retains an industrial feel.
5. The centre’s elemental design means the concept is based on the micro-modular: all the data halls operate independently from one another as separate modules.
6. Detail of services running through the building.

It is one of New Zealand’s most secure buildings, and it may just provide a prototype for similar developments around the globe. But this building isn’t designed for human inhabitants; it is a facility built solely to house data at massive scale – accommodating the equivalent of 640,000 servers, or enough processing power for every New Zealander to host 10 personal websites.
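Those headline figures can be sanity-checked with some back-of-envelope arithmetic. The sketch below (in Python) assumes a New Zealand population of roughly 4.5 million – the only number not taken from the article – and works out what the claims imply per server and per rack.

```python
# Back-of-envelope check of the headline capacity claims. All figures
# come from the article except the population, which is an assumption.

population = 4_500_000          # assumption: approximate NZ population
sites_per_person = 10           # from the article
equivalent_servers = 640_000    # from the article
racks = 400                     # from the article

websites = population * sites_per_person
print(f"Websites hosted: {websites:,}")                                  # 45,000,000
print(f"Sites per equivalent server: {websites / equivalent_servers:.0f}")  # ~70
print(f"Equivalent servers per rack: {equivalent_servers // racks:,}")      # 1,600
```

Sixteen hundred “servers” per physical rack only makes sense as virtual machines, which is consistent with the capacity being described in equivalent rather than physical servers.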

The $60 million 400-rack facility is owned by Spark (formerly Telecom) and is the first stage of a tiered development based on scalability and flexibility, enabling easy expansion. It has a state-of-the-art security centre, and the central data hall is mounted on base isolators for added seismic protection. The buildings in place now are stage one; future demand will be met by scaling the facility up as required.

The site was chosen for various reasons: it is away from flight paths but close to road transport, high-voltage power and data fibre links, and is in an area rated as low risk for tsunami. “This is as good as it can get in New Zealand with its Tier 3 international rating. That means our customers will receive very secure and reliable operations when utilising our services,” Spark Digital chief executive Tim Miles said. “The next level of international rating is military grade and can’t be built here because we don’t have competing national electricity networks.”

AECOM technical director Chris Treleaven said that because the bedrock lies up to 20 metres down, under layers of peat and mud, the orthodox approach for this sort of development would generally be to use many large, bored in situ piles. “AECOM used a mix of screw piles and bored in situ piles, which reduced costs and piling times by one third.” The design also incorporates a base isolation system, which greatly reduced the lateral forces applied to the piles and resulted in a cost and time saving. “It also reduced the data hall acceleration levels to a point where standard IT racks could be used instead of the seismic Zone 4 types, which are more expensive,” Treleaven said.

The ground floor structure accommodates full construction loading on unpropped partial-depth precast beams. This design was used because the peat soils were too soft to prop off, and it also sped up the construction process. A steel-frame structural solution was used above the ground floor, allowing the bulk of the three-storey building’s structure to be fabricated off site while the piling was done on site.

AECOM design manager Paul Garlick said the overarching concept behind the facility’s design was the micro-modular: the construction methodology was able to deliver separate data halls in a staged rollout. Each of the four halls houses 100 racks, and a major benefit of this approach was that Spark could offer some of the halls early to its anchor tenants. The data halls all operate independently from one another as separate modules, ensuring no hall can be affected by events in another.

But the modular approach also has other benefits. It meant smaller cables could be used to deliver power to the servers – part of the AECOM Elemental design – and there are no electrical or power systems in the data halls themselves; these are housed on separate levels.

Data centre designers have historically leant towards installing bigger generators and coolers, but this is not the case here. “AECOM’s Elemental data centre design has deconstructed the traditional data centre topology and reassembled it,” Murray Dickinson of AECOM said. “The Elemental philosophy is to adopt a smaller module size wherever is reasonable … small modules are then micro-engineered.”

At the Takanini data centre, the advantages created by the micro-modular system are clear, highlighted by Spark’s reduced power use. Spark’s data centre required 400 racks at an average power density of 10kW per rack (4MW of IT power). A more traditional design would use eight very large 1.5MW no-break power systems, while the Elemental solution uses 16 smaller 0.5MW units, providing a 40 per cent reduction in power system size.

Traditional design for N+1 redundancy [a form of resilience that ensures system availability in the event of component failure. Components (N) have at least one independent backup component (+1)] would have necessitated a large capital plant and sizeable distribution systems to be installed on day one to meet a relatively low initial load. The micro-modular design approach allows the installed plant capacity to closely follow the demand curve as the site grows. This significantly reduced the initial capital cost of the facility.
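To see why smaller modules track the demand curve more closely, consider a simplified N+1 sizing model – a sketch only, not AECOM’s actual design calculation. It compares the installed capacity needed at each stage of an assumed demand ramp, using the 1.5MW and 0.5MW module sizes mentioned above.

```python
import math

def installed_capacity(demand_mw: float, module_mw: float) -> float:
    """Installed capacity under N+1: enough modules to carry the load (N),
    plus one spare (+1)."""
    n = math.ceil(demand_mw / module_mw)  # modules needed to meet demand
    return (n + 1) * module_mw            # one redundant module on top

# Illustrative staged demand ramp (assumed, not from the article):
for demand in [0.5, 1.0, 2.0, 3.0, 4.0]:
    big = installed_capacity(demand, 1.5)    # traditional large units
    small = installed_capacity(demand, 0.5)  # Elemental 0.5MW units
    print(f"{demand:.1f}MW load -> large modules: {big:.1f}MW installed, "
          f"small modules: {small:.1f}MW installed")
```

At an early-stage load of 0.5MW, the large-module design already commits 3MW of plant while the small-module design needs only 1MW; as demand grows, the small modules keep installed capacity much closer to the actual load.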

The micro-modular design also results in redundancy and resilience benefits. High availability is achieved by using many smaller plant items in highly paralleled arrangements.
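A rough way to quantify that claim is a binomial availability model. The sketch below is illustrative only: it assumes each 0.5MW unit is independently available 99 per cent of the time (a made-up figure) and computes the chance that a single 1MW data hall still has enough running units under N+1, and with one extra spare.

```python
from math import comb

def availability(n_units: int, n_needed: int, unit_avail: float) -> float:
    """Probability that at least n_needed of n_units are running,
    assuming independent failures (a simplifying assumption)."""
    return sum(
        comb(n_units, k) * unit_avail**k * (1 - unit_avail)**(n_units - k)
        for k in range(n_needed, n_units + 1)
    )

unit_avail = 0.99  # assumed per-unit availability, purely illustrative

# One hall: 1MW of load on 0.5MW units, so 2 needed, 3 installed (N+1)
print(f"Hall, 2-of-3 (N+1): {availability(3, 2, unit_avail):.6f}")   # ~0.999702

# With small modules an extra spare is cheap, and it helps a lot (N+2)
print(f"Hall, 2-of-4 (N+2): {availability(4, 2, unit_avail):.8f}")   # ~0.99999603
```

Because each hall has its own parallel group, a unit failure also stays contained to that hall rather than rippling across the site.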

Sustainability and energy efficiency were central to the design process. In this case, the design enabled a power usage effectiveness or PUE (the facility’s total power draw divided by the power used by its IT equipment) of less than 1.3 to be achieved across the entire site. “Delivering this level of efficiency within the project budget creates an exemplar that is likely to be replicated globally,” Dickinson said.

AECOM sustainability manager Scott Smith explained this in another way: “This is an exemplar project because reductions in energy use are hugely significant. A change in PUE of 0.1 for the building could result in 3,000,000 kWh per annum of energy savings, which is more than the total energy consumption of 20,000m² of new office space, or the CO2 equivalent of more than 15 million driving kilometres.”
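Smith’s figure follows from the PUE definition with some simple arithmetic. The sketch below assumes an average IT load of about 3.5MW (the full-build figure is 4MW; the average is an assumption chosen to illustrate the quoted savings) and shows both the total draw at a PUE of 1.3 and the annual savings from a 0.1 improvement.

```python
HOURS_PER_YEAR = 8760

def total_power(it_power_kw: float, pue: float) -> float:
    """PUE = total facility power / IT equipment power."""
    return it_power_kw * pue

it_load_kw = 3500  # assumed average IT load (full build is 4,000kW)

# Facility draw at the achieved PUE of less than 1.3:
print(f"Total draw at PUE 1.3: {total_power(it_load_kw, 1.3):,.0f} kW")  # 4,550 kW

# Energy saved per year by a 0.1 improvement in PUE:
savings_kwh = 0.1 * it_load_kw * HOURS_PER_YEAR
print(f"Annual savings from a 0.1 PUE change: {savings_kwh:,.0f} kWh")   # ~3.07M kWh
```

0.1 × 3,500kW × 8,760 hours comes to just over 3 million kWh a year, in line with the quoted figure.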

But the servers themselves produce a lot of heat, so a cooling system was developed to deal with it. This isn’t a traditional solution either. Cooling is by way of a closed-circuit adiabatic free-cooling system, which is able to operate for 95 per cent of the year without the assistance of chillers. “This mechanical solution is a deconstructed version of a traditional arrangement … the adiabatic cooling system does not rely on chillers. A facility of this size would normally have an Olympic-sized swimming pool of chilled water held at low temperature that would give the chillers time to recover after a power failure. Our primary cooling system doesn’t use chillers so we don’t need to protect against this system lag. Instead, we simply provided two independent water feeds directly from separate mains as a redundancy provision,” Smith said.
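The 95 per cent figure translates directly into hours of chiller-free operation – a trivial calculation, sketched below with the length of the year as the only added number.

```python
HOURS_PER_YEAR = 8760
free_cooling_fraction = 0.95  # from the article: 95% of the year without chillers

free_hours = free_cooling_fraction * HOURS_PER_YEAR
chiller_hours = HOURS_PER_YEAR - free_hours
print(f"Chiller-free operation: {free_hours:,.0f} h/yr")   # 8,322 hours
print(f"Chiller-assisted:       {chiller_hours:,.0f} h/yr")  # 438 hours
```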

The cooling system is also micro-modular. Each of the four data halls has three independent primary cooling systems in an N+1 arrangement. The power supplies, fans and coil banks are arranged in a matrix across the primary cooling circuits, and this matrix arrangement of the mechanical plant provides excellent redundancy. The internal design temperatures were optimised to eliminate the need for close control of the air conditioning systems, which allowed the use of separate high-efficiency EC (electronically commutated) fans and standard coil/filter banks instead of more traditional computer room air conditioning units. All of this plant is located on a completely separate floor from the data halls, which reduces the operational risk and administrative overhead of maintenance activities.

