
Bull launches the bullx B700 DLC series: a new generation of supercomputers delivering spectacular improvements in data center energy efficiency

  • The bullx B700 DLC supercomputers are aimed at large-scale data centers. Their power - running to several Petaflops - will enable major advances in industry and research
  • They feature revolutionary direct liquid cooling technology. Using warm water for cooling improves energy performance by around 40% compared with traditional data centers, while the systems remain as easy to maintain as standard air-cooled servers
  • The new bullx B700 supercomputers will be built around the future Intel® Xeon® processor E5 family (codenamed Sandy Bridge)


Paris, November 10, 2011 -

Bull is continuing to innovate in a spectacular way in response to the demands of High-Performance Computing (HPC) users, with the launch of its bullx B700 DLC (Direct Liquid Cooling) supercomputers: a new series of Bull blade servers that revolutionize data center cooling and deliver drastic savings in energy consumption. 

The amount of electricity needed to power data centers - often running to several megawatts for the very largest facilities - is the biggest limiting factor for processing power today. In the ideal data center, the only energy needed would be that consumed by the servers; giving a perfect PUE* rating of 1.0. In practice, a data center uses other electrical devices, especially the air conditioning system, resulting in a typical PUE ratio of around 1.8.

First-stage optimization based on Bull's Cool Cabinet Door technology can bring the PUE down to 1.4, with no need to modify computing hardware. With the DLC technology unveiled today, Bull has achieved a step-change in cutting energy consumption, by reducing the PUE to less than 1.1.
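The practical impact of these PUE figures can be illustrated with a short sketch. The 1 MW IT load below is a hypothetical figure chosen for illustration; the PUE values (1.8, 1.4, 1.1) are those quoted above.

```python
# Illustrative comparison of total facility power at the PUE values
# quoted in this release, assuming a hypothetical 1 MW IT load.

def facility_power_kw(it_load_kw: float, pue: float) -> float:
    """Total data-center draw = IT load x PUE (PUE = total / IT)."""
    return it_load_kw * pue

it_load = 1000.0  # kW consumed by the servers themselves (assumed)

for label, pue in [("typical air-cooled data center", 1.8),
                   ("with Cool Cabinet Door", 1.4),
                   ("bullx B700 DLC", 1.1)]:
    total = facility_power_kw(it_load, pue)
    overhead = total - it_load
    print(f"{label}: PUE {pue} -> {total:.0f} kW total, "
          f"{overhead:.0f} kW of cooling/infrastructure overhead")
```

At this scale, moving from a PUE of 1.8 to 1.1 cuts the non-IT overhead from 800 kW to roughly 100 kW.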

Direct Liquid Cooling technology for much greater energy efficiency
In order to improve the overall energy efficiency of the data center, Bull's R&D teams have radically redesigned the server, to ensure that the heat generated by the main components is drawn out by a liquid, running as close as possible to the source of that heat. This principle of Direct Liquid Cooling (DLC) is at the heart of the new bullx B700 DLC series of supercomputers: cooling happens inside the blade itself, via direct contact between heat-generating components (processors, memory chips...) and a cold plate in which heat-exchanger fluid is circulating.

What's more, with processors running at over 50°C and memory and SSD disks at over 40°C, the technology developed by Bull needs only water at room temperature to cool the hardware: chilled water no longer has to be produced, which saves a great deal of electricity and results in a PUE of less than 1.1 under normal operating conditions.

Finally, DLC technology can be used to cool up to 80kW per rack compared with 40kW currently, which will enable forthcoming generations of ultra-dense multi-core processors to be integrated into these systems. 

Extreme ease of maintenance and technology from the aeronautical industry
The bullx B700 DLC series systems are housed in cabinets containing all the internal cooling-cycle equipment. This cooling cycle removes the heat from the compute blades via an integral heat-exchanger. The cabinet simply needs to be connected to the ambient temperature water supply at the site.

Each element (compute blades, chassis, embedded switches, power supplies) can be inserted into or removed from the bullx DLC cabinet while it is still running, as easily as an air-cooled element, thanks to the use of anti-drip connectors based on a technology developed for the aeronautical industry. Unlike various other integral liquid-cooled systems, the bullx B700 DLC series uses standard components: processors, memory modules, disks, etc.

In addition, the bullx B700 DLC series is significantly quieter than air-cooled systems. 

A range of solutions designed for the long term
The bullx B700 DLC series has been designed to enable the integration of future technologies, so as to protect user investment. In particular, because it can remove much more heat than traditional blade systems, it will be able to incorporate the coming generations of GPUs (Graphics Processing Units) and the MIC processors being developed by Intel® - as soon as they become available. It will also be able to take advantage of InfiniBand® EDR (Enhanced Data Rate) interconnect technology as soon as it is available, delivering much higher speeds than the current FDR technology (4x25Gb/s vs. 4x14Gb/s).
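The FDR-to-EDR gain quoted above can be checked with a one-line calculation; the lane counts and per-lane rates are those given in the release (4 lanes at 14 Gb/s for FDR, 4 lanes at 25 Gb/s for EDR).

```python
# InfiniBand 4x link-rate comparison, using the figures cited above.

def link_rate_gbps(lanes: int, lane_gbps: float) -> float:
    """Aggregate link signalling rate = number of lanes x per-lane rate."""
    return lanes * lane_gbps

fdr = link_rate_gbps(4, 14)  # FDR 4x: 4 lanes x 14 Gb/s
edr = link_rate_gbps(4, 25)  # EDR 4x: 4 lanes x 25 Gb/s
print(f"FDR 4x: {fdr:.0f} Gb/s, EDR 4x: {edr:.0f} Gb/s "
      f"({edr / fdr:.2f}x faster)")
```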

Availability
The first model in the bullx DLC series, featuring B710 compute blades, will be available during Q2 2012.

Appendix: technical specifications

bullx DLC B700 cabinet


Specifications

Mount capacity: 42U - Dimensions (HxWxD) of the bare cabinet: 2020x600x1200mm
Net weight: 173kg - Maximum weight with 4 chassis: 1144.42kg, with 5 chassis: 1331.78kg
PDU: 1 as standard, 1 optional extension
Electric column: 4 x 54V - expansion tank

Regulatory compliance

CE, IEC60297, EIA-310-E, EN60950

Supply chassis (CPC)

7U format drawer mounted in the bullx DLC cabinet or on the roof

Electricity supply

Number of shelves for 54V supplies: up to 7 in 1U format
Number of supplies on the first shelf: 3 x 3kW 54VDC - containing the RMM
Number of supplies on the other shelves: 4 x 3kW 54VDC
Number of three-phase 32A AC lines: up to 5 - 3 lines for the first 4 shelves
54V supply redundancy (PSUs): N+1
Max. power with the first 15 supplies: up to 42kW; with all 27 supplies: up to 78kW

Hydraulic chassis (HYC)

3.5U chassis, mounted in the bullx DLC cabinet, 7U with 2 units

Number of units

1 as standard - 1 optional for redundancy

Components

1 heat exchanger for the cooling liquid
1 multistage centrifugal pump
2 x 2 channel motorized flaps to regulate temperature and flow of the private circuit
3 temperature sensors for the cooling liquid
Static pressure sensors, regulation module

Power supply units

220V AC and 54V DC

bullx DLC B700 Chassis


Form factor

Rack mount 7U drawer for up to 9 double blades, i.e. 18 compute nodes

Management

Chassis management module (CMM2) comprising:
1Gb Ethernet switch with 24 ports, 3 of which are external
LEDs to indicate module operating status
Display: screen and control panel on the front of the chassis

Ventilation and cooling

2 front fans for the chassis, 6 rear fans for the disks
Cooling: by cooling liquid via CHMA type tubes

Interconnect

InfiniBand (ISM) switch: up to two 36 port FDR switches with 18 internal ports and 18 external QSFP+ ports (optional)

Ethernet

Ethernet 1Gb/s switch (ESM): 21 1Gb ports, including 18 internal ports and 3 external RJ45 ports (optional)
10Gb/s switch (TSM): 22 ports, including 18 internal 1Gb ports and 4 external 10GBase-SR ports (SFP+ connector) (optional)

Midplane

Passive board consisting of connectors for 18 servers, disk fans, ISM, CMM, ESM, PDM, all CHMA tubes

Power module

Up to 2 hotswap modules - Max. consumption 15kW
Input voltage 54VDC, output voltage 12VDC and 3.3VDC

Physical specifications

Dimensions (HxWxD) 31.1cm (7U) x 48.3cm (19") x 74.5cm - Max. weight 175kg

bullx B710 DLC compute blade


Form factor

Double width blade containing 2 compute nodes

Processors

2 x 2 eight-core processors from the future Intel® Xeon® processor E5 family

Architecture

2 x 1 future Intel® chipset compatible with the E5 family

Memory

2 x 8 DDR3 DIMM slots
Up to 2 x 256GB Reg ECC DDR3 (with 32GB modules, subject to availability)
Supported frequencies: 1333MHz and 1600MHz - Supported voltages: 1.35V and 1.5V

Storage

Up to 2 x 2 SATA disks (2.5") or 2 x 2 SSD disks (2.5") - max. height 9.5mm

Ethernet

2 x 1 1Gb dual port Ethernet controller for links to CMM and ESM or TSM

InfiniBand

2 x 1 ConnectX-3 adapter providing two InfiniBand FDR or QDR ports

Management

2 x integrated baseboard management controller (IBMC)

Cooling

Cold plate and heat spreaders for the memory modules

OS and cluster software

Red Hat Enterprise Linux & bullx supercomputer suite

Regulatory compliance

CE (UL, FCC, RoHS)

Warranty

1 year, optional warranty extension

 


*PUE (Power Usage Effectiveness) is the industry-standard energy efficiency indicator established by the Green Grid™ consortium. It measures the ratio between the energy used by the whole data center and the energy used by the servers within it.

About Bull

 

Bull is the trusted partner for enterprise data. The Group, which is firmly established in the Cloud and in Big Data, integrates and manages high-performance systems and end-to-end security solutions. Bull's offerings enable its customers to process all the data at their disposal, creating new types of demand. Bull converts data into value for organisations in a completely secure manner.

Bull currently employs around 9,200 people across more than 50 countries, with over 700 staff totally focused on R&D. In 2013, Bull recorded revenues of €1.3 billion.

For more information:
http://www.bull.com  /  http://www.facebook.com/BullGroup  /  http://twitter.com/bull_com


Contact
Bull
Aurélie Negro
5 Bd Gallieni - 92445 Issy Les moulineaux Cedex - France
Phone: +33 1 58 04 05 02
E-mail: aurelie.negro@bull.net

