Titan (supercomputer)

Titan (supercomputer)
An image of the cabinets that make up Titan.
Active: became operational October 29, 2012
Sponsors: US DOE and NOAA (<10%)
Operators: Cray Inc.
Location: Oak Ridge National Laboratory
Architecture: 18,688 AMD Opteron 6274 16-core CPUs; 18,688 Nvidia Tesla K20X GPUs
Power: 8.2 MW
Operating system: Cray Linux Environment
Space: 404 m² (4,352 ft²)
Memory: 693.5 TiB (584 TiB CPU and 109.5 TiB GPU)
Storage: 40 PB, 1.4 TB/s IO, Lustre filesystem
Speed: 17.59 petaFLOPS (LINPACK); 27 petaFLOPS theoretical peak
Cost: $97 million
Ranking: TOP500 #2, June 2013[1]
Purpose: Scientific research
Legacy: Ranked #1 on TOP500 when built; first GPU-based supercomputer to perform over 10 petaFLOPS
Website: www.olcf.ornl.gov/titan/

Titan is a supercomputer built by Cray at Oak Ridge National Laboratory for use in a variety of science projects. It is an upgrade of Jaguar, a previous supercomputer at Oak Ridge, and uses graphics processing units (GPUs) in addition to conventional central processing units (CPUs). Titan is the first such hybrid to perform over 10 petaFLOPS. The upgrade began in October 2011, commenced stability testing in October 2012, and the system became available to researchers in early 2013. The initial cost of the upgrade was US$60 million, funded primarily by the United States Department of Energy.

Titan is due to be eclipsed at Oak Ridge by Summit in 2018. Summit is being built by IBM and features fewer nodes with much greater GPU capability per node, as well as local per-node non-volatile caching of file data from the system's parallel file system.[2]

Titan employs AMD Opteron CPUs in conjunction with Nvidia Tesla GPUs to improve energy efficiency while providing an order of magnitude increase in computational power over Jaguar. It uses 18,688 CPUs paired with an equal number of GPUs to perform at a theoretical peak of 27 petaFLOPS; in the LINPACK benchmark used to rank supercomputers' speed, it performed at 17.59 petaFLOPS. This was enough to take first place in the November 2012 list by the TOP500 organization, but Tianhe-2 overtook it on the June 2013 list.
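As a rough cross-check (not taken from the cited sources), the 27 petaFLOPS theoretical peak is consistent with the commonly quoted per-device figures, assuming approximately 1.31 teraFLOPS of double-precision peak per K20X GPU and approximately 0.14 teraFLOPS per 16-core Opteron 6274:

\[ 18{,}688 \times (1.31 + 0.14)\ \text{teraFLOPS} \approx 27.1\ \text{petaFLOPS} \approx 27\ \text{petaFLOPS} \]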

Titan is available for any scientific purpose; access depends on the importance of the project and its potential to exploit the hybrid architecture. Any selected code must also be executable on other supercomputers to avoid sole dependence on Titan. Six vanguard codes were the first selected. They dealt mostly with molecular-scale physics or climate models, while 25 others queued behind them. The inclusion of GPUs compelled authors to alter their codes. The modifications typically increased the degree of parallelism, given that GPUs offer many more simultaneous threads than CPUs. The changes often yielded greater performance even on CPU-only machines.

History

File:Titan render.png
A rendering of the Titan supercomputer

Plans to create a supercomputer capable of 20 petaFLOPS at the Oak Ridge Leadership Computing Facility (OLCF) at Oak Ridge National Laboratory (ORNL) originated as far back as 2005, when Jaguar was built.[3] Titan will itself be replaced by an approximately 200 petaFLOPS system in 2016 as part of ORNL's plan to operate an exascale (1000 petaFLOPS to 1 exaFLOPS) machine by 2020.[3][4][5] The initial plan to build a new 15,000 square meter (160,000 ft²) building for Titan was discarded in favor of using Jaguar's existing infrastructure.[6] The precise system architecture was not finalized until 2010, although a deal with Nvidia to supply the GPUs was signed in 2009.[7] Titan was first announced at the private ACM/IEEE Supercomputing Conference (SC10) on November 16, 2010, and was publicly announced on October 11, 2011, as the first phase of the Titan upgrade began.[4][8]

Jaguar had received various upgrades since its creation. It began with the Cray XT3 platform that yielded 25 teraFLOPS.[9] By 2008, Jaguar had been expanded with more cabinets and upgraded to the XT4 platform, reaching 263 teraFLOPS.[9] In 2009, it was upgraded to the XT5 platform, hitting 1.4 petaFLOPS.[9] Its final upgrades brought Jaguar to 1.76 petaFLOPS.[10]

File:Cray Technician upgrading Titan.jpg
A Cray technician upgrading Jaguar to Titan.

Titan was funded primarily by the US Department of Energy through ORNL. Funding was sufficient to purchase the CPUs but not all of the GPUs, so the National Oceanic and Atmospheric Administration agreed to fund the remaining nodes in return for computing time.[11][12] ORNL scientific computing chief Jeff Nichols noted that Titan cost approximately $60 million upfront, of which the NOAA contribution was less than $10 million, but precise figures were covered by non-disclosure agreements.[11][13] The full term of the contract with Cray was worth $97 million, excluding potential upgrades.[13]

The yearlong conversion began October 9, 2011.[14][15] Between October and December, 96 of Jaguar's 200 cabinets, each containing 24 XT5 blades (two 6-core CPUs per node, four nodes per blade), were upgraded to XK7 blades (one 16-core CPU per node, four nodes per blade) while the remainder of the machine remained in use.[14] In December, computation was moved to the 96 XK7 cabinets while the remaining 104 cabinets were upgraded to XK7 blades.[14] ORNL's external ESnet connection was upgraded from 10 Gbit/s to 100 Gbit/s and the system interconnect (the network over which CPUs communicate with each other) was updated.[14][16] The SeaStar interconnect used in Jaguar was upgraded to the Gemini interconnect used in Titan, which connects the nodes in a direct 3D torus network.[17] Gemini uses wormhole flow control internally.[17] The system memory was doubled to 584 TiB.[15] A total of 960 XK7 nodes (10 cabinets) were fitted with Fermi-based GPUs, as Kepler GPUs were not then available; these 960 nodes were referred to as TitanDev and used to test code.[14][15] This first phase of the upgrade increased the peak performance of Jaguar to 3.3 petaFLOPS.[15] Beginning on September 13, 2012, Nvidia K20X GPUs were fitted to all of Jaguar's XK7 compute blades, including the 960 TitanDev nodes.[14][18][19] In October, the task was completed and the computer was finally christened Titan.[14]

In March 2013, Nvidia launched the GTX Titan, a consumer graphics card that uses the same GPU die as the K20X GPUs in Titan.[20] Titan underwent acceptance testing in early 2013 but only completed 92% of the tests, short of the required 95%.[14][21] The problem was discovered to be excess gold in the female edge connectors of the motherboards' PCIe slots, which caused cracks in the motherboards' solder.[22] The cost of repair was borne by Cray, and between 12 and 16 cabinets were repaired each week.[22] Throughout the repairs, users were given access to the available CPUs.[22] On March 11, they gained access to 8,972 GPUs.[23] ORNL announced on April 8 that the repairs were complete,[24] and acceptance test completion was announced on June 11, 2013.[25]

Titan's hardware has a theoretical peak performance of 27 petaFLOPS with "perfect" software.[26] On November 12, 2012, the TOP500 organization, which ranks the world's supercomputers by LINPACK performance, ranked Titan first at 17.59 petaFLOPS, displacing IBM Sequoia.[27][28] Titan also ranked third on the Green500, the same 500 supercomputers ranked in terms of energy efficiency.[29] In the June 2013 TOP500 ranking, Titan fell to second place behind Tianhe-2 and to twenty-ninth on the Green500 list.[1][30] Titan did not re-run the benchmark for the June 2013 ranking,[1] because even at its theoretical peak of 27 petaFLOPS it would still have ranked second.[31]

Hardware

File:ORNL EVEREST visualization.jpg
EVEREST allows researchers to visualize the data that Titan outputs in 3D on a 10 by 3 meter (33 by 10 ft) wall.

Titan uses Jaguar's 200 cabinets, covering 404 square meters (4,352 ft2), with replaced internals and upgraded networking.[32][33] Reusing Jaguar's power and cooling systems saved approximately $20 million.[34] Power is provided to each cabinet at three-phase 480 V. This requires thinner cables than the US standard 208 V, saving $1 million in copper.[35] At its peak, Titan draws 8.2 MW,[36] 1.2 MW more than Jaguar, but runs almost ten times as fast in terms of floating point calculations.[32][35] In the event of a power failure, carbon fiber flywheel power storage can keep the networking and storage infrastructure running for up to 16 seconds.[37] After 2 seconds without power, diesel generators fire up, taking approximately 7 seconds to reach full power. They can provide power indefinitely.[37] The generators are designed only to keep the networking and storage components powered so that a reboot is much quicker; the generators are not capable of powering the processing infrastructure.[37]

Titan has 18,688 nodes (4 nodes per blade, 24 blades per cabinet),[38] each containing a 16-core AMD Opteron 6274 CPU with 32 GB of DDR3 ECC memory and an Nvidia Tesla K20X GPU with 6 GB of GDDR5 ECC memory.[39] There are a total of 299,008 processor cores and a total of 693.5 TiB of CPU and GPU RAM.[35]
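Assuming the per-node figures are binary (GiB) quantities, these totals follow directly from the node count:

\[ 18{,}688 \times 16 = 299{,}008\ \text{cores}, \qquad 18{,}688 \times 32\ \text{GiB} = 584\ \text{TiB}, \qquad 18{,}688 \times 6\ \text{GiB} = 109.5\ \text{TiB}, \qquad 584 + 109.5 = 693.5\ \text{TiB} \]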

Initially, Titan used Jaguar's 10 PB of Lustre storage with a transfer speed of 240 GB/s,[35][40] but in April 2013, the storage was upgraded to 40 PB with a transfer rate of 1.4 TB/s.[41] GPUs were selected for their vastly higher parallel processing efficiency over CPUs.[39] Although the GPUs have a slower clock speed than the CPUs, each GPU contains 2,688 CUDA cores at 732 MHz,[42] resulting in a faster overall system.[33][43] Consequently, the CPUs' cores are used to allocate tasks to the GPUs rather than directly processing the data as in conventional supercomputers.[39]

Titan runs the Cray Linux Environment, a full version of Linux on the login nodes that users directly access, but a smaller, more efficient version on the compute nodes.[44]

Titan's components are air-cooled by heat sinks, but the air is chilled before being pumped through the cabinets.[45] Fan noise is so loud that hearing protection is required for people spending more than 15 minutes in the machine room.[46] The system has a cooling capacity of 23.2 MW (6600 tons) and works by chilling water to 5.5 °C (42 °F), which in turn cools recirculated air.[45]

Researchers also have access to EVEREST (Exploratory Visualization Environment for Research and Technology) to better understand the data that Titan outputs. EVEREST is a visualization room with a 10 by 3 meter (33 by 10 ft) screen and a smaller, secondary screen. The screens are 37 and 33 megapixels respectively with stereoscopic 3D capability.[47]

Projects

File:VERA reactor core.jpg
A VERA simulation of a light water reactor's core. This image was rendered on Jaguar but the project will continue with greater detail on Titan.

In 2009, the Oak Ridge Leadership Computing Facility, which manages Titan, narrowed the fifty applications for first use of the supercomputer down to six "vanguard" codes chosen for the importance of the research and for their ability to fully utilize the system.[33][48] The six vanguard projects to use Titan were:

  • S3D, a project that models the molecular physics of combustion, aims to improve the efficiency of diesel and biofuel engines. In 2009, using Jaguar, it produced the first fully resolved simulation of autoigniting hydrocarbon flames relevant to the efficiency of direct injection diesel engines.[48]
  • WL-LSMS simulates the interactions between electrons and atoms in magnetic materials at temperatures other than absolute zero. An earlier version of the code was the first to perform at greater than one petaFLOPS on Jaguar.[48]
  • Denovo simulates nuclear reactions with the aim of improving the efficiency and reducing the waste of nuclear reactors.[33] The performance of Denovo on conventional CPU-based machines doubled after the tweaks for Titan and it performs 3.5 times faster on Titan than it did on Jaguar.[48][49]
  • Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) is a molecular dynamics code that simulates particles across a range of scales, from quantum to relativistic, to improve materials science with potential applications in semiconductor, biomolecule and polymer development.[50]
  • CAM-SE is a combination of two codes: Community Atmosphere Model, a global atmosphere model, and High Order Method Modeling Environment, a code that solves fluid and thermodynamic equations. CAM-SE will allow greater accuracy in climate simulations.[48]
  • Non-Equilibrium Radiation Diffusion (NRDF) plots non-charged particles through supernovae with potential applications in laser fusion, fluid dynamics, medical imaging, nuclear reactors, energy storage and combustion.[48] Its Chimera code uses hundreds of partial differential equations to track the energy, angle, angle of scatter and type of each neutrino modeled in a star going supernova, resulting in millions of individual equations.[51] The code was named Chimera after the mythological creature because it has three "heads": the first simulates the hydrodynamics of stellar material, the second simulates radiation transport and the third simulates nuclear burning.[51]
  • Molecular dynamics modeling is the simulation and analysis of the movements of atoms and molecules, and as such it covers the physical and mechanical aspects of almost everything in the known universe. To put this into perspective, Tianhe-1A, at the time the second most powerful supercomputer in the world, ran a simulation involving 110 billion atoms through 500,000 time steps. At every one of these steps, Tianhe-1A had to analyze the relationships between each and every atom. The calculations took three hours to complete and accounted for 0.116 nanoseconds of simulated time, on a computer capable of two petaFLOPS.
  • Bonsai is a gravitational tree code for simulating galaxies. It was used in a simulation of the Milky Way Galaxy on a star-by-star basis, with 200 billion stars, that was nominated for the 2014 Gordon Bell Prize. In this application the computer reached a sustained speed of 24.773 petaFLOPS.[52]

VERA is a light water reactor simulation written at the Consortium for Advanced Simulation of Light Water Reactors (CASL) on Jaguar. VERA allows engineers to monitor the performance and status of any part of a reactor core throughout the lifetime of the reactor to identify points of interest.[53] Although not one of the first six projects, VERA was planned to run on Titan after optimization with assistance from CAAR and testing on TitanDev. Computer scientist Tom Evans found that adapting the code to Titan's hybrid architecture was more difficult than adapting it to previous CPU-based supercomputers. He aimed to simulate an entire reactor fuel cycle, an eighteen- to thirty-six-month-long process, in one week on Titan.[53]

In 2013 thirty-one codes were planned to run on Titan, typically four or five at any one time.[46][54]

Code modifications

The code of many projects has to be modified to suit the GPU processing of Titan, but each code is required to be executable on CPU-based systems so that projects do not become solely dependent on Titan.[48] OLCF formed the Center for Accelerated Application Readiness (CAAR) to aid with the adaptation process. It holds developer workshops at Nvidia headquarters to educate users about the architecture, compilers and applications on Titan.[55][56] CAAR has been working on compilers with Nvidia and code vendors to integrate directives for GPUs into their programming languages.[55] Researchers can thus express parallelism in their code with their existing programming language, typically Fortran, C or C++, and the compiler can map that parallelism onto the GPUs.[55] Dr. Bronson Messer, a computational astrophysicist, said of the task: "...an application using Titan to the utmost must also find a way to keep the GPU busy, remembering all the while that the GPU is fast, but less flexible than the CPU."[55] Moab Cluster Suite is used to prioritize jobs to nodes to keep utilization high; it improved efficiency from 70% to approximately 95% in the tested software.[57][58] Some projects found that the changes increased efficiency of their code on non-GPU machines; the performance of Denovo doubled on CPU-based machines.[48]
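As an illustration of this directive-based approach, the following is a minimal sketch in C (hypothetical example code, not taken from any Titan project, assuming a compiler that understands OpenACC-style accelerator directives). The annotated loop can be offloaded to a GPU, while compilers without directive support simply ignore the pragma, so the same source still runs on CPU-only machines:

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical example: a simple vector update of the kind that a
     * directive-based compiler can offload to a GPU. */
    void saxpy(int n, float a, const float *x, float *y)
    {
        /* Request that the loop be parallelized on the accelerator,
         * copying x to the device and copying y both ways. */
        #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
        for (int i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }

    int main(void)
    {
        int n = 1 << 20;
        float *x = malloc(n * sizeof *x);
        float *y = malloc(n * sizeof *y);
        for (int i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }

        saxpy(n, 3.0f, x, y);
        printf("y[0] = %f\n", y[0]);   /* expect 5.000000 */

        free(x);
        free(y);
        return 0;
    }

The surrounding CPU code stays in charge of setup and output; only the annotated loop is offloaded, mirroring the division of labor between CPUs and GPUs described above.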

The amount of code alteration required to run on the GPUs varies by project. According to Dr. Messer of NRDF, only a small percentage of his code runs on GPUs because the calculations are relatively simple but processed repeatedly and in parallel.[59] NRDF is written in CUDA Fortran, a version of Fortran with CUDA extensions for the GPUs.[59] Chimera's third "head" was the first to run on the GPUs, as nuclear burning was the easiest to simulate on the GPU architecture. Other aspects of the code were planned to be modified in time.[51] On Jaguar, the project modeled 14 or 15 nuclear species, but Messer anticipated simulating up to 200 species, allowing far greater precision when comparing the simulation to empirical observation.[51]

References

  1. 1.0 1.1 1.2 Lua error in package.lua at line 80: module 'strict' not found.
  2. Lua error in package.lua at line 80: module 'strict' not found.
  3. 3.0 3.1 Lua error in package.lua at line 80: module 'strict' not found.
  4. 4.0 4.1 Lua error in package.lua at line 80: module 'strict' not found.
  5. Lua error in package.lua at line 80: module 'strict' not found.
  6. Lua error in package.lua at line 80: module 'strict' not found.
  7. Lua error in package.lua at line 80: module 'strict' not found.
  8. Lua error in package.lua at line 80: module 'strict' not found.
  9. 9.0 9.1 9.2 Lua error in package.lua at line 80: module 'strict' not found.
  10. Lua error in package.lua at line 80: module 'strict' not found.
  11. 11.0 11.1 Lua error in package.lua at line 80: module 'strict' not found.
  12. Lua error in package.lua at line 80: module 'strict' not found.
  13. 13.0 13.1 Lua error in package.lua at line 80: module 'strict' not found.
  14. 14.0 14.1 14.2 14.3 14.4 14.5 14.6 14.7 Lua error in package.lua at line 80: module 'strict' not found.
  15. 15.0 15.1 15.2 15.3 Lua error in package.lua at line 80: module 'strict' not found.
  16. Lua error in package.lua at line 80: module 'strict' not found.
  17. 17.0 17.1 Lua error in package.lua at line 80: module 'strict' not found.
  18. Lua error in package.lua at line 80: module 'strict' not found.
  19. Lua error in package.lua at line 80: module 'strict' not found.
  20. Lua error in package.lua at line 80: module 'strict' not found.
  21. Lua error in package.lua at line 80: module 'strict' not found.
  22. 22.0 22.1 22.2 Lua error in package.lua at line 80: module 'strict' not found.
  23. Lua error in package.lua at line 80: module 'strict' not found.
  24. Lua error in package.lua at line 80: module 'strict' not found.
  25. Lua error in package.lua at line 80: module 'strict' not found.
  26. Lua error in package.lua at line 80: module 'strict' not found.
  27. Lua error in package.lua at line 80: module 'strict' not found.
  28. Lua error in package.lua at line 80: module 'strict' not found.
  29. Lua error in package.lua at line 80: module 'strict' not found.
  30. Lua error in package.lua at line 80: module 'strict' not found.
  31. Lua error in package.lua at line 80: module 'strict' not found.
  32. 32.0 32.1 Lua error in package.lua at line 80: module 'strict' not found.
  33. 33.0 33.1 33.2 33.3 Lua error in package.lua at line 80: module 'strict' not found.
  34. Lua error in package.lua at line 80: module 'strict' not found.
  35. 35.0 35.1 35.2 35.3 Lua error in package.lua at line 80: module 'strict' not found.
  36. Lua error in package.lua at line 80: module 'strict' not found.
  37. 37.0 37.1 37.2 Lua error in package.lua at line 80: module 'strict' not found.
  38. Lua error in package.lua at line 80: module 'strict' not found.
  39. 39.0 39.1 39.2 Lua error in package.lua at line 80: module 'strict' not found.
  40. Lua error in package.lua at line 80: module 'strict' not found.
  41. Lua error in package.lua at line 80: module 'strict' not found.
  42. Lua error in package.lua at line 80: module 'strict' not found.
  43. Lua error in package.lua at line 80: module 'strict' not found.
  44. Lua error in package.lua at line 80: module 'strict' not found.
  45. 45.0 45.1 Lua error in package.lua at line 80: module 'strict' not found.
  46. 46.0 46.1 Lua error in package.lua at line 80: module 'strict' not found.
  47. Lua error in package.lua at line 80: module 'strict' not found.
  48. 48.0 48.1 48.2 48.3 48.4 48.5 48.6 48.7 Lua error in package.lua at line 80: module 'strict' not found.
  49. Lua error in package.lua at line 80: module 'strict' not found.
  50. Lua error in package.lua at line 80: module 'strict' not found.
  51. 51.0 51.1 51.2 51.3 Lua error in package.lua at line 80: module 'strict' not found.
  52. Lua error in package.lua at line 80: module 'strict' not found.
  53. 53.0 53.1 Lua error in package.lua at line 80: module 'strict' not found.
  54. Lua error in package.lua at line 80: module 'strict' not found.
  55. 55.0 55.1 55.2 55.3 Lua error in package.lua at line 80: module 'strict' not found.
  56. Lua error in package.lua at line 80: module 'strict' not found.
  57. Lua error in package.lua at line 80: module 'strict' not found.
  58. Lua error in package.lua at line 80: module 'strict' not found.
  59. 59.0 59.1 Lua error in package.lua at line 80: module 'strict' not found.

External links

  • Official website
  • Media related to Titan (supercomputer) at Wikimedia Commons
Records
Preceded by: IBM Sequoia (16.325 petaFLOPS)
World's most powerful supercomputer: November 2012 – June 2013
Succeeded by: Tianhe-2 (33.9 petaFLOPS)