Steele (supercomputer)


Steele is a supercomputer that was installed at Purdue University on May 5, 2008. The high-performance computing cluster is operated by Information Technology at Purdue (ITaP), the university's central information technology organization, which also operates the Coates (2009), Rossmann (2010), Hansen (2011), and Carter (2011) clusters. When built, Steele was the largest campus supercomputer in the Big Ten outside a national center, and it ranked 104th on the November 2008 TOP500 Supercomputer Sites list.

Hardware

Steele consisted of 893 64-bit, 8-core Dell PowerEdge 1950 systems and nine 64-bit, 8-core Dell PowerEdge 2950 systems with various combinations of 16 to 32 gigabytes of RAM, 160 gigabytes to 2 terabytes of disk, and Gigabit Ethernet and SDR InfiniBand connections to each node. The cluster had a theoretical peak performance of more than 60 teraflops. Steele and its 7,216 cores replaced Purdue's Lear cluster, which had 1,024 cores and was substantially slower. Steele was networked primarily through a Foundry Networks BigIron RX-16 switch with a Tyco MRJ-21 wiring system delivering more than 900 Gigabit Ethernet connections and eight 10 Gigabit Ethernet uplinks.
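
The figures in this section are mutually consistent, as the following rough check shows. The 2.33 GHz clock and 4 double-precision floating-point operations per core per cycle used below are assumptions typical of the quad-core Xeon processors in PowerEdge systems of that era, not figures from the article:

  \[(893 + 9)\,\text{nodes} \times 8\,\tfrac{\text{cores}}{\text{node}} = 7{,}216\,\text{cores}\]
  \[R_{\text{peak}} \approx 7{,}216\,\text{cores} \times 2.33\,\text{GHz} \times 4\,\tfrac{\text{FLOPs}}{\text{cycle}} \approx 67\,\text{teraflops}\]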

Software

Steele nodes ran Red Hat Enterprise Linux, starting with release 4.0,[1] and used Portable Batch System Professional (PBSPro) 10.4.6 for resource and job management. The cluster also had compilers and scientific programming libraries installed.
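
On a PBS-managed cluster such as Steele, users typically submitted work as a batch script containing #PBS resource directives, handed to the qsub command. The sketch below writes and submits such a script from Python; it is a generic illustration only, and the queue name, node request, and application command are assumptions rather than details of Steele's actual configuration.

  import subprocess
  import textwrap

  # Illustrative PBS batch script: requests 2 nodes with 8 cores each for
  # one hour, then runs an MPI program from the submission directory.
  # The queue name and program are placeholders, not Steele specifics.
  job_script = textwrap.dedent("""\
      #!/bin/bash
      #PBS -N example_job
      #PBS -l nodes=2:ppn=8
      #PBS -l walltime=01:00:00
      #PBS -q standby
      cd "$PBS_O_WORKDIR"
      mpirun -np 16 ./my_simulation
      """)

  with open("example_job.pbs", "w") as f:
      f.write(job_script)

  # qsub is PBS's standard job submission command; this step only succeeds
  # on a machine where a PBS server is actually running.
  subprocess.run(["qsub", "example_job.pbs"], check=True)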

Construction

The first 812 nodes of Steele were installed in four hours on May 5, 2008,[2] by a team of 200 Purdue computer technicians and volunteers, including volunteers from in-state athletic rival Indiana University. The staff had made a video titled "Installation Day" as a parody of the film Independence Day.[3] The cluster was running 1,400 science and engineering jobs by lunchtime.[4][5] In 2010, Steele was moved to an HP Performance Optimized Datacenter, a self-contained, modular, shipping-container-style unit installed on campus, to make room for new clusters in Purdue's main research computing data center.[6][7][8][9]

Funding

The Steele supercomputer and Purdue's other clusters were part of the Purdue Community Cluster Program, a partnership between ITaP and Purdue faculty. In Purdue's program, a "community" cluster is funded with hardware money from grants, faculty startup packages, institutional funds and other sources. ITaP's Rosen Center for Advanced Computing administers the community clusters and provides user support. Each faculty partner always has ready access to the capacity he or she purchases, and potentially to more computing power when other partners' nodes are idle. Unused, or opportunistic, cycles from Steele were made available to the National Science Foundation's TeraGrid (now XSEDE) system and the Open Science Grid using Condor software, and a portion of Steele was dedicated directly to TeraGrid use.

Users

Steele users came from fields such as aeronautics and astronautics, agriculture, biology, chemistry, computer and information technology, earth and atmospheric sciences, mathematics, pharmacology, statistics, and electrical, materials and mechanical engineering. The cluster was used to design new drugs and materials, to model weather patterns and the effects of global warming, and to engineer future aircraft and nanoelectronics. Steele also served Purdue's Tier-2 computing center for the Compact Muon Solenoid, one of the particle physics experiments conducted with the Large Hadron Collider.

DiaGrid

Unused, or opportunistic, cycles from Steele were made available to the TeraGrid and the Open Science Grid using Condor software. Steele was part of Purdue's distributed computing Condor flock, and the center of DiaGrid, a nearly 43,000-processor Condor-powered distributed computing network for research involving Purdue and partners at nine other campuses.
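
Condor (now HTCondor) schedules work opportunistically: a user describes a job in a submit description file, and the scheduler matches it to machines in the flock that are currently idle. The sketch below shows that general workflow from Python; it is not a DiaGrid-specific configuration, and the executable, input file, and output names are assumptions.

  import subprocess
  import textwrap

  # Generic Condor submit description for a single vanilla-universe job.
  # The executable and file names are placeholders, not DiaGrid settings.
  submit_description = textwrap.dedent("""\
      universe   = vanilla
      executable = analyze.sh
      arguments  = input.dat
      output     = job.out
      error      = job.err
      log        = job.log
      queue
      """)

  with open("job.sub", "w") as f:
      f.write(submit_description)

  # condor_submit hands the job to the local scheduler, which can match it
  # to idle machines in the flock when their owners are not using them.
  subprocess.run(["condor_submit", "job.sub"], check=True)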

Naming

The Steele cluster is named for John M. Steele, Purdue associate professor emeritus of computer science, who was involved with research computing at Purdue almost from its inception. He joined the Purdue staff in 1963 at the Computer Sciences Center associated with the then-new Computer Science Department. He served as the director of the Purdue University Computing Center, the high-performance computing unit at Purdue prior to the Rosen Center for Advanced Computing, from 1988 to 2001 before retiring in 2003. His research interests have been in the areas of computer data communications and computer circuits and systems, including research on an early mobile wireless Internet system.[10]

References

