High Bandwidth Memory

[Figure: High Bandwidth Memory schematic.svg — cross-section of a graphics card that uses High Bandwidth Memory, showing the through-silicon vias (TSVs).]

High Bandwidth Memory (HBM) is a high-performance RAM interface for 3D-stacked DRAM from AMD and SK Hynix. It is used in conjunction with high-performance graphics accelerators and network devices.[1] The competing, incompatible technology is the Hybrid Memory Cube interface from Micron Technology.[2] The first devices to use HBM are the AMD Fiji GPU and AMD Arctic Islands-based GPUs.[3]

High Bandwidth Memory was adopted by JEDEC as an industry standard in October 2013.[4]

Technology

HBM achieves higher bandwidth while using less power in a substantially smaller form factor than DDR4 or GDDR5.[5] This is achieved by stacking up to eight DRAM dies, including an optional base die with a memory controller, which are interconnected by through-silicon vias (TSV) and microbumps.

The HBM memory bus is very wide in comparison to other DRAM memories such as DDR4 or GDDR5. An HBM stack of four DRAM dies (4-Hi) has two 128-bit channels per die, for a total of 8 channels and an overall width of 1024 bits. A chip with four 4-Hi HBM stacks would therefore have a memory bus 4096 bits wide. In comparison, the bus width of GDDR memories is 64 bits, so a graphics card with a 512-bit memory interface uses 8 channels.[6]
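As a rough illustration (not taken from the standard), the bus-width arithmetic above can be written as a short Python calculation; the per-die channel count and channel width are the figures quoted in this section:

    def hbm_bus_width(stacks, dies_per_stack=4, channels_per_die=2, channel_width_bits=128):
        # Total memory-bus width in bits for a number of HBM stacks,
        # assuming 4-Hi stacks with two 128-bit channels per die.
        return stacks * dies_per_stack * channels_per_die * channel_width_bits

    assert hbm_bus_width(1) == 1024   # one 4-Hi stack: 1024-bit interface
    assert hbm_bus_width(4) == 4096   # four stacks, as on a Fiji-class GPU
    assert 512 // 64 == 8             # a 512-bit GDDR interface built from 64-bit channels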

Interface

The HBM DRAM is tightly coupled to the host compute die with a distributed interface. The interface is divided into independent channels; the channels are completely independent of one another and are not necessarily synchronous to each other. The HBM DRAM uses a wide-interface architecture to achieve high-speed, low-power operation, and a differential clock, CK_t/CK_c. Commands are registered at the rising edge of CK_t, CK_c. Each channel interface maintains a 128-bit data bus operating at double data rate (DDR).
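The peak throughput of one channel follows from its width and the DDR signalling. The sketch below is only illustrative; the 500 MHz clock is an assumed figure for first-generation HBM, not something stated in this article:

    def channel_bandwidth_gb_s(clock_hz, channel_width_bits=128, transfers_per_clock=2):
        # Peak bandwidth of a single HBM channel in GB/s: a 128-bit channel
        # moves 16 bytes per transfer, and DDR gives two transfers per clock.
        bytes_per_transfer = channel_width_bits // 8
        return clock_hz * transfers_per_clock * bytes_per_transfer / 1e9

    # Assumed 500 MHz clock -> 1 Gbit/s per pin -> 16 GB/s per channel,
    # or 128 GB/s for an 8-channel stack.
    print(channel_bandwidth_gb_s(500e6))  # 16.0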

HBM 2

On January 12, 2016, HBM 2 was accepted by JEDEC as standard JESD235a.[7]

HBM 2 specifies up to eight dies per stack and doubles the per-pin transfer rate, for an aggregate throughput of up to 1 TB/s across four stacks.
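A back-of-the-envelope check of the 1 TB/s figure, assuming the commonly quoted HBM2 rate of 2 Gbit/s per pin over a 1024-bit stack interface (an assumption, not a figure stated above):

    def stack_bandwidth_gb_s(pin_rate_gbit_s, stack_width_bits=1024):
        # Peak bandwidth of one HBM stack in GB/s.
        return pin_rate_gbit_s * stack_width_bits / 8

    per_stack = stack_bandwidth_gb_s(2.0)   # 256 GB/s per HBM2 stack
    print(per_stack, 4 * per_stack)         # 256.0 1024.0 -> roughly 1 TB/s with four stacks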

On January 19, 2016, Samsung announced early mass production of HBM2.[8][9] HBM2 is predicted to be especially useful for performance-sensitive consumer applications such as virtual reality.[10]

History

The development of High Bandwidth Memory began at AMD in 2008 to solve the problem of the ever-increasing power consumption and form factor of computer memory. Among other things, AMD developed procedures to solve the die-stacking problems with a team led by Senior AMD Fellow Bryan Black. To realize its vision of HBM, AMD enlisted partners from the memory industry (SK Hynix), the interposer industry (UMC) and the packaging industry (Amkor Technology and ASE).[11] High-volume manufacturing began at a Hynix facility in Icheon, South Korea, in 2015.

HBM was adopted as industry standard JESD235 by JEDEC in October 2013, following a proposal by AMD and SK Hynix in 2010.[4] The first chip to use HBM was the AMD Fiji GPU, released on June 24, 2015, powering the AMD Radeon R9 Fury X.[12]

The world's first GPU chip to use HBM2 is the Nvidia Tesla P100, officially announced on April 5, 2016.[13]

See also

References

  1. ISSCC 2014 Trends page 118 "High-Bandwidth DRAM"
  2. Where Are DRAM Interfaces Headed? // EETimes, 4/18/2014 "The Hybrid Memory Cube (HMC) and a competing technology called High-Bandwidth Memory (HBM) are aimed at computing and networking applications. These approaches stack multiple DRAM chips atop a logic chip."
  3. (citation unavailable)
  4. High Bandwidth Memory (HBM) DRAM (JESD235), JEDEC, October 2013
  5. HBM: Memory Solution for Bandwidth-Hungry Processors, Joonyoung Kim and Younsu Kim, SK hynix // Hot Chips 26, August 2014
  6. Highlights of the High-Bandwidth Memory (HBM) Standard. Mike O’Connor, Sr. Research Scientist, Nvidia // The Memory Forum – June 14, 2014
  7. High Bandwidth Memory (HBM) DRAM (JESD235A), JEDEC, January 2016
  8. https://news.samsung.com/global/samsung-begins-mass-producing-worlds-fastest-dram-based-on-newest-high-bandwidth-memory-hbm-interface
  9. http://www.extremetech.com/extreme/221473-samsung-announces-mass-production-of-next-generation-hbm2-memory
  10. (citation unavailable)
  11. High-Bandwidth Memory (HBM) from AMD: Making Beautiful Memory
  12. AMD Ushers in a New Era of PC Gaming including World’s First Graphics Family with Revolutionary HBM Technology
  13. http://www.nvidia.com/object/tesla-p100.html
  14. http://chipdesignmag.com/display.php?articleId=5279
  15. http://www.jedec.org/news/pressreleases/jedec-publishes-wide-io-2-mobile-dram-standard

External links