Heterogeneous System Architecture

From Infogalactic: the planetary knowledge core

Heterogeneous System Architecture (HSA) is a cross-vendor set of specifications that allows for the integration of central processing units and graphics processors on the same bus, with shared memory and tasks.[1] HSA is being developed by the HSA Foundation, which includes (among many others) AMD and ARM. The platform's stated aim is to reduce communication latency between CPUs, GPUs and other compute devices, and to make these various devices more compatible from a programmer's perspective,[2]:3[3] relieving the programmer of the task of planning the movement of data between devices' disjoint memories (as must currently be done with OpenCL or CUDA).[4]

CUDA and OpenCL, as well as most other fairly advanced programming languages, can use HSA to increase their execution performance.[5] Heterogeneous computing is widely used in system-on-chip devices such as tablets, smartphones, and other mobile devices.[6] HSA allows programs to use the graphics processor for floating-point calculations without separate memory or scheduling.[7]

Rationale

The rationale behind HSA is to ease the burden on programmers when offloading calculations to the GPU. Originally driven solely by AMD and called the FSA, the idea was extended to encompass processing units other than GPUs, such as the widely used DSPs, by other hardware design companies as well.

Steps performed when offloading calculations to the GPU on a non-HSA system 
Steps performed when offloading calculations to the GPU on a HSA system, using the HSA functionality 

Modern GPUs are very well suited to performing single instruction, multiple data (SIMD) and single instruction, multiple threads (SIMT) workloads, while modern CPUs are still being optimized for branching code.

Overview

Sharing system memory directly between multiple system actors, an approach originally introduced by the Cell Broadband Engine, makes heterogeneous computing more mainstream. Heterogeneous computing itself refers to systems that contain multiple processing units: central processing units (CPUs), graphics processing units (GPUs), digital signal processors (DSPs), or any type of application-specific integrated circuit (ASIC). The system architecture allows any accelerator, for instance a graphics processor, to operate at the same processing level as the system's CPU.

Among its main features, HSA defines a unified virtual address space for compute devices: where GPUs traditionally have their own memory, separate from the main (CPU) memory, HSA requires these devices to share page tables so that devices can exchange data by sharing pointers. This is to be supported by custom memory management units.[2]:6–7 To render interoperability possible, and also to ease various aspects of programming, HSA is intended to be ISA-agnostic for both CPUs and accelerators, and to support high-level programming languages.

So far, the HSA specifications cover:

  • HSA Intermediate Layer (HSAIL), a virtual instruction set for parallel programs
  • HSA memory model
    • compatible with C++11, OpenCL, Java and .NET memory models
    • relaxed consistency
    • designed to support both managed languages (e.g. Java) and unmanaged languages (e.g. C)
    • will make it much easier to develop third-party compilers for a wide range of heterogeneous products programmed in Fortran, C++, C++ AMP, Java, and others
  • HSA dispatcher and run-time
    • designed to enable heterogeneous task queueing: a work queue per core, distribution of work into queues, load balancing by work stealing
    • any core can schedule work for any other, including itself
    • significant reduction of overhead of scheduling work for a core

Mobile devices are one of the HSA's application areas, in which it yields improved power efficiency.[6]

Block diagrams

The block diagrams below provide high-level illustrations of how HSA operates and how it compares to traditional architectures.

Standard architecture with a discrete GPU attached to the PCI Express bus. Zero-copy between the GPU and CPU is not possible due to distinct physical memories. 
HSA brings unified virtual memory, and facilitates passing pointers over PCI Express instead of copying the entire data. 
In partitioned main memory, one part of the system memory is exclusively allocated to the GPU. As a result, zero-copy operations are not possible. 
Unified main memory, made possible by a combination of HSA-enabled GPU and CPU. As a result, it is possible to perform zero-copy operations.[8] 
Both the CPU's MMU and the GPU's IOMMU have to comply with the HSA hardware specifications. 

Software support

AMD GPUs contain certain additional functional units intended to be used as part of HSA. In Linux, the kernel driver amdkfd provides the required support.[9][10]

Some of the HSA-specific features implemented in the hardware need to be supported by the operating system kernel and by specific device drivers. For example, support for AMD Radeon and AMD FirePro graphics cards, and for APUs based on Graphics Core Next (GCN), was merged into version 3.19 of the Linux kernel mainline, released on February 8, 2015.[10] This first implementation, the amdkfd driver, focuses on "Kaveri" and "Berlin" APUs and works alongside the existing Radeon kernel graphics driver. Programs do not interact with amdkfd directly, but queue their jobs using the HSA runtime.[11]

Additionally, amdkfd supports heterogeneous queuing (HQ), which aims to simplify the distribution of computational jobs among multiple CPUs and GPUs from the programmer's perspective. As of February 2015, support for heterogeneous memory management, suited only for graphics hardware featuring version 2 of AMD's IOMMU, had not yet been accepted into the Linux kernel mainline.

Integrated support for HSA platforms has been announced for the "Sumatra" release of OpenJDK, due in 2015.[12]

AMD APP SDK is AMD's proprietary software development kit targeting parallel computing, available for Microsoft Windows and Linux. Bolt is a C++ template library optimized for heterogeneous computing.[13]

GPUOpen comprises a number of other software tools related to HSA. CodeXL version 2.0 includes an HSA profiler.[14]

Hardware support

As of February 2015, only AMD's "Kaveri" A-series APUs (cf. "Kaveri" desktop processors and "Kaveri" mobile processors) and Sony's PlayStation 4 allowed the integrated GPU to access memory via version 2 of AMD's IOMMU. Earlier APUs (Trinity/Richland) included version 2 IOMMU functionality, but only for use by an external GPU connected via PCI Express.

The later Carrizo and Bristol Ridge APUs also include version 2 IOMMU functionality for the integrated GPU.

Features of AMD Accelerated Processing Units
Brand Llano · Trinity · Richland · Kaveri · Carrizo · Bristol Ridge · Raven Ridge · Desna/Ontario/Zacate · Kabini/Temash · Beema/Mullins · Carrizo-L
Platform Desktop, Mobile Desktop, Mobile Mobile, Desktop Desktop, Mobile Ultra-mobile
Released Aug 2011 Oct 2012 Jun 2013 Jan 2014 Jun 2015 Jun 2016 Mar 2017 Jan 2011 May 2013 Q2 2014 May 2015
Fab. (nm) GlobalFoundries 32 SOI 28 14 TSMC 40 28
Die size (mm2) 228 246 245 244.62 TBA TBA 75 (+ 28 FCH) ~107 TBA
Socket FM1, FS1 FM2, FS1+, FP2 FM2+, FP3 FP4, FM2+ AM4, FP4 AM4 FT1 AM1, FT3 FT3b FP4
CPU architecture AMD 10h Piledriver Steamroller Excavator Zen Bobcat Jaguar Puma Puma+[15]
Memory support DDR3-1866/1600/1333 · DDR3-2133/1866/1600/1333 · DDR4-2400/2133/1866/1600 · DDR3L-1333/1066 · DDR3L-1866/1600/1333/1066 · DDR3L-1866/1600/1333
3D engine[lower-alpha 1] TeraScale 2 (VLIW5) · TeraScale 3 (VLIW4) · GCN 1.1 (Mantle, HSA) · GCN 1.2 (Mantle, HSA) · GCN 1.3 (Mantle, HSA) · TeraScale 2 (VLIW5) · GCN 1.1
Up to 400:20:8 · Up to 384:24:6 · Up to 512:32:8 · Up to 768:48:12 · 80:8:4 · 128:8:4
IOMMU IOMMUv1 · IOMMUv2 · IOMMUv1[16] · TBA
Unified Video Decoder UVD 3 UVD 4.2 UVD 6 TBA UVD 3 UVD 4 UVD 4.2 UVD 6
Video Coding Engine N/A VCE 1.0 VCE 2.0 VCE 3.1 TBA N/A VCE 2.0 VCE 3.1
GPU power saving PowerPlay PowerTune N/A Enduro
Max. displays[lower-alpha 2] 2–3 2–4 2–4 3 4 TBA 2 TBA
TrueAudio N/A [18] N/A[16]
FreeSync N/A N/A
/drm/radeon[19][20][21] N/A
/drm/amd/amdgpu[22] N/A Experimental N/A Experimental
  1. Unified shaders : texture mapping units : render output units
  2. To feed more than two displays, the additional panels must have native DisplayPort support.[17] Alternatively, active DisplayPort-to-DVI/HDMI/VGA adapters can be employed.

References

  1. Lua error in package.lua at line 80: module 'strict' not found.
  2. 2.0 2.1 Lua error in package.lua at line 80: module 'strict' not found.
  3. Lua error in package.lua at line 80: module 'strict' not found.
  4. Lua error in package.lua at line 80: module 'strict' not found.
  5. Lua error in package.lua at line 80: module 'strict' not found.
  6. 6.0 6.1 Lua error in package.lua at line 80: module 'strict' not found.
  7. Lua error in package.lua at line 80: module 'strict' not found.
  8. Lua error in package.lua at line 80: module 'strict' not found.
  9. Lua error in package.lua at line 80: module 'strict' not found.
  10. 10.0 10.1 Lua error in package.lua at line 80: module 'strict' not found.
  11. Lua error in package.lua at line 80: module 'strict' not found.
  12. Lua error in package.lua at line 80: module 'strict' not found.
  13. Lua error in package.lua at line 80: module 'strict' not found.
  14. Lua error in package.lua at line 80: module 'strict' not found.
  15. Lua error in package.lua at line 80: module 'strict' not found.
  16. 16.0 16.1 Lua error in package.lua at line 80: module 'strict' not found.
  17. Lua error in package.lua at line 80: module 'strict' not found.
  18. Lua error in package.lua at line 80: module 'strict' not found.
  19. Lua error in package.lua at line 80: module 'strict' not found.
  20. Lua error in package.lua at line 80: module 'strict' not found.
  21. Lua error in package.lua at line 80: module 'strict' not found.
  22. Lua error in package.lua at line 80: module 'strict' not found.
