Non-RAID drive architectures

From Infogalactic: the planetary knowledge core


The most widespread standard for configuring multiple hard disk drives is RAID (Redundant Array of Inexpensive/Independent Disks), which comes in a number of standard and non-standard configurations. Non-RAID drive architectures also exist, and are referred to by acronyms similar to RAID, several of them tongue-in-cheek:

  • JBOD (derived from "just a bunch of disks"): an architecture involving multiple hard disk drives, while making them accessible either as independent hard disk drives, or as a combined (spanned) single logical volume with no actual RAID functionality.
  • SPAN or BIG: A method of combining the free space on multiple hard disk drives to create a spanned volume. Such a concatenation is sometimes also called JBOD. A SPAN or BIG is generally a spanned volume only, as it often contains mismatched types and sizes of hard disk drives.[1]
  • MAID (derived from "massive array of idle drives"): an architecture using hundreds to thousands of hard disk drives for providing nearline storage of data, primarily designed for "Write Once, Read Occasionally" (WORO) applications, in which increased storage density and decreased cost are traded for increased latency and decreased redundancy.

JBOD

JBOD (abbreviated from "just a bunch of disks") is an architecture using multiple hard drives, but not in a RAID configuration, thus providing neither redundancy nor performance improvements. Hard drives may be handled independently as separate logical volumes, or they may be combined into a single logical volume using a volume manager like LVM; such volumes are usually called "spanned".[2]

When combined into a single logical volume, JBOD configurations are also called "linear", as separate hard drives are concatenated in a linear manner to form a logical volume. This configuration provides no redundancy, so the failure of a single hard drive destroys the logical volume as a whole. mdadm, in addition to LVM, supports creation of such non-RAID linear volumes.[3][4]
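As a sketch of how such a linear mapping works (an illustrative model, not mdadm's or LVM's actual on-disk layout), a logical block number can be translated to a member disk and a local offset by walking the member disks in order:

```python
# Illustrative model of a "linear" (spanned) volume: logical blocks are
# laid out across the member disks in order, using each disk's full capacity.

def linear_map(logical_block, disk_sizes):
    """Return (disk_index, local_block) for a logical block number."""
    for i, size in enumerate(disk_sizes):
        if logical_block < size:
            return i, logical_block
        logical_block -= size
    raise ValueError("logical block beyond end of spanned volume")

# Three mismatched disks, sizes in blocks; the volume's capacity is their sum.
disks = [64, 28, 40]
print(sum(disks))             # total spanned capacity: 132 blocks
print(linear_map(63, disks))  # (0, 63): last block of disk 0
print(linear_map(64, disks))  # (1, 0): first block of disk 1
```

Because blocks are simply handed out left to right, losing any one disk removes a contiguous slice of the address space, which is why the whole logical volume is destroyed.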

Concatenation (SPAN, BIG)



Diagram of a SPAN/BIG ("JBOD") setup.

Concatenation or spanning of drives is not one of the numbered RAID levels, but it is a popular method for combining multiple physical disk drives into a single logical disk. It provides no data redundancy. Drives are merely concatenated together, end to beginning, so they appear to be a single large disk. It may be referred to as SPAN or BIG (meaning just the words "span" or "big", not as acronyms).[citation needed]

In the diagram to the right, data are concatenated from the end of disk 0 (block A63) to the beginning of disk 1 (block A64); end of disk 1 (block A91) to the beginning of disk 2 (block A92). If RAID 0 were used, then disk 0 and disk 2 would be truncated to 28 blocks, the size of the smallest disk in the array (disk 1) for a total size of 84 blocks.[citation needed]
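The capacity difference described above can be expressed directly. The disk sizes below are hypothetical (the example only fixes disk 1 at 28 blocks as the smallest member), chosen so the RAID 0 result matches the 3 × 28 = 84 block truncation:

```python
# Capacity of a spanned (SPAN/BIG) volume versus RAID 0 over the same disks.

def span_capacity(disk_sizes):
    # Concatenation uses every disk in full.
    return sum(disk_sizes)

def raid0_capacity(disk_sizes):
    # RAID 0 truncates every member to the size of the smallest disk.
    return min(disk_sizes) * len(disk_sizes)

disks = [64, 28, 36]          # blocks; disk 1 is the smallest member
print(span_capacity(disks))   # 128: nothing is wasted
print(raid0_capacity(disks))  # 84: each disk truncated to 28 blocks
```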

What makes a SPAN or BIG different from RAID configurations is the possibility for the selection of drives. While RAID usually requires all drives to be of similar capacity[lower-alpha 1] and it is preferred that the same or similar drive models are used for performance reasons, a spanned volume does not have such requirements.[1][5]

Implementations

The initial release of Microsoft's Windows Home Server employs Drive Extender technology, whereby an array of independent drives is combined by the OS to form a single pool of available storage. This storage is presented to the user as a single set of network shares. Drive Extender expands on the normal features of concatenation by providing data redundancy through software – a shared folder can be marked for duplication, which signals to the OS that a copy of the data should be kept on multiple physical drives, whilst the user will only ever see a single instance of their data.[6] This feature was removed from Windows Home Server in its subsequent major release.[7]

Greyhole, a disk-pooling application, implements what it calls a "storage pool". This pool is created by presenting to the user, through Samba shares, a logical drive that is as large as the sum of all physical drives that are part of the pool. Greyhole also provides data redundancy through software – the user can configure, per share, the number of file copies that Greyhole is to maintain. Greyhole then ensures that for each file in such shares, the correct number of extra copies is created and maintained on multiple physical drives. The user will only ever see one copy of each file.[8]
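A duplication policy of this kind can be sketched as follows. The drive paths and the hash-based placement are assumptions for illustration only, not Greyhole's actual algorithm; the point is simply that each file's replicas must land on distinct physical drives:

```python
# Hypothetical sketch of a pool-level duplication policy: keep `copies`
# replicas of each file, each replica on a different member drive.

def place_copies(filename, drives, copies):
    """Return the list of drives that should hold a replica of `filename`."""
    if copies > len(drives):
        raise ValueError("cannot place more copies than there are drives")
    # Spread replicas across the pool: hash the name to pick a starting
    # drive, then take the next drives round-robin so all copies differ.
    start = hash(filename) % len(drives)
    return [drives[(start + i) % len(drives)] for i in range(copies)]

drives = ["/mnt/disk1", "/mnt/disk2", "/mnt/disk3"]
print(place_copies("photo.jpg", drives, copies=2))
```

The user-visible share shows one logical file; the extra replicas exist only so that a single-drive failure leaves at least one copy intact.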

MAID

MAID (abbreviated from "massive array of idle drives") is an architecture using hundreds to thousands of hard drives for providing nearline storage of data. MAID is designed for "Write Once, Read Occasionally" (WORO) applications.[9][10][11]

Compared to RAID technology, MAID offers increased storage density and decreased cost, electrical power, and cooling requirements. However, these advantages come at the cost of much increased latency, significantly lower throughput, and decreased redundancy. Low drive utilization rates may actually reduce reliability in consumer-oriented large PATA and SATA drives.[12] Drives designed for multiple spin-up/down cycles (e.g. laptop drives) are significantly more expensive.[13] Latency may be as high as tens of seconds.[14] MAID can supplement or replace tape libraries in hierarchical storage management.[10]

To allow a more gradual tradeoff between access time and power savings, some MAIDs such as Nexsan's AutoMAID incorporate drives capable of spinning down to a lower speed.[15] Large scale disk storage systems based on MAID architectures allow dense packaging of drives and are designed to have only 25% of disks spinning at any one time.[14]
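The idle-drive budget can be modelled as a small controller that never lets more than the configured fraction of drives spin at once. The 25% figure follows the description above; the least-recently-used spin-down choice is an assumption for illustration, not a documented MAID policy:

```python
# Toy model of a MAID controller: at most a fixed fraction of drives may be
# spinning at any one time. Accessing an idle drive spins it up, spinning
# down the least recently used drive if the budget is already exhausted.

class MaidController:
    def __init__(self, n_drives, max_spinning_fraction=0.25):
        self.limit = max(1, int(n_drives * max_spinning_fraction))
        self.spinning = []  # drive ids, least recently used first

    def access(self, drive):
        """Serve a request; return True if it paid a spin-up latency."""
        spun_up = drive not in self.spinning
        if not spun_up:
            self.spinning.remove(drive)   # refresh LRU position
        elif len(self.spinning) >= self.limit:
            self.spinning.pop(0)          # spin down the LRU drive
        self.spinning.append(drive)
        return spun_up

maid = MaidController(n_drives=16)  # budget: 4 drives spinning
print(maid.access(0))               # True: cold access, drive 0 spins up
print(maid.access(0))               # False: already spinning, no penalty
```

The model makes the trade-off visible: requests that stay within the spinning working set are fast, while any access outside it pays the multi-second spin-up latency mentioned above.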

Notes

  1. Otherwise, in most cases only a portion of each drive, equal in size to the smallest member of the RAID set, would be used.

References

  12. Harris, Rick (2007-02-19). "Failure Trends in a Large Disk Drive Population". Google. Retrieved 2012-08-28.
  14. Cook, Rick (2004-07-12). "Backup budgets have it MAID with cheap disk". Retrieved 2008-07-15.
