Bit slicing

Bit slicing is a technique for constructing a processor from modules of smaller bit width: each component processes one bit field, or "slice", of an operand. Grouped together, the slices can process the full word length chosen for a particular software design.

Operational details

Bit slice processors usually include an arithmetic logic unit (ALU) of 1, 2, 4 or 8 bits and control lines (including carry or overflow signals that are internal to the processor in non-bitsliced CPU designs).

For example, two 4-bit ALU chips could be arranged side by side, with control lines between them, to form an 8-bit ALU. Four 4-bit ALU chips could be used to build a 16-bit ALU, and eight to build a 32-bit ALU. The designer can add as many slices as required to handle longer word lengths.
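The carry-chaining described above can be sketched in software. The following is a minimal, hypothetical model (the function names and the addition-only ALU are illustrative assumptions, not any particular chip's behavior): two 4-bit slices are cascaded into an 8-bit adder by feeding the low slice's carry-out into the high slice's carry-in.

```python
def alu4_add(a, b, carry_in):
    """A hypothetical 4-bit ALU slice: add two 4-bit operands plus an
    incoming carry; return the 4-bit sum and the carry-out line."""
    total = (a & 0xF) + (b & 0xF) + carry_in
    return total & 0xF, total >> 4  # (sum slice, carry-out)

def add8_from_slices(a, b):
    """Chain two 4-bit slices side by side to form an 8-bit adder:
    the low slice's carry-out feeds the high slice's carry-in."""
    lo_sum, carry = alu4_add(a & 0xF, b & 0xF, 0)
    hi_sum, carry_out = alu4_add(a >> 4, b >> 4, carry)
    return (hi_sum << 4) | lo_sum, carry_out

# e.g. add8_from_slices(0x7F, 0x01) returns (0x80, 0)
```

A 16- or 32-bit adder follows the same pattern with four or eight slices; in hardware, the carry line between chips is exactly the control signal mentioned above.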

A microsequencer or control ROM supplies the data and control signals that regulate the operation of the component ALUs. Examples of bit-slice microprocessor modules can be seen in the Intel 3000 family,[citation needed] the AMD Am2900 family,[citation needed] the National Semiconductor IMP-16 and IMP-8 families,[citation needed] and the 74181.[citation needed]

Historical necessity

Bit slicing, although not known by that name at the time, was also used in computers before large-scale integration (LSI, the predecessor of today's very-large-scale integration, or VLSI, circuits). The first bit-sliced machine was EDSAC 2, built at the University of Cambridge Mathematical Laboratory in 1956–1958.[citation needed]

From the mid-1970s to the late 1980s there was debate over how much bus width a given computer system needed to function. Silicon chip technology and parts were much more expensive than they are today, and using multiple simpler, and thus less expensive, ALUs was seen[by whom?] as a cost-effective way to increase computing power. While 32-bit microprocessor architectures were being discussed at the time,[by whom?] few were in production.[citation needed]

At the time 16-bit processors were common but expensive, and 8-bit processors, such as the Z80, were widely used in the nascent home computer market.

Combining components to produce bit slice products allowed engineers and students to create more powerful and complex computers at a more reasonable cost, using off-the-shelf components that could be custom-configured. The complexities of creating a new computer architecture were greatly reduced when the details of the ALU were already specified (and debugged).

The main advantage was that bit slicing made it economically possible for smaller processors to use bipolar transistors,[citation needed] which switch much faster than NMOS or CMOS transistors.[citation needed] This allowed much higher clock rates where speed was needed, for example in DSP functions or matrix transformations, or, as in the Xerox Alto, a combination of flexibility and speed before single-chip CPUs could deliver both.

Modern use

In more recent times, the term bit slicing was re-coined by Matthew Kwan[1] to refer to the technique of using a general-purpose CPU to implement multiple parallel, simple virtual machines, using ordinary logic instructions to perform single instruction, multiple data (SIMD) operations. This technique is also known as SWAR, SIMD within a register.

This usage was initially in reference to Eli Biham's 1997 paper A Fast New DES Implementation in Software,[2] which achieved significant performance gains for DES by this method.
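The idea behind this software sense of bit slicing can be sketched as follows. This is an illustrative assumption, not Biham's actual DES code: each machine word holds one bit position from each of 64 independent "lanes", so a single logic instruction acts on all 64 lanes at once. The gate modeled here, a 1-bit multiplexer, is the kind of primitive from which a bitsliced cipher implementation is built.

```python
# SWAR-style bit slicing: each 64-bit integer packs one bit from each
# of 64 independent inputs, so one logic operation processes 64 lanes.
MASK = (1 << 64) - 1

def bitsliced_mux(sel, a, b):
    """For all 64 lanes in parallel, compute the 1-bit multiplexer
    out = a if sel else b using only bitwise logic instructions."""
    return ((sel & a) | (~sel & b)) & MASK

# Cross-check one random batch against a per-lane scalar evaluation.
import random
sel, a, b = (random.getrandbits(64) for _ in range(3))
out = bitsliced_mux(sel, a, b)
for lane in range(64):
    s, x, y = (sel >> lane) & 1, (a >> lane) & 1, (b >> lane) & 1
    assert (out >> lane) & 1 == (x if s else y)
```

The performance gain comes from amortization: three logic instructions here evaluate the multiplexer for 64 inputs, where a table-driven implementation would process them one at a time.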

References

  2. Eli Biham (1997). "A Fast New DES Implementation in Software". Fast Software Encryption (FSE 1997).

This article is based on material taken from the Free On-line Dictionary of Computing prior to 1 November 2008 and incorporated under the "relicensing" terms of the GFDL, version 1.3 or later.
