Computer Architecture
The Arithmetic and Logic Unit (ALU) is the 'core' of any processor: it is the unit that performs the calculations. A typical ALU has two input ports (A and B) and a result port (Y). It also has a control input telling it which operation (add, subtract, AND, OR, etc.) to perform, and additional outputs for condition codes (carry, overflow, negative, zero result).
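As a rough illustration, the behaviour of such an ALU might be sketched in C as below. The opcode names, 32-bit operand width and flag definitions here are assumptions made for the example only, not those of any particular processor.

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative opcode set and condition codes - not any real processor's. */
typedef enum { ALU_ADD, ALU_SUB, ALU_AND, ALU_OR } alu_op;

typedef struct {
    bool carry, overflow, negative, zero;    /* condition-code outputs */
} alu_flags;

/* Two operand inputs (a, b), an operation select (op), one result (y),
 * plus the condition codes written through cc. */
uint32_t alu(uint32_t a, uint32_t b, alu_op op, alu_flags *cc)
{
    uint32_t y = 0;
    uint64_t wide;                           /* one bit wider than the operands */

    switch (op) {
    case ALU_ADD:
        wide = (uint64_t)a + b;
        y = (uint32_t)wide;
        cc->carry    = (wide >> 32) != 0;
        /* signed overflow: same signs in, different sign out */
        cc->overflow = ((~(a ^ b) & (a ^ y)) >> 31) != 0;
        break;
    case ALU_SUB:
        wide = (uint64_t)a - b;
        y = (uint32_t)wide;
        cc->carry    = (wide >> 32) != 0;    /* borrow out */
        cc->overflow = (((a ^ b) & (a ^ y)) >> 31) != 0;
        break;
    case ALU_AND:
        y = a & b;
        cc->carry = cc->overflow = false;
        break;
    case ALU_OR:
        y = a | b;
        cc->carry = cc->overflow = false;
        break;
    }
    cc->negative = (y >> 31) != 0;           /* sign bit of the result */
    cc->zero     = (y == 0);
    return y;
}
```

In real hardware, of course, all of this is a single block of combinatorial logic evaluated in parallel; the switch statement only models its selection behaviour.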
Aside: According to some sources, the most popular processors are not those found in your IBM PC, but the 4-bit microprocessors that control your washing machine, telephone handset, computer keyboard, parts of your car, etc.
[Chip micrograph] Note the large area allocated to floating point in the lower right.
More recently, transistor geometries have shrunk to the point where it's possible to get 10⁷ transistors on a single die. Thus it becomes feasible to include a floating point ALU on every chip - probably more economical than designing separate processors without floating point capability. In fact, some manufacturers will supply otherwise identical processors with and without floating point capability. This can be achieved economically by marking chips which had defects only in the region of the floating point unit as "integer-only" processors and selling them at a lower price for the commercial information processing market! This has the desirable effect of increasing the semiconductor yield quite significantly: a floating point unit is quite complex and occupies a considerable area of silicon - look at a typical chip micrograph - so the probability of defects in this area is reasonably high.
In simple processors, the ALU is a large block of combinatorial logic
with the A and B operands and the opcode (operation code) as inputs and a
result, Y, plus the condition codes as outputs. Operands and opcode are
applied on one clock edge and the circuit is expected to produce a result
before the next clock edge. Thus the propagation delay through the ALU
determines a minimum clock period and sets an upper limit to the clock
frequency.
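To put some purely illustrative numbers on this: if the worst-case propagation delay through the ALU were 10 ns, then the clock period could be no shorter than 10 ns, limiting the clock frequency to 1/(10 ns) = 100 MHz.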
In advanced processors, the ALU is heavily pipelined to extract higher instruction throughput. Faster clock speeds are now possible because complex operations (e.g. floating point operations) are done in multiple stages: each individual stage is smaller and faster.
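Again with illustrative figures only: a floating point adder needing 12 ns as a single combinatorial block would limit the clock to about 83 MHz. Split into four pipeline stages of roughly 3 ns each, the same adder permits a clock approaching 333 MHz and, once the pipeline is full, a new addition can be started every cycle - even though each individual addition still takes four cycles to complete.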
Note for hackers: A small "industry" has grown up around the phenomenon of "clock-chipping" - the discovery that a processor will generally run at a frequency somewhat higher than its specification. Of necessity, manufacturers are somewhat conservative about the performance of their products and have to specify performance over a certain temperature range - for commercial products this is commonly 0°C to 70°C. A reputable computer manufacturer will also be somewhat conservative, ensuring that the temperature inside the case of the computer normally never rises above, say, 45°C. This allows sufficient margin for error in both directions - chips sometimes degrade with age and computers may encounter unusual environmental conditions - so that systems will continue to function to their specifications.

Clock-chippers rely on the fact that propagation delays usually increase with temperature, so that a chip specified at x MHz at 70°C may well run at 1.5x MHz at 45°C. Needless to say, this is a somewhat reckless strategy: your processor may function perfectly well for a few months in winter - and then start failing, occasionally at first and then more regularly as summer approaches! The manufacturer may also have allowed for some degradation with age, so that a chip specified for 70°C now will still function at x MHz in two years' time. Thus a clock-chipped processor may start to fail after a few months at the higher speed - again the failures may be irregular and occasional initially, and occur with greater frequency as the effects of age show themselves. Restoring the original clock chip may be all that's needed to give you back a functional computer!
Key terms