

The Components of a Computer

This chapter describes the intermediate-sized building blocks of computers, i.e. those which lie between simple gates at one extreme and full microprocessors-on-a-chip at the other. With an understanding of these components we will be able to describe the construction of a working computer in the next chapter.

Most of the circuits below are available as CMOS or TTL MSI packages.

In a microprocessor such as a Pentium III, obviously these packages are not used; however, if you examined the circuit diagram for a microprocessor, you would find similar `building blocks' used throughout the chip.

Multiplexers and routing circuits

Before you proceed, make sure that you have attempted exercises 17-20 of chapter 3.

Multiplexer

A multiplexer (mux) connects, via switches, a number (typically some power of two, 2^n) of input lines to one output line; Figure 4.1 shows an eight-input multiplexer. A, B, C are the selection (or switching, control, or address) lines; when $(A,\ B,\ C) = (0,\ 0,\ 0)$, input D0 - and no other input - is fed through to F.


  
Figure 4.1: Eight input multiplexer
\begin{figure}\centerline{
\hbox{
\psfig{figure=3-11.eps}
}}
\end{figure}

A demultiplexer does the opposite of a multiplexer: it has one input, and many (2^n) outputs. A demultiplexer uses select/address lines just as the multiplexer does, i.e. the appropriate output is addressed by the select lines.

Exercise. Construct a truth-table for the eight-input multiplexer shown in Figure 4.1 and thereby verify that it does as promised, i.e. F =D0 when (A, B, C) = (0, 0, 0), etc.

Exercise. By examining Figure 4.1, derive an expression for F in terms of $A, B, C, D_0, D_1, \ldots D_7$. Check it for $(A, B, C, D_0) =
(0,0,0,1)\ \text{and } (0,0,0,0)$.
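
The behaviour of the multiplexer can be checked against a small simulation. Here, for illustration only, is a Python sketch (the function name and the assumption that A is the most significant select bit are ours, not part of the figure):

def mux8(a, b, c, d):
    """Eight-input multiplexer: d is a list of eight data bits D0..D7;
    a, b, c are the select lines, a taken as the most significant bit."""
    index = a * 4 + b * 2 + c      # (A, B, C) = (0, 0, 0) selects D0
    return d[index]

# (A, B, C) = (0, 0, 0) feeds D0 - and no other input - through to F
assert mux8(0, 0, 0, [1, 0, 0, 0, 0, 0, 0, 0]) == 1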

Decoder

 

A decoder takes n inputs, has 2^n outputs, and, according to the select/address lines, one (and one only) of the outputs goes to a logic 1.

In principle, a decoder operates like a demultiplexer, but with a logic 1 tied to the (single) input all the time, i.e. in a decoder, the selected output goes to 1. Figure 4.2 shows a 3-to-8 decoder.

Exercise. By examining Figure 4.2, derive expressions for the outputs $D_0, D_1, \ldots D_7$ in terms of A, B, C. Check them for $(A, B, C) = (0,0,0)$ and (1,1,1).


  
Figure 4.2: A 3-to-8 decoder
\begin{figure}\centerline{
\hbox{
\psfig{figure=3-13.eps}
}}
\end{figure}

Consider the following application of a decoder: a hotel with 8 (2^3) rooms needs to send room identity, e.g. for a fire-alarm, to a central location. Instead of using eight lines, being knowledgeable about computer science, they decide to use just three lines and code the room identity as a three-bit number. But they want any alarm to light one of eight lights; hence, they need a 3-to-8 decoder.

Similarly, in a computer, it may be wasteful to use eight lines for eight actions: so code them and transmit them as three bits, then decode for use.
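
For illustration only, here is a minimal Python sketch of the hotel example (the function name is ours): the three-bit room code is decoded back into one of eight lamp lines, exactly one of which is 1.

def decode3to8(a, b, c):
    """3-to-8 decoder: exactly one of the eight outputs goes to 1."""
    selected = a * 4 + b * 2 + c          # treat (A, B, C) as a 3-bit number
    return [1 if i == selected else 0 for i in range(8)]

# Room 5 (binary 101) raises the alarm: only lamp 5 lights.
assert decode3to8(1, 0, 1) == [0, 0, 0, 0, 0, 1, 0, 0]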

Arithmetic Circuits

Adder

Half Adder

A half adder, shown in Figure 4.3 together with its truth-table, adds together two single-bit inputs to give a sum bit S and a carry bit C. We see that $S = \bar{A}.B + A.\bar{B} = A\
\mathbf{xor}\ B$; and C = A.B.

Exercise. Think back to our mention of (decimal) addition using carries, e.g. 6804 + 1236 (6 + 4 is 10, put down 0 and carry one, ...). Considering that, discuss whether the half-adder makes sense.


  
Figure 4.3: A half-adder. (a) truth-table; (b) circuit.
\begin{figure}\centerline{
\hbox{
\psfig{figure=3-17.eps}
}}
\end{figure}

Full Adder

The full-adder is a bit more useful; the inputs are the two bits to be added, plus a carry in - from a previous addition. A full-adder may be built from two half-adders and an OR gate, see Figure 4.4.


  
Figure 4.4: A full adder. (a) truth-table; (b) circuit.
\begin{figure}\centerline{
\hbox{
\psfig{figure=3-18.eps}
}}
\end{figure}
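
The gate-level behaviour of both adders follows directly from the truth-tables; here is a minimal Python sketch (function names are ours), with the full adder built from two half adders and an or, exactly as in Figure 4.4:

def half_adder(a, b):
    s = a ^ b          # sum = A xor B
    c = a & b          # carry = A and B
    return s, c

def full_adder(a, b, carry_in):
    s1, c1 = half_adder(a, b)          # add the two input bits
    s, c2 = half_adder(s1, carry_in)   # then add the carry in
    return s, c1 | c2                  # carry out = c1 or c2

# 1 + 1 with carry-in 1 gives sum 1, carry 1 (binary 11)
assert full_adder(1, 1, 1) == (1, 1)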

Arithmetic and Logic Unit (ALU)

 

Getting more ambitious, Figure 4.5 shows a one-bit ALU. It uses two function bits F0, F1 - bottom left-hand corner - to choose between four possible operations: $A\ \mathbf{and}\ B,\ A\ \mathbf{or}\ B,\
\mathbf{not}\ B,\ \text{and arithmetic sum, }\ A + B.$ In addition, via INVA, $\bar{A}$ may be substituted for A in any of the four. Moreover, either A or B, or both, may be enabled via $ENA,\ ENB$; if ENx is 1 then the value for x is enabled (allowed to pass into) the circuit, otherwise 0 is passed in.

Notice the full adder on the bottom right, and the decoder, see section 4.1.2, on the bottom left.


  
Figure 4.5: A one bit ALU
\begin{figure}\centerline{
\hbox{
\psfig{figure=3-19.eps}
}}
\end{figure}
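
To see how the function bits steer the result, here is a behavioural Python sketch of the one-bit ALU. The function name, and the mapping of (F0, F1) codes to the four operations, are our assumptions; the real circuit of Figure 4.5 does the selection with the decoder and enabling and gates, not an if-chain.

def alu1(a, b, carry_in, f0, f1, ena=1, enb=1, inva=0):
    """One-bit ALU slice. Returns (result, carry_out)."""
    a = a if ena else 0            # ENA/ENB gate the operands in
    b = b if enb else 0
    if inva:                       # INVA substitutes not-A for A
        a = 1 - a
    if (f0, f1) == (0, 0):         # A and B
        return a & b, 0
    if (f0, f1) == (0, 1):         # A or B
        return a | b, 0
    if (f0, f1) == (1, 0):         # not B
        return 1 - b, 0
    total = a + b + carry_in       # arithmetic sum via the full adder
    return total & 1, total >> 1   # carry out shown only for the sum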

Connecting one bit ALUs together

We can make an n-bit ALU by connecting n one-bit ALUs together; such an 8-bit ALU is shown in Figure 4.6. In some contexts, such one-bit circuits (or sometimes circuits of more than one bit) are called `bit-slices'.


  
Figure 4.6: An 8 bit ALU constructed from one bit ALUs
\begin{figure}\centerline{
\hbox{
\psfig{figure=3-20.eps}
}}
\end{figure}
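
A sketch of the bit-slice idea, assuming the alu1 function sketched above: n slices are chained by feeding each slice's carry out into the next slice's carry in.

def alu_n(a_bits, b_bits, f0, f1, carry_in=0):
    """n-bit ALU from 1-bit slices; bit 0 (least significant) first."""
    result = []
    carry = carry_in
    for a, b in zip(a_bits, b_bits):
        r, carry = alu1(a, b, carry, f0, f1)
        result.append(r)           # the carry ripples to the next slice
    return result, carry

# 3 + 1 = 4 on a 4-bit ALU; (F0, F1) = (1, 1) assumed to select the sum
assert alu_n([1, 1, 0, 0], [1, 0, 0, 0], 1, 1) == ([0, 0, 1, 0], 0)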

In the next chapter we will use an ALU, viewed functionally as a subsystem, like that in Figure 4.7.

It can compute four functions, selected by the (input) control lines $F_0,\ F_1$. The functions are: $A + B$, $A\ \mathbf{and}\ B$, $A$ `straight through', and $\bar{A}$.

Two condition flags are output, which give the `condition' of the result of the last operation. These are: N, set to 1 if the last operation produced a negative result, and Z, set to 1 if the last operation resulted in zero.


  
Figure 4.7: ALU - Functional view
[ASCII diagram omitted: inputs A and B enter at the top of the ALU block; the output emerges at the bottom.]
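
A behavioural Python sketch of the functional view in Figure 4.7; the word width, the (F0, F1) encoding and the use of the top bit for the N flag are our assumptions.

def alu(a, b, f0, f1, width=16):
    """Functional ALU: one of four functions of A and B,
    returned together with the N and Z condition flags."""
    mask = (1 << width) - 1
    if (f0, f1) == (0, 0):
        result = (a + b) & mask        # A + B
    elif (f0, f1) == (0, 1):
        result = a & b                 # A and B
    elif (f0, f1) == (1, 0):
        result = a                     # A `straight through'
    else:
        result = (~a) & mask           # not A
    n = 1 if result >> (width - 1) else 0   # negative: top bit set
    z = 1 if result == 0 else 0             # zero result
    return result, n, z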

Magnitude Comparator

Figure 4.8 shows a four bit magnitude comparator.


  
Figure 4.8: A four bit comparator
\begin{figure}\centerline{
\hbox{
\psfig{figure=3-14.eps}
}}
\end{figure}

Exercise. Verify that Figure 4.8 does in fact output 1 when the inputs are equal, 0 when unequal. Hint 1: see section 3.5.2, xor is true if and only if the inputs are different. Hint 2: use de Morgan's law to show $\overline{X_0+X_1+X_2+X_3}$ in terms of and, where Xi is the output of the ith xor gate.
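
A minimal Python sketch of the comparator logic (the function name is ours): each pair of bits is xored, and the word-equal output is the nor of the four xor outputs.

def equal4(a_bits, b_bits):
    """Four-bit equality comparator, as in Figure 4.8."""
    xors = [a ^ b for a, b in zip(a_bits, b_bits)]   # 1 where the bits differ
    return 0 if any(xors) else 1                     # nor of the xor outputs

assert equal4([1, 0, 1, 1], [1, 0, 1, 1]) == 1
assert equal4([1, 0, 1, 1], [1, 0, 0, 1]) == 0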

Shifter

Figure 4.9 shows an eight bit shifter. C=1 shifts right, C=0 shifts left.


  
Figure 4.9: An eight bit shifter
\begin{figure}\centerline{
\hbox{
\psfig{figure=3-16.eps}
}}
\end{figure}
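
A behavioural sketch of the shifter, assuming a shift by one place with 0 filling the vacated position and with bits[0] taken as the leftmost (most significant) bit; consult the figure for what happens to the bit shifted out.

def shift8(bits, c):
    """Eight-bit shifter: C = 1 shifts right, C = 0 shifts left."""
    if c == 1:
        return [0] + bits[:-1]    # shift right, 0 into the left end
    return bits[1:] + [0]         # shift left, 0 into the right end

assert shift8([1, 0, 0, 0, 0, 0, 0, 1], 1) == [0, 1, 0, 0, 0, 0, 0, 0]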

Flip-Flops and Latches - Memory

 

Introduction

Up to now, we have dealt with combinatorial logic, i.e. the outputs depend only on the current inputs - the outputs disappear when the inputs disappear. In many cases, definitely in a computer, we require retention of state - memory. Digital circuits with memory are called sequential circuits.

After showing a theoretical model of sequential circuits, we will describe some basic memory circuits - flip-flops, etc. Then section 4.4 will give an overview of larger memory subsystems such as are used as the main memory of computers.

Sequential Circuit as Combinatorial plus Memory

A sequential circuit can be modelled as a separate combinatorial circuit connected to memory or storage, as shown in Figure 4.10. However, this diagram is useful only from a theoretical point of view, and, in practice, the memory and gates are all mixed up.


  
Figure 4.10: Sequential Circuit as Combinatorial + Memory
[ASCII diagram omitted: a combinatorial-logic block connected to a memory/storage block, as described in the text above.]

Set-Reset (SR) Latch

Incidentally, the terms `latch' and `flip-flop' tend to get mixed up; [Tanenbaum, 1999] makes the distinction that latches are level triggered, whilst flip-flops are edge triggered; see section 4.3 below.

Figure 4.11 shows a Set-Reset(SR) latch implemented with nor gates.


  
Figure 4.11: SR Latch (a) State 0 (b) State 1 (c) Truth-table for nor
\begin{figure}\centerline{
\hbox{
\psfig{figure=3-22.eps}
}}
\end{figure}

Analysis

Two inputs: S, R; one Sets - output goes to 1, the other Resets - output goes to 0. Two outputs: $Q,\ \bar{Q}$, which are inverses of one another.

1.
Assume S and R both 0; and, Q=0. Gate 1 (G1) has (0,0) as input, i.e. $\bar{Q} \rightarrow 1$. This 1 goes to Gate 2 (G2) which now has (1,0) as input, so $Q \rightarrow 0$.

2.
Assume S = R = 0; but that now, Q=1. So, (0,0) into G1 gives $\bar{Q}
\rightarrow 0$. And, (0,0) into G2 gives $Q \rightarrow 1$.

3.
The states $Q = \bar{Q} = 0$, and $Q = \bar{Q} = 1$ are inconsistent, so for S = R= 0, Q will remain 0 or 1; it is stable in either state.

4.
Now start at Q = 0, and let S change to 1. (1,0) into G1 gives $\bar{Q}
\rightarrow 0$. This 0 goes to G2, i.e. (0,0) into G2, which gives $Q \rightarrow 1$. Thus, setting S to 1 switches the latch from 0 to 1.

5.
At Q = 0, setting R to 1 has no effect since (1,0) into G2 has the same effect as (1,1).

6.
At Q = 1, $R \rightarrow 1$. (0,1) into G2 gives $Q \rightarrow 0$.

7.
At Q = 1, $S \rightarrow 1$ has no effect. Verify as exercise.

8.
What happens if S = R = 1, i.e. someone is not using the device correctly? While S, R are held at 1, the only `consistent' output state is $Q = \bar{Q} = 0$. As soon as one drops, the output goes into the appropriate stable state. If they drop together, then the stable state is chosen randomly.
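
The analysis above can be checked with a small simulation: start from a stored Q, apply the nor equations repeatedly until the outputs stop changing, and observe the settled state. This is a sketch only; the gate wiring (G1 fed by S and Q, G2 fed by R and not-Q) is taken from the analysis above.

def nor(x, y):
    return 1 - (x | y)

def sr_latch(s, r, q):
    """Settled (Q, not-Q) of a nor-gate SR latch, starting from stored q."""
    q_bar = 1 - q
    for _ in range(4):                 # iterate until the feedback settles
        q_bar_new = nor(s, q)          # G1: inputs S and Q
        q_new = nor(r, q_bar_new)      # G2: inputs R and not-Q
        if (q_new, q_bar_new) == (q, q_bar):
            break
        q, q_bar = q_new, q_bar_new
    return q, q_bar

assert sr_latch(0, 0, 0) == (0, 1)     # S = R = 0 holds the stored 0
assert sr_latch(1, 0, 0) == (1, 0)     # S = 1 sets the latch
assert sr_latch(0, 1, 1) == (0, 1)     # R = 1 resets it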

Clocked SR Latch

It is often important to restrict changes to particular times, specified by a clock pulse - also called enable or strobe. See Figure 4.12. In this case, $S,\ R$ get through the and gates only when the clock pulse is present (1).


  
Figure 4.12: Clocked SR Latch
\begin{figure}\centerline{
\hbox{
\psfig{figure=3-23.eps}
}}
\end{figure}

Clocked D-type Latch

Figure 4.13 shows a D-type latch; this gets rid of the nonsense of S = R = 1 together, i.e. there is only one input, D (Data).


  
Figure 4.13: Clocked D-type Latch
\begin{figure}\centerline{
\hbox{
\psfig{figure=3-24.eps}
}}
\end{figure}

D-Type Edge Triggered Flip-Flop

Enabling/clocking with a level is uncomfortable for engineers. Edges or transitions are better - they are more clearly defined. Hence the D-type edge-triggered flip-flop; see Figure 4.14 for a circuit.


  
Figure 4.14: D-type edge triggered flip-flop
\begin{figure}\centerline{
\hbox{
\psfig{figure=3-26.eps}
}}
\end{figure}

Figure 4.15 shows the symbols used for some D-type latches and flip-flops. CK stands for clock; a circle on it signifies clocking by a 0 level, or falling edge - as opposed to level 1, or rising edge.


  
Figure 4.15: D-type latches and flip-flops
\begin{figure}\centerline{
\hbox{
\psfig{figure=3-27.eps}
}}
\end{figure}

Memory

 

It would be possible to make a (main) memory, or individual registers, from D-type flip-flops, but this would be pretty impractical if you need a number of Megabytes. And it's not only the multi-million flip-flop count that is a problem, but the multi-million lines in and out, not to mention the clock lines.

Figure 4.16 shows the logic diagram for a four-word, three-bit-per-word memory system.


  
Figure 4.16: Four word three-bit per word memory
\begin{figure}\centerline{
\hbox{
\psfig{figure=3-29.eps}
}}
\end{figure}

It is instructive to understand how it works in principle, because all bigger chips operate similarly.

Inputs:
$I_0,\ I_1, I_2$ data in; two address bits: $A_0,\ A_1$ - for 4 words. Note: a single address refers to all 3 bits of a word; you cannot subdivide or access individual bits.

Outputs:
$O_0,\ O_1,\ O_2$ data out.

Control:
CS - chip select; RD - read/write select: if 1 read, if 0 write; OE - output enable.

Operation:

1.
Write. $A_0,\ A_1$ are decoded (see section 4.1.2) and anded with $CS\ \mathbf{and}\ \overline{RD}$ to produce a clock CK on the appropriate row of 3 bits. I0 is on the D-input of all four rows of the first column; ditto $I_1,\ I_2$ on the second and third columns respectively. CK enables the data into the flip-flops of the selected row. Thus writing is finished.

2.
Read. $A_0,\ A_1$ are decoded to select the appropriate row (word). The Q (output) data are enabled onto the $O_i$ lines.

3.
Slight Enhancement. Data in $(I_0,\ I_1,\ I_2)$ and data out $(O_0,\ O_1,\ O_2)$ are never used at the same time (verify as an exercise), so pins on a package can be saved by replacing these six with three, simply called Data $(D_0,\ D_1,\ D_2)$, used as in/out. What we need is an electronic switch that completely disconnects the output or gates when writing, or whenever OE is not set. Tri-state circuits - the triangular `things' on the bottom right-hand corner - provide this facility; see section 4.5. In fact, the outputs of the or gates are connected to the outside world only when CS.OE.RD is true.

This is reflected in the schematic diagram (summary) shown in Figure 4.17.


  
Figure 4.17: Four word three-bit per word memory showing bi-directional data bus
[ASCII diagram omitted: the memory drawn as a single block, with address lines entering on one side, the bi-directional data lines D0, D1, D2 on another, and the control lines CS, RD, OE entering at the bottom.]
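
A behavioural Python sketch of the four word $\times$ three-bit memory; the class name is ours, the bi-directional data lines are modelled by returning the data on a read and None (tri-state, disconnected) otherwise, and the decode-and-enable detail is as in Figure 4.16.

class Memory4x3:
    """Four words of three bits each, with CS, RD and OE control lines."""
    def __init__(self):
        self.words = [[0, 0, 0] for _ in range(4)]

    def access(self, a1, a0, data, cs, rd, oe):
        """One memory cycle: returns the three data bits on a read,
        or None (bus disconnected) otherwise."""
        if not cs:
            return None                      # chip not selected
        addr = a1 * 2 + a0                   # address decoder
        if not rd:                           # RD = 0: write
            self.words[addr] = list(data)
            return None
        if oe:                               # RD = 1 and OE = 1: read
            return list(self.words[addr])
        return None

mem = Memory4x3()
mem.access(1, 0, [1, 0, 1], cs=1, rd=0, oe=0)            # write word 2
assert mem.access(1, 0, None, cs=1, rd=1, oe=1) == [1, 0, 1]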

Memory Mapped Input-output

With little more added, the decoding and enabling schemes shown in Figure 4.16 can be used to handle input-output ports as well as memory. Thus, some memory addresses are reserved, e.g. FFF0 Hex to FFFF Hex, and when, e.g., a memory write is performed to FFF0 Hex, this does not go to memory, but to an output port, which in turn is connected to some output device. Likewise, memory-mapped input.
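
A sketch of the address-decoding idea; the address range FFF0 Hex to FFFF Hex comes from the text, but the port handling itself is hypothetical.

IO_BASE = 0xFFF0     # addresses FFF0..FFFF reserved for I/O ports

def write(address, value, memory, ports):
    """Route a CPU write either to an output port or to ordinary memory."""
    if IO_BASE <= address <= 0xFFFF:
        ports[address - IO_BASE] = value     # goes to the output device
    else:
        memory[address] = value              # ordinary memory write

memory = {}
ports = [0] * 16
write(0xFFF0, 1, memory, ports)    # goes to an output port, not to memory
write(0x1234, 42, memory, ports)   # ordinary store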

Graphics memory

On PC compatibles, the screen graphics are represented by data in memory. This memory resides in the same memory space as ordinary memory. Hence, writing to appropriate (low) memory addresses can change the text or graphics on the screen.

Registers

Conceptually, a register is no different from a main memory word.

A 16-bit Register can store 16 bits, see Figure 4.18. Please note the numbering scheme for bits (now fairly universally agreed).


  
Figure 4.18: A 16-bit register
[ASCII diagram omitted: a row of 16 one-bit cells, numbered from bit 15 at the left down to bit 0 at the right.]

When we talk of CPU registers, we usually mean registers located in the CPU; these may be special purpose - and so not directly addressable by programmers - or general purpose, in which case they are addressed by some alphanumeric code (e.g. AC, or X, or R1). Because of the speed of the memory making up the register, and/or its proximity, data transfer to and from a CPU register is normally an order of magnitude faster than main memory access.

Tri-State

 

Don't worry, you don't have to learn a new calculus for a logic with three states! Simply, as well as 1 and 0, a tri-state buffer can have a third state: disconnected or open-circuit, i.e. it operates purely as an electronic switch. A control line operates the switch. See Figure 4.19.


  
Figure 4.19: Tri-state buffer (a) Non-inverting (b) Effect when Control = 1 (c) = 0 (d) Inverting Buffer
\begin{figure}\centerline{
\hbox{
\psfig{figure=3-30.eps}
}}
\end{figure}
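
The third, disconnected state can be modelled as None. Here is a sketch (the convention is ours) of a non-inverting tri-state buffer and of several such buffers sharing one bus line:

def tristate(data, control):
    """Non-inverting tri-state buffer: output follows data when control = 1,
    otherwise the output is disconnected (modelled as None)."""
    return data if control else None

def bus_line(drivers):
    """Resolve one bus line: at most one driver may be enabled at a time."""
    active = [d for d in drivers if d is not None]
    assert len(active) <= 1, "two talkers on the bus at once"
    return active[0] if active else None

# Only the second buffer is enabled, so its value appears on the line.
assert bus_line([tristate(1, 0), tristate(0, 1), tristate(1, 0)]) == 0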

Buses

Crudely, a bus is a pipeline along which data can flow; but it is usually not as simple as connecting two devices with wires: the bus is normally multi-purpose and can be used by a number of devices, and the devices must take their turn; thus, in addition to the physical connection, we need to define rules - these rules are called the bus protocol.

In general, you can have many devices connected by the same bus. Like a group of people talking - or, an even closer analogy, a group of people on a telephone conference - some order has to be preserved. Not everyone can talk at once. A bus has no problem with many listeners - but there can be only one talker at any one time - a bit like a lecture!

Essentially, bus = physical connection + protocol.

In its simplest form a bus is controlled by a master, who handles requests from slaves to use the bus.

The complexity of the protocol depends on how general-purpose the design of the bus is. Internal computer buses can be simple; external buses, which may be used for printers, analogue interfacing, etc., need a well specified protocol.

For the moment, think of a bus as a pipe along which data flow, with access to the bus controlled by a system of `taps'; the taps are often in the form of multiplexers on the input to the bus, and tri-state buffers on the output.

Figure 4.20 shows a 16-bit register connected to buses C, A and B.

The signals OEB, OEC open tri-state buffers to allow the contents of the register onto bus B, or bus C, or both; the controller which handles OEB, OEC ensures that this register will never be enabled onto a bus (B or C) at the same time as any other register is enabled onto that bus - let us reiterate: a bus can tolerate no more than one talker.

Whilst one must use tri-state buffers to output-enable each register that is connected to a bus, a multiplexer is normally sufficient to select between multiple buses capable of writing to a register. In addition, a clock (CK) determines when the data actually get clocked into the register.

Note the shorthand for a multi-line bus.


  
Figure 4.20: 16-bit register connected to buses A, B and C
[ASCII diagram omitted: a 16-bit register connected to buses A, B and C, with control lines CK, OEB, OEC at the bottom; ------/------ denotes a 16-line bus.]
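
A behavioural sketch of the arrangement in Figure 4.20 (the class name, and the assumption that bus A feeds the register's inputs, are ours): the register is loaded on a clock pulse, and drives bus B and/or bus C only when the corresponding output enable is set.

class Register16:
    def __init__(self):
        self.value = 0

    def clock(self, bus_a):
        self.value = bus_a & 0xFFFF      # CK: latch the 16 bits on bus A

    def drive(self, oeb, oec):
        """Tri-state outputs onto buses B and C (None = disconnected)."""
        bus_b = self.value if oeb else None
        bus_c = self.value if oec else None
        return bus_b, bus_c

r = Register16()
r.clock(0x1234)
assert r.drive(oeb=1, oec=0) == (0x1234, None)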

ROM and RAM

ROM = Read-only-memory. RAM = Random-access-memory.

But, unfortunately, the names are misleading. Both are in fact Random Access, or Direct Access - as opposed to Sequential Access. ROM is so-called because, usually, it is only ever written to once, and thereafter is only read (i.e. Read Only).

Random Access versus Sequential Access

Random access means that you can access any memory cell on demand; you can read the addresses in any order.

Sequential access means that to get to memory address N, you have to read addresses $1,\ 2,\ 3,\ \ldots$ up to N-1, and finally N.

Sequential access is more common in disk and tape files; in that case, a record contains, not only its own data, but a pointer to the next record. In a random access disk file, there is a table, at the beginning of the file, giving the pointers to all records.

The circuit discussed in section 4.4 is a four word $\times
3$-bit RAM; you can read and write, and both read and write are random access.

A ROM is logically quite similar, except that you cannot write it in the normal way: either it is fixed during manufacture, or it is written with special equipment - PROM, Programmable ROM. ROM stays the same even when the power is off: it is non-volatile. Thus: RAM: read/write, volatile; ROM: read only, non-volatile.

Timing and the Clock

Introduction

Nearly everything in a computer happens on the rising (or falling - it depends on convention) edge of a pulsed signal (the computer's equivalent of a `beat' of an orchestra conductor's hand). Thus, we need a clock, which produces a periodic sequence of pulses. The period - or cycle time - is typically around 0.01 microsecond, i.e. $0.01 \times 10^{-6}$ secs, which is 0.01 of a millionth of a second.

A period of 0.01 microsecs implies a clock rate (or frequency) of 100 MHz; 100 MegaHertz is $100 \times 10^{6}$ cycles per second.

Frequency and Period of Periodic Events

In what follows, times (t and T) are measured in seconds, and frequencies, f, in Hertz (Hz); Hertz is a synonym for cycles per second; Hertz is credited with the discovery of radio-waves. The period - usually denoted T - of a periodic event (a repetitive event) is the time between the beginning of one event and the beginning of the next; e.g. if event 0 starts at t = 0.0, event 1 at t = 0.1, event 2 at t = 0.2, etc., the period is T = 0.1 seconds.

The frequency (or rate) refers to the number of events that happen in one second; thus the events above have a frequency of 10 cycles per second, or 10 Hz.

It is a simple matter to convert from period (T) to frequency (f):

\begin{displaymath}f = \frac{1}{T}\ Hz.
\end{displaymath}

where T is measured in seconds. And,

\begin{displaymath}T = \frac{1}{f}
\end{displaymath}
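
The conversions in the exercises below can be checked in a line or two of Python (times in seconds, frequencies in Hz):

period = 2e-9                    # 2 nsec
print(1 / period)                # about 5e8 Hz, i.e. 500 MHz

frequency = 1000e6               # 1000 MHz
print(1 / frequency)             # about 1e-9 s, i.e. 1 nsec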

We need shorthand for large frequencies and small times; it is common practice to deal in multiples of 10^3:

kilo (k) = 10^3, Mega (M) = 10^6, Giga (G) = 10^9; milli (m) = 10^-3, micro ($\mu$) = 10^-6, nano (n) = 10^-9.

Note the difference between m - milli-, 10^-3, and M - Mega, 10^6!

Exercise. If the period is 0.002 $\mu$sec, or $2\ nsec$, what is the clock rate? Ans: $500\ MHz$.

Exercise. If you wanted a clock rate of 1000 MHz what is the period? Ans: 1 nsec.

Clock Periods and Instruction Times

Q. If you had a 750 MHz PC, i.e. one on which the clock `beats' at a rate of 750 million beats per second, could it do 750 million adds per sec.? A. Definitely no. There are two main reasons:

Exercises

1.
The following equation describes a two-input multiplexer: when A is false the output is (the same as) D0, when A is true the output is (the same as) D1: $D_0.\bar{A} + D_1.A$.

(a) Verify this statement using a truth table.

(b) Draw a diagram for a two-input multiplexer - see Figure 4.1.

(c) A four-input multiplexer will have four inputs - and one output; how many control lines (A, B, C, ...) are needed? Again, see Figure 4.1.

(d) Sketch a diagram for a four-input multiplexer.

2.
Draw a diagram of a two-input multiplexer that uses on/off switches.

3.
How would you use two xors, an or and an inverter (not) to create a circuit to compare two two-bit numbers? Hint: see Figure 4.8.
4.
A single input decoder (see Figure 4.2 for a 3 input decoder) takes one input (say A) and has two outputs D0, D1; for A = true, D1 = 1, and D0 = 0; A = false, vice-versa; (a) Give the equations for D0, D1; (b) Based on Figure 4.2, draw a diagram of a 1-to-2 decoder.

5.
(a) Draw a block diagram for a one-bit full adder: a rectangle with the (three) inputs coming in at the top, and the (two) outputs emerging from the bottom.

(b) Hence, or otherwise, draw a block diagram for a two-bit full adder.

(c) Hence, or otherwise, draw a block diagram for an n-bit full adder - show only the lower two components and the top one.

6.
See Figure 4.5; note that the description at the bottom of page 62 should be amended to say: "The four functions are: $A\ \mathbf{and}\ B, A
\mathbf{or}\ B, \bar{B}, A + B\ (sum)$".

(a) Convince yourself that the decoder subsystem (bottom left-hand corner) does its work properly; name the output lines, respectively from the top, Fand, For, FcompB, Fsum, and determine what values of (F0, F1) generate these signals.

Note the use of and gates to enable the appropriate outputs into the output or gate.

(b) ENA, ENB are used to enable (or the opposite - block) A and B into the logic and adder units; likewise INVA is used to cause A to be replaced by $\bar{A}$. Explain qualitatively how, using F0, F1 together with ENA, ENB, INVA, you would generate: $\bar{A}\ \mathbf{or}\ B$, A `straight-through', B `straight-through', $\bar{B}$ `straight-through'.

7.
Verify that Figure 4.8 does in fact output 1 when the inputs are equal, 0 when unequal. Hint 1: xor is true if and only if the inputs are different. Hint 2: use de Morgan's law to show $\overline{X_0+X_1+X_2+X_3}$ in terms of and, where Xi is the output of the ith xor gate.
8.
Design a circuit (as simple as possible, hint, hint!) that will test if a 4-bit number is zero.

9.
Examine the 4 word $\times$ 3-bit memory in Figure 4.16, which uses four rows $\times$ 3 columns of 1-bit cells; (a) how many rows and columns for $256 \times 8$? For $1024 \times 4$?

10.
The 4 word $\times$ 3-bit memory cell in Figure 4.16 uses 22 and gates and 3 ors. If the circuit were expanded to $256 \times 8$, how many of each?

11.
(a) Draw a block diagram which summarises Figure 4.16; (i) as inputs to the left of the rectangle, show two address lines A0, A1; (ii) as inputs to the bottom of the rectangle show the three control lines CS, RD, OE; and (iii) to the right, as bi-directional input-output, show three data lines D0, D1, D2.

(b) Draw a similar diagram for $256 \times 8$ memory.

12.
(a) How many address lines are required to address 64 Megabytes of memory?

(b) How many memory locations can be addressed using 12 address lines?

13.
As memory volumes on chips get larger, you need more address lines (n address lines for 2n cells). Devise a method for using less than n address lines.

14.
Implement an 8-bit $\times$ 8-bit multiplier using a ROM. Describe: (a) the number of locations required; (b) hence, the number of address lines; (c) the number of bits in each word.

15.
Find a type of memory that is not random access.

16.
Buses, section 4.6. Explore the following. Family (of 4) A in Belfast, family (of 8) B in Dublin. Think of the steps that person A1 must go through to speak to person B5; and, within a pairwise dialogue, the manners (protocol) that must be observed in order for them to communicate effectively. Think of the telephone system providing a bus. Draw a diagram.

17.
What sort of device/circuit is used to permit only one (of many) devices (e.g. registers) to put their data on a bus?

18.
Refer back to Chapter 1, Figure 1.5. Assume that, in the bus connecting MAR and MBR with memory, we have a 12-bit address part and a 16-bit data part. However, more lines will be required - make some suggestions.

19.
A normal arithmetic-and-logic-unit (ALU) has two inputs and an output, i.e. $input1 + input2 \rightarrow output$; what other input(s) and output(s) should an ALU have?

20.
(a) On the original IBM PC (using an Intel 8086), the memory space was limited to 1 Megabyte. How, why? Actually, only 640K was available for normal program and data memory - what was the remainder used for?

21.
(a) What is cache memory? (b) On modern processors, where is cache normally located? (c) How can cache improve the performance of a system? (d) In order to maximally benefit from cache, how should programs be organised? (as far as possible) - consider both program and data. We will return to this matter in later chapters.

