Introduction to Microprocessors and Microcomputers


History of computers


Hundreds of different kinds of computers have been designed and built during the evolution of the modern digital computer. Most have been forgotten, but a few have had a significant impact on modern ideas.


The Mechanical Era (Zeroth Generation) (1642 – 1945)


        The first mechanical calculator was designed in 1623 by Wilhelm Schickard, a professor at the University of Tübingen.

        The first working mechanical calculator was built in 1642 by Blaise Pascal, a French scientist. The calculator could only perform addition and subtraction. He built the machine to assist his father in the calculation of taxes.

        Around 1671, the German mathematician Baron Gottfried Wilhelm von Leibniz extended Pascal's machine to perform multiplication and division automatically.

        In 1823, Charles Babbage tried to build a mechanical computing machine, which he named the Difference Engine. It was designed to compute tables of functions such as logarithms and trigonometric functions, representing each function by a polynomial.

        In 1830, Babbage improved on the machine and named it the Analytical Engine. It contained all the components of modern digital computers:

i)        The store (equivalent to memory)

ii)       The mill (equivalent to CPU)

iii)     The input section using punched card

iv)     The output section using punched card

This was a programmable machine. Ada Lovelace was the programmer hired to work on Babbage's engine. Because of wear and tear in its wheels and gears, the machine could not be put to practical use. Charles Babbage is considered the father of the modern computer.

        In the 1930s, Konrad Zuse, a German engineering student, built a series of automatic calculating machines using electromagnetic relays. He was unaware of Babbage's work. His machines were destroyed in the bombing of Berlin in 1944, so they had little influence on subsequent machines.

        In 1944, Howard Aiken, a professor of physics at Harvard University, completed a general-purpose electromechanical digital computer, called the "Automatic Sequence Controlled Calculator", which was an improvement on Babbage's work. It was later renamed the Harvard Mark I. It had 72 words of 23 decimal digits each and used punched paper tape for input.

        After the Mark II was developed, relay computers became obsolete. The electronic era had begun.




The Electronic Era (After 1945)


        The first electronic computer using valves (vacuum tubes) was developed by John V. Atanasoff in the late 1930s at Iowa State University. It contained an add-subtract unit and used about 300 valves. Capacitors were used as memory, and binary numbers were used for operation.

        In 1943, COLOSSUS was developed in England to decode intercepted German messages during the Second World War.

        The first general-purpose electronic computer was the ENIAC (Electronic Numerical Integrator and Computer), developed by John W. Mauchly and J. Presper Eckert at the University of Pennsylvania, with John von Neumann as a consultant on the project. It was a large machine, weighing 30 tons and consuming 140 kW of power, and consisted of 18,000 vacuum tubes and 1,500 relays. It was completed in 1946 and had 20 registers, each capable of holding a 10-digit decimal number.

        Later, John von Neumann and his followers developed the EDVAC (Electronic Discrete Variable Automatic Computer), completed in 1951. It was built on the stored-program concept, used the binary number system for its operation, and also had secondary memory.

        Von Neumann and his colleagues designed and built a new computer, called the IAS machine, at the Institute for Advanced Study in Princeton during 1946-52. This machine had the essential features of the modern computer. It used a random-access main memory built from cathode-ray tubes, and its CPU contained several high-speed registers (made of vacuum tubes) to store operands and results. It served as the prototype for most subsequent general-purpose computers.

        After the invention of the transistor in 1947, vacuum tubes were gradually replaced. Transistorized computers used transistors as CPU components; they consumed less power and were faster than computers made from vacuum tubes. Computers using transistors include the IBM 7090, IBM 7094, PDP-5 and PDP-8 (which also had an IC version).

        After the invention of the integrated circuit (IC) in about 1958-1959, computers started to use ICs as components, replacing discrete transistors. Computers using ICs include the IBM 370, PDP-8 and PDP-11.

        By 1970, all new computers used SSI (small-scale integration) and MSI (medium-scale integration) ICs as CPU components and LSI (large-scale integration) chips for main memory.

        The first LSI chips were introduced in 1970 in the form of computer memory units.

        With advances in IC technology, VLSI chips were soon developed and the whole CPU was integrated on a single IC chip. A CPU integrated on a single IC chip is called a microprocessor.

        The first microprocessor was the 4004, developed by Intel Corporation in 1971.

        SSI, MSI, LSI and VLSI are classifications of ICs based on component density. An SSI chip contains 1-100 components (usually transistors), MSI 100-1,000 components, LSI 1,000-10,000 components and VLSI more than 10,000 components.
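As a quick illustration, the density ranges above amount to a simple threshold rule. A sketch in Python (the function name `ic_scale` is ours, not a standard term):

```python
def ic_scale(component_count):
    """Classify an IC by component count, using the ranges given above."""
    if component_count <= 100:
        return "SSI"
    elif component_count <= 1000:
        return "MSI"
    elif component_count <= 10000:
        return "LSI"
    else:
        return "VLSI"

# The 4004's roughly 2,300 transistors put it in the LSI range:
print(ic_scale(2300))  # LSI
```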

        Computers built after 1971 used microprocessors as their CPUs.




Computer Generations


The electronic era can also be categorized by computer generations, conventionally divided into five generations as described below.


First Generation (1946 – 1954)


Digital computers using electronic valves (vacuum tubes) as their CPU components are known as first-generation computers. The high cost of vacuum tubes prevented their use for main memory, so less costly but slower devices, such as acoustic delay lines, which store information in the form of propagating sound waves, were used instead. CRT memories were also used in first-generation computers, and magnetic tapes and magnetic drums served as secondary memory. First-generation computers were programmed in assembly language and used fixed-point arithmetic. Some examples of first-generation computers are the IBM 701, IBM 704, IBM 709, ENIAC, EDVAC and UNIVAC.


Second Generation (1955 – 1964)


The second-generation computers used transistors for their CPU components and ferrite cores for main memory. Magnetic disks and tapes were used for secondary memory. High-level languages such as FORTRAN, ALGOL and COBOL were used for programming. Hardware for floating-point arithmetic was incorporated, and an I/O processor was included to control input/output operations, relieving the CPU of many time-consuming routine tasks. Examples of second-generation computers are the IBM 1620, IBM 7090, IBM 7094 I, IBM 7094 II, PDP-1, PDP-5 and PDP-8.


Third Generation (1965 – 1974)


The third-generation computers used ICs (SSI and MSI) for CPU components. Early third-generation computers used magnetic core memory, but later on semiconductor memories (RAMs and ROMs), built as LSI chips, were used. Magnetic disks and tapes were used as secondary memory. Third-generation computers incorporated cache memory, the fastest memory, used to hold frequently used instructions and data. Microprogramming, parallel processing, multiprocessing, multiprogramming and multiuser systems were introduced, along with the concept of virtual memory. Examples of third-generation computers are the IBM 370 series, CDC 7600, PDP-11, CYBER-175 and STAR.


Fourth Generation (1974 – present)


Fourth-generation computers use VLSI chips for the CPU, memory and supporting logic, with a microprocessor as the CPU. 32-bit and 64-bit microprocessors are available today, and a powerful computer may use more than one microprocessor. The latest processors contain more than 20 million transistors, and their clock frequencies have reached 2 GHz. Besides the CPU proper, a microprocessor contains many other essential components, such as an FPU (floating-point unit), an MMU (memory management unit), and first-level and second-level cache memory. These computers use semiconductor memory (RAM and ROM); 512 MB memory chips and disk drives of more than 40 GB are available today. Optical memories such as CD-ROMs and DVD-ROMs are widely used as read-only backup memory, and optical disk drives such as CD-R/W and DVD-R/W are available to read and write optical disks.

BASIC, C, C++, Java, Pascal, etc. are used as programming languages. Fourth-generation languages (4GLs) such as ORACLE and SYBASE are used for database management. Expert systems, which use artificial intelligence, have been developed. Operating systems used on fourth-generation computers include Windows 95, Windows 98, UNIX, Linux, Windows NT, Windows ME, Windows 2000 and Windows XP.

Examples of recent microprocessors are the Pentium II, Pentium III, PowerPC, DEC's Alpha, Sun's UltraSPARC, and AMD's K6, K7 and Athlon.




Fifth Generation (Forthcoming)


The fifth-generation computers are still in the development stage. Japan and the USA have undertaken projects to design and develop such computers. These computers will use ULSI (ultra-large-scale integration) chips, which contain millions of components on a single IC. Such computers will use intelligent programming, knowledge-based problem-solving techniques, high-performance multiprocessor systems and improved human-machine interfaces. Their input and output will be in the form of speech and graphics, and they will be intelligent enough to understand human languages. Fifth-generation computers will use intelligent software based on artificial intelligence concepts.



The Stored Program Concept

Although the computers available today are highly complex, they are still based on a design principle first proposed by Dr. John von Neumann in 1946. Now taken for granted by most computer users, von Neumann's idea defined the architecture to be used by all computers for the next 50 years.

Von Neumann Machine

One of the first digital computers was a machine called ENIAC (Electronic Numerical Integrator and Computer). It was designed and built at the Moore School of Electrical Engineering at the University of Pennsylvania in 1946. ENIAC measured over 18 ft high and 80 ft long. It contained nearly 17,500 vacuum tubes, weighed more than 30 tons, and required 1,500 square feet of floor space. More than 200,000 man-hours went into its construction (500,000 solder connections alone were required). It was programmed by setting up to 6,000 switches and connecting cables between the various units of the computer.

While ENIAC was under construction, Dr. John von Neumann, also of the Moore School of Electrical Engineering, wrote a paper in collaboration with A. W. Burks and H. H. Goldstine that would define the architecture to be used by nearly all computers from that day on. Now called the stored program concept, von Neumann's suggestion was that, rather than rewiring the computer for each new task, the program instructions should be stored in a memory unit just like the data. The resulting computer would then be software programmable rather than hardware programmable.

One of the first stored program computers to be built was called EDVAC (Electronic Discrete Variable Automatic Computer). It was completed in 1951 and had a memory capacity of 1,000 words of 10 decimal digits each. EDVAC was superior to ENIAC because it could be programmed much more efficiently and used a paper tape input device. At about this same time, the first random access core memory appeared. The first generation of computers was now well under way.

The Processing Cycle

The block diagram of a basic stored program computer is shown in the following figure. It has three major parts: (1) the central processing unit, or CPU, which acts as the "brain" coordinating all activities within the computer; (2) the memory unit, where the program instructions and data are temporarily stored; and (3) the input/output, or I/O, devices, which allow the computer to input information for processing and then output the result.

At one time, the CPU of a computer was constructed using many different logic circuits spread over several circuit boards. Today all of this circuitry has been reduced to a tiny (typically 1/4 inch on a side) silicon chip, or integrated circuit (IC), called the microprocessor. The entire computer, including microprocessor, memory, and I/O, is called a microcomputer. The Intel microprocessors derive their heritage from a chip whose part number was 8086. Subsequent versions of this chip have been numbered 80286, 80386, and 80486. The term 80x86 is therefore used to describe the family of compatible Intel microprocessors.

The basic timing of the computer is controlled by a square wave oscillator or clock generator circuit. This signal is used to synchronize all activities within the computer, and it determines how fast the program instructions can be fetched from memory and executed.

The CPU contains several data registers (groups of flip-flops). Some are general purpose and are used for storing temporary information; others are special purpose. The accumulator, for example, is reserved for performing complex math operations such as multiply and divide. On 80x86 microprocessors, all data intended for the I/O devices must pass through this register.

The basic processing cycle begins with a memory fetch or read cycle. The instruction pointer (IP) register (also called the program counter) holds the address of the memory cell to be selected; that is, it "points" at the program instruction to be fetched. In this example, IP is storing the address 672,356, and the binary equivalent of this address is output onto the system address bus lines and routed to the memory unit.

The memory unit consists of a large number of storage locations each with its own unique address. Because the CPU can randomly access any location in memory, the term random access memory (RAM) is often used. In this example we assume each memory location is 8 bits wide, referred to as a byte. This memory organization is typical for most microprocessors today.

RAM is a volatile memory: it retains data only while power is applied. Because of this, a portion of the memory unit is often built using read-only memory (ROM) chips. The program stored in a ROM is permanent and therefore not lost when power is removed. As the name implies, the data stored in a ROM can only be read, not written; a special programmer is required to write data into a ROM.

The memory unit's address selector/decoder circuit examines the binary number on the address lines and selects the proper memory location to be accessed. In this example, because the CPU is reading from memory, it activates its MEMORY READ control signal. This causes the selected data byte in memory to be placed onto the data lines and routed to the instruction register within the CPU.

The instruction in the instruction register is decoded and executed. In this example the instruction has the decimal code 64, which (for an 80x86 microprocessor) is decoded as INC AX (increment the accumulator register). The arithmetic logic unit (ALU) is therefore instructed to add 1 to the contents of the accumulator, where the new result will be stored. In general, the ALU portion of the CPU performs all mathematical and Boolean logic functions.

When the instruction fetch, decode and execute cycle is completed, the cycle repeats, beginning with a new instruction fetch. The control logic in the CPU is wired such that register IP is always incremented after an instruction fetch, so the next sequential instruction in memory will normally be accessed. The entire process of reading memory, incrementing the instruction pointer, and decoding and executing the instruction is known as the fetch and execute principle of the stored program computer.
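The fetch and execute principle can be mimicked in a few lines of Python. This is a toy sketch, not real 80x86 behavior: the memory contents, the starting AX value, and the HALT code are our own assumptions, while opcode 64 is INC AX as in the example above.

```python
memory = {672356: 64, 672357: 64, 672358: 0}  # two INC AX opcodes, then a made-up HALT

ip = 672356  # instruction pointer: address of the next instruction
ax = 5       # accumulator register

while True:
    opcode = memory[ip]  # fetch: IP goes out on the address bus, a byte comes back
    ip += 1              # IP is always incremented after the fetch
    if opcode == 64:     # decode and execute: INC AX
        ax += 1
    elif opcode == 0:    # hypothetical HALT opcode so the toy loop terminates
        break

print(ax)  # two increments of 5 give 7
```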

The Instruction Set: It is the job of the instruction decoder to recognize and activate the appropriate circuits in the CPU needed to carry out each new instruction as it is fetched from memory. The list of all such instructions recognizable by the decoder is called the instruction set. Microprocessors in the 80x86 family are known as complex instruction set computers (CISC) because of the large number of instructions in their instruction set (more than 3,000 different forms). Some recent microprocessors have been designed to have only a small number of very fast executing instructions. Computers based on this concept are called reduced instruction set computers (RISC).

Modern CPUs: Most microprocessor chips today are designed to allow the fetch and execute cycles to overlap. This is done by dividing the CPU into an execution unit (EU) and a bus interface unit (BIU). The BIU's job is to fetch instructions from memory as quickly as possible, storing these in a special instruction queue. The EU then fetches instructions from this queue, not from memory. Because the fetch and execute cycles are allowed to overlap, the total processing time is reduced.
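The benefit of overlapping fetch and execute can be seen with a rough cycle-count model. The cycle costs below are invented for illustration; real parts differ.

```python
FETCH, EXECUTE = 4, 3  # assumed cycles per instruction fetch and per execution
N = 100                # instructions in the program

# Without overlap, each instruction is fetched and only then executed:
serial = N * (FETCH + EXECUTE)

# With the BIU filling an instruction queue while the EU works, fetch and
# execute proceed in parallel; in steady state one instruction completes
# every max(FETCH, EXECUTE) cycles, after the first fetch fills the queue:
overlapped = FETCH + N * max(FETCH, EXECUTE)

print(serial, overlapped)  # 700 vs 404 cycles for the same program
```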

Modern processors also use a pipelined execution unit, which allows the decoding and execution of instructions to be overlapped, further increasing processing performance. Intel's Pentium III, for example, has a 12-stage pipeline with three execution engines called the Fetch/Decode unit, the Dispatch/Execute unit, and the Retire unit. This multiple execution unit architecture is referred to as superscalar. Superscalar microprocessors can process more than one instruction per clock cycle.



The Intel Series of Microprocessor (Brief History)


In 1968, Robert Noyce, co-inventor of the silicon integrated circuit, Gordon Moore, of Moore's law fame, and Arthur Rock, a San Francisco venture capitalist, formed the Intel Corporation to make memory chips. In its first year of operation, Intel sold only $3,000 worth of chips, but business has picked up since then.


In the late 1960s, calculators were large electromechanical machines weighing 20 kg or more. In September 1969, a Japanese company, Busicom, approached Intel with a request to manufacture 12 custom chips for a proposed electronic calculator. The Intel engineer assigned to this project, Ted Hoff, looked at the plan and realized that he could put a 4-bit general-purpose CPU on a single chip that would do the same thing and be simpler and cheaper as well. Thus in 1971, the first single-chip CPU, the 2,300-transistor 4004, was born.


It is worth noting that neither Intel nor Busicom had any idea what they had just done. When Intel decided that it might be worth a try to use the 4004 in other projects, it offered to buy back all the rights to the new chip from Busicom by returning the $60,000 Busicom had paid Intel to develop it. Intel's offer was quickly accepted, at which point it began working on an 8-bit version of the chip, the 8008, introduced in 1972.


Intel did not expect much demand for the 8008, so it set up a low-volume production line. Much to everyone's amazement, there was an enormous amount of interest, so Intel set about designing a new CPU chip that got around the 8008's 16K memory limit (imposed by the number of pins on the chip). This design resulted in the 8080, a small, general-purpose CPU, introduced in 1974. Much like the PDP-8, this product took the industry by storm and instantly became a mass-market item. Only instead of selling thousands, as DEC had, Intel sold millions.


In 1978 came the 8086, a true 16-bit CPU on a single chip. The 8086 was designed to be somewhat similar to the 8080, but it was not completely compatible with it. The 8086 was followed by the 8088, which had the same architecture and ran the same programs but had an 8-bit external bus instead of a 16-bit bus, making it both slower and cheaper than the 8086. When IBM chose the 8088 as the CPU for the original IBM PC, this chip quickly became the personal computer industry standard.


Neither the 8088 nor the 8086 could address more than 1 megabyte of memory. By the early 1980s this became more and more of a serious problem, so Intel designed the 80286, an upward-compatible version of the 8086. The basic instruction set was essentially the same as that of the 8086 and 8088, but the memory organization was quite different, and rather awkward, due to the requirement of compatibility with the older chips. The 80286 was used in the IBM PC/AT and in the midrange PS/2 models. Like the 8088, it was a huge success, mostly because people viewed it as a faster 8088. The 80286 had two modes of operation: real mode and protected mode. In real mode it worked exactly like an 8086 (though faster), but in protected mode it supported a multiprogramming environment, used all the features of the 80286, and could address 16 MB of memory. In protected mode each program obtains its addresses via segment selectors rather than physical addresses, so no program can access another program's memory space. Once switched into protected mode, the chip could not switch back to real mode, because a clever programmer could otherwise use real mode to access data in another program's area. Since MS-DOS, the popular operating system of those days, ran in real mode, protected mode saw little use, and 80286 chips were mostly operated in real mode, functioning only as fast 8086 chips.
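The 1-megabyte ceiling of the 8086/8088 follows from its 20 address lines and its real-mode address arithmetic, which can be sketched as follows (the wrap-around mask mirrors the 20-bit address bus; the example segment and offset values are our own):

```python
def real_mode_address(segment, offset):
    """Real-mode physical address: segment * 16 + offset, kept to 20 bits."""
    return ((segment << 4) + offset) & 0xFFFFF  # 20 address lines -> 1 MB

print(hex(real_mode_address(0x1234, 0x0010)))  # 0x12350
print(2 ** 20)  # 1,048,576 bytes: the 1 MB limit of the 8086/8088
print(2 ** 24)  # 16,777,216 bytes: the 16 MB reachable by the 80286's 24 lines
```

Note how an address just past the top of memory wraps back to zero, exactly as on a chip with only 20 address lines.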


The next logical step was a true 32-bit CPU on a chip, the 80386, brought out in 1985. Like the 80286, this one was more-or-less compatible with everything back to the 8080. Being backward compatible was a boon to people for whom running old software was important, but a nuisance to people who would have preferred a simple, clean, modern architecture free of the mistakes and technology of the past. Like the 80286, the 80386 supports two operating modes, real and protected, and adds a further protected-mode feature, the virtual 8086 mode. In real mode the 80386 behaves just like an 8086, only with a faster speed of execution. In protected mode the true power of the 386 is revealed: the on-board memory management unit (MMU) manages 4 GB of memory in a way similar to that of the protected-mode 80286. Operating systems such as Windows use the virtual 8086 mode, in which Windows can run multiple DOS programs at once.


Four years later the 80486 came out. It was essentially a faster version of the 80386 that also had a floating-point unit and 8K of cache memory on chip. Cache memory is used to hold the most commonly used memory words inside or close to the CPU, to avoid (slow) accesses to main memory. The 80486 also had built-in multiprocessor support, to allow manufacturers to build systems containing multiple CPUs.


At this point, Intel found out the hard way (by losing a trademark infringement lawsuit) that numbers (like 80486) cannot be trademarked, so the next generation got a name: Pentium (from the Greek word for five). Unlike the 80486, which had one internal pipeline, the Pentium had two of them, which helped make it twice as fast.


When the next generation appeared, people who were hoping for the Sexium (sex is Latin for six) were disappointed. The name Pentium was now so well known that the marketing people wanted to keep it, and the new chip was called the Pentium Pro. Despite the small name change from its predecessor, this processor represented a major break with the past. Instead of having two or more pipelines, the Pentium Pro had a very different internal organization and could execute up to five instructions at a time.


Another innovation found in the Pentium Pro was a two-level cache memory. The processor chip itself had 8 KB of memory to hold commonly used instructions and 8 KB of memory to hold commonly used data. In the same cavity within the Pentium Pro package (but not on the chip itself) was a second cache memory of 256 KB.


The next new Intel processor was the Pentium II, essentially a Pentium Pro with special multimedia extensions (called MMX) added. These instructions were intended to speed up computations required to process audio and video, making the addition of special multimedia coprocessors unnecessary. These instructions were also available in later Pentiums, but not in the Pentium Pro, so the Pentium II combined the strengths of the Pentium Pro with multimedia.


In early 1998, Intel introduced a new product line called the Celeron, which was basically a low-price, low-performance version of the Pentium II intended for low-end PCs. The original Celeron was a Pentium II without the level 2 cache. Later the Celeron A appeared, which used the Pentium II core but included a 128 KB level 2 cache on the same die as the processor.


In February 1999, the Pentium III processor was introduced. The Pentium III was built on the Pentium II core. It supports clock speeds greater than 600 MHz with an external bus speed as high as 133 MHz. The most significant feature of the Pentium III is the inclusion of 70 new streaming SIMD extensions (SSE). Multimedia processing is also enhanced by the addition of 12 new instructions to the MMX instruction set, and there are eight new instructions to help improve the efficiency of the L1 cache.


In June 1998, Intel introduced a special version of the Pentium II for the upper end of the market. This processor, called the Pentium II Xeon, had a larger cache, a faster bus, and better multiprocessor support, but was otherwise a normal Pentium II. It offered a level 2 cache as large as 2 MB, and because the cache in the Slot 2 cartridge runs at the processor core speed, very high performance can be expected. Later the Pentium III Xeon was also developed, with higher clock speeds.


The Pentium 4 processor first became available in late 2000. The Pentium 4 runs at clock speeds of up to 2 GHz with a 400 MHz bus. It features a rapid execution engine and hyper-pipelined technology, and new instructions for Streaming SIMD Extensions 2 (SSE2) were added.


In May 2001, Intel released another new microprocessor, using the IA-64 architecture, named the Itanium. The Itanium processor was designed from the ground up to meet the increasing demands for high availability, scalability and performance needed for high-end enterprise and technical computing applications. It uses Explicitly Parallel Instruction Computing (EPIC) technology, whose explicit parallelism provides the capability to execute multiple instructions simultaneously. EPIC also delivers new features such as predication and speculation to overcome legacy performance limitations such as instruction branches and memory latency. The Itanium processor family extends open-standards-based computing to the enterprise and brings flexibility, choice and value over proprietary solutions. The processor delivers high performance for the most demanding enterprise and high-performance computing applications, including e-commerce security transactions, large databases, mechanical computer-aided engineering, and sophisticated scientific and engineering computing. A broad range of Itanium-based software offerings from industry-leading vendors, combined with IA-32 instruction binary compatibility in hardware, provides an increased level of investment protection.


In July 2002, Intel released the second member of the Itanium family, called the Itanium 2. It is used in enterprise and data-centric servers. The family brings outstanding performance and the volume economics of the Intel Architecture to the most data-intensive, business-critical and technical computing applications, providing leading performance for databases, computer-aided engineering, secure online transactions, and more.


In March 2003, Intel released a version of the processor that consumes less power, the Intel Pentium M. It is comparable to the Pentium 4 but with much lower power consumption. Intel Centrino is the Pentium M platform with built-in wireless LAN capability, enabling extended battery life and thinner, lighter mobile computers.


In May 2005, Intel released the Pentium D, a dual-core version of the Pentium 4: a very fast processor with two execution cores. Following the Pentium D, in March 2006, Intel released a processor with a slightly different architecture, still following IA-32 but much improved in its power consumption: the Core architecture. The NetBurst architecture, with its high clock frequencies, consumed a lot of power; the Core architecture offered the low power consumption of the Pentium M but with multiple execution cores. It was available with one or two execution cores, as the Core Solo and Core Duo. In July 2006, while still supplying the market with Core processors, Intel introduced a new 64-bit processor with the x86_64 instruction set, named the Core 2. It was not like the Itanium (IA-64) but was an improved Core architecture supporting a 64-bit instruction set. Core 2 processors are available as the Core 2 Solo, Core 2 Duo and Core 2 Quad, with one, two and four execution cores respectively.


The Intel family of microprocessors is shown below.

Processor     Reg Size   Data Bus   Address Bus   Remarks
4004          4-bit      4-bit      12-bit        First microprocessor on a chip
8008          8-bit      8-bit      14-bit        First 8-bit microprocessor
8080          8-bit      8-bit      16-bit        First general-purpose CPU on a chip
8085          8-bit      8-bit      16-bit        8080 with three more instructions for hardware interrupt
8086          16-bit     16-bit     20-bit        First 16-bit CPU on a chip
8088          16-bit     8-bit      20-bit        Used in IBM PC
80286         16-bit     16-bit     24-bit        Memory protection present
80386         32-bit     32-bit     32-bit        First 32-bit CPU; 386SX had 16-bit data bus and 24-bit address bus
80486         32-bit     32-bit     32-bit        Built-in 8K cache memory
Pentium       32-bit     64-bit     32-bit        Two pipelines; later models had MMX
Pentium Pro   32-bit     64-bit     36-bit        Two levels of cache built in
Pentium II    32-bit     64-bit     36-bit        Pentium Pro plus MMX
Celeron       32-bit     64-bit     36-bit        Low-cost Pentium II
Pentium III   32-bit     64-bit     36-bit        Added SIMD instructions (SSE) for MMX
Pentium 4     32-bit     64-bit     36-bit        More instructions for SIMD (SSE2) were added
Itanium       64-bit     -          -             First 64-bit microprocessor from the Intel family
Itanium 2     64-bit     -          -             220 M transistors; more advanced processor for enterprise-class servers
Pentium M     32-bit     64-bit     36-bit        77 M transistors; consumes less power; Intel Centrino has built-in wireless LAN capability
Pentium D     64-bit     -          -             Dual-core Pentium 4 processor
Core          32-bit     -          -             The new Core architecture with less power consumption
Core 2        64-bit     -          -             New Core architecture that uses the x86_64 instruction set
Atom          -          -          -             World's smallest powerful processor, compatible with the Core 2 processors

All the Intel chips are backward compatible with their predecessors back as far as the 8086. In other words, a Pentium II can run 8086 programs without modification. This compatibility has always been a design requirement for Intel, to allow users to maintain their existing investment in software. Of course, the Pentium II is 250 times more complex than the 8086, so it can do quite a few things that the 8086 could not do. These piecemeal extensions have resulted in an architecture that is not as elegant as it might have been had someone given the Pentium II architects 7.5 million transistors and instructions to start all over again.


It is interesting to note that although Moore's law was long associated with the number of bits in a memory chip, it applies equally well to CPU chips. Plotting the transistor counts in the table above against the date of introduction of each chip on a semilog scale shows that Moore's law holds here too, i.e., the number of transistors doubles roughly every 18 months. The following figure shows this trend.
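Doubling every 18 months means a chip introduced t years later should have about 2^(t/1.5) times as many transistors. A small worked example (illustrative arithmetic only, not a fit to real transistor counts):

```python
def projected_transistors(n0, years, doubling_years=1.5):
    """Moore's-law projection: the count doubles every `doubling_years` years."""
    return n0 * 2 ** (years / doubling_years)

# Starting from the 4004's roughly 2,300 transistors, 30 years is 20 doublings:
print(int(projected_transistors(2300, 30)))  # 2300 * 2**20 = 2411724800
```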


Moore’s Law for CPU Chips





A CPU on a single IC chip is called a microprocessor.

The important characteristics of a microprocessor are the widths of its internal and external address and data buses (and of its instructions), its clock rate and its instruction set. Processors are also often classified as either RISC or CISC.

The first commercial microprocessor was the Intel 4004 which appeared in 1971. This was the CPU member of a set of four LSI integrated circuits called the MCS-4, which was originally designed for use in a calculator but was marketed as "programmable controller for logic replacement". The 4004 is referred to as a 4-bit microprocessor since it processed only 4 bits of data at a time. This very short word size is due mainly to the limitations imposed by the maximum integrated circuit density then achievable.

As integrated circuit densities increased with the rapid development of integrated circuit manufacturing technology, the power and performance of the microprocessors also increased. This is reflected in the increase in the CPU word size to 4, 8, 16, 32 and today, 64 bits. The smaller microprocessors have relatively simple instruction sets, e.g., no floating point instructions, but they are nevertheless suitable as controllers for a very wide range of applications such as car engines and microwave ovens.

The Intel 4004 was followed by, among others, the 4040, 8008, 8080, 8086, 80186, 80286, 80386, 486, Pentium, Pentium II, Pentium III, Pentium 4 and Itanium. Other families include the Motorola 6800 and 680x0 families, National Semiconductor NS16000 and NS32000, SPARC, ARM, MIPS, Zilog Z8000, PowerPC, AMD K6-K7, AMD Athlon and many more.

The larger, more recent microprocessor families have gradually acquired most of the features of large computers. As the microprocessor industry has matured, several families of microprocessors have evolved into de facto industrial standards with multiple manufacturers and numerous "support" chips including RAM, ROM, I/O controllers etc.

A single-chip microprocessor may also include other components such as a memory management unit, caches, a floating-point unit etc.


Before the launch of microprocessors, a computer was designed as shown in the figure below.




The computer is divided into four components: memory, input devices, output devices and the CPU (Central Processing Unit). The CPU contains registers to store data, the ALU which performs arithmetic and logic operations on data, the instruction decoders, counters, circuits and control lines. In the days of SSI (Small Scale Integration) fabrication, all these different components were contained on separate ICs and wired together so as to work collectively as the CPU. The ALU itself was made by wiring together different ICs, such as an adder IC, multiplexer IC etc., so as to work collectively as an ALU. The CPU's job is to read instructions from memory and perform the operations specified. Its job also includes communicating with input and output devices so as to accept or send data. Since discrete components (i.e., separate ICs) were used to implement each stage of the CPU (i.e., ALU, decoder, counter etc.), the CPU was extremely large, and the large size together with the delays of each component made such CPUs very slow.

However, with the advent of LSI (Large Scale Integration), VLSI (Very Large Scale Integration) and ULSI (Ultra Large Scale Integration), the complete CPU could be fabricated on a single wafer of silicon. This resulted in an IC that was compact, cheap and capable of performing all the functions of the CPU. This integrated circuit came to be known as the microprocessor.


The traditional block diagram of the computer is replaced by the block diagram shown in the following figure.





A computer having a microprocessor as its CPU is called a microcomputer. Sometimes the microprocessor is also termed a Micro Processing Unit (MPU), because later microprocessors include, on a single chip, most of the circuitry necessary to perform control together with the necessary set of control signals. From the above discussion we can say that a microprocessor is a CPU on a single silicon wafer (chip).



The microprocessor based system

The above figure shows the major components of a digital computer: the microprocessor (integrated CPU), memory, input devices and output devices. The input and output devices are also known as peripherals. These components are organized around a common communication path called a bus. A microcomputer is a microprocessor-based system. The microcomputer with its bus architecture is shown below.

A bus is basically a path, or set of wires, through which data, control and other signals flow, allowing the microprocessor to control and communicate with the complete system. Each component is referred to as a subsystem. The group of components collectively constitutes the microcomputer or microprocessor-based system.

Thus the microprocessor is one of the many components of a microcomputer, and the microcomputer is a computer whose CPU operations are performed by the microprocessor. Now let us discuss these various components in detail.



The microprocessor is a programmable integrated circuit consisting of a high density of electronic logic circuits and is fabricated by LSI, VLSI or ULSI techniques. Like most sequential digital devices it is driven by a clock signal, which synchronizes all the components inside the microprocessor. The microprocessor is capable of performing arithmetic and logical operations on data and of making logical decisions that change the program execution sequence. This complete capability is available to the user through its instruction set, and the user can manipulate these capabilities in any way so as to achieve a desired result. The microprocessor is similar to the CPU except for the fact that the microprocessor contains all the arithmetic and logic circuitry, registers and control unit on a single chip or wafer of silicon. The microprocessor can be divided into three main units, as shown in the above figure: the ALU, the registers and the control unit.

·        Arithmetic and Logic Unit (ALU): This is the area of the chip where all the circuitry for the various computing functions is fabricated; this circuitry is collectively called the ALU. The ALU performs arithmetic operations such as addition, subtraction, multiplication and division, and logical operations such as AND, OR, XOR, NOT etc. It also performs increment, decrement, shift and clear operations. Each of these operations is invoked by a specific instruction, and the results of the operations are stored temporarily in memory or in the registers.

·        Registers: This is the area of the microprocessor chip that can be used to store data temporarily. These registers are accessible to the user through instructions. Some registers are also present in the microprocessor to hold the addresses of the memory locations that must be accessed by the microprocessor.

·        Control Unit: The control unit contains many parts, such as the instruction decoder and the control and synchronization circuits. Instructions are decoded by the control unit, and the corresponding control signals are generated for instruction execution. The control unit also synchronizes all components of the microprocessor and controls the flow of data between the microprocessor and memory (and peripherals).
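The ALU operations listed above can be modeled as a rough sketch (not any real chip's design): a function that selects an operation on one or two operands and masks the result to the word size:

```python
def alu(op, a, b=0, width=8):
    """Model a simple ALU: arithmetic and logic on `width`-bit operands."""
    mask = (1 << width) - 1        # keep the result within the word size
    ops = {
        "ADD": a + b, "SUB": a - b,
        "AND": a & b, "OR": a | b, "XOR": a ^ b, "NOT": ~a,
        "INC": a + 1, "DEC": a - 1, "SHL": a << 1, "SHR": a >> 1,
    }
    return ops[op] & mask

print(alu("ADD", 0x3A, 0x4C))  # 134 (0x86)
print(alu("NOT", 0x0F))        # 240 (0xF0): bitwise NOT within 8 bits
```

In a real ALU these operations are combinational logic circuits selected by control signals; here a dictionary lookup plays the role of the operation-select lines.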


2)      MEMORY

Memory is used to store binary information, i.e., the data and instructions required by the microprocessor for execution. The microprocessor reads instructions and data from memory and performs the corresponding operation. The result can either be transferred to memory or stored in the microprocessor's registers. The results can also be transferred to I/O devices such as the CRT (Cathode Ray Tube) screen. Memory can be divided into two parts: RAM and ROM.

  • ROM or Read Only Memory is a non-volatile memory that cannot be altered: once the ROM has been programmed, the stored values cannot be changed. The boot-up loader (BIOS) and other initialization programs are stored in ROM.
  • RAM or Random Access Memory is volatile, alterable memory. It is also called user memory, as the user can read or write anywhere in the RAM at any time. The operating system and user programs are loaded into RAM and then executed by the microprocessor.

3)      I/O (DEVICES)

The third component of a microcomputer is the input and output devices. These can be the keyboard (input device), monitor (output device), hard disk drive (both input and output device) etc. The I/O devices are also called peripherals.

Input devices are used to transfer binary data to the microprocessor. The microprocessor reads this data and performs the corresponding operation required. There are many input devices, such as keyboards, mouse etc.

Input devices such as a keyboard, switches, and an analog-to-digital (A/D) converter transfer binary information (data and instructions) from the outside world to the microprocessor. Typically a microcomputer includes a hexadecimal keyboard or an ASCII keyboard as an input device. The ASCII keyboard is similar to a typewriter keyboard and is used to enter programs in an English-like language. The hexadecimal keyboard has 16 data keys (0 to 9 and A to F) and some additional function keys to perform operations such as storing data and executing programs. Although the ASCII keyboard is found in most microcomputers (PCs), single-board microcomputers generally have hex keyboards, and microprocessor-based products such as a microwave oven have decimal keyboards.

The output devices transfer data from the microprocessor to the outside world. They include devices such as light-emitting diodes (LEDs), a cathode-ray tube or video screen, a printer, an X-Y plotter, a magnetic tape, and a digital-to-analog (D/A) converter. Typically, single-board microcomputers and microprocessor-based products (such as a dishwasher) include LEDs, seven-segment LEDs, and alphanumeric LED displays as output devices. Microcomputers (PCs) are generally equipped with output devices such as a video screen (also called a monitor) and a printer.

4)      SYSTEM BUS

The system bus is a common communication path between the microprocessor, input/output (I/O) devices and the memory. The system bus is simply a set of wires that carry bits. All equipment connected to the microprocessor shares the same system bus, but the microprocessor interacts with one peripheral at a time. The system bus can be subdivided into three parts:

a)      Control bus

b)      Data bus

c)      Address bus

The microprocessor-based system showing all the buses is given in the figure that follows.

The data bus is a set of wires used to transmit data from the microprocessor to peripherals or vice versa; that is, the data bus is bidirectional. The internal and external data buses of a microprocessor are often different. The 8088 has a 16-bit internal data bus and an 8-bit external data bus, while the 8086 has 16-bit internal and external data buses. This means the 8088 requires two memory read operations to input the same information that the 8086 inputs in one memory read cycle; as a result, the 8088 operates less efficiently than the 8086. Pentium processors, on the other hand, have a 64-bit external data bus but a 32-bit internal data bus. These chips are data-processing engines, capable of executing two or three instructions per clock cycle with clock rates greater than 400 MHz; the expanded data bus width is needed to keep them supplied with data.
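The cost of a narrower external bus can be expressed directly as a bus-cycle count. A small sketch (ignoring all other timing factors):

```python
def memory_read_cycles(bits_to_transfer, external_bus_bits):
    """Memory read cycles needed to move a value over an external data bus."""
    return -(-bits_to_transfer // external_bus_bits)  # ceiling division

print(memory_read_cycles(16, 8))   # 8088: a 16-bit word needs 2 read cycles
print(memory_read_cycles(16, 16))  # 8086: the same word needs only 1
```

The same arithmetic explains the Pentium's 64-bit external bus: wider transfers per cycle keep a fast execution core fed with operands.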

Similarly, the address bus is a set of wires used to transmit an address from the microprocessor to the peripherals. This address is used to select one peripheral and to enable the required device. After the device has been enabled, data can be transferred between the microprocessor and the device. Since the address is always sent from the microprocessor to a device for selection, the address bus is unidirectional (the data bus, in contrast, is bidirectional). Most microprocessors multiplex the address and data buses so as to reduce the number of external pins on the microprocessor IC; that is, they use the same bus for address and data on a time-shared basis.

In a computer system, each peripheral or memory location is identified by a binary number, called an address, and the address bus is used to carry a 16-bit address. This is similar to the postal address of a house. A house can be identified by various number schemes. For example, the forty-fifth house in a lane can be identified by the two-digit number 45 or by the four-digit number 0045. The two-digit numbering scheme can identify only a hundred houses, from 00 to 99. On the other hand, the four-digit scheme can identify ten thousand houses, from 0000 to 9999. Similarly, the number of address lines of the MPU determines its capacity to identify different memory locations (or peripherals). The 8085 MPU with its 16 address lines is capable of addressing 2^16 = 65,536 (generally known as 64K) memory locations.


The 8086 microprocessor has 20 address lines. This means the 8086 can address 2^20 = 1M memory locations (2^10 is termed 1K because 2^10 = 1,024 is nearest to 1,000, and 2^20 is accordingly termed 1M). However, not every microcomputer system has 1M of memory. In fact, most single-board microcomputers have less than 1M, even though the MPU is capable of addressing 1M of memory.
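The relationship is simply 2 raised to the number of address lines; a quick check:

```python
def addressable_locations(address_lines):
    """Number of memory locations addressable with the given address lines."""
    return 2 ** address_lines

print(addressable_locations(16))  # 65536 locations (64K), as on the 8085
print(addressable_locations(20))  # 1048576 locations (1M), as on the 8086
```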

The control bus is used to transfer control signals to peripherals for timing and control. Sometimes peripherals have to tell the microprocessor about certain conditions, requiring control lines that are input to the microprocessor; thus the control bus is also bidirectional. The complete timing for the synchronization of the system is provided over the microprocessor's control bus.


The term bus, in relation to the control signals, is somewhat confusing. These are not groups of lines like address or data buses, but individual lines that provide a pulse to indicate an MPU operation. The MPU generates specific control signals for every operation (such as Memory Read or I/O Write) it performs. These signals are used to identify a device type with which the MPU intends to communicate.


To communicate with memory (for example, to read an instruction from a memory location), the MPU places the 20-bit address on the address bus (see the above figure showing all components of a stored-program computer). The address on the bus is decoded by an external logic circuit, which will be explained later, and the memory location is identified. The MPU sends a pulse called Memory Read as the control signal. The pulse activates the memory chip, and the contents of the memory location (8-bit data) are placed on the data bus and brought inside the microprocessor.
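The read transaction just described can be sketched schematically; the address and memory contents here are invented for illustration:

```python
def memory_read(memory, address):
    """Schematic MPU memory read: address out, Memory Read pulse, data in."""
    # 1. The MPU places the address on the address bus (decoding selects the chip).
    # 2. The MPU sends the Memory Read control pulse, activating the memory chip.
    # 3. The addressed location's contents appear on the data bus and are read in.
    return memory[address]

ram = {0x00100: 0x3E}                  # hypothetical: one byte at address 0x00100
print(hex(memory_read(ram, 0x00100)))  # 0x3e
```

A write transaction is the mirror image: the MPU drives both the address bus and the data bus, then pulses Memory Write.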


Working of a Microprocessor

At this point a question may arise: how does the microprocessor work? Here we shall briefly explain the working of a general microprocessor. The detailed working of the 8086 microprocessor is explained in the following chapters.

We at this stage assume that a program has been entered into the RAM (user memory). We are not concerned with how the program is stored in memory, because that shall be explained in later chapters. We assume the program includes instructions and data on which the microprocessor must perform arithmetic or logic operations and then display the result on a seven-segment LED output. On being commanded to execute this program, the microprocessor reads and executes one instruction from a specific address in the RAM. For this it sends the address and the corresponding signal to enable the RAM. The RAM, on receiving this signal, sends the instruction and data to the microprocessor. The microprocessor then decodes the instruction and performs the operation required. Similarly, all other instructions are read, decoded and executed by the processor. After all instructions have executed, it finally sends the address and control signals to the seven-segment LED display, followed by the result, which is duly displayed.

This process can be compared to an electronics engineer designing an electronic circuit. The engineer first reads all the basics of electronics, then interprets them, and finally sits in a laboratory and executes the necessary design steps. Similarly, the microprocessor reads (fetches) an instruction from memory, decodes it, and then executes it. This fetch-decode-execute sequence is repeated until the microprocessor encounters an instruction that tells it to perform a halt operation. During this entire process the microprocessor uses the system bus to fetch instructions and data from memory (or I/O), uses the registers as temporary storage for data and results, and uses the ALU to perform arithmetic and logic operations. Finally, it sends data over the same system bus to I/O devices or memory.
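The fetch-decode-execute loop described above can be sketched as a toy interpreter; the three-instruction set here is invented for illustration and corresponds to no real processor:

```python
def run(memory):
    """Toy fetch-decode-execute loop over (opcode, operand) pairs in memory."""
    acc, pc = 0, 0                    # accumulator and program counter
    while True:
        opcode, operand = memory[pc]  # fetch the instruction at the program counter
        pc += 1
        if opcode == "LOAD":          # decode, then execute
            acc = operand
        elif opcode == "ADD":
            acc += operand
        elif opcode == "HLT":         # a halt instruction ends the sequence
            return acc

program = [("LOAD", 5), ("ADD", 3), ("HLT", None)]
print(run(program))  # 8
```

A real processor does the same three steps in hardware: the control unit fetches over the bus, the instruction decoder selects the operation, and the ALU and registers carry it out.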




A microcontroller is an entire computer on a chip.

Microcontrollers are not as well known as their relatively popular cousins, the microprocessors. However, microcontrollers today are used almost everywhere: in burglar alarms, VCRs, automobiles, video games, copiers and a host of other machines that can be called programmable machines. Automatic machines, microwave ovens, robots and many more systems use microcontrollers. A lot of non-technical text uses the terms microprocessor and microcontroller interchangeably. However, as a technical person you must know the many differences between these two very similar chips and the different advantages of each.

 Following the widespread popularity that microprocessors gained, it was found that using microprocessors for control applications in devices such as microwave ovens, automatic washing machines etc. was very expensive. The main reason was that the microprocessor required many external components, such as RAM, ROM etc., before a designer could use it as a control device. This also required a large amount of space, which is generally not available in such devices. This led to the demand for small, dedicated processors that required minimal components (like RAM, ROM etc.) for their working and had the advantage of being cheap.

The microcontroller was developed essentially keeping these things in mind. A microcontroller incorporates all the features of a microprocessor. It also has some added features, including on-chip RAM, ROM, I/O ports, counters and a clock circuit. All these features are sufficient to make it a complete computer, and best of all, it is present on a single chip. This means that a microcontroller is essentially a small computer built into a single chip, requiring no other external devices such as RAM or ROM for its working.

More simply, we can say that microprocessors are intended for use in general-purpose digital computers, while microcontrollers are special-purpose digital computers. The differences between microprocessors and microcontrollers are given in the following table.






Microprocessor

1.      Microprocessors are designed to be used as the CPU of general-purpose computers.

2.      Microprocessors contain an ALU, general-purpose registers, and control and interrupt circuits on one chip.

3.      Microprocessors require many external components, such as RAM, ROM etc., for their working.

4.      Microprocessors have higher speeds, greater data-handling capability and can address large amounts of memory.

Microcontroller

1.      Microcontrollers are special-purpose digital computers requiring few or no external components for their working.

2.      Microcontrollers have all of these as well as timers, I/O ports, RAM and ROM on the chip.

3.      Many microcontrollers require no external components for their working, as all these components are built into the microcontroller chip.

4.      Microcontrollers have RAM and ROM built on the same chip. Since microcontrollers are designed for use as control devices in machines, they have slower speeds and address less memory.



The general block diagram for a microcontroller is shown below.

Microcontrollers are also known as dedicated or embedded controllers. Microcontrollers are designed for use as dedicated computers that control some device, such as a microwave oven, an automobile etc. Microcontrollers are available in 4-bit, 8-bit, 16-bit, 32-bit and 64-bit versions; an 8-bit microcontroller is one with an 8-bit CPU. Some common microcontrollers available in the market are the Intel 8048, Intel 8031, Intel 8051, Motorola MC6801, Motorola MC6805, Zilog Z8 and Texas Instruments TMS 7500. The 8051 microcontroller from Intel Corporation is one of the most popular and widely used microcontrollers.

Copyright (c) 2009 DSBaral