History of the Computer – Cache Memory Part 1 of 2
We looked at early digital computer memory (see History of the Computer – Core Memory) and mentioned that the present standard RAM (Random Access Memory) is chip memory. This conforms with the commonly quoted form of Moore's Law (Gordon Moore was one of the founders of Intel), which states that component density on integrated circuits, often paraphrased as performance per unit cost, doubles roughly every 18 months. Early core memory had cycle times in microseconds; today we are talking in nanoseconds.
You may be familiar with the term cache as applied to PCs. It is one of the performance features mentioned when talking about the latest CPU or hard disk. You can have L1 or L2 cache on the processor, and disk cache of various sizes. Some programs have a cache too, also known as a buffer, for example when writing data to a CD burner. Early CD burner programs suffered from buffer 'underruns', where the drive ran out of data to write. The end result was a good supply of coasters!
Mainframe systems have used cache for many years. The concept became popular in the 1970s as a way of speeding up memory access time. This was the time when core memory was being phased out and replaced with integrated circuits, or chips. Although the chips were much more efficient in terms of physical space, they brought other problems of reliability and heat generation. Chips of one design were faster, hotter and more expensive, while chips of another design were cheaper but slower. Speed has always been one of the most important factors in computer sales, and design engineers have always been on the lookout for ways to improve performance.
The concept of cache memory is based on the fact that a computer is inherently a sequential processing machine. Of course, one of the big advantages of the computer program is that it can 'branch' or 'jump' out of sequence – the subject of another article in this series. However, there are still enough times when one instruction follows another to make a buffer or cache a useful addition to the computer.
The basic idea of cache is to predict what data the CPU will need from memory before it is asked for. Consider a program made up of a series of instructions, each one stored in a location in memory, say from address 100 upwards. The instruction at location 100 is read out of memory and executed by the CPU, then the next instruction is read from location 101 and executed, then 102, 103 and so on.
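The sequential fetch-and-prefetch idea described above can be sketched in a few lines of Python. This is purely illustrative and does not model any real hardware: the addresses, instruction names, and hit/miss counters are invented for the example, and the "cache" simply guesses that the next address in sequence will be wanted next.

```python
# Minimal sketch (no real hardware modelled): sequential fetch-execute
# with a one-entry prefetch cache that guesses the next address.
memory = {addr: f"INSTR_{addr}" for addr in range(100, 105)}  # program at 100..104

cache = {}                          # holds prefetched instructions
stats = {"hits": 0, "misses": 0}    # how often the prediction paid off

def fetch(addr):
    """Return the instruction at addr, prefetching addr+1 into the cache."""
    if addr in cache:
        stats["hits"] += 1
        instr = cache.pop(addr)     # hit: fast path, no slow memory read
    else:
        stats["misses"] += 1
        instr = memory[addr]        # miss: slow core-memory read
    if addr + 1 in memory:
        cache[addr + 1] = memory[addr + 1]  # predict sequential execution
    return instr

executed = [fetch(a) for a in range(100, 105)]
print(executed, stats)
```

Because the example program never branches, only the very first fetch misses; every later instruction is already waiting in the cache, which is exactly the behaviour the paragraph above relies on.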
If the memory in question is core memory, it may take 1 microsecond to read an instruction. If the processor takes, say, 100 nanoseconds to execute the instruction, it then has to wait 900 nanoseconds for the next instruction (1 microsecond = 1000 nanoseconds). The effective repeat rate of the CPU is one instruction per microsecond. (The times and speeds quoted are typical; they do not refer to any specific hardware, but merely illustrate the principles involved.)
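The arithmetic in that example can be written out explicitly. The figures below are the article's illustrative numbers, not measurements from any particular machine:

```python
# Illustrative numbers from the example above (not any specific hardware).
MEMORY_READ_NS = 1000   # core-memory read time: 1 microsecond = 1000 ns
EXECUTE_NS = 100        # CPU execution time per instruction

# Without a cache, the CPU sits idle until the next read completes:
wait_ns = MEMORY_READ_NS - EXECUTE_NS       # idle time per instruction
effective_cycle_ns = EXECUTE_NS + wait_ns   # time between instruction starts

print(f"idle per instruction: {wait_ns} ns")
print(f"effective cycle:      {effective_cycle_ns} ns")
```

The effective cycle works out to the full memory read time: the CPU executes for 100 ns, waits 900 ns, and so completes one instruction every 1000 ns regardless of how fast it can actually execute.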
By Tony Stockill