cache: a safe place for hiding or storing things.
--webster
the topic of today's and friday's lectures will be caches

caches

why have caches?

CPU is faster than memory.

speeds have been diverging: CPU speed has been improving at a much faster rate, while memory speed has fallen significantly behind.

there are techniques to speed up memory access somewhat (such as interleaving, which uses several parallel buses at the cost of great complexity on the circuit board), but these techniques only increase speed linearly, not geometrically, as would be needed to keep pace with CPU speeds.

program locality

caches exploit program and data locality.

locality: a program spends a large fraction of its time in blocks of relatively small size.
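
to make this concrete, here is a minimal sketch in C (the array and its size are made up for illustration) of the kind of access pattern that exhibits locality: a loop that sweeps an array in order touches consecutive addresses (spatial locality) and reuses the same few variables on every iteration (temporal locality).

    #include <stdio.h>

    #define N 1024

    int main(void) {
        static int data[N];      /* one contiguous block of memory */
        long sum = 0;

        /* sequential sweep: consecutive elements share cache blocks
           (spatial locality), and sum and i are reused on every
           iteration (temporal locality) */
        for (int i = 0; i < N; i++) {
            sum += data[i];
        }

        printf("sum = %ld\n", sum);
        return 0;
    }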

cache performance

cache performance is best measured in terms of "hit rate" or "miss rate", rather than simply in terms of the size of the cache.

the size of the cache alone is not an accurate indicator of its performance.

performance also depends on how the cache is mapped, how blocks are replaced, and so on.

of course, hit rate and miss rate depend on the specific program being run, so such measures must be determined in the context of specific benchmarks (i.e. representative example programs).
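
a standard way to combine these measures into a single number (textbook material, not stated in the lecture itself) is the average memory access time: hit time + miss rate * miss penalty. a small sketch with hypothetical latencies:

    #include <stdio.h>

    int main(void) {
        /* hypothetical latencies, for illustration only */
        double hit_time = 1.0;      /* cycles to access the cache     */
        double miss_penalty = 50.0; /* extra cycles to go to memory   */
        double miss_rate = 0.05;    /* fraction of accesses that miss */

        /* every access pays the hit time; misses also pay the penalty */
        double amat = hit_time + miss_rate * miss_penalty;

        printf("average access time = %.2f cycles\n", amat);
        return 0;
    }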

the 4 fundamental concepts of caches

  1. block placement: where can a block be placed in a cache?
  2. block identification: how is a block found if it is in cache?
  3. block replacement: which block should be replaced on a cache miss?
  4. write strategy: what happens on a write?

1: block placement

in terms of where a block from memory can be placed, there are three kinds of caches, each with its own method of mapping, as follows:

  1. direct mapped: each block from memory can go in exactly one place in the cache.
  2. fully associative: a block from memory can go anywhere in the cache.
  3. set associative: a block from memory can go anywhere within one set of places in the cache.

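as a concrete sketch of the first kind of mapping, here is how a direct-mapped cache might decompose an address into tag, index, and offset fields; the geometry (64 blocks of 16 bytes) and the example address are hypothetical, chosen only for illustration.

    #include <stdio.h>
    #include <stdint.h>

    /* hypothetical geometry: 64 blocks of 16 bytes = 1 KB cache */
    #define BLOCK_SIZE 16   /* bytes per block -> 4 offset bits */
    #define NUM_BLOCKS 64   /* blocks in cache -> 6 index bits  */

    int main(void) {
        uint32_t addr = 0x12345678;

        uint32_t offset = addr % BLOCK_SIZE;                /* byte within block */
        uint32_t index  = (addr / BLOCK_SIZE) % NUM_BLOCKS; /* which cache slot  */
        uint32_t tag    = addr / (BLOCK_SIZE * NUM_BLOCKS); /* identifies block  */

        /* in a direct-mapped cache, a block from memory can live in
           exactly one slot (the index); the stored tag is compared
           on each access to decide whether it is a hit or a miss */
        printf("addr 0x%08x -> tag 0x%x, index %u, offset %u\n",
               addr, tag, index, offset);
        return 0;
    }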