main memory, DRAM
"the one single development that put
computers on their feet was the
invention of a reliable form of memory,
namely, the core memory. ... cost
was reasonable, it was reliable and,
because it was reliable, it could ...
be made large."
2 kinds of RAM:
capacity of DRAM is 4 to 8 times that
of comparable SRAM, but SRAM has cycle
times 8 to 16 times faster than DRAM,
and is 8 to 16 times more costly.
- DRAM: 1 transistor per bit;
DRAM needs to be refreshed
(approx. 5% of time), and when a
bit is read it must be written
back. memory controller usually
takes care of this.
DRAM uses multiplexed addr.
(column and row of a square array).
DRAM optimized for memory capacity.
- SRAM: 4 to 6 transistors per bit
prevent information from being lost
during a read. thus no difference
between access time and cycle time.
SRAM optimized for good tradeoff
between capacity and speed.
for speed, addr. lines are not multiplexed.
main memory (main store) is almost always DRAM, while caches are almost
always SRAM.
principle of operation of dynamic RAM (reading)
principle of operation of dynamic RAM (writing)
principle of operation of dynamic RAM (single cell)
One transistor and capacitor per cell
RAS and CAS timing diagram
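The multiplexed-address scheme above can be sketched in a few lines of Python. All sizes here are assumptions chosen for illustration: a 1M-bit chip arranged as a 1024x1024 square array needs 20 address bits but exposes only 10 address pins, sending the row half with RAS and the column half with CAS.

```python
# Sketch of multiplexed DRAM addressing (array size is an assumption):
# a 1M-bit chip as a 1024x1024 array -> 20 address bits, 10 pins.

ROW_BITS = COL_BITS = 10  # square array, 1024 rows x 1024 columns

def split_address(addr):
    """Split a full cell address into the two halves sent in turn over
    the shared address pins: row (latched on RAS), then column (on CAS)."""
    row = (addr >> COL_BITS) & ((1 << ROW_BITS) - 1)
    col = addr & ((1 << COL_BITS) - 1)
    return row, col

row, col = split_address(0xABCDE)  # an arbitrary 20-bit address
print(f"row={row} col={col}")
```

Halving the pin count this way is a large part of why DRAM packages are cheap; the price is the two-phase RAS/CAS access shown in the timing diagram.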
- very dense
- low cost for volume fabrication
- array of cells arranged roughly square:
- square arrangement allows for the least number of row/column drivers
- allows efficient utilization of die
- since capacitors are very small and "leaky", information "evaporates"
- "evaporation" time is on the order of 2-8ms
- "refresh" operation is needed to prevent corruption:
- a "read" operation puts a whole row into buffers and then automatically
writes the re-constituted data back to the memory cells
- since matrix is square, number of refresh operations is roughly
square root of die capacity
- designers try to keep amount of time needed to refresh to less than 5
percent of total time
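A quick check that the 5-percent budget is plausible, using assumed numbers (a 1M-bit die as a 1024x1024 array, an 8 ms retention window from the range above, and a 100 ns per-row refresh operation):

```python
# Back-of-the-envelope refresh overhead (all timings are assumptions).
import math

capacity_bits = 1 << 20
rows = math.isqrt(capacity_bits)   # square matrix -> ~sqrt(capacity) refreshes
retention_s = 8e-3                 # assumed "evaporation" window
refresh_op_s = 100e-9              # assumed time for one row refresh

overhead = rows * refresh_op_s / retention_s
print(f"{rows} row refreshes every {retention_s * 1e3:.0f} ms "
      f"-> {overhead:.2%} of total time")
```

With these numbers the overhead comes out near 1.3%, comfortably inside the 5-percent target, which is why refreshing a whole row at a time (rather than cell by cell) matters.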
- "static column" and "burst mode" DRAM allow access to cells in the same
row (once the row has been read) in very short time periods (typically 5-10
percent of total access time)
cache controllers use this property to fill cache blocks on a cache miss,
improving the efficiency of memory access
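Rough arithmetic on the figures above, with an assumed 60 ns full access time and an 8-word cache block (both numbers are illustrative, not from the notes): if a same-row access costs ~10% of a full access, a cache-block fill from one DRAM row is far cheaper than repeating full accesses.

```python
# Cache-block fill cost: full accesses vs one row access plus same-row
# accesses. FULL_ACCESS_NS and block size are assumptions.

FULL_ACCESS_NS = 60
SAME_ROW_NS = 0.10 * FULL_ACCESS_NS    # burst/static-column access

block_words = 8                        # assumed cache block size
naive = block_words * FULL_ACCESS_NS
burst = FULL_ACCESS_NS + (block_words - 1) * SAME_ROW_NS
print(f"naive fill: {naive} ns, burst fill: {burst:.0f} ns")
```

This is the payoff of sizing DRAM rows as an integer number of cache blocks: every word of the block after the first stays in the already-open row.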
- capacity growing 4-fold every three years
- performance growing at seven percent per year
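Compounding the two growth rates above shows how lopsided they are: quadrupling capacity every three years is about 59% per year, while performance grows only 7% per year.

```python
# Compounding the quoted growth rates (simple arithmetic, no new data).

capacity_per_year = 4 ** (1 / 3)   # 4x every 3 years -> ~1.59x per year
perf_per_year = 1.07               # 7% per year

years = 10
print(f"capacity after {years} years: x{capacity_per_year ** years:.0f}")
print(f"performance after {years} years: x{perf_per_year ** years:.2f}")
```

Over a decade that is roughly a 100x capacity gain against a 2x performance gain, which is one way to see why caches and access tricks like page mode matter so much.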
Fast Page Mode DRAM (FPM DRAM)
Prior to newer forms of DRAM, Fast Page Mode DRAM (FPM DRAM) was the most
common kind of DRAM in personal computers. Page mode DRAM essentially accesses
a row of RAM without having to continually respecify the row. A row access
strobe (RAS) signal is held active while the column access strobe (CAS) signal
changes to read a sequence of contiguous cells. This reduces access time and
lowers power requirements. Clock timings for FPM DRAM are typically 6-3-3-3
(meaning 6 clock cycles for the first access, including setup, and 3 clock
cycles for each of the three successive accesses within the same row).
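The 6-3-3-3 timing, worked out for a four-word burst (the cycle counts come from the paragraph above; nothing else is assumed):

```python
# Cycles to read a run of contiguous words under 6-3-3-3 FPM timing.

def fpm_cycles(words, first=6, subsequent=3):
    """One full access (setup + first word), then page-mode accesses
    for each remaining word in the same row."""
    return first + (words - 1) * subsequent

print(fpm_cycles(4))   # 6+3+3+3 = 15 cycles
print(4 * 6)           # vs 24 cycles if every access paid the full setup
```

So a four-word burst takes 15 cycles instead of 24, and the advantage grows with the burst length.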
fast page: once RAS is asserted you can access any column addr.,
therefore clock through column addresses.
typically row widths are integer number of cache blocks:
work well with cache.
static column DRAM
can change col. addr. without having to re-assert CAS.
compare with SRAM Technology (will talk about in later lectures)
-Four to six transistors per cell
-no need to refresh (hence "static")
-access time and cycle time identical (no need to write back a row after read)
-emphasis of design on speed and not capacity
-DRAM 4-8 times as dense as SRAM
-SRAM 8-16 times as fast as DRAM
-SRAM 8-16 times as expensive
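Combining the last two ratios: if an SRAM chip costs 8-16 times as much as a DRAM chip but stores only 1/4 to 1/8 as many bits, SRAM's cost per *bit* spans roughly 32x to 128x DRAM's (simple arithmetic on the numbers above, no new data):

```python
# Cost-per-bit ratio of SRAM vs DRAM from the chip-level ratios above.

for price_ratio in (8, 16):        # SRAM chip price / DRAM chip price
    for density_ratio in (4, 8):   # DRAM bits per chip / SRAM bits per chip
        print(f"{price_ratio}x price, 1/{density_ratio} density "
              f"-> {price_ratio * density_ratio}x cost per bit")
```

That order-of-magnitude gap in cost per bit is why SRAM is reserved for caches while main memory stays DRAM.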