1
votes

In a computer, the cache memory caches main memory in units called cache lines. Let's say that we increase the cache line size. Spatial locality improves, right?

But I see no improvement in temporal locality, because temporal locality of reference means accessing the same memory location repeatedly.

Can we actually improve temporal locality of reference? I feel that it cannot be done, because how can one improve accesses to the same memory location?

Bigger is not better. Cache coherency becomes more expensive, false sharing becomes more likely, the processor stall is longer. – Hans Passant
What happens to cache locality? – user602774
You haven't given a scenario for this. In some circumstances, conceivably the compiler could store the data in a register, which is as fast as it can be. – Andrew Morton

2 Answers

3
votes

Technically, temporal locality is spatial locality :)

Spatial locality says that if a certain memory area M0 is accessed at time T0 then near-future memory accesses are going to be around M0.

Temporal locality is stricter: it says that near-future memory accesses are going to be at M0 itself. You improve it by reusing already-loaded data as much as possible.

Spatial locality is good for reads but not necessarily a good thing for writes, mainly on multiprocessor machines but perhaps on single-core ones, too: the bigger the cache line and the smaller the data types used, the likelier it is that the same cache line is loaded into more than one processor's cache. A write there by one processor would then have to invalidate the cache line in all the other processors.

We are, arguably, around a good size for the cache. Further improvements are probably going to be due to smarter cache algorithms in the CPU and cache-aware algorithms in the executed code, and less due to just bigger caches.

0
votes

Temporal locality is a property exhibited by applications and the way they access data. There is no single practical cache design that is optimal for all cases.

Certainly applications exhibit some form of spatial locality (accessing nearby blocks); however, temporal locality is not always present, e.g. in streaming behavior.

The most important things to consider when designing a cache for temporal locality are the replacement policy and the cache block allocation policy. LRU replacement approaches the optimal case for most applications but not all (there are pathological cases), and is practical only for caches with limited associativity (e.g. 4-way). A cache designer may also choose the allocation policy for writes (e.g. write-no-allocate with the optimization of a write-combining/merging buffer).

Temporal locality optimization is generally a software task, where several techniques can be applied, such as blocking and cache-oblivious algorithms. There is also a very useful metric for characterizing the temporal locality of applications, called reuse distance. This method assumes a fully-associative cache with LRU replacement and a cache-line size of one word. It is usually modeled as a stack: when a reference hits, its position in the stack gives you the distance; you then move it to the top of the stack. From the distances you can generate histograms and see how the application behaves.

There can be endless discussion on locality, because it has been a research topic for decades.