
Answer by rafalcieslak for Direct-mapped instruction cache VS fully associative instruction cache using LRU replacement


This can happen for caches of small size. Let's compare caches of size 2.

In my example, the direct-mapped ("DM") cache will use row A for odd addresses and row B for even addresses.

The LRU cache will use the least recently used row to store values on a miss.

The access pattern I suggest is 13243142 (repeated as many times as one wants).

Here's a breakdown of how both caching algorithms will behave:

    H - hits
    M - misses

    ----- time ----->>>>>
    Accessed:     1 | 3 | 2 | 4 | 3 | 1 | 4 | 2
                   \   \   \   \   \   \   \   \
    LRU  A    ? | ? | 3 | 3 | 4 | 4 | 1 | 1 | 2 |
         B    ? | 1 | 1 | 2 | 2 | 3 | 3 | 4 | 4 |
                  M   M   M   M   M   M   M   M
    DM   A    ? | 1 | 3 | 3 | 3 | 3 | 1 | 1 | 1 |
         B    ? | ? | ? | 2 | 4 | 4 | 4 | 4 | 2 |
                  M   M   M   M   H   M   H   M
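
To double-check this trace, here is a minimal Python sketch of both caches. The two-row setup and the access pattern are the ones described above; the modulo row indexing, the function names, and the output format are illustrative choices of mine, not part of the original setup.

    from collections import OrderedDict

    PATTERN = [1, 3, 2, 4, 3, 1, 4, 2]

    def simulate_dm(accesses, num_rows=2):
        """Direct-mapped cache: address addr maps to row addr % num_rows, so with
        2 rows the odd addresses share one row and the even addresses the other."""
        rows = [None] * num_rows
        trace = []
        for addr in accesses:
            row = addr % num_rows
            trace.append('H' if rows[row] == addr else 'M')
            rows[row] = addr                      # on a miss the old entry is overwritten
        return trace

    def simulate_lru(accesses, size=2):
        """Fully associative cache of `size` entries with LRU replacement."""
        cache = OrderedDict()                     # most recently used entry is kept last
        trace = []
        for addr in accesses:
            if addr in cache:
                trace.append('H')
                cache.move_to_end(addr)
            else:
                trace.append('M')
                if len(cache) >= size:
                    cache.popitem(last=False)     # evict the least recently used entry
                cache[addr] = True
        return trace

    print('DM :', ' '.join(simulate_dm(PATTERN)))    # M M M M H M H M
    print('LRU:', ' '.join(simulate_lru(PATTERN)))   # M M M M M M M M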

That gives 8 misses out of 8 for the LRU cache, and 6 for the direct-mapped one. Let's see what happens if this pattern gets repeated forever:

    ----- time ----->>>>>
    Accessed:     1 | 3 | 2 | 4 | 3 | 1 | 4 | 2
                   \   \   \   \   \   \   \   \
    LRU  A      | 2 | 3 | 3 | 4 | 4 | 1 | 1 | 2 |
         B      | 1 | 1 | 2 | 2 | 3 | 3 | 4 | 4 |
                  M   M   M   M   M   M   M   M
    DM   A      | 1 | 3 | 3 | 3 | 3 | 1 | 1 | 1 |
         B      | 2 | 2 | 2 | 4 | 4 | 4 | 4 | 2 |
                  H   M   H   M   H   M   H   M

So in the steady state the direct-mapped cache hits 50% of the time, which outperforms the 0% hit rate of the LRU cache.
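
Reusing the helpers from the sketch above, repeating the pattern many times shows the hit rates converging to exactly these figures (the direct-mapped cache stays slightly below 50% only because of the cold-start misses of the first pass):

    long_run = PATTERN * 1000

    dm  = simulate_dm(long_run)
    lru = simulate_lru(long_run)

    print('DM  hit rate:', dm.count('H') / len(dm))     # 0.49975, approaching 50%
    print('LRU hit rate:', lru.count('H') / len(lru))   # 0.0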

This works because:

  • Any address repeated in this pattern has not been accessed during the previous 2 accesses (and both of those went to different addresses), so the LRU cache will always miss.
  • The DM cache will miss only some of the time, because the pattern is designed to reuse whatever was stored the last time the corresponding row was used.

Therefore one can build similar patterns for larger cache sizes, but the larger the cache, the longer such a pattern needs to be. This matches the intuition that larger caches are harder to exploit in this way.
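
One possible way to build such a pattern, sketched below, is my own construction rather than something from the reasoning above: for a direct-mapped cache with S rows (and a fully associative LRU cache with S entries), pick two addresses per row, touch both addresses of every row once, and then touch them all again with the two addresses of each row swapped. One period then has length 4*S, so the pattern indeed grows with the cache size.

    def adversarial_pattern(num_rows):
        """One period of an access pattern that (with the usual addr % num_rows
        indexing) misses on every access in an LRU cache of num_rows entries,
        while a direct-mapped cache with num_rows rows still hits 50% of the
        time in the steady state. Addresses i and i + num_rows share row i."""
        first_half  = [x for i in range(num_rows) for x in (i, i + num_rows)]
        second_half = [x for i in range(num_rows) for x in (i + num_rows, i)]
        return first_half + second_half

    # For num_rows = 2 this yields 0 2 1 3 2 0 3 1, i.e. the pattern above up to renaming.
    period = adversarial_pattern(3)
    print(simulate_dm(period * 100, num_rows=3).count('H'))   # 597 hits out of 1200, near 50%
    print(simulate_lru(period * 100, size=3).count('H'))      # 0 hits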

