52
votes

I am trying to profile and optimize algorithms and I would like to understand the specific impact of the caches on various processors. For recent Intel x86 processors (e.g. Q9300), it is very hard to find detailed information about cache structure. In particular, most web sites (including Intel.com) that post processor specs do not include any reference to L1 cache. Is this because the L1 cache does not exist or is this information for some reason considered unimportant? Are there any articles or discussions about the elimination of the L1 cache?

[edit] After running various tests and diagnostic programs (mostly those discussed in the answers below), I have concluded that my Q9300 seems to have a 32K L1 data cache. I still haven't found a clear explanation as to why this information is so difficult to come by. My current working theory is that the details of L1 caching are now being treated as trade secrets by Intel.

7
This was indicated by Norman Ramsey in a comment below, but I didn't realize what he meant at the time. CPUID is an x86 instruction which can be used to query cache details. – Brent Bradburn
I just encountered the lscpu command on Linux, which gives a very nice display of CPU data on x86 -- including a cache summary. – Brent Bradburn

7 Answers

67
votes

It is nearly impossible to find specs on Intel caches. When I was teaching a class on caches last year, I asked friends inside Intel (in the compiler group) and they couldn't find specs.

But wait!!! Jed, bless his soul, tells us that on Linux systems, you can squeeze lots of information out of the kernel:

grep . /sys/devices/system/cpu/cpu0/cache/index*/*

This will give you associativity, set size, and a bunch of other information (but not latency). For example, I learned that although AMD advertises their 128K L1 cache, my AMD machine has a split I and D cache of 64K each.
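
If you'd rather read those sysfs files from a program than from grep, here is a minimal C sketch of the same idea (it assumes the standard Linux sysfs cache layout used above; level, type, size, and ways_of_associativity are stock kernel attribute names):

#include <stdio.h>
#include <string.h>

/* Print a few attributes for each cache "index" directory of cpu0.
   Paths follow the same sysfs layout as the grep command above. */
int main(void) {
    const char *attrs[] = { "level", "type", "size", "ways_of_associativity" };
    for (int idx = 0; ; idx++) {
        char path[128], value[64];
        snprintf(path, sizeof path,
                 "/sys/devices/system/cpu/cpu0/cache/index%d/level", idx);
        FILE *probe = fopen(path, "r");
        if (!probe)
            break;                      /* no more cache index directories */
        fclose(probe);

        printf("index%d:", idx);
        for (size_t a = 0; a < sizeof attrs / sizeof attrs[0]; a++) {
            snprintf(path, sizeof path,
                     "/sys/devices/system/cpu/cpu0/cache/index%d/%s", idx, attrs[a]);
            FILE *f = fopen(path, "r");
            if (f && fgets(value, sizeof value, f)) {
                value[strcspn(value, "\n")] = '\0';
                printf(" %s=%s", attrs[a], value);
            }
            if (f)
                fclose(f);
        }
        printf("\n");
    }
    return 0;
}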


Two suggestions which are now mostly obsolete thanks to Jed:

  • AMD publishes a lot more information about its caches, so you can at least get some information about a modern cache. For example, last year's AMD L1 caches delivered two words per cycle (peak).

  • The open-source tool valgrind has all sorts of cache models inside it, and it is invaluable for profiling and understanding cache behavior. It comes with a very nice visualization tool kcachegrind which is part of the KDE SDK.


For example: in Q3 2008, AMD K8/K10 CPUs use 64 byte cache lines, with split L1I/L1D caches of 64kB each. L1D is 2-way associative and exclusive with L2, with a latency of 3 cycles. L2 cache is 16-way associative and latency is about 12 cycles.

AMD Bulldozer-family CPUs use a split L1, with a 16kiB 4-way associative L1D per integer cluster (two clusters per module).

Intel CPUs have kept L1 the same for a long time (from Pentium M through Haswell to Skylake, and presumably many generations after that): split I and D caches of 32kB each, with L1D being 8-way associative. 64 byte cache lines, matching the burst-transfer size of DDR DRAM. Load-use latency is ~4 cycles.
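
As a quick back-of-the-envelope check of how those L1 parameters fit together (my own arithmetic, not from any Intel document):

#include <stdio.h>

int main(void) {
    unsigned int size_bytes = 32 * 1024;  /* 32kB L1D */
    unsigned int ways       = 8;          /* 8-way set associative */
    unsigned int line_bytes = 64;         /* 64 byte cache lines */
    unsigned int sets       = size_bytes / (ways * line_bytes);  /* = 64 */
    /* 64-byte lines give 6 offset bits and 64 sets give 6 index bits, so the
       set index comes entirely from address bits [11:6] -- i.e. from within
       the 4kB page offset, which is what makes this geometry VIPT-friendly. */
    printf("sets = %u\n", sets);
    return 0;
}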

Also see the tag wiki for links to more performance and microarchitectural data.

29
votes

This Intel manual, the Intel® 64 and IA-32 Architectures Optimization Reference Manual, has a decent discussion of cache considerations.

(Figure: excerpt from page 46, Section 2.2.5.1 of the Intel® 64 and IA-32 Architectures Optimization Reference Manual.)

Even MicroSlop is waking up to the need for more tools to monitor cache usage and performance, and has a GetLogicalProcessorInformation() function example (...while blazing new trails in creating ridiculously long function names in the process) that I think I'll code up.
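
For what it's worth, a minimal sketch of using that API to list the caches might look like the following (my own rough version, not the example from the documentation):

#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    DWORD len = 0;
    /* First call with a NULL buffer just reports the required size. */
    GetLogicalProcessorInformation(NULL, &len);
    SYSTEM_LOGICAL_PROCESSOR_INFORMATION *info = malloc(len);
    if (!info || !GetLogicalProcessorInformation(info, &len)) {
        fprintf(stderr, "GetLogicalProcessorInformation failed\n");
        return 1;
    }
    for (DWORD i = 0; i < len / sizeof *info; i++) {
        if (info[i].Relationship == RelationCache) {
            const CACHE_DESCRIPTOR *c = &info[i].Cache;
            printf("L%u cache: %lu bytes, %u-way, %u byte lines\n",
                   (unsigned)c->Level, (unsigned long)c->Size,
                   (unsigned)c->Associativity, (unsigned)c->LineSize);
        }
    }
    free(info);
    return 0;
}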

UPDATE I: Haswell increases cache load performance 2X, from Inside the Tock; Haswell's Architecture

If there were any doubt how critical it is to make the best possible use of cache, this presentation by Cliff Click, formerly of Azul, should dispel any and all doubt. In his words, "memory is the new disk!".

Haswell’s URS (Unified Reservation Station)

UPDATE II: SkyLake's significantly improved cache performance specifications.

SkyLake Cache Specifications

8
votes

You are looking at the consumer specifications, not the developer specifications. Here is the documentation you want. Cache sizes vary between sub-models of a processor family, so they are typically not in the IA-32 development manuals, but you can easily look them up on NewEgg and such.

Edit: More specifically: Chapter 10 of Volume 3A (Systems Programming Guide), Chapter 7 of the Optimization Reference Manual, and potentially something in the TLB page-caching manual, although I would assume that one is further out from the L1 than you care about.

8
votes

I did some more investigating. There is a group at ETH Zurich who built a memory-performance evaluation tool which might be able to determine at least the size (and maybe also the associativity) of the L1 and L2 caches. The program works by experimentally trying different read patterns and measuring the resulting throughput. A simplified version was used for the popular textbook by Bryant and O'Hallaron.
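
To give a flavor of the approach, here is a crude sketch of the idea (my own simplification, not their tool): time strided accesses over buffers of increasing size and watch for jumps in time per access as each cache level is exceeded. Hardware prefetching smooths the picture on real machines, which is why serious tools use more careful access patterns.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define STRIDE 64              /* assumed cache line size in bytes */
#define ITERS  (1 << 24)       /* accesses per buffer size */

int main(void) {
    for (size_t size = 4 * 1024; size <= 16 * 1024 * 1024; size *= 2) {
        volatile char *buf = malloc(size);
        for (size_t i = 0; i < size; i++)
            buf[i] = (char)i;                /* warm up: touch every byte */

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        size_t idx = 0;
        for (long i = 0; i < ITERS; i++) {
            buf[idx] += 1;                   /* touch one cache line */
            idx = (idx + STRIDE) % size;     /* walk the buffer line by line */
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
        printf("%8zu kB: %.2f ns/access\n", size / 1024, ns / ITERS);
        free((void *)buf);
    }
    return 0;
}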

2
votes

L1 caches exist on these platforms. This will almost definitely remain true until memory and front-side bus speeds exceed the speed of the CPU, which is very likely a long way off.

On Windows, you can use GetLogicalProcessorInformation to get some level of cache information (size, line size, associativity, etc.). The Ex version on Win7 will give even more data, like which cores share which cache. CPU-Z also gives this information.

2
votes

Locality of reference has a major impact on the performance of some algorithms; the size and speed of the L1, L2 (and on newer CPUs L3) caches obviously play a large part in this. Matrix multiplication is one such algorithm; see the sketch below.
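
As a rough illustration (a sketch of my own, with an arbitrary matrix size): swapping the two inner loops of the textbook triple loop changes the access pattern to B from column-wise strides to sequential row-wise streaming, which typically makes a large difference in runtime even though both versions do exactly the same arithmetic.

#include <stdio.h>
#include <string.h>
#include <time.h>

#define N 512   /* arbitrary size chosen for illustration */

static double A[N][N], B[N][N], C[N][N];

/* Naive order: the inner loop walks B column-wise (stride of N doubles),
   so nearly every access to B touches a new cache line. */
static void matmul_ijk(void) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            double sum = 0.0;
            for (int k = 0; k < N; k++)
                sum += A[i][k] * B[k][j];
            C[i][j] = sum;
        }
}

/* Cache-friendly order: B and C are streamed row-wise, so consecutive
   iterations hit the same cache lines. Requires C to start zeroed. */
static void matmul_ikj(void) {
    for (int i = 0; i < N; i++)
        for (int k = 0; k < N; k++) {
            double a = A[i][k];
            for (int j = 0; j < N; j++)
                C[i][j] += a * B[k][j];
        }
}

static double time_one(void (*fn)(void)) {
    memset(C, 0, sizeof C);
    clock_t t0 = clock();
    fn();
    return (double)(clock() - t0) / CLOCKS_PER_SEC;
}

int main(void) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            A[i][j] = i + j;
            B[i][j] = i - j;
        }
    printf("ijk: %.2f s\n", time_one(matmul_ijk));
    printf("ikj: %.2f s\n", time_one(matmul_ikj));
    return 0;
}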

1
votes

Intel Manual Vol. 2 specifies the following formula to compute cache size:

Cache Size in Bytes

= (Ways + 1) * (Partitions + 1) * (Line_Size + 1) * (Sets + 1)

= (EBX[31:22] + 1) * (EBX[21:12] + 1) * (EBX[11:0] + 1) * (ECX + 1)

where Ways, Partitions, Line_Size and Sets are queried using cpuid with eax set to 0x04 and ecx set to the index of the cache to query.

Given the following declaration in a header file

x86_cache_size.h:

//Returns the total size in bytes of the cache selected by cache_index, the
//CPUID leaf-4 subleaf (on typical Intel CPUs: 0 = L1D, 1 = L1I, 2 = L2, 3 = L3).
unsigned int get_cache_size(unsigned int cache_index);

The implementation (x86-64 assembly, System V calling convention: the cache index arrives in edi) looks as follows:

global get_cache_size

;1st argument - the CPUID leaf-4 subleaf index selecting the cache
get_cache_size:
    push rbx
    ;set the cache index argument to be used with the CPUID instruction
    mov ecx, edi
    ;set cpuid initial value (leaf 0x04: deterministic cache parameters)
    mov eax, 0x04
    cpuid

    ;cache line size = EBX[11:0] + 1
    mov eax, ebx
    and eax, 0xfff
    inc eax

    ;partitions = EBX[21:12] + 1
    shr ebx, 12
    mov edx, ebx
    and edx, 0x3ff
    inc edx
    mul edx

    ;ways of associativity = EBX[31:22] + 1
    shr ebx, 10
    mov edx, ebx
    and edx, 0x3ff
    inc edx
    mul edx

    ;number of sets = ECX + 1 (cpuid overwrote ecx with Sets - 1)
    inc ecx
    mul ecx

    pop rbx

    ret

Which on my machine works as follows:

#include "x86_cache_size.h"

int main(void){
    unsigned int L1_cache_size = get_cache_line_size(1);
    unsigned int L2_cache_size = get_cache_line_size(2);
    unsigned int L3_cache_size = get_cache_line_size(3);
    //L1 size = 32768, L2 size = 262144, L3 size = 8388608
    printf("L1 size = %u, L2 size = %u, L3 size = %u\n", L1_cache_size, L2_cache_size, L3_cache_size);
}
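
For reference, the same formula can be computed without assembly using the <cpuid.h> helper that GCC and Clang ship (a sketch assuming one of those compilers; MSVC would need a different intrinsic such as __cpuidex):

#include <stdio.h>
#include <cpuid.h>   /* GCC/Clang wrapper around the cpuid instruction */

/* Same computation as the assembly version, in C. */
static unsigned int cache_size_bytes(unsigned int cache_index) {
    unsigned int eax, ebx, ecx, edx;
    if (!__get_cpuid_count(0x04, cache_index, &eax, &ebx, &ecx, &edx))
        return 0;                           /* leaf 0x04 not supported */
    if ((eax & 0x1f) == 0)
        return 0;                           /* cache type 0: no such cache */
    unsigned int ways       = ((ebx >> 22) & 0x3ff) + 1;
    unsigned int partitions = ((ebx >> 12) & 0x3ff) + 1;
    unsigned int line_size  = (ebx & 0xfff) + 1;
    unsigned int sets       = ecx + 1;
    return ways * partitions * line_size * sets;
}

int main(void) {
    /* On typical Intel CPUs, index 0 = L1D, 1 = L1I, 2 = L2, 3 = L3. */
    for (unsigned int i = 0; cache_size_bytes(i) != 0; i++)
        printf("cache index %u: %u bytes\n", i, cache_size_bytes(i));
    return 0;
}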