3
votes

I was reading the MDS attack paper RIDL: Rogue In-Flight Data Load. They set pages as write-back, write-through, write-combining or uncacheable, and with different experiments determine that the Line Fill Buffer is the cause of the micro-architectural leaks.


On a tangent: I was aware that memory can be uncacheable, but I assumed that cacheable data was always cached in a write-back cache, i.e. I assumed that the L1, L2 and LLC were always write-back caches.

I read up on the differences between write-back and write-through caches in my Computer Architecture book. It says:

Write-through caches are simpler to implement and can use a write buffer that works independently of the cache to update memory. Furthermore, read misses are less expensive because they do not trigger a memory write. On the other hand, write-back caches result in fewer transfers, which allows more bandwidth to memory for I/O devices that perform DMA. Further, reducing the number of transfers becomes increasingly important as we move down the hierarchy and the transfer times increase. In general, caches further down the hierarchy are more likely to use write-back than write-through.

So a write-through cache is simpler to implement. I can see how that can be an advantage. But if the caching policy is settable by the page table attributes then there can't be an implementation advantage: every cache needs to be able to work in either write-back or write-through mode.

Questions

  1. Can every cache (L1, L2, LLC) work in either write-back or write-through mode? So if the page attribute is set to write-through, will they all be write-through?
  2. Write combining is useful for GPU memory; uncacheable is good when accessing hardware registers. When should a page be set to write-through? What are the advantages of that?
  3. Are there any write-through caches (if it really is a property of the hardware and not just something that is controlled by the page-table attributes), or is the trend that all caches are created as write-back to reduce traffic?

1 Answer

2
votes

Can every cache (L1, L2, LLC) work in either write-back or write-through mode?

In most x86 microarchitectures, yes, all the data / unified caches are (capable of) write-back and used in that mode for all normal DRAM. Which cache mapping technique is used in intel core i7 processor? has some details and links. Unless otherwise specified, the default assumption by anyone talking about x86 is that DRAM pages will be WB.
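
As an aside (my illustration, not part of the original answer): on Linux you can eyeball this by dumping the MTRRs, which on a typical machine report the large DRAM ranges as write-back. A minimal sketch, assuming x86 Linux with /proc/mtrr available; reading it may require root on some configurations.

    /* Print the x86 MTRR ranges; on typical systems the large DRAM
     * ranges are reported as "write-back". */
    #include <stdio.h>

    int main(void) {
        FILE *f = fopen("/proc/mtrr", "r");
        if (!f) { perror("fopen /proc/mtrr"); return 1; }
        char line[256];
        while (fgets(line, sizeof line, f))
            fputs(line, stdout);  /* e.g. "reg00: base=0x0 (0MB), size=2048MB ... write-back" */
        fclose(f);
        return 0;
    }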

AMD Bulldozer made the unconventional choice to use a write-through L1d with a small 4k write-combining buffer between it and L2 (https://www.realworldtech.com/bulldozer/8/). This has many disadvantages and is, I think, widely regarded (in hindsight) as one of several weaknesses or even design mistakes of the Bulldozer family (which AMD fixed for Zen). Note also that Bulldozer was an experiment in CMT instead of SMT (two weak integer cores, each with a separate L1d cache, sharing an FPU/SIMD unit and an L2 cache); https://www.realworldtech.com/bulldozer/3/ shows the system architecture.

But of course Bulldozer's L2 and L3 caches were still WB; the architects weren't insane. WB caching is essential to reduce bandwidth demands for shared LLC and memory. And even the write-through L1d needed a write-combining buffer to allow L2 cache to be larger and slower, thus serving its purpose of sometimes hitting when L1d misses. See also Why is the size of L1 cache smaller than that of the L2 cache in most of the processors?

Write-through caching can simplify a design (especially of a single-core system), but CPUs generally moved beyond that decades ago (Write-back vs Write-Through caching?). IIRC, some non-CPU workloads sometimes benefit from write-through caching, especially without write-allocate, so that writes don't pollute the cache. x86 has NT stores to avoid that problem.
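
To make the NT-store point concrete, here is a minimal sketch (my illustration; the function name is made up) of filling a write-only buffer with movnt stores so the destination lines never get allocated in cache:

    #include <emmintrin.h>  /* SSE2: _mm_stream_si128, _mm_sfence */
    #include <stddef.h>
    #include <stdint.h>

    /* Fill a 16-byte-aligned buffer with NT stores: the data goes
     * through write-combining buffers to memory without allocating
     * the destination lines in cache, so it doesn't evict useful data. */
    void fill_nt(void *dst, uint64_t value, size_t bytes) {
        __m128i v = _mm_set1_epi64x((long long)value);
        __m128i *p = (__m128i *)dst;
        for (size_t i = 0; i < bytes / 16; i++)
            _mm_stream_si128(p + i, v);
        _mm_sfence();  /* NT stores are weakly ordered; fence before publishing */
    }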

So if the page attribute is set to write-through, then they all will be write-through?

Yes: in a page that's marked WT, every store has to go all the way to DRAM.

The caches are optimized for WB because that's what everyone uses, but hopefully they do support passing the line on to outer caches without evicting it from L1d. (So WT doesn't necessarily turn stores into something like movntps cache-bypassing / evicting stores. But check on that; apparently on some CPUs, the Pentium Pro family at least, a WT store hit in L1 updates the line, but a WT hit in L2 evicts the line instead of bringing it into L1d.)

When should a page be set to write-through? What are the advantages to that?

Basically never; (almost?) all CPU workloads do best with WB memory.

OSes don't even bother to make it easy (or possible?) for user-space to allocate WC or WT DRAM pages. (Although that certainly doesn't prove they're never useful.) e.g. on CPU cache inhibition, I found a link to a Linux patch that never made it into the mainline kernel and that would have added the possibility of mapping a page WT.
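
For what the kernel side looks like: a hypothetical, untested fragment of a Linux driver (the function names and the idea of mapping a framebuffer are mine). In-kernel code gets WC mappings via ioremap_wc(); newer kernels also have ioremap_wt(), but there's no comparable mmap flag for ordinary user-space DRAM pages.

    #include <linux/io.h>

    /* Map a device's physical range write-combining: weakly ordered,
     * stores merged in WC buffers (the typical choice for framebuffers). */
    static void __iomem *map_fb_wc(phys_addr_t base, size_t len)
    {
        return ioremap_wc(base, len);
    }

    static void unmap_fb(void __iomem *p)
    {
        iounmap(p);
    }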

WB, WC, and UC are common for normal DRAM, device memory (especially GPU), and MMIO respectively.

I have seen at least one paper that benchmarked WT vs. WB vs. UC vs. WC for some workload (googled but didn't find it, sorry). And people testing obscure x86 stuff will sometimes include it for completeness. e.g. The Microarchitecture Behind Meltdown is a good article in general (and related to what you're reading up on).

One of the few advantages of WT is that stores end up in L3 promptly, where loads from other cores can hit them. This may possibly be worth the extra cost for every store to that page, especially if you're careful to manually combine your writes into one large 32-byte AVX store (or a 64-byte AVX512 full-line write). And of course only use that page for shared data.
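
A sketch of what "manually combine your writes" could look like (hypothetical names; assumes AVX and a 32-byte-aligned destination): build the eight 4-byte results in a register and publish them with a single 32-byte store, so the hypothetical WT page takes one write-through instead of eight.

    #include <immintrin.h>  /* AVX: _mm256_store_si256 */
    #include <stdint.h>

    /* One aligned 32-byte store instead of eight 4-byte stores.
     * With AVX512, _mm512_store_si512 would cover a full 64-byte line. */
    void publish8(uint32_t *dst, const uint32_t vals[8]) {
        __m256i v = _mm256_loadu_si256((const __m256i *)vals);
        _mm256_store_si256((__m256i *)dst, v);
    }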

I haven't seen anyone ever recommend doing this, though, and it's not something I've tried. Probably because the extra DRAM bandwidth for writing through L3 as well isn't worth the benefit for most use-cases. But probably also because you might have to write a kernel module to get a page mapped that way.

And it might not even work quite this way, if CPUs evict from outer caches on an L2 or L3 hit for a WT store, as @Lewis comments that PPro is documented to do.

So maybe I'm wrong about the purpose of WT, and it's intended (or at least usable) for device-memory use-cases, like maybe parts of video RAM that the GPU won't modify.