
I am playing around with mmap and /proc/mtrr in an effort to do some in-depth analysis of physical memory access. Here is the basic idea of what I am trying to do and a summary of what I have done so far. I am on Ubuntu, kernel version 3.5.0-54-generic.

I am basically mmapping a specific physical address (using hints from /proc/iomem) and measuring the access latency of that physical address range. Here is what I have done so far:

  1. Created an entry in /proc/mtrr to make the physical address range that I will be mmapping uncachable (see the sketch after this list).
  2. mmapped the specific address via /dev/mem. I had to relax the security restrictions in order to read more than 1 MB from /dev/mem.
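
For reference, here is a minimal sketch of how the entry in step 1 can be created; it is equivalent to echoing the same line into /proc/mtrr as root, and the base/size values match the 8 KB region at 0x20000 that I map below:

    #include <stdio.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>

    /* Sketch: add an uncachable MTRR entry for the 8 KB region at physical
     * address 0x20000 by writing to /proc/mtrr (must be run as root).
     * The entry persists until it is explicitly disabled. */
    int main(void) {
        const char *entry = "base=0x20000 size=0x2000 type=uncachable\n";
        int fd = open("/proc/mtrr", O_WRONLY);
        if (fd == -1) {
            perror("open /proc/mtrr");
            return 1;
        }
        if (write(fd, entry, strlen(entry)) == -1)
            perror("write /proc/mtrr");
        close(fd);
        return 0;
    }

Note that base and size have to be suitably aligned (here 0x20000 is a multiple of the 0x2000 size), otherwise the write is rejected.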

While I am able to execute the program with no issues, I have some doubts about whether the uncachable part actually works. Here is a snippet of the code I am using; note that it is based on pseudocode from a prior research paper.

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/mman.h>
    #include <time.h>

    int main(int argc, char *argv[]) {
        int fd; // file descriptor for /dev/mem
        struct timespec t1, t2; // reserved for latency timing (unused in this snippet)

        fd = open("/dev/mem", O_RDWR | O_SYNC);
        if (fd == -1) {
            printf("\n Error opening /dev/mem");
            return 0;
        }

        // Map 8 KB of physical memory starting at physical offset 0x20000
        char *addr = (char *)mmap(NULL, 8192, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0x20000);
        if (addr == MAP_FAILED) {
            printf("\n mmap() failed");
            close(fd);
            return 0;
        }

        // Begin accessing
        char *addr1 = addr;
        char *addr2 = addr1 + 64; // one cache line ahead

        unsigned int i = 0;
        unsigned int j = 0;
        // Walk the (supposedly uncached) region one cache line at a time,
        // bouncing reads/writes between addr1 and addr2
        while (j < 8192) {
            i = 0;
            while (i < 500) {
                *addr1 = *addr2 + i;
                *addr2 = *addr1 + i;
                i = i + 1;
            }
            j = j + 64;
            addr2 = addr1 + j;
        }

        if (munmap(addr, 8192) == -1) {
            printf("\n Unmapping failed");
            close(fd);
            return 0;
        }
        close(fd);
        printf("\n Success......");
        return 0;
    }
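
The latency measurement itself is omitted above for brevity; roughly, the access loop is wrapped with clock_gettime(CLOCK_MONOTONIC), along the lines of this hypothetical helper (the reporting format is just illustrative; older glibc needs -lrt for clock_gettime):

    #include <stdio.h>
    #include <time.h>

    /* Hypothetical helper: time a read/write sweep over `len` bytes of `buf`,
     * stepping one 64-byte cache line at a time, and report the average
     * latency per access. `buf` is volatile so the accesses are not
     * optimized away by the compiler. */
    static void measure_latency(volatile char *buf, unsigned int len) {
        struct timespec t1, t2;
        unsigned long accesses = 0;
        unsigned int i, j;
        double ns;

        clock_gettime(CLOCK_MONOTONIC, &t1);
        for (j = 64; j < len; j += 64) {
            for (i = 0; i < 500; i++) {
                buf[0] = buf[j] + i;
                buf[j] = buf[0] + i;
                accesses += 2;
            }
        }
        clock_gettime(CLOCK_MONOTONIC, &t2);

        ns = (t2.tv_sec - t1.tv_sec) * 1e9 + (t2.tv_nsec - t1.tv_nsec);
        printf("\n %lu accesses, %.1f ns per access", accesses, ns / accesses);
    }

Something like measure_latency(addr, 8192) would be called right after the mmap() in the program above.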

I use the offset 0x20000 based on the output of /proc/iomem, shown below (only the relevant portion):

00000000-0000ffff : reserved
**00010000-0009e3ff : System RAM**
0009e400-0009ffff : RAM buffer
000a0000-000bffff : PCI Bus 0000:00
000a0000-000b0000 : PCI Bus 0000:20
000c0000-000effff : PCI Bus 0000:00

The following are the entries in /proc/mtrr:

reg00: base=0x0d3f00000 ( 3391MB), size=    1MB, count=1: uncachable
reg01: base=0x0d4000000 ( 3392MB), size=   64MB, count=1: uncachable
reg02: base=0x0d8000000 ( 3456MB), size=  128MB, count=1: uncachable
reg03: base=0x0e0000000 ( 3584MB), size=  512MB, count=1: uncachable
reg04: base=0x000020000 (    0MB), size=    8KB, count=1: uncachable

As you can see, the final entry makes the address region of interest uncachable.

While I have no problems running the code, I have the following concerns:

  1. Is it correct to pick that particular physical address range, denoted as System RAM, to do reads/writes? My understanding is that this address range is used to store data and code. In addition, reading /dev/mem with hexdump shows that the region is uninitialized (all zeros).
  2. To check whether the accesses to the uncached region are actually uncached, I run perf stat -e cache-misses:u to measure how many cache misses occur. I get a number around 128,200. To me this confirms that the accesses are not cached and are going to RAM, since the loop performs (8192/64)*500*2 = 128,000 accesses. I did the same perf exercise with a similar piece of code in which the mmap is replaced with a dynamic allocation of a character array of the same length (see the sketch after this list). In that case perf stat reported far fewer cache misses.
  3. To re-check that I am indeed bypassing the cache and going to memory, I changed the offset to another value within the System RAM range (say 0x80000) and ran the perf command to measure how many cache misses occur. The confusion here is that it reports back almost the same number of cache misses as in the previous case (around 128,200). I would expect something much lower, since I have not made that physical address region uncachable.
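
The heap-based comparison mentioned in point 2 was, roughly, the same access loop run over a malloc'd buffer of the same size; a sketch of what that comparison program looks like (only the allocation differs):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        // Same size and access pattern as the /dev/mem version, but over a
        // normal (cacheable) heap allocation, for the perf comparison.
        char *addr1 = malloc(8192);
        if (addr1 == NULL) {
            printf("\n malloc() failed");
            return 0;
        }
        char *addr2 = addr1 + 64;

        unsigned int i = 0;
        unsigned int j = 0;
        while (j < 8192) {
            i = 0;
            while (i < 500) {
                *addr1 = *addr2 + i;
                *addr2 = *addr1 + i;
                i = i + 1;
            }
            j = j + 64;
            addr2 = addr1 + j;
        }
        free(addr1);
        return 0;
    }

perf stat -e cache-misses:u is then run on this binary in the same way as on the mmap version.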

Any suggestions or feedback that would help me understand this observation would be appreciated.

Thanks

Can anyone help? Really curious to know the reason behind this observation. Thanks. – Anirudh Kaushik

1 Answer


I think I figured it out. The man page for mmap says that with MAP_PRIVATE, changes are not carried through to the underlying file. After changing the mapping to MAP_SHARED and enabling the entry in /proc/mtrr, the number of cache misses and hits changes significantly.
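
That is, the only change relative to the code in the question is the mapping flag (with the MTRR entry for the region still in place):

    // MAP_SHARED so the writes actually reach the underlying physical pages
    // instead of private copy-on-write copies of them.
    char *addr = (char *)mmap(NULL, 8192, PROT_READ | PROT_WRITE,
                              MAP_SHARED, fd, 0x20000);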