3 votes

I'm using a Xilinx Zynq 7000 ARM-based SoC. I'm struggling with DMA buffers (see: Need help mapping pre-reserved **cacheable** DMA buffer on Xilinx/ARM SoC (Zynq 7000)), so one thing I've pursued is a faster memcpy.

I've been looking at writing a faster memcpy for ARM using Neon instructions and inline asm. Whatever glibc provides performs poorly, especially when copying from an uncached DMA buffer.

I've put together my own copy function from various sources.

The main difference for me is that I'm trying to copy from an uncached buffer because it's a DMA buffer, and ARM support for cached DMA buffers is nonexistent.

So here's what I wrote:

void my_copy(volatile unsigned char *dst, volatile unsigned char *src, int sz)
{
    /* Round sz up to the next multiple of 64 bytes; each loop iteration
       copies exactly 64 bytes, and the buffers are allocated in 64-byte units. */
    if (sz & 63) {
        sz = (sz & -64) + 64;
    }
    asm volatile (
        "NEONCopyPLD:                          \n"
        "    VLDM %[src]!,{d0-d7}              \n"   /* load 64 bytes, post-increment src */
        "    VSTM %[dst]!,{d0-d7}              \n"   /* store 64 bytes, post-increment dst */
        "    SUBS %[sz],%[sz],#0x40            \n"   /* sz -= 64, update flags */
        "    BGT NEONCopyPLD                   \n"
        : [dst]"+r"(dst), [src]"+r"(src), [sz]"+r"(sz) : : "d0", "d1", "d2", "d3", "d4", "d5", "d6", "d7", "cc", "memory");
}

The main thing I did was leave out the prefetch instruction because I figured it would be worthless on uncached memory.

Doing this resulted in a speedup of 4.7x over the glibc memcpy. Speed went from about 70MB/sec to about 330MB/sec.

Unfortunately, this isn't nearly as fast as memcpy from cached memory, which runs at around 720MB/sec for the system memcpy and 620MB/sec for the Neon version (the Neon one is probably slower because it doesn't prefetch).

Can anyone help me figure out what I can do to make up for this performance gap?

I've tried a number of things, like copying more per iteration with two loads followed by two stores. I could also try prefetch, just to prove that it's useless here. Any other ideas?
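For concreteness, here's the sort of variant I mean: an untested sketch that adds a PLD back in and splits each iteration into two load/store pairs (the prefetch offset and the register split are arbitrary choices on my part):

void my_copy_pld(volatile unsigned char *dst, volatile unsigned char *src, int sz)
{
    if (sz & 63) {
        sz = (sz & -64) + 64;
    }
    asm volatile (
        "1:                                    \n"
        "    PLD  [%[src], #192]               \n"   /* prefetch a few lines ahead; likely ignored on uncached mappings */
        "    VLDM %[src]!,{d0-d3}              \n"   /* first 32 bytes */
        "    VLDM %[src]!,{d4-d7}              \n"   /* second 32 bytes */
        "    VSTM %[dst]!,{d0-d3}              \n"
        "    VSTM %[dst]!,{d4-d7}              \n"
        "    SUBS %[sz],%[sz],#0x40            \n"
        "    BGT  1b                           \n"
        : [dst]"+r"(dst), [src]"+r"(src), [sz]"+r"(sz) : : "d0", "d1", "d2", "d3", "d4", "d5", "d6", "d7", "cc", "memory");
}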

Is your size a multiple of the level 1 cache line size? – David Wohlferd
I've ensured that the data buffers are aligned on 64-byte boundaries and in 64-byte units. (Technically, the end of the last 64-byte unit may get ignored.) – Timothy Miller
Is your uncached buffer located in DRAM? If so, it's likely impossible to close the gap. Cache excels at hiding memory latency in this kind of workload. If your buffer size is small enough and bandwidth is a real concern, consider moving to an on-chip memory. – Tony K
In my experience, the best approach is experimenting. Maybe don't use VLDM; try single load/store variants, unroll further, and do the SUBS earlier. I would also try a non-Neon version to see if that does better; sometimes Neon has its own memory port, sometimes not. – auselen
@TonyK So far, the largest data block we may want to transfer is just under 32MB. The chip we're working with is a Xilinx Zynq 7000, and there just isn't enough SRAM in the FPGA fabric. The big memory is the main DRAM. – Timothy Miller

2 Answers

0 votes

You can try mapping the buffer as buffered (write-combining) memory rather than fully non-cached memory.
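For example, if a kernel driver exposes the buffer to userspace through mmap, it could map it write-combining instead of strongly-ordered/uncached. A rough sketch, where the handler name and MYDRV_BUF_PHYS (the buffer's physical base address) are placeholders:

#include <linux/fs.h>
#include <linux/mm.h>

/* Sketch of a driver mmap handler; MYDRV_BUF_PHYS is a placeholder for the
   physical base address of the reserved DMA buffer. */
static int mydrv_mmap(struct file *filp, struct vm_area_struct *vma)
{
    unsigned long size = vma->vm_end - vma->vm_start;

    /* Normal memory, non-cacheable but bufferable (write-combining),
       instead of a strongly-ordered/uncached mapping. */
    vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);

    return remap_pfn_range(vma, vma->vm_start,
                           MYDRV_BUF_PHYS >> PAGE_SHIFT,
                           size, vma->vm_page_prot);
}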

0 votes

If you're trying to do large, fast transfers, cached memory will often outperform uncached memory, but as you pointed out, cache coherence for a cached DMA buffer must be managed somewhere, and on ARMv7 and earlier that place is the kernel / kernel driver.

I'm assuming two things about your design:

  • Userspace is reading a memory-mapped hardware buffer
  • There's some sort of signal/event/interrupt from the FPGA to the Cortex-A9 VIC/GIC that tells the Cortex-A9 when a new buffer is available to read.

Align your DMA buffers on cacheline boundaries and do not place anything between the end of the DMA buffer and the next cacheline. Invalidate the cache whenever the FPGA signals the CPU that a buffer is ready.
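In a Linux kernel driver, the streaming DMA API can do that invalidation for you. A rough sketch of the "buffer ready" interrupt handler, where struct my_dev, priv->dev, priv->dma_handle and BUF_SIZE are placeholders for whatever your driver actually uses:

#include <linux/dma-mapping.h>
#include <linux/interrupt.h>

static irqreturn_t fpga_buffer_ready_irq(int irq, void *data)
{
    struct my_dev *priv = data;

    /* Hand ownership of the buffer back to the CPU: this invalidates the
       relevant cache lines so subsequent reads see what the FPGA wrote to DRAM. */
    dma_sync_single_for_cpu(priv->dev, priv->dma_handle, BUF_SIZE, DMA_FROM_DEVICE);

    /* ... wake up the userspace reader ... */
    return IRQ_HANDLED;
}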

I don't think the A9 has a mechanism to control cache lines across all cores and cache levels together, so you may want to pin the program doing this to one core, so that you can skip maintaining caches on the other core.
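From userspace, the pinning itself can be done with sched_setaffinity(); a minimal sketch (the choice of core is arbitrary):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

/* Pin the calling process to a single core so cache maintenance for the DMA
   buffer only has to be correct on that core. */
static int pin_to_core(int core)
{
    cpu_set_t set;

    CPU_ZERO(&set);
    CPU_SET(core, &set);

    if (sched_setaffinity(0 /* this process */, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        return -1;
    }
    return 0;
}

Call something like pin_to_core(0) early in the reader process, before it starts touching the buffer.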