3
votes

I read this in a book:

If the DMA controller in a system functions at a maximum rate of 5 MHz and we still use 100 ns memory, the maximum transfer rate is 5 MHz because the DMA controller is slower than the memory. In many cases, the DMA controller slows the speed of the system when DMA transfers occur.

I thought the whole reason for a DMA controller is to speed things up, not slow things down. So how is it helpful if it slows things down? Why not make the DMA controller as fast as the memory?

2
When was this book written? 1979? – Paul R

2 Answers

5
votes

The whole idea of a DMA controller is that it works in parallel with the processor: the processor can queue a long I/O operation to the DMA controller and happily continue running code. Even though the DMA controller is slower, it only affects that I/O operation, not overall performance. This is especially important when interfacing with slow devices - if the processor had to service them directly, it would never get any other work done. With DMA, the processor hands that slow I/O off to the controller and the transfer proceeds in parallel.

3
votes

There are two different "transfer rates" involved. In a well-designed system, the DMA controller must be able to interface with the address and data bus(es) at their normal operating rate. On the other hand, the time between its operations may be much longer than a CPU instruction cycle, which means that it does not move data from a source address to a destination address at the same pace the CPU would. Since almost all hardware devices attached to a system operate at a much slower pace anyway, this is completely acceptable.

The typical purpose of DMA is to offload the CPU from the mundane task of shoveling bytes from memory to I/O ports. Consider the normal I/O sequence in the middle of a transmission:
- get an interrupt from the port signalling that it is ready for the next byte or word;
- perform interrupt handling, including stack operations and saving registers;
- pick up the pointers and the counter from memory;
- load a data byte, store a data byte;
- increment both pointers and save them;
- decrement the counter and save it; if zero, flag end of transmission;
- perform return-from-interrupt handling.

With DMA in the system, the CPU spends a little more time up front programming the DMA controller, but then avoids all of those interrupts until the end of the transmission. Of course, when the DMA controller accesses memory the CPU cannot; but typically the CPU is not accessing memory on every instruction anyway (add, subtract, and so on all take place inside the CPU with no memory access). On average, then, each byte transferred should cost less than one memory cycle (allowing for the ones that don't interfere), rather than a full interrupt-handling sequence.