		     Dynamic DMA mapping Guide
		     =========================

		 David S. Miller <davem@redhat.com>
		 Richard Henderson <rth@cygnus.com>
		  Jakub Jelinek <jakub@redhat.com>

This is a guide to device driver writers on how to use the DMA API
with example pseudo-code.  For a concise description of the API, see
DMA-API.txt.

                       CPU and DMA addresses

There are several kinds of addresses involved in the DMA API, and it's
important to understand the differences.

The kernel normally uses virtual addresses.  Any address returned by
kmalloc(), vmalloc(), and similar interfaces is a virtual address and can
be stored in a "void *".

The virtual memory system (TLB, page tables, etc.) translates virtual
addresses to CPU physical addresses, which are stored as "phys_addr_t" or
"resource_size_t".  The kernel manages device resources like registers as
physical addresses.  These are the addresses in /proc/iomem.  The physical
address is not directly useful to a driver; it must use ioremap() to map
the space and produce a virtual address.

I/O devices use a third kind of address: a "bus address" or "DMA address".
If a device has registers at an MMIO address, or if it performs DMA to read
or write system memory, the addresses used by the device are bus addresses.
In some systems, bus addresses are identical to CPU physical addresses, but
in general they are not.  IOMMUs and host bridges can produce arbitrary
mappings between physical and bus addresses.

Here's a picture and some examples:

               CPU                  CPU                  Bus
             Virtual              Physical             Address
             Address              Address               Space
              Space                Space

            +-------+             +------+             +------+
            |       |             |MMIO  |   Offset    |      |
            |       |  Virtual    |Space |   applied   |      |
          C +-------+ --------> B +------+ ----------> +------+ A
            |       |  mapping    |      |   by host   |      |
  +-----+   |       |             |      |   bridge    |      |   +--------+
  |     |   |       |             +------+             |      |   |        |
  | CPU |   |       |             | RAM  |             |      |   | Device |
  |     |   |       |             |      |             |      |   |        |
  +-----+   +-------+             +------+             +------+   +--------+
            |       |  Virtual    |Buffer|   Mapping   |      |
          X +-------+ --------> Y +------+ <---------- +------+ Z
            |       |  mapping    | RAM  |   by IOMMU
            |       |             |      |
            |       |             |      |
            +-------+             +------+

During the enumeration process, the kernel learns about I/O devices and
their MMIO space and the host bridges that connect them to the system.  For
example, if a PCI device has a BAR, the kernel reads the bus address (A)
from the BAR and converts it to a CPU physical address (B).  The address B
is stored in a struct resource and usually exposed via /proc/iomem.  When a
driver claims a device, it typically uses ioremap() to map physical address
B at a virtual address (C).  It can then use, e.g., ioread32(C), to access
the device registers at bus address A.
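
As a rough sketch of this flow (BAR 0 and STATUS_REG_OFFSET here are
made up for illustration), a PCI driver might do:

        void __iomem *regs;     /* virtual address C */
        u32 status;

        /* B came from the BAR (bus address A) during enumeration */
        regs = ioremap(pci_resource_start(pdev, 0),
                       pci_resource_len(pdev, 0));
        if (!regs)
                goto err;

        status = ioread32(regs + STATUS_REG_OFFSET);    /* device sees A */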

If the device supports DMA, the driver sets up a buffer using kmalloc() or
a similar interface, which returns a virtual address (X).  The virtual
memory system maps X to a physical address (Y) in system RAM.  The driver
can use virtual address X to access the buffer, but the device itself
cannot because DMA doesn't go through the CPU virtual memory system.

In some simple systems, the device can do DMA directly to physical address
Y.  But in many others, there is IOMMU hardware that translates bus
addresses to physical addresses, e.g., it translates Z to Y.  This is part
of the reason for the DMA API: the driver can give a virtual address X to
an interface like dma_map_single(), which sets up any required IOMMU
mapping and returns the bus address Z.  The driver then tells the device to
do DMA to Z, and the IOMMU maps it to the buffer at address Y in system
RAM.
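
Putting the pieces above together, the X/Y/Z addresses correspond to
code roughly like the following sketch (BUF_SIZE is an arbitrary
illustrative size; dma_map_single() is described in detail later in
this document):

        void *x;                /* virtual address X */
        dma_addr_t z;           /* bus address Z */

        x = kmalloc(BUF_SIZE, GFP_KERNEL); /* kernel maps X to physical Y */
        if (!x)
                goto err;

        z = dma_map_single(dev, x, BUF_SIZE, DMA_FROM_DEVICE);
        if (dma_mapping_error(dev, z))
                goto err;

        /* program the device to DMA to Z; an IOMMU, if present,
         * translates Z back to the buffer at physical address Y.
         */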

For Linux to use dynamic DMA mapping, it needs some help from the
drivers: DMA addresses should be mapped only for the time they are
actually in use, and unmapped after the DMA transfer completes.

The API below will, of course, work even on platforms where no such
hardware exists.

Note that the DMA API works with any bus independent of the underlying
microprocessor architecture.  You should use the DMA API rather than the
bus-specific DMA API, i.e., use the dma_map_*() interfaces rather than the
pci_map_*() interfaces.
First of all, you should make sure

#include <linux/dma-mapping.h>

is in your driver, which provides the definition of dma_addr_t.  This type
can hold any valid DMA or bus address for the platform and should be used
everywhere you hold a DMA address returned from the DMA mapping functions.

                       What memory is DMA'able?

The first piece of information you must know is what kernel memory can
be used with the DMA mapping facilities.  There has been an unwritten
set of rules regarding this, and this text is an attempt to finally
write them down.

If you acquired your memory via the page allocator
(i.e. __get_free_page*()) or the generic memory allocators
(i.e. kmalloc() or kmem_cache_alloc()) then you may DMA to/from
that memory using the addresses returned from those routines.
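
For example (BUF_SIZE is an arbitrary illustrative size), both of the
following buffers may be handed to the DMA mapping interfaces described
later in this document:

        void *buf = kmalloc(BUF_SIZE, GFP_KERNEL);            /* DMA'able */
        unsigned long pg = __get_free_pages(GFP_KERNEL, 2);   /* DMA'able */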

This means specifically that you may _not_ use the memory/addresses
returned from vmalloc() for DMA.  It is possible to DMA to the
_underlying_ memory mapped into a vmalloc() area, but this requires
walking page tables to get the physical addresses, and then
translating each of those pages back to a kernel address using
something like __va().  [ EDIT: Update this when we integrate
Gerd Knorr's generic code which does this. ]

This rule also means that you may use neither kernel image addresses
(items in data/text/bss segments), nor module image addresses, nor
stack addresses for DMA.  These could all be mapped somewhere entirely
different from the rest of physical memory.  Even if those classes of
memory could physically work with DMA, you'd need to ensure the I/O
buffers were cacheline-aligned.  Without that, you'd see cacheline
sharing problems (data corruption) on CPUs with DMA-incoherent caches.
(The CPU could write to one word, DMA would write to a different one
in the same cache line, and one of them could be overwritten.)

Also, this means that you cannot take the return of a kmap()
call and DMA to/from that.  This is similar to vmalloc().

What about block I/O and networking buffers?  The block I/O and
networking subsystems make sure that the buffers they use are valid
for you to DMA from/to.

                       DMA addressing limitations

Does your device have any DMA addressing limitations?  For example, is
your device only capable of driving the low order 24-bits of address?
If so, you need to inform the kernel of this fact.

By default, the kernel assumes that your device can address the full
32-bits.  For a 64-bit capable device, this needs to be increased.
And for a device with limitations, as discussed in the previous
paragraph, it needs to be decreased.

Special note about PCI: the PCI-X specification requires PCI-X devices
to support 64-bit addressing (DAC) for all transactions.  And at least
one platform (SGI SN2) requires 64-bit consistent allocations to
operate correctly when the IO bus is in PCI-X mode.

For correct operation, you must interrogate the kernel in your device
probe routine to see if the DMA controller on the machine can properly
support the DMA addressing limitation your device has.  It is good
style to do this even if your device holds the default setting,
because this shows that you did think about these issues wrt. your
device.

The query is performed via a call to dma_set_mask_and_coherent():

        int dma_set_mask_and_coherent(struct device *dev, u64 mask);

which will query the mask for both streaming and coherent APIs together.
If you have some special requirements, then the following two separate
queries can be used instead:

        The query for streaming mappings is performed via a call to
        dma_set_mask():

                int dma_set_mask(struct device *dev, u64 mask);

        The query for consistent allocations is performed via a call
        to dma_set_coherent_mask():

                int dma_set_coherent_mask(struct device *dev, u64 mask);

Here, dev is a pointer to the device struct of your device, and mask
is a bit mask describing which bits of an address your device
supports.  It returns zero if your card can perform DMA properly on
the machine given the address mask you provided.  In general, the
device struct of your device is embedded in the bus-specific device
struct of your device.  For example, &pdev->dev is a pointer to the
device struct of a PCI device (pdev is a pointer to the PCI device
struct of your device).
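
For example, a PCI driver's probe routine might obtain the struct
device pointer and set the mask like this (mydev_probe() and the
choice of a 32-bit mask are illustrative only):

        static int mydev_probe(struct pci_dev *pdev,
                               const struct pci_device_id *id)
        {
                struct device *dev = &pdev->dev;

                if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32)))
                        goto ignore_this_device;
                ...
        }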

If it returns non-zero, your device cannot perform DMA properly on
this platform, and attempting to do so will result in undefined
behavior.  You must either use a different mask, or not use DMA.

This means that in the failure case, you have three options:

1) Use another DMA mask, if possible (see below).
2) Use some non-DMA mode for data transfer, if possible.
3) Ignore this device and do not initialize it.

It is recommended that your driver print a kernel KERN_WARNING message
when you end up performing either #2 or #3.  In this manner, if a user
of your driver reports that performance is bad or that the device is not
even detected, you can ask them for the kernel messages to find out
exactly why.

The standard 32-bit addressing device would do something like this:

        if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32))) {
                dev_warn(dev, "mydev: No suitable DMA available\n");
                goto ignore_this_device;
        }

Another common scenario is a 64-bit capable device.  The approach here
is to try for 64-bit addressing, but back down to a 32-bit mask that
should not fail.  The kernel may fail the 64-bit mask not because the
platform is incapable of 64-bit addressing, but simply because 32-bit
addressing is done more efficiently than 64-bit addressing on that
platform.  For example, Sparc64 PCI SAC addressing is more efficient
than DAC addressing.

Here is how you would handle a 64-bit capable device which can drive
all 64-bits when accessing streaming DMA:

        int using_dac;

        if (!dma_set_mask(dev, DMA_BIT_MASK(64))) {
                using_dac = 1;
        } else if (!dma_set_mask(dev, DMA_BIT_MASK(32))) {
                using_dac = 0;
        } else {
                dev_warn(dev, "mydev: No suitable DMA available\n");
                goto ignore_this_device;
        }

If a card is capable of using 64-bit consistent allocations as well,
the case would look like this:

        int using_dac, consistent_using_dac;

        if (!dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64))) {
                using_dac = 1;
                consistent_using_dac = 1;
        } else if (!dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32))) {
                using_dac = 0;
                consistent_using_dac = 0;
        } else {
                dev_warn(dev, "mydev: No suitable DMA available\n");
                goto ignore_this_device;
        }

The coherent mask can always be set to the same or a smaller mask than
the streaming mask.  However, for the rare case that a device driver
only uses consistent allocations, one would have to check the return
value from dma_set_coherent_mask().
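
A minimal sketch of that rare case might look like this:

        /* This driver only uses dma_alloc_coherent(), so only the
         * coherent mask matters; check the return value directly.
         */
        if (dma_set_coherent_mask(dev, DMA_BIT_MASK(64)) &&
            dma_set_coherent_mask(dev, DMA_BIT_MASK(32))) {
                dev_warn(dev, "mydev: No suitable DMA available\n");
                goto ignore_this_device;
        }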

Finally, if your device can only drive the low 24-bits of
address you might do something like:

        if (dma_set_mask(dev, DMA_BIT_MASK(24))) {
                dev_warn(dev, "mydev: 24-bit DMA addressing not available\n");
                goto ignore_this_device;
        }

When dma_set_mask() or dma_set_mask_and_coherent() succeeds and returns
zero, the kernel saves away the mask you have provided and will use this
information later when you make DMA mappings.

There is one case we are aware of at this time which is worth
mentioning in this documentation.  If your device supports multiple
functions (for example a sound card provides playback and record
functions) and the various different functions have _different_
DMA addressing limitations, you may wish to probe each mask and
only provide the functionality which the machine can handle.  It
is important that the last call to dma_set_mask() be for the
most specific mask.

Here is pseudo-code showing how this might be done:

        #define PLAYBACK_ADDRESS_BITS   DMA_BIT_MASK(32)
        #define RECORD_ADDRESS_BITS     DMA_BIT_MASK(24)

        struct my_sound_card *card;
        struct device *dev;

        ...
        if (!dma_set_mask(dev, PLAYBACK_ADDRESS_BITS)) {
                card->playback_enabled = 1;
        } else {
                card->playback_enabled = 0;
                dev_warn(dev, "%s: Playback disabled due to DMA limitations\n",
                         card->name);
        }
        if (!dma_set_mask(dev, RECORD_ADDRESS_BITS)) {
                card->record_enabled = 1;
        } else {
                card->record_enabled = 0;
                dev_warn(dev, "%s: Record disabled due to DMA limitations\n",
                         card->name);
        }

A sound card was used as an example here because this genre of PCI
devices seems to be littered with ISA chips given a PCI front end,
and thus retaining the 16MB DMA addressing limitations of ISA.

                         Types of DMA mappings

There are two types of DMA mappings:

- Consistent DMA mappings which are usually mapped at driver
  initialization, unmapped at the end and for which the hardware should
  guarantee that the device and the CPU can access the data
  in parallel and will see updates made by each other without any
  explicit software flushing.

  Think of "consistent" as "synchronous" or "coherent".

  The current default is to return consistent memory in the low 32
  bits of the bus space.  However, for future compatibility you should
  set the consistent mask even if this default is fine for your
  driver.

  Good examples of what to use consistent mappings for are:

        - Network card DMA ring descriptors.
        - SCSI adapter mailbox command data structures.
        - Device firmware microcode executed out of
          main memory.

  The invariant these examples all require is that any CPU store
  to memory is immediately visible to the device, and vice
  versa.  Consistent mappings guarantee this.

  IMPORTANT: Consistent DMA memory does not preclude the usage of
             proper memory barriers.  The CPU may reorder stores to
             consistent memory just as it may reorder stores to
             normal memory.  Example: if it is important for the
             device to see the first word of a descriptor updated
             before the second, you must do something like:

                desc->word0 = address;
                wmb();
                desc->word1 = DESC_VALID;

             in order to get correct behavior on all platforms.

             Also, on some platforms your driver may need to flush CPU
             write buffers in much the same way as it needs to flush
             write buffers found in PCI bridges (such as by reading a
             register's value after writing it).

- Streaming DMA mappings which are usually mapped for one DMA
  transfer, unmapped right after it (unless you use dma_sync_* below)
  and for which hardware can optimize for sequential accesses.

  Think of "streaming" as "asynchronous" or "outside the coherency
  domain".

  Good examples of what to use streaming mappings for are:

        - Networking buffers transmitted/received by a device.
        - Filesystem buffers written/read by a SCSI device.

  The interfaces for using this type of mapping were designed in
  such a way that an implementation can make whatever performance
  optimizations the hardware allows.  To this end, when using
  such mappings you must be explicit about what you want to happen.

Neither type of DMA mapping has alignment restrictions that come from
the underlying bus, although some devices may have such restrictions.
Also, systems with caches that aren't DMA-coherent will work better
when the underlying buffers don't share cache lines with other data.


                  Using Consistent DMA mappings.

To allocate and map large (PAGE_SIZE or so) consistent DMA regions,
you should do:

        dma_addr_t dma_handle;

        cpu_addr = dma_alloc_coherent(dev, size, &dma_handle, gfp);

where dev is a struct device *.  This may be called in interrupt
context with the GFP_ATOMIC flag.

Size is the length of the region you want to allocate, in bytes.

This routine will allocate RAM for that region, so it acts similarly to
__get_free_pages() (but takes size instead of a page order).  If your
driver needs regions sized smaller than a page, you may prefer using
the dma_pool interface, described below.

The consistent DMA mapping interfaces, for non-NULL dev, will by
default return a DMA address which is 32-bit addressable.  Even if the
device indicates (via the DMA mask) that it may address the upper
32-bits, consistent allocation will only return > 32-bit addresses for
DMA if the consistent DMA mask has been explicitly changed via
dma_set_coherent_mask().  This is true of the dma_pool interface as
well.

dma_alloc_coherent() returns two values: the virtual address which you
can use to access it from the CPU and dma_handle which you pass to the
card.

The CPU virtual address and the DMA bus address are both
guaranteed to be aligned to the smallest PAGE_SIZE order which
is greater than or equal to the requested size.  This invariant
exists (for example) to guarantee that if you allocate a chunk
which is smaller than or equal to 64 kilobytes, the extent of the
buffer you receive will not cross a 64K boundary.

To unmap and free such a DMA region, you call:

        dma_free_coherent(dev, size, cpu_addr, dma_handle);

where dev and size are the same as in the above call, and cpu_addr and
dma_handle are the values dma_alloc_coherent() returned to you.
This function may not be called in interrupt context.

If your driver needs lots of smaller memory regions, you can write
custom code to subdivide pages returned by dma_alloc_coherent(),
or you can use the dma_pool API to do that.  A dma_pool is like
a kmem_cache, but it uses dma_alloc_coherent(), not __get_free_pages().
Also, it understands common hardware constraints for alignment,
like queue heads needing to be aligned on N byte boundaries.

Create a dma_pool like this:

        struct dma_pool *pool;

        pool = dma_pool_create(name, dev, size, align, boundary);

The "name" is for diagnostics (like a kmem_cache name); dev and size
are as above.  The device's hardware alignment requirement for this
type of data is "align" (which is expressed in bytes, and must be a
power of two).  If your device has no boundary crossing restrictions,
pass 0 for boundary; passing 4096 says memory allocated from this pool
must not cross 4KByte boundaries (but at that time it may be better to
use dma_alloc_coherent() directly instead).

Allocate memory from a DMA pool like this:

        cpu_addr = dma_pool_alloc(pool, flags, &dma_handle);

flags are GFP_KERNEL if blocking is permitted (not in_interrupt nor
holding SMP locks), GFP_ATOMIC otherwise.  Like dma_alloc_coherent(),
this returns two values, cpu_addr and dma_handle.

Free memory that was allocated from a dma_pool like this:

        dma_pool_free(pool, cpu_addr, dma_handle);

where pool is what you passed to dma_pool_alloc(), and cpu_addr and
dma_handle are the values dma_pool_alloc() returned.  This function
may be called in interrupt context.

Destroy a dma_pool by calling:

        dma_pool_destroy(pool);

Make sure you've called dma_pool_free() for all memory allocated
from a pool before you destroy the pool.  This function may not
be called in interrupt context.

                          DMA Direction

The interfaces described in subsequent portions of this document
take a DMA direction argument, which is an integer and takes on
one of the following values:

        DMA_BIDIRECTIONAL
        DMA_TO_DEVICE
        DMA_FROM_DEVICE
        DMA_NONE

You should provide the exact DMA direction if you know it.

DMA_TO_DEVICE means "from main memory to the device".
DMA_FROM_DEVICE means "from the device to main memory".
It is the direction in which the data moves during the DMA
transfer.

You are _strongly_ encouraged to specify this as precisely
as you possibly can.

If you absolutely cannot know the direction of the DMA transfer,
specify DMA_BIDIRECTIONAL.  It means that the DMA can go in
either direction.  The platform guarantees that you may legally
specify this, and that it will work, but this may be at the
cost of performance, for example.

The value DMA_NONE is to be used for debugging.  One can
hold this in a data structure before coming to know the
precise direction, and this will help catch cases where your
direction tracking logic has failed to set things up properly.

Another advantage of specifying this value precisely (outside of
potential platform-specific optimizations) is for debugging.
Some platforms actually have a write permission boolean which DMA
mappings can be marked with, much like page protections in the user
program address space.  Such platforms can and do report errors in the
kernel logs when the DMA controller hardware detects violation of the
permission setting.

Only streaming mappings specify a direction; consistent mappings
implicitly have a direction attribute setting of
DMA_BIDIRECTIONAL.

The SCSI subsystem tells you the direction to use in the
'sc_data_direction' member of the SCSI command your driver is
working on.

For networking drivers, it's a rather simple affair.  For transmit
packets, map/unmap them with the DMA_TO_DEVICE direction
specifier.  For receive packets, just the opposite, map/unmap them
with the DMA_FROM_DEVICE direction specifier.
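
For example, a network driver might contain something like the
following sketch (skb handling is elided; error checking is covered in
the next section):

        /* transmit: data moves from main memory to the device */
        tx_dma = dma_map_single(dev, skb->data, skb->len, DMA_TO_DEVICE);

        /* receive: data moves from the device to main memory */
        rx_dma = dma_map_single(dev, rx_buf, rx_len, DMA_FROM_DEVICE);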

                   Using Streaming DMA mappings

The streaming DMA mapping routines can be called from interrupt
context.  There are two versions of each map/unmap, one which will
map/unmap a single memory region, and one which will map/unmap a
scatterlist.

To map a single region, you do:

        struct device *dev = &my_dev->dev;
        dma_addr_t dma_handle;
        void *addr = buffer->ptr;
        size_t size = buffer->len;

        dma_handle = dma_map_single(dev, addr, size, direction);
        if (dma_mapping_error(dev, dma_handle)) {
                /*
                 * reduce current DMA mapping usage,
                 * delay and try again later or
                 * reset driver.
                 */
                goto map_error_handling;
        }

and to unmap it:

        dma_unmap_single(dev, dma_handle, size, direction);

You should call dma_mapping_error() as dma_map_single() could fail and
return an error.  Not all DMA implementations support the
dma_mapping_error() interface; however, it is good practice to call it
anyway, as it invokes the generic mapping error check interface.  Doing
so ensures that the mapping code will work correctly on all DMA
implementations without any dependency on the specifics of the
underlying implementation.  Using the returned address without checking
for errors could result in failures ranging from panics to silent data
corruption.  A couple of examples of incorrect ways to check for errors
that make assumptions about the underlying DMA implementation follow,
and these are applicable to dma_map_page() as well.

Incorrect example 1:
        dma_addr_t dma_handle;

        dma_handle = dma_map_single(dev, addr, size, direction);
        if ((dma_handle & 0xffff != 0) || (dma_handle >= 0x1000000)) {
                goto map_error;
        }

Incorrect example 2:
        dma_addr_t dma_handle;

        dma_handle = dma_map_single(dev, addr, size, direction);
        if (dma_handle == DMA_ERROR_CODE) {
                goto map_error;
        }

You should call dma_unmap_single() when the DMA activity is finished, e.g.,
from the interrupt which told you that the DMA transfer is done.

Using CPU pointers like this for single mappings has a disadvantage:
you cannot reference HIGHMEM memory in this way.  Thus, there is a
map/unmap interface pair akin to dma_{map,unmap}_single().  These
interfaces deal with page/offset pairs instead of CPU pointers.
Specifically:

        struct device *dev = &my_dev->dev;
        dma_addr_t dma_handle;
        struct page *page = buffer->page;
        unsigned long offset = buffer->offset;
        size_t size = buffer->len;

        dma_handle = dma_map_page(dev, page, offset, size, direction);
        if (dma_mapping_error(dev, dma_handle)) {
                /*
                 * reduce current DMA mapping usage,
                 * delay and try again later or
                 * reset driver.
                 */
                goto map_error_handling;
        }

        ...

        dma_unmap_page(dev, dma_handle, size, direction);

Here, "offset" means byte offset within the given page.

You should call dma_mapping_error() as dma_map_page() could fail and return
an error, as outlined under the dma_map_single() discussion.

You should call dma_unmap_page() when the DMA activity is finished, e.g.,
from the interrupt which told you that the DMA transfer is done.

With scatterlists, you map a region gathered from several regions by:

        int i, count = dma_map_sg(dev, sglist, nents, direction);
        struct scatterlist *sg;

        for_each_sg(sglist, sg, count, i) {
                hw_address[i] = sg_dma_address(sg);
                hw_len[i] = sg_dma_len(sg);
        }

where nents is the number of entries in the sglist.

The implementation is free to merge several consecutive sglist entries
into one (e.g. if DMA mapping is done with PAGE_SIZE granularity, any
consecutive sglist entries can be merged into one provided the first one
ends and the second one starts on a page boundary - in fact this is a huge
advantage for cards which either cannot do scatter-gather or have a very
limited number of scatter-gather entries) and returns the actual number
of sg entries it mapped them to.  On failure 0 is returned.

Then you should loop count times (note: this can be fewer than nents times)
and use sg_dma_address() and sg_dma_len() macros where you previously
accessed sg->address and sg->length as shown above.

To unmap a scatterlist, just call:

        dma_unmap_sg(dev, sglist, nents, direction);

Again, make sure DMA activity has already finished.

PLEASE NOTE:  The 'nents' argument to the dma_unmap_sg call must be
              the _same_ one you passed into the dma_map_sg call,
              it should _NOT_ be the 'count' value _returned_ from the
              dma_map_sg call.
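
In other words, a correct map/unmap pairing looks like this sketch
(error checking elided):

        count = dma_map_sg(dev, sglist, nents, direction);
        ...
        dma_unmap_sg(dev, sglist, nents, direction); /* nents, _NOT_ count */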

Every dma_map_{single,sg}() call should have its dma_unmap_{single,sg}()
counterpart, because the bus address space is a shared resource and
you could render the machine unusable by consuming all bus addresses.

If you need to use the same streaming DMA region multiple times and touch
the data in between the DMA transfers, the buffer needs to be synced
properly in order for the CPU and device to see the most up-to-date and
correct copy of the DMA buffer.

So, firstly, just map it with dma_map_{single,sg}(), and after each DMA
transfer call either:

        dma_sync_single_for_cpu(dev, dma_handle, size, direction);

or:

        dma_sync_sg_for_cpu(dev, sglist, nents, direction);

as appropriate.

Then, if you wish to let the device get at the DMA area again,
finish accessing the data with the CPU, and then before actually
giving the buffer to the hardware call either:

        dma_sync_single_for_device(dev, dma_handle, size, direction);

or:

        dma_sync_sg_for_device(dev, sglist, nents, direction);

as appropriate.

After the last DMA transfer call one of the DMA unmap routines
dma_unmap_{single,sg}().  If you don't touch the data from the first
dma_map_*() call till dma_unmap_*(), then you don't have to call the
dma_sync_*() routines at all.

Here is pseudo code which shows a situation in which you would need
to use the dma_sync_*() interfaces:

        my_card_setup_receive_buffer(struct my_card *cp, char *buffer, int len)
        {
                dma_addr_t mapping;

                mapping = dma_map_single(cp->dev, buffer, len, DMA_FROM_DEVICE);
                if (dma_mapping_error(cp->dev, mapping)) {
                        /*
                         * reduce current DMA mapping usage,
                         * delay and try again later or
                         * reset driver.
                         */
                        goto map_error_handling;
                }

                cp->rx_buf = buffer;
                cp->rx_len = len;
                cp->rx_dma = mapping;

                give_rx_buf_to_card(cp);
        }

        ...

        my_card_interrupt_handler(int irq, void *devid, struct pt_regs *regs)
        {
                struct my_card *cp = devid;

                ...
                if (read_card_status(cp) == RX_BUF_TRANSFERRED) {
                        struct my_card_header *hp;

                        /* Examine the header to see if we wish
                         * to accept the data.  But synchronize
                         * the DMA transfer with the CPU first
                         * so that we see updated contents.
                         */
                        dma_sync_single_for_cpu(cp->dev, cp->rx_dma,
                                                cp->rx_len,
                                                DMA_FROM_DEVICE);

                        /* Now it is safe to examine the buffer. */
                        hp = (struct my_card_header *) cp->rx_buf;
                        if (header_is_ok(hp)) {
                                dma_unmap_single(cp->dev, cp->rx_dma,
                                                 cp->rx_len, DMA_FROM_DEVICE);
                                pass_to_upper_layers(cp->rx_buf);
                                make_and_setup_new_rx_buf(cp);
                        } else {
                                /* CPU should not write to
                                 * DMA_FROM_DEVICE-mapped area,
                                 * so dma_sync_single_for_device() is
                                 * not needed here.  It would be required
                                 * for DMA_BIDIRECTIONAL mapping if
                                 * the memory was modified.
                                 */
                                give_rx_buf_to_card(cp);
                        }
                }
        }

Drivers converted fully to this interface should not use virt_to_bus() any
longer, nor should they use bus_to_virt().  Some drivers have to be changed a
little bit, because there is no longer an equivalent to bus_to_virt() in the
dynamic DMA mapping scheme - you have to always store the DMA addresses
returned by the dma_alloc_coherent(), dma_pool_alloc(), and dma_map_single()
calls (dma_map_sg() stores them in the scatterlist itself if the platform
supports dynamic DMA mapping in hardware) in your driver structures and/or
in the card registers.

All drivers should be using these interfaces with no exceptions.  It
is planned to completely remove virt_to_bus() and bus_to_virt() as
they are entirely deprecated.  Some ports already do not provide these
as it is impossible to correctly support them.

                          Handling Errors

DMA address space is limited on some architectures and an allocation
failure can be determined by:

- checking if dma_alloc_coherent() returns NULL or dma_map_sg() returns 0

- checking the dma_addr_t returned from dma_map_single() and dma_map_page()
  by using dma_mapping_error():

        dma_addr_t dma_handle;

        dma_handle = dma_map_single(dev, addr, size, direction);
        if (dma_mapping_error(dev, dma_handle)) {
                /*
                 * reduce current DMA mapping usage,
                 * delay and try again later or
                 * reset driver.
                 */
                goto map_error_handling;
        }

- unmapping pages that are already mapped, when a mapping error occurs
  in the middle of a multiple-page mapping attempt.  These examples are
  applicable to dma_map_page() as well.

Example 1:
        dma_addr_t dma_handle1;
        dma_addr_t dma_handle2;

        dma_handle1 = dma_map_single(dev, addr, size, direction);
        if (dma_mapping_error(dev, dma_handle1)) {
                /*
                 * reduce current DMA mapping usage,
                 * delay and try again later or
                 * reset driver.
                 */
                goto map_error_handling1;
        }
        dma_handle2 = dma_map_single(dev, addr, size, direction);
        if (dma_mapping_error(dev, dma_handle2)) {
                /*
                 * reduce current DMA mapping usage,
                 * delay and try again later or
                 * reset driver.
                 */
                goto map_error_handling2;
        }

        ...

        map_error_handling2:
        dma_unmap_single(dev, dma_handle1, size, direction);
        map_error_handling1:

Example 2: (if buffers are allocated in a loop, unmap all mapped buffers when
            mapping error is detected in the middle)

        dma_addr_t dma_addr;
        dma_addr_t array[DMA_BUFFERS];
        int save_index = 0;

        for (i = 0; i < DMA_BUFFERS; i++) {

                ...

                dma_addr = dma_map_single(dev, addr, size, direction);
                if (dma_mapping_error(dev, dma_addr)) {
                        /*
                         * reduce current DMA mapping usage,
                         * delay and try again later or
                         * reset driver.
                         */
                        goto map_error_handling;
                }
                array[i] = dma_addr;
                save_index++;
        }

        ...

        map_error_handling:

        for (i = 0; i < save_index; i++) {

                ...

                dma_unmap_single(dev, array[i], size, direction);
        }

Networking drivers must call dev_kfree_skb() to free the socket buffer
and return NETDEV_TX_OK if the DMA mapping fails on the transmit hook
(ndo_start_xmit).  This means that the socket buffer is just dropped in
the failure case.

SCSI drivers must return SCSI_MLQUEUE_HOST_BUSY if the DMA mapping
fails in the queuecommand hook.  This means that the SCSI subsystem
passes the command to the driver again later.
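
As a sketch of the networking case (the function and variable names
other than dev_kfree_skb(), NETDEV_TX_OK, dma_map_single() and
dma_mapping_error() are illustrative):

        static netdev_tx_t mydev_start_xmit(struct sk_buff *skb,
                                            struct net_device *netdev)
        {
                struct device *dev = mydev_dma_dev(netdev); /* illustrative */
                dma_addr_t mapping;

                mapping = dma_map_single(dev, skb->data, skb->len,
                                         DMA_TO_DEVICE);
                if (dma_mapping_error(dev, mapping)) {
                        /* drop the packet but report NETDEV_TX_OK */
                        dev_kfree_skb(skb);
                        return NETDEV_TX_OK;
                }
                ...
        }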

              Optimizing Unmap State Space Consumption

On many platforms, dma_unmap_{single,page}() is simply a nop.
Therefore, keeping track of the mapping address and length is a waste
of space.  Instead of filling your drivers up with ifdefs and the like
to "work around" this (which would defeat the whole purpose of a
portable API) the following facilities are provided.

Actually, instead of describing the macros one by one, we'll
transform some example code.

1) Use DEFINE_DMA_UNMAP_{ADDR,LEN} in state saving structures.
   Example, before:

        struct ring_state {
                struct sk_buff *skb;
                dma_addr_t mapping;
                __u32 len;
        };

   after:

        struct ring_state {
                struct sk_buff *skb;
                DEFINE_DMA_UNMAP_ADDR(mapping);
                DEFINE_DMA_UNMAP_LEN(len);
        };

2) Use dma_unmap_{addr,len}_set() to set these values.
   Example, before:

        ringp->mapping = FOO;
        ringp->len = BAR;

   after:

        dma_unmap_addr_set(ringp, mapping, FOO);
        dma_unmap_len_set(ringp, len, BAR);

3) Use dma_unmap_{addr,len}() to access these values.
   Example, before:

        dma_unmap_single(dev, ringp->mapping, ringp->len,
                         DMA_FROM_DEVICE);

   after:

        dma_unmap_single(dev,
                         dma_unmap_addr(ringp, mapping),
                         dma_unmap_len(ringp, len),
                         DMA_FROM_DEVICE);

It really should be self-explanatory.  We treat the ADDR and LEN
separately, because it is possible for an implementation to only
need the address in order to perform the unmap operation.

                          Platform Issues

If you are just writing drivers for Linux and do not maintain
an architecture port for the kernel, you can safely skip down
to "Closing".

1) Struct scatterlist requirements.

   Don't invent the architecture specific struct scatterlist; just use
   <asm-generic/scatterlist.h>.  You need to enable
   CONFIG_NEED_SG_DMA_LENGTH if the architecture supports IOMMUs
   (including software IOMMU).

2) ARCH_DMA_MINALIGN

   Architectures must ensure that kmalloc'ed buffers are
   DMA-safe.  Drivers and subsystems depend on it.  If an architecture
   isn't fully DMA-coherent (i.e. hardware doesn't ensure that data in
   the CPU cache is identical to data in main memory),
   ARCH_DMA_MINALIGN must be set so that the memory allocator
   makes sure that kmalloc'ed buffers don't share a cache line with
   others.  See arch/arm/include/asm/cache.h as an example, shown below.

   Note that ARCH_DMA_MINALIGN is about DMA memory alignment
   constraints.  You don't need to worry about the architecture data
   alignment constraints (e.g. the alignment constraints about 64-bit
   objects).
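
   For reference, on ARM (which is not fully DMA-coherent) the
   definition in arch/arm/include/asm/cache.h is essentially:

        #define ARCH_DMA_MINALIGN       L1_CACHE_BYTES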

3) Supporting multiple types of IOMMUs

   If your architecture needs to support multiple types of IOMMUs, you
   can use include/asm-generic/dma-mapping-common.h.  It's a library to
   support the DMA API with multiple types of IOMMUs.  Lots of
   architectures (x86, powerpc, sh, alpha, ia64, microblaze and sparc)
   use it.  Choose one to see how it can be used.  If you need to
   support multiple types of IOMMUs in a single system, the x86 or
   powerpc examples help.

                             Closing

This document, and the API itself, would not be in its current
form without the feedback and suggestions from numerous individuals.
We would like to specifically mention, in no particular order, the
following people:

        Russell King <rmk@arm.linux.org.uk>
        Leo Dagum <dagum@barrel.engr.sgi.com>
        Ralf Baechle <ralf@oss.sgi.com>
        Grant Grundler <grundler@cup.hp.com>
        Jay Estabrook <Jay.Estabrook@compaq.com>
        Thomas Sailer <sailer@ife.ee.ethz.ch>
        Andrea Arcangeli <andrea@suse.de>
        Jens Axboe <jens.axboe@oracle.com>
        David Mosberger-Tang <davidm@hpl.hp.com>