		       Dynamic DMA mapping Guide
		       =========================

		 David S. Miller <davem@redhat.com>
		 Richard Henderson <rth@cygnus.com>
		  Jakub Jelinek <jakub@redhat.com>

This is a guide to device driver writers on how to use the DMA API
with example pseudo-code.  For a concise description of the API, see
DMA-API.txt.

                       CPU and DMA addresses

There are several kinds of addresses involved in the DMA API, and it's
important to understand the differences.

The kernel normally uses virtual addresses.  Any address returned by
kmalloc(), vmalloc(), and similar interfaces is a virtual address and can
be stored in a "void *".

The virtual memory system (TLB, page tables, etc.) translates virtual
addresses to CPU physical addresses, which are stored as "phys_addr_t" or
"resource_size_t".  The kernel manages device resources like registers as
physical addresses.  These are the addresses in /proc/iomem.  The physical
address is not directly useful to a driver; it must use ioremap() to map
the space and produce a virtual address.

I/O devices use a third kind of address: a "bus address".  If a device has
registers at an MMIO address, or if it performs DMA to read or write system
memory, the addresses used by the device are bus addresses.  In some
systems, bus addresses are identical to CPU physical addresses, but in
general they are not.  IOMMUs and host bridges can produce arbitrary
mappings between physical and bus addresses.

From a device's point of view, DMA uses the bus address space, but it may
be restricted to a subset of that space.  For example, even if a system
supports 64-bit addresses for main memory and PCI BARs, it may use an IOMMU
so devices only need to use 32-bit DMA addresses.

Here's a picture and some examples:

               CPU                  CPU                  Bus
             Virtual              Physical             Address
             Address              Address               Space
              Space                Space

            +-------+             +------+             +------+
            |       |             |MMIO  |   Offset    |      |
            |       |  Virtual    |Space |   applied   |      |
          C +-------+ --------> B +------+ ----------> +------+ A
            |       |  mapping    |      |   by host   |      |
  +-----+   |       |             |      |   bridge    |      |   +--------+
  |     |   |       |             +------+             |      |   |        |
  | CPU |   |       |             | RAM  |             |      |   | Device |
  |     |   |       |             |      |             |      |   |        |
  +-----+   +-------+             +------+             +------+   +--------+
            |       |  Virtual    |Buffer|   Mapping   |      |
          X +-------+ --------> Y +------+ <---------- +------+ Z
            |       |  mapping    | RAM  |   by IOMMU
            |       |             |      |
            |       |             |      |
            +-------+             +------+

During the enumeration process, the kernel learns about I/O devices and
their MMIO space and the host bridges that connect them to the system.  For
example, if a PCI device has a BAR, the kernel reads the bus address (A)
from the BAR and converts it to a CPU physical address (B).  The address B
is stored in a struct resource and usually exposed via /proc/iomem.  When a
driver claims a device, it typically uses ioremap() to map physical address
B at a virtual address (C).  It can then use, e.g., ioread32(C), to access
the device registers at bus address A.

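For example, a PCI driver's probe routine might do something like the
following sketch, where BAR 0 and the MYDEV_STATUS register offset are
purely illustrative:

	void __iomem *regs;
	u32 status;

	/* map physical address B (from BAR 0) at virtual address C */
	regs = ioremap(pci_resource_start(pdev, 0),
		       pci_resource_len(pdev, 0));
	if (!regs)
		goto ignore_this_device;

	/* access the device registers at bus address A through C */
	status = ioread32(regs + MYDEV_STATUS);
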
If the device supports DMA, the driver sets up a buffer using kmalloc() or
a similar interface, which returns a virtual address (X).  The virtual
memory system maps X to a physical address (Y) in system RAM.  The driver
can use virtual address X to access the buffer, but the device itself
cannot because DMA doesn't go through the CPU virtual memory system.

In some simple systems, the device can do DMA directly to physical address
Y.  But in many others, there is IOMMU hardware that translates DMA
addresses to physical addresses, e.g., it translates Z to Y.  This is part
of the reason for the DMA API: the driver can give a virtual address X to
an interface like dma_map_single(), which sets up any required IOMMU
mapping and returns the DMA address Z.  The driver then tells the device to
do DMA to Z, and the IOMMU maps it to the buffer at address Y in system
RAM.

For Linux to use dynamic DMA mapping, it needs some help from the drivers:
they must take into account that DMA addresses should be mapped only for
the time they are actually used and unmapped after the DMA transfer.

Of course, the following API will work even on platforms where no such
hardware exists.

Note that the DMA API works with any bus independent of the underlying
microprocessor architecture.  You should use the DMA API rather than the
bus-specific DMA API, i.e., use the dma_map_*() interfaces rather than the
pci_map_*() interfaces.

First of all, you should make sure

#include <linux/dma-mapping.h>

is in your driver, which provides the definition of dma_addr_t.  This type
can hold any valid DMA address for the platform and should be used
everywhere you hold a DMA address returned from the DMA mapping functions.

                       What memory is DMA'able?

The first piece of information you must know is what kernel memory can
be used with the DMA mapping facilities.  There has been an unwritten
set of rules regarding this, and this text is an attempt to finally
write them down.

If you acquired your memory via the page allocator
(i.e. __get_free_page*()) or the generic memory allocators
(i.e. kmalloc() or kmem_cache_alloc()) then you may DMA to/from
that memory using the addresses returned from those routines.

This means specifically that you may _not_ use the memory/addresses
returned from vmalloc() for DMA.  It is possible to DMA to the
_underlying_ memory mapped into a vmalloc() area, but this requires
walking page tables to get the physical addresses, and then
translating each of those pages back to a kernel address using
something like __va().  [ EDIT: Update this when we integrate
Gerd Knorr's generic code which does this. ]

This rule also means that you may use neither kernel image addresses
(items in data/text/bss segments), nor module image addresses, nor
stack addresses for DMA.  These could all be mapped somewhere entirely
different than the rest of physical memory.  Even if those classes of
memory could physically work with DMA, you'd need to ensure the I/O
buffers were cacheline-aligned.  Without that, you'd see cacheline
sharing problems (data corruption) on CPUs with DMA-incoherent caches.
(The CPU could write to one word, DMA would write to a different one
in the same cache line, and one of them could be overwritten.)

Also, this means that you cannot take the return of a kmap()
call and DMA to/from that.  This is similar to vmalloc().

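To summarize these rules in code, here is a sketch (the names and sizes
are illustrative):

	char *fine = kmalloc(size, GFP_KERNEL);	/* DMA'able */
	char *wrong1 = vmalloc(size);		/* not DMA'able */
	char wrong2[64];			/* stack: not DMA'able */
	static char wrong3[64];			/* bss: not DMA'able */
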
What about block I/O and networking buffers?  The block I/O and
networking subsystems make sure that the buffers they use are valid
for you to DMA from/to.

                      DMA addressing limitations

Does your device have any DMA addressing limitations?  For example, is
your device only capable of driving the low order 24 bits of address?
If so, you need to inform the kernel of this fact.

By default, the kernel assumes that your device can address the full
32 bits.  For a 64-bit capable device, this needs to be increased.
And for a device with limitations, as discussed in the previous
paragraph, it needs to be decreased.

Special note about PCI: the PCI-X specification requires PCI-X devices
to support 64-bit addressing (DAC) for all transactions.  And at least
one platform (SGI SN2) requires 64-bit consistent allocations to
operate correctly when the IO bus is in PCI-X mode.

For correct operation, you must interrogate the kernel in your device
probe routine to see if the DMA controller on the machine can properly
support the DMA addressing limitation your device has.  It is good
style to do this even if your device holds the default setting,
because this shows that you did think about these issues with regard
to your device.

The query is performed via a call to dma_set_mask_and_coherent():

	int dma_set_mask_and_coherent(struct device *dev, u64 mask);

which will query the mask for both streaming and coherent APIs together.
If you have some special requirements, then the following two separate
queries can be used instead:

	The query for streaming mappings is performed via a call to
	dma_set_mask():

		int dma_set_mask(struct device *dev, u64 mask);

	The query for consistent allocations is performed via a call
	to dma_set_coherent_mask():

		int dma_set_coherent_mask(struct device *dev, u64 mask);

Here, dev is a pointer to the device struct of your device, and mask
is a bit mask describing which bits of an address your device
supports.  It returns zero if your card can perform DMA properly on
the machine given the address mask you provided.  In general, the
device struct of your device is embedded in the bus-specific device
struct of your device.  For example, &pdev->dev is a pointer to the
device struct of a PCI device (pdev is a pointer to the PCI device
struct of your device).

If it returns non-zero, your device cannot perform DMA properly on
this platform, and attempting to do so will result in undefined
behavior.  You must either use a different mask or not use DMA.

This means that in the failure case, you have three options:

1) Use another DMA mask, if possible (see below).
2) Use some non-DMA mode for data transfer, if possible.
3) Ignore this device and do not initialize it.

It is recommended that your driver print a KERN_WARNING message
when you end up performing either #2 or #3.  In this manner, if a user
of your driver reports that performance is bad or that the device is not
even detected, you can ask them for the kernel messages to find out
exactly why.

The standard 32-bit addressing device would do something like this:

	if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32))) {
		dev_warn(dev, "mydev: No suitable DMA available\n");
		goto ignore_this_device;
	}

Another common scenario is a 64-bit capable device.  The approach here
is to try for 64-bit addressing, but back down to a 32-bit mask that
should not fail.  The kernel may fail the 64-bit mask not because the
platform is not capable of 64-bit addressing.  Rather, it may fail in
this case simply because 32-bit addressing is done more efficiently
than 64-bit addressing.  For example, Sparc64 PCI SAC addressing is
more efficient than DAC addressing.

Here is how you would handle a 64-bit capable device which can drive
all 64 bits when accessing streaming DMA:

	int using_dac;

	if (!dma_set_mask(dev, DMA_BIT_MASK(64))) {
		using_dac = 1;
	} else if (!dma_set_mask(dev, DMA_BIT_MASK(32))) {
		using_dac = 0;
	} else {
		dev_warn(dev, "mydev: No suitable DMA available\n");
		goto ignore_this_device;
	}

If a card is capable of using 64-bit consistent allocations as well,
the case would look like this:

	int using_dac, consistent_using_dac;

	if (!dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64))) {
		using_dac = 1;
		consistent_using_dac = 1;
	} else if (!dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32))) {
		using_dac = 0;
		consistent_using_dac = 0;
	} else {
		dev_warn(dev, "mydev: No suitable DMA available\n");
		goto ignore_this_device;
	}

Setting the coherent mask will always succeed with a mask the same as,
or smaller than, the streaming mask.  However, for the rare case that a
device driver only uses consistent allocations, one would have to check
the return value from dma_set_coherent_mask().

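For example, a hypothetical driver that performs no streaming DMA at
all might check it like this (a sketch):

	if (dma_set_coherent_mask(dev, DMA_BIT_MASK(32))) {
		dev_warn(dev, "mydev: No suitable coherent DMA available\n");
		goto ignore_this_device;
	}
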
Finally, if your device can only drive the low 24 bits of
address you might do something like:

	if (dma_set_mask(dev, DMA_BIT_MASK(24))) {
		dev_warn(dev, "mydev: 24-bit DMA addressing not available\n");
		goto ignore_this_device;
	}

When dma_set_mask() or dma_set_mask_and_coherent() succeeds and
returns zero, the kernel saves away this mask you have provided.  The
kernel will use this information later when you make DMA mappings.

There is a case which we are aware of at this time, which is worth
mentioning in this documentation.  If your device supports multiple
functions (for example a sound card provides playback and record
functions) and the various different functions have _different_
DMA addressing limitations, you may wish to probe each mask and
only provide the functionality which the machine can handle.  It
is important that the last call to dma_set_mask() be for the
most specific mask.

Here is pseudo-code showing how this might be done:

	#define PLAYBACK_ADDRESS_BITS	DMA_BIT_MASK(32)
	#define RECORD_ADDRESS_BITS	DMA_BIT_MASK(24)

	struct my_sound_card *card;
	struct device *dev;

	...
	if (!dma_set_mask(dev, PLAYBACK_ADDRESS_BITS)) {
		card->playback_enabled = 1;
	} else {
		card->playback_enabled = 0;
		dev_warn(dev, "%s: Playback disabled due to DMA limitations\n",
			 card->name);
	}
	if (!dma_set_mask(dev, RECORD_ADDRESS_BITS)) {
		card->record_enabled = 1;
	} else {
		card->record_enabled = 0;
		dev_warn(dev, "%s: Record disabled due to DMA limitations\n",
			 card->name);
	}

A sound card was used as an example here because this genre of PCI
devices seems to be littered with ISA chips given a PCI front end,
and thus retaining the 16MB DMA addressing limitations of ISA.

                        Types of DMA mappings

There are two types of DMA mappings:

- Consistent DMA mappings which are usually mapped at driver
  initialization, unmapped at the end and for which the hardware should
  guarantee that the device and the CPU can access the data
  in parallel and will see updates made by each other without any
  explicit software flushing.

  Think of "consistent" as "synchronous" or "coherent".

  The current default is to return consistent memory in the low 32
  bits of the DMA space.  However, for future compatibility you should
  set the consistent mask even if this default is fine for your
  driver.

  Good examples of what to use consistent mappings for are:

	- Network card DMA ring descriptors.
	- SCSI adapter mailbox command data structures.
	- Device firmware microcode executed out of
	  main memory.

  The invariant these examples all require is that any CPU store
  to memory is immediately visible to the device, and vice
  versa.  Consistent mappings guarantee this.

  IMPORTANT: Consistent DMA memory does not preclude the usage of
	     proper memory barriers.  The CPU may reorder stores to
	     consistent memory just as it may reorder stores to
	     normal memory.  Example: if it is important for the
	     device to see the first word of a descriptor updated
	     before the second, you must do something like:

		desc->word0 = address;
		wmb();
		desc->word1 = DESC_VALID;

	     in order to get correct behavior on all platforms.

  Also, on some platforms your driver may need to flush CPU write
  buffers in much the same way as it needs to flush write buffers
  found in PCI bridges (such as by reading a register's value
  after writing it).
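
  For example, such a flush might look like this sketch, where the
  register name is illustrative:

	writel(ring_tail, card_regs + MYDEV_TAIL_PTR);
	readl(card_regs + MYDEV_TAIL_PTR);	/* flush the posted write */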

- Streaming DMA mappings which are usually mapped for one DMA
  transfer, unmapped right after it (unless you use dma_sync_* below)
  and for which hardware can optimize for sequential accesses.

  Think of "streaming" as "asynchronous" or "outside the coherency
  domain".

  Good examples of what to use streaming mappings for are:

	- Networking buffers transmitted/received by a device.
	- Filesystem buffers written/read by a SCSI device.

  The interfaces for using this type of mapping were designed in
  such a way that an implementation can make whatever performance
  optimizations the hardware allows.  To this end, when using
  such mappings you must be explicit about what you want to happen.

Neither type of DMA mapping has alignment restrictions that come from
the underlying bus, although some devices may have such restrictions.
Also, systems with caches that aren't DMA-coherent will work better
when the underlying buffers don't share cache lines with other data.


                  Using Consistent DMA mappings

To allocate and map large (PAGE_SIZE or so) consistent DMA regions,
you should do:

	dma_addr_t dma_handle;

	cpu_addr = dma_alloc_coherent(dev, size, &dma_handle, gfp);

where dev is a struct device *.  This may be called in interrupt
context with the GFP_ATOMIC flag.

Size is the length of the region you want to allocate, in bytes.

This routine will allocate RAM for that region, so it acts similarly to
__get_free_pages() (but takes size instead of a page order).  If your
driver needs regions sized smaller than a page, you may prefer using
the dma_pool interface, described below.

The consistent DMA mapping interfaces, for non-NULL dev, will by
default return a DMA address which is 32-bit addressable.  Even if the
device indicates (via the DMA mask) that it may address the upper 32
bits, consistent allocation will only return > 32-bit addresses for
DMA if the consistent DMA mask has been explicitly changed via
dma_set_coherent_mask().  This is true of the dma_pool interface as
well.

dma_alloc_coherent() returns two values: the virtual address which you
can use to access it from the CPU and dma_handle which you pass to the
card.

The CPU virtual address and the DMA address are both
guaranteed to be aligned to the smallest PAGE_SIZE order which
is greater than or equal to the requested size.  This invariant
exists (for example) to guarantee that if you allocate a chunk
which is smaller than or equal to 64 kilobytes, the extent of the
buffer you receive will not cross a 64K boundary.

To unmap and free such a DMA region, you call:

	dma_free_coherent(dev, size, cpu_addr, dma_handle);

where dev and size are the same as in the above call and cpu_addr and
dma_handle are the values dma_alloc_coherent() returned to you.
This function may not be called in interrupt context.

If your driver needs lots of smaller memory regions, you can write
custom code to subdivide pages returned by dma_alloc_coherent(),
or you can use the dma_pool API to do that.  A dma_pool is like
a kmem_cache, but it uses dma_alloc_coherent(), not __get_free_pages().
Also, it understands common hardware constraints for alignment,
like queue heads needing to be aligned on N byte boundaries.

Create a dma_pool like this:

	struct dma_pool *pool;

	pool = dma_pool_create(name, dev, size, align, boundary);

The "name" is for diagnostics (like a kmem_cache name); dev and size
are as above.  The device's hardware alignment requirement for this
type of data is "align" (which is expressed in bytes, and must be a
power of two).  If your device has no boundary crossing restrictions,
pass 0 for boundary; passing 4096 says memory allocated from this pool
must not cross 4KByte boundaries (but in that case it may be better to
use dma_alloc_coherent() directly instead).

Allocate memory from a DMA pool like this:

	cpu_addr = dma_pool_alloc(pool, flags, &dma_handle);

flags are GFP_KERNEL if blocking is permitted (not in_interrupt nor
holding SMP locks), GFP_ATOMIC otherwise.  Like dma_alloc_coherent(),
this returns two values, cpu_addr and dma_handle.

Free memory that was allocated from a dma_pool like this:

	dma_pool_free(pool, cpu_addr, dma_handle);

where pool is what you passed to dma_pool_alloc(), and cpu_addr and
dma_handle are the values dma_pool_alloc() returned.  This function
may be called in interrupt context.

Destroy a dma_pool by calling:

	dma_pool_destroy(pool);

Make sure you've called dma_pool_free() for all memory allocated
from a pool before you destroy the pool.  This function may not
be called in interrupt context.

                          DMA Direction

The interfaces described in subsequent portions of this document
take a DMA direction argument, which is an integer and takes on
one of the following values:

	DMA_BIDIRECTIONAL
	DMA_TO_DEVICE
	DMA_FROM_DEVICE
	DMA_NONE

You should provide the exact DMA direction if you know it.

DMA_TO_DEVICE means "from main memory to the device"
DMA_FROM_DEVICE means "from the device to main memory"
It is the direction in which the data moves during the DMA
transfer.

You are _strongly_ encouraged to specify this as precisely
as you possibly can.

If you absolutely cannot know the direction of the DMA transfer,
specify DMA_BIDIRECTIONAL.  It means that the DMA can go in
either direction.  The platform guarantees that you may legally
specify this, and that it will work, but this may be at the
cost of performance, for example.

The value DMA_NONE is to be used for debugging.  One can
hold this in a data structure before you come to know the
precise direction, and this will help catch cases where your
direction tracking logic has failed to set things up properly.

Another advantage of specifying this value precisely (outside of
potential platform-specific optimizations of such) is for debugging.
Some platforms actually have a write permission boolean which DMA
mappings can be marked with, much like page protections in the user
program address space.  Such platforms can and do report errors in the
kernel logs when the DMA controller hardware detects violation of the
permission setting.

Only streaming mappings specify a direction; consistent mappings
implicitly have a direction attribute setting of
DMA_BIDIRECTIONAL.

The SCSI subsystem tells you the direction to use in the
'sc_data_direction' member of the SCSI command your driver is
working on.

For Networking drivers, it's a rather simple affair.  For transmit
packets, map/unmap them with the DMA_TO_DEVICE direction
specifier.  For receive packets, just the opposite, map/unmap them
with the DMA_FROM_DEVICE direction specifier.

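For example, a network driver might do something like this sketch,
using the single-mapping interface described in the next section
(the variable names are illustrative):

	/* transmit: the device reads the buffer, so "to device" */
	tx_dma = dma_map_single(dev, skb->data, skb->len, DMA_TO_DEVICE);

	/* receive: the device writes the buffer, so "from device" */
	rx_dma = dma_map_single(dev, rx_buf, rx_len, DMA_FROM_DEVICE);
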
                  Using Streaming DMA mappings

The streaming DMA mapping routines can be called from interrupt
context.  There are two versions of each map/unmap, one which will
map/unmap a single memory region, and one which will map/unmap a
scatterlist.

To map a single region, you do:

	struct device *dev = &my_dev->dev;
	dma_addr_t dma_handle;
	void *addr = buffer->ptr;
	size_t size = buffer->len;

	dma_handle = dma_map_single(dev, addr, size, direction);
	if (dma_mapping_error(dev, dma_handle)) {
		/*
		 * reduce current DMA mapping usage,
		 * delay and try again later or
		 * reset driver.
		 */
		goto map_error_handling;
	}

and to unmap it:

	dma_unmap_single(dev, dma_handle, size, direction);

You should call dma_mapping_error() as dma_map_single() can fail and
return an error.  Doing so will ensure that the mapping code will work
correctly on all DMA implementations without any dependency on the
specifics of the underlying implementation.  Using the returned address
without checking for errors could result in failures ranging from
panics to silent data corruption.  The same applies to dma_map_page()
as well.

You should call dma_unmap_single() when the DMA activity is finished, e.g.,
from the interrupt which told you that the DMA transfer is done.

Using CPU pointers like this for single mappings has a disadvantage:
you cannot reference HIGHMEM memory in this way.  Thus, there is a
map/unmap interface pair akin to dma_{map,unmap}_single().  These
interfaces deal with page/offset pairs instead of CPU pointers.
Specifically:

	struct device *dev = &my_dev->dev;
	dma_addr_t dma_handle;
	struct page *page = buffer->page;
	unsigned long offset = buffer->offset;
	size_t size = buffer->len;

	dma_handle = dma_map_page(dev, page, offset, size, direction);
	if (dma_mapping_error(dev, dma_handle)) {
		/*
		 * reduce current DMA mapping usage,
		 * delay and try again later or
		 * reset driver.
		 */
		goto map_error_handling;
	}

	...

	dma_unmap_page(dev, dma_handle, size, direction);

Here, "offset" means byte offset within the given page.

You should call dma_mapping_error() as dma_map_page() can fail and
return an error, as outlined under the dma_map_single() discussion.

You should call dma_unmap_page() when the DMA activity is finished, e.g.,
from the interrupt which told you that the DMA transfer is done.

With scatterlists, you map a region gathered from several regions by:

	int i, count = dma_map_sg(dev, sglist, nents, direction);
	struct scatterlist *sg;

	for_each_sg(sglist, sg, count, i) {
		hw_address[i] = sg_dma_address(sg);
		hw_len[i] = sg_dma_len(sg);
	}

where nents is the number of entries in the sglist.

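If you build the scatterlist yourself, it must be initialized before
mapping; a sketch (the entry count and buffer names are illustrative):

	struct scatterlist sglist[2];

	sg_init_table(sglist, 2);
	sg_set_buf(&sglist[0], first_buf, first_len);
	sg_set_buf(&sglist[1], second_buf, second_len);
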
The implementation is free to merge several consecutive sglist entries
into one (e.g. if DMA mapping is done with PAGE_SIZE granularity, any
consecutive sglist entries can be merged into one provided the first one
ends and the second one starts on a page boundary - in fact this is a huge
advantage for cards which either cannot do scatter-gather or have a very
limited number of scatter-gather entries) and returns the actual number
of sg entries it mapped them to.  On failure, 0 is returned.

Then you should loop count times (note: this can be less than nents times)
and use sg_dma_address() and sg_dma_len() macros where you previously
accessed sg->address and sg->length as shown above.

To unmap a scatterlist, just call:

	dma_unmap_sg(dev, sglist, nents, direction);

Again, make sure DMA activity has already finished.

PLEASE NOTE: The 'nents' argument to the dma_unmap_sg call must be
	     the _same_ one you passed into the dma_map_sg call,
	     it should _NOT_ be the 'count' value _returned_ from the
	     dma_map_sg call.

Every dma_map_{single,sg}() call should have its dma_unmap_{single,sg}()
counterpart, because the DMA address space is a shared resource and
you could render the machine unusable by consuming all DMA addresses.

If you need to use the same streaming DMA region multiple times and touch
the data in between the DMA transfers, the buffer needs to be synced
properly in order for the CPU and device to see the most up-to-date and
correct copy of the DMA buffer.

So, firstly, just map it with dma_map_{single,sg}(), and after each DMA
transfer call either:

	dma_sync_single_for_cpu(dev, dma_handle, size, direction);

or:

	dma_sync_sg_for_cpu(dev, sglist, nents, direction);

as appropriate.

Then, if you wish to let the device get at the DMA area again,
finish accessing the data with the CPU, and then before actually
giving the buffer to the hardware call either:

	dma_sync_single_for_device(dev, dma_handle, size, direction);

or:

	dma_sync_sg_for_device(dev, sglist, nents, direction);

as appropriate.

PLEASE NOTE: The 'nents' argument to dma_sync_sg_for_cpu() and
	     dma_sync_sg_for_device() must be the same as that passed
	     to dma_map_sg().  It is _NOT_ the count returned by
	     dma_map_sg().

1da177e4 668After the last DMA transfer call one of the DMA unmap routines
77f2ea2f
BH
669dma_unmap_{single,sg}(). If you don't touch the data from the first
670dma_map_*() call till dma_unmap_*(), then you don't have to call the
671dma_sync_*() routines at all.
1da177e4
LT
672
673Here is pseudo code which shows a situation in which you would need
216bf58f 674to use the dma_sync_*() interfaces.
1da177e4
LT
675
676 my_card_setup_receive_buffer(struct my_card *cp, char *buffer, int len)
677 {
678 dma_addr_t mapping;
679
216bf58f 680 mapping = dma_map_single(cp->dev, buffer, len, DMA_FROM_DEVICE);
be6c3095 681 if (dma_mapping_error(cp->dev, mapping)) {
8d7f62e6
SK
682 /*
683 * reduce current DMA mapping usage,
684 * delay and try again later or
685 * reset driver.
686 */
687 goto map_error_handling;
688 }
1da177e4
LT
689
690 cp->rx_buf = buffer;
691 cp->rx_len = len;
692 cp->rx_dma = mapping;
693
694 give_rx_buf_to_card(cp);
695 }
696
697 ...
698
699 my_card_interrupt_handler(int irq, void *devid, struct pt_regs *regs)
700 {
701 struct my_card *cp = devid;
702
703 ...
704 if (read_card_status(cp) == RX_BUF_TRANSFERRED) {
705 struct my_card_header *hp;
706
707 /* Examine the header to see if we wish
708 * to accept the data. But synchronize
709 * the DMA transfer with the CPU first
710 * so that we see updated contents.
711 */
216bf58f
FT
712 dma_sync_single_for_cpu(&cp->dev, cp->rx_dma,
713 cp->rx_len,
714 DMA_FROM_DEVICE);
1da177e4
LT
715
716 /* Now it is safe to examine the buffer. */
717 hp = (struct my_card_header *) cp->rx_buf;
718 if (header_is_ok(hp)) {
216bf58f
FT
719 dma_unmap_single(&cp->dev, cp->rx_dma, cp->rx_len,
720 DMA_FROM_DEVICE);
1da177e4
LT
721 pass_to_upper_layers(cp->rx_buf);
722 make_and_setup_new_rx_buf(cp);
723 } else {
3f0fb4e8
MM
724 /* CPU should not write to
725 * DMA_FROM_DEVICE-mapped area,
726 * so dma_sync_single_for_device() is
727 * not needed here. It would be required
728 * for DMA_BIDIRECTIONAL mapping if
729 * the memory was modified.
1da177e4 730 */
1da177e4
LT
731 give_rx_buf_to_card(cp);
732 }
733 }
734 }
735
Drivers converted fully to this interface should not use virt_to_bus() any
longer, nor should they use bus_to_virt().  Some drivers have to be changed a
little bit, because there is no longer an equivalent to bus_to_virt() in the
dynamic DMA mapping scheme - you have to always store the DMA addresses
returned by the dma_alloc_coherent(), dma_pool_alloc(), and dma_map_single()
calls (dma_map_sg() stores them in the scatterlist itself if the platform
supports dynamic DMA mapping in hardware) in your driver structures and/or
in the card registers.

All drivers should be using these interfaces with no exceptions.  It
is planned to completely remove virt_to_bus() and bus_to_virt() as
they are entirely deprecated.  Some ports already do not provide these
as it is impossible to correctly support them.

                          Handling Errors

DMA address space is limited on some architectures and an allocation
failure can be determined by:

- checking if dma_alloc_coherent() returns NULL or dma_map_sg() returns 0

- checking the dma_addr_t returned from dma_map_single() and dma_map_page()
  by using dma_mapping_error():

	dma_addr_t dma_handle;

	dma_handle = dma_map_single(dev, addr, size, direction);
	if (dma_mapping_error(dev, dma_handle)) {
		/*
		 * reduce current DMA mapping usage,
		 * delay and try again later or
		 * reset driver.
		 */
		goto map_error_handling;
	}

- unmapping pages that are already mapped, when a mapping error occurs
  in the middle of a multiple-page mapping attempt.  These examples are
  applicable to dma_map_page() as well.

Example 1:
	dma_addr_t dma_handle1;
	dma_addr_t dma_handle2;

	dma_handle1 = dma_map_single(dev, addr, size, direction);
	if (dma_mapping_error(dev, dma_handle1)) {
		/*
		 * reduce current DMA mapping usage,
		 * delay and try again later or
		 * reset driver.
		 */
		goto map_error_handling1;
	}
	dma_handle2 = dma_map_single(dev, addr, size, direction);
	if (dma_mapping_error(dev, dma_handle2)) {
		/*
		 * reduce current DMA mapping usage,
		 * delay and try again later or
		 * reset driver.
		 */
		goto map_error_handling2;
	}

	...

	map_error_handling2:
	dma_unmap_single(dev, dma_handle1, size, direction);
	map_error_handling1:

Example 2: (if buffers are allocated in a loop, unmap all mapped buffers when
	    a mapping error is detected in the middle)

	dma_addr_t dma_addr;
	dma_addr_t array[DMA_BUFFERS];
	int save_index = 0;

	for (i = 0; i < DMA_BUFFERS; i++) {

		...

		dma_addr = dma_map_single(dev, addr, size, direction);
		if (dma_mapping_error(dev, dma_addr)) {
			/*
			 * reduce current DMA mapping usage,
			 * delay and try again later or
			 * reset driver.
			 */
			goto map_error_handling;
		}
		array[i] = dma_addr;
		save_index++;
	}

	...

	map_error_handling:

	for (i = 0; i < save_index; i++) {

		...

		dma_unmap_single(dev, array[i], size, direction);
	}

Networking drivers must call dev_kfree_skb() to free the socket buffer
and return NETDEV_TX_OK if the DMA mapping fails on the transmit hook
(ndo_start_xmit).  This means that the socket buffer is just dropped in
the failure case.

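A sketch of that transmit-hook failure path (the function name is
illustrative):

	static netdev_tx_t mydev_start_xmit(struct sk_buff *skb,
					    struct net_device *netdev)
	{
		...
		mapping = dma_map_single(dev, skb->data, skb->len,
					 DMA_TO_DEVICE);
		if (dma_mapping_error(dev, mapping)) {
			dev_kfree_skb(skb);	/* the packet is dropped */
			return NETDEV_TX_OK;	/* but this is not an error */
		}
		...
	}
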
SCSI drivers must return SCSI_MLQUEUE_HOST_BUSY if the DMA mapping
fails in the queuecommand hook.  This means that the SCSI subsystem
passes the command to the driver again later.

            Optimizing Unmap State Space Consumption

On many platforms, dma_unmap_{single,page}() is simply a nop.
Therefore, keeping track of the mapping address and length is a waste
of space.  Instead of filling your drivers up with ifdefs and the like
to "work around" this (which would defeat the whole purpose of a
portable API) the following facilities are provided.

Actually, instead of describing the macros one by one, we'll
transform some example code.

1) Use DEFINE_DMA_UNMAP_{ADDR,LEN} in state saving structures.
   Example, before:

	struct ring_state {
		struct sk_buff *skb;
		dma_addr_t mapping;
		__u32 len;
	};

   after:

	struct ring_state {
		struct sk_buff *skb;
		DEFINE_DMA_UNMAP_ADDR(mapping);
		DEFINE_DMA_UNMAP_LEN(len);
	};

2) Use dma_unmap_{addr,len}_set() to set these values.
   Example, before:

	ringp->mapping = FOO;
	ringp->len = BAR;

   after:

	dma_unmap_addr_set(ringp, mapping, FOO);
	dma_unmap_len_set(ringp, len, BAR);

3) Use dma_unmap_{addr,len}() to access these values.
   Example, before:

	dma_unmap_single(dev, ringp->mapping, ringp->len,
			 DMA_FROM_DEVICE);

   after:

	dma_unmap_single(dev,
			 dma_unmap_addr(ringp, mapping),
			 dma_unmap_len(ringp, len),
			 DMA_FROM_DEVICE);

It really should be self-explanatory.  We treat the ADDR and LEN
separately, because it is possible for an implementation to only
need the address in order to perform the unmap operation.

                         Platform Issues

If you are just writing drivers for Linux and do not maintain
an architecture port for the kernel, you can safely skip down
to "Closing".

1) Struct scatterlist requirements.

   You need to enable CONFIG_NEED_SG_DMA_LENGTH if the architecture
   supports IOMMUs (including software IOMMU).

2) ARCH_DMA_MINALIGN

   Architectures must ensure that kmalloc'ed buffers are
   DMA-safe.  Drivers and subsystems depend on it.  If an architecture
   isn't fully DMA-coherent (i.e. hardware doesn't ensure that data in
   the CPU cache is identical to data in main memory),
   ARCH_DMA_MINALIGN must be set so that the memory allocator
   makes sure that kmalloc'ed buffers don't share cache lines with
   others.  See arch/arm/include/asm/cache.h as an example.

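   For example, arch/arm ties it to the largest cache line size the
   kernel is built for, roughly:

	#define ARCH_DMA_MINALIGN	L1_CACHE_BYTES
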
   Note that ARCH_DMA_MINALIGN is about DMA memory alignment
   constraints.  You don't need to worry about the architecture data
   alignment constraints (e.g. the alignment constraints about 64-bit
   objects).

                           Closing

This document, and the API itself, would not be in its current
form without the feedback and suggestions from numerous individuals.
We would like to specifically mention, in no particular order, the
following people:

	Russell King <rmk@arm.linux.org.uk>
	Leo Dagum <dagum@barrel.engr.sgi.com>
	Ralf Baechle <ralf@oss.sgi.com>
	Grant Grundler <grundler@cup.hp.com>
	Jay Estabrook <Jay.Estabrook@compaq.com>
	Thomas Sailer <sailer@ife.ee.ethz.ch>
	Andrea Arcangeli <andrea@suse.de>
	Jens Axboe <jens.axboe@oracle.com>
	David Mosberger-Tang <davidm@hpl.hp.com>