=========================
Dynamic DMA mapping Guide
=========================

:Author: David S. Miller <davem@redhat.com>
:Author: Richard Henderson <rth@cygnus.com>
:Author: Jakub Jelinek <jakub@redhat.com>

This is a guide to device driver writers on how to use the DMA API
with example pseudo-code.  For a concise description of the API, see
DMA-API.txt.

CPU and DMA addresses
=====================

There are several kinds of addresses involved in the DMA API, and it's
important to understand the differences.

The kernel normally uses virtual addresses.  Any address returned by
kmalloc(), vmalloc(), and similar interfaces is a virtual address and can
be stored in a ``void *``.

The virtual memory system (TLB, page tables, etc.) translates virtual
addresses to CPU physical addresses, which are stored as "phys_addr_t" or
"resource_size_t".  The kernel manages device resources like registers as
physical addresses.  These are the addresses in /proc/iomem.  The physical
address is not directly useful to a driver; it must use ioremap() to map
the space and produce a virtual address.

I/O devices use a third kind of address: a "bus address".  If a device has
registers at an MMIO address, or if it performs DMA to read or write system
memory, the addresses used by the device are bus addresses.  In some
systems, bus addresses are identical to CPU physical addresses, but in
general they are not.  IOMMUs and host bridges can produce arbitrary
mappings between physical and bus addresses.

From a device's point of view, DMA uses the bus address space, but it may
be restricted to a subset of that space.  For example, even if a system
supports 64-bit addresses for main memory and PCI BARs, it may use an IOMMU
so devices only need to use 32-bit DMA addresses.

Here's a picture and some examples::

                 CPU                  CPU                  Bus
               Virtual              Physical             Address
               Address              Address               Space
                Space                Space

               +-------+             +------+             +------+
               |       |             |MMIO  |   Offset    |      |
               |       |  Virtual    |Space |   applied   |      |
             C +-------+ --------> B +------+ ----------> +------+ A
               |       |  mapping    |      |   by host   |      |
     +-----+   |       |             |      |   bridge    |      |   +--------+
     |     |   |       |             +------+             |      |   |        |
     | CPU |   |       |             | RAM  |             |      |   | Device |
     |     |   |       |             |      |             |      |   |        |
     +-----+   +-------+             +------+             +------+   +--------+
               |       |  Virtual    |Buffer|   Mapping   |      |
             X +-------+ --------> Y +------+ <---------- +------+ Z
               |       |  mapping    | RAM  |   by IOMMU
               |       |             |      |
               |       |             |      |
               +-------+             +------+

During the enumeration process, the kernel learns about I/O devices and
their MMIO space and the host bridges that connect them to the system.  For
example, if a PCI device has a BAR, the kernel reads the bus address (A)
from the BAR and converts it to a CPU physical address (B).  The address B
is stored in a struct resource and usually exposed via /proc/iomem.  When a
driver claims a device, it typically uses ioremap() to map physical address
B at a virtual address (C).  It can then use, e.g., ioread32(C), to access
the device registers at bus address A.
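
In PCI terms, that sequence might look like this minimal sketch, where
pci_ioremap_bar() performs the ioremap() step and MYDEV_STATUS is a made-up
register offset::

	void __iomem *regs;
	u32 status;

	/* Map physical address B; "regs" is the virtual address C. */
	regs = pci_ioremap_bar(pdev, 0);
	if (!regs)
		goto err;

	/* This read is decoded by the device at bus address A. */
	status = ioread32(regs + MYDEV_STATUS);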

If the device supports DMA, the driver sets up a buffer using kmalloc() or
a similar interface, which returns a virtual address (X).  The virtual
memory system maps X to a physical address (Y) in system RAM.  The driver
can use virtual address X to access the buffer, but the device itself
cannot because DMA doesn't go through the CPU virtual memory system.

In some simple systems, the device can do DMA directly to physical address
Y.  But in many others, there is IOMMU hardware that translates DMA
addresses to physical addresses, e.g., it translates Z to Y.  This is part
of the reason for the DMA API: the driver can give a virtual address X to
an interface like dma_map_single(), which sets up any required IOMMU
mapping and returns the DMA address Z.  The driver then tells the device to
do DMA to Z, and the IOMMU maps it to the buffer at address Y in system
RAM.
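
In the terms of the picture above, that handoff can be sketched as follows
(error checking is shown later; mydev_start_dma() is a made-up
device-programming helper)::

	/* "buf" is virtual address X; "dma_handle" is DMA address Z. */
	dma_addr_t dma_handle = dma_map_single(dev, buf, size, DMA_TO_DEVICE);

	/* The device transfers to/from Z; the IOMMU resolves Z to Y. */
	mydev_start_dma(mydev, dma_handle, size);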

For Linux to make use of dynamic DMA mapping, it needs some help from the
drivers: they must take into account that DMA addresses should be mapped
only for the time they are actually used, and unmapped after the DMA
transfer.

The following API will, of course, work even on platforms where no such
hardware exists.

Note that the DMA API works with any bus, independent of the underlying
microprocessor architecture.  You should use the DMA API rather than the
bus-specific DMA API, i.e., use the dma_map_*() interfaces rather than the
pci_map_*() interfaces.

First of all, you should make sure::

	#include <linux/dma-mapping.h>

is in your driver, which provides the definition of dma_addr_t.  This type
can hold any valid DMA address for the platform and should be used
everywhere you hold a DMA address returned from the DMA mapping functions.
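
For example, a driver-private structure might keep the two views of a
descriptor ring side by side (an illustrative sketch; the names are made
up)::

	struct mydev_ring {
		void		*desc;		/* CPU virtual address */
		dma_addr_t	desc_dma;	/* DMA address given to the device */
		size_t		size;
	};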

What memory is DMA'able?
========================

The first piece of information you must know is what kernel memory can
be used with the DMA mapping facilities.  There has been an unwritten
set of rules regarding this, and this text is an attempt to finally
write them down.

If you acquired your memory via the page allocator
(i.e. __get_free_page*()) or the generic memory allocators
(i.e. kmalloc() or kmem_cache_alloc()) then you may DMA to/from
that memory using the addresses returned from those routines.

This means specifically that you may _not_ use the memory/addresses
returned from vmalloc() for DMA.  It is possible to DMA to the
_underlying_ memory mapped into a vmalloc() area, but this requires
walking page tables to get the physical addresses, and then
translating each of those pages back to a kernel address using
something like __va().  [ EDIT: Update this when we integrate
Gerd Knorr's generic code which does this. ]

This rule also means that you may use neither kernel image addresses
(items in data/text/bss segments), nor module image addresses, nor
stack addresses for DMA.  These could all be mapped somewhere entirely
different than the rest of physical memory.  Even if those classes of
memory could physically work with DMA, you'd need to ensure the I/O
buffers were cacheline-aligned.  Without that, you'd see cacheline
sharing problems (data corruption) on CPUs with DMA-incoherent caches.
(The CPU could write to one word, DMA would write to a different one
in the same cache line, and one of them could be overwritten.)

Also, this means that you cannot take the return of a kmap()
call and DMA to/from that.  This is similar to vmalloc().

What about block I/O and networking buffers?  The block I/O and
networking subsystems make sure that the buffers they use are valid
for you to DMA from/to.
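
To summarize the rules above in a quick sketch::

	char *a = kmalloc(size, GFP_KERNEL);	/* OK for DMA */
	char *b = vmalloc(size);		/* NOT OK to map for DMA */
	char c[64];				/* NOT OK: stack memory */
	static char d[64];			/* NOT OK: kernel image (bss) memory */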

DMA addressing capabilities
===========================

By default, the kernel assumes that your device can address 32 bits of DMA
address space.  For a 64-bit capable device, this needs to be increased,
and for a device with limitations, it needs to be decreased.

Special note about PCI: PCI-X specification requires PCI-X devices to support
64-bit addressing (DAC) for all transactions.  And at least one platform (SGI
SN2) requires 64-bit consistent allocations to operate correctly when the IO
bus is in PCI-X mode.

For correct operation, you must set the DMA mask to inform the kernel about
your device's DMA addressing capabilities.

This is performed via a call to dma_set_mask_and_coherent()::

	int dma_set_mask_and_coherent(struct device *dev, u64 mask);

which will set the mask for both streaming and coherent APIs together.  If you
have some special requirements, then the following two separate calls can be
used instead:

The setup for streaming mappings is performed via a call to
dma_set_mask()::

	int dma_set_mask(struct device *dev, u64 mask);

The setup for consistent allocations is performed via a call
to dma_set_coherent_mask()::

	int dma_set_coherent_mask(struct device *dev, u64 mask);

Here, dev is a pointer to the device struct of your device, and mask is a bit
mask describing which bits of an address your device supports.  Often the
device struct of your device is embedded in the bus-specific device struct of
your device.  For example, &pdev->dev is a pointer to the device struct of a
PCI device (pdev is a pointer to the PCI device struct of your device).

These calls usually return zero to indicate your device can perform DMA
properly on the machine given the address mask you provided, but they might
return an error if the mask is too small to be supportable on the given
system.  If it returns non-zero, your device cannot perform DMA properly on
this platform, and attempting to do so will result in undefined behavior.
You must not use DMA on this device unless the dma_set_mask family of
functions has returned success.

This means that in the failure case, you have two options:

1) Use some non-DMA mode for data transfer, if possible.
2) Ignore this device and do not initialize it.

It is recommended that your driver print a kernel KERN_WARNING message when
setting the DMA mask fails.  In this manner, if a user of your driver reports
that performance is bad or that the device is not even detected, you can ask
them for the kernel messages to find out exactly why.

The standard 64-bit addressing device would do something like this::

	if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64))) {
		dev_warn(dev, "mydev: No suitable DMA available\n");
		goto ignore_this_device;
	}

If the device only supports 32-bit addressing for descriptors in the
coherent allocations, but supports full 64-bits for streaming mappings,
it would look like this::

	if (dma_set_mask(dev, DMA_BIT_MASK(64))) {
		dev_warn(dev, "mydev: No suitable DMA available\n");
		goto ignore_this_device;
	}

The coherent mask can always be set to the same or a smaller mask than
the streaming mask.  However, for the rare case that a device driver only
uses consistent allocations, one would have to check the return value from
dma_set_coherent_mask().
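
Such a driver might check it along the same lines as the examples above (a
sketch)::

	if (dma_set_coherent_mask(dev, DMA_BIT_MASK(32))) {
		dev_warn(dev, "mydev: No suitable coherent DMA available\n");
		goto ignore_this_device;
	}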

Finally, if your device can only drive the low 24 bits of
address, you might do something like::

	if (dma_set_mask(dev, DMA_BIT_MASK(24))) {
		dev_warn(dev, "mydev: 24-bit DMA addressing not available\n");
		goto ignore_this_device;
	}

When dma_set_mask() or dma_set_mask_and_coherent() is successful and
returns zero, the kernel saves away this mask you have provided.  The
kernel will use this information later when you make DMA mappings.

There is a case which we are aware of at this time, which is worth
mentioning in this documentation.  If your device supports multiple
functions (for example a sound card provides playback and record
functions) and the various different functions have _different_
DMA addressing limitations, you may wish to probe each mask and
only provide the functionality which the machine can handle.  It
is important that the last call to dma_set_mask() be for the
most specific mask.

Here is pseudo-code showing how this might be done::

	#define PLAYBACK_ADDRESS_BITS	DMA_BIT_MASK(32)
	#define RECORD_ADDRESS_BITS	DMA_BIT_MASK(24)

	struct my_sound_card *card;
	struct device *dev;

	...
	if (!dma_set_mask(dev, PLAYBACK_ADDRESS_BITS)) {
		card->playback_enabled = 1;
	} else {
		card->playback_enabled = 0;
		dev_warn(dev, "%s: Playback disabled due to DMA limitations\n",
			 card->name);
	}
	if (!dma_set_mask(dev, RECORD_ADDRESS_BITS)) {
		card->record_enabled = 1;
	} else {
		card->record_enabled = 0;
		dev_warn(dev, "%s: Record disabled due to DMA limitations\n",
			 card->name);
	}

A sound card was used as an example here because this genre of PCI
devices seems to be littered with ISA chips given a PCI front end,
and thus retaining the 16MB DMA addressing limitations of ISA.

Types of DMA mappings
=====================

There are two types of DMA mappings:

- Consistent DMA mappings which are usually mapped at driver
  initialization, unmapped at the end and for which the hardware should
  guarantee that the device and the CPU can access the data
  in parallel and will see updates made by each other without any
  explicit software flushing.

  Think of "consistent" as "synchronous" or "coherent".

  The current default is to return consistent memory in the low 32
  bits of the DMA space.  However, for future compatibility you should
  set the consistent mask even if this default is fine for your
  driver.

  Good examples of what to use consistent mappings for are:

	- Network card DMA ring descriptors.
	- SCSI adapter mailbox command data structures.
	- Device firmware microcode executed out of
	  main memory.

  The invariant these examples all require is that any CPU store
  to memory is immediately visible to the device, and vice
  versa.  Consistent mappings guarantee this.

  .. important::

	Consistent DMA memory does not preclude the usage of
	proper memory barriers.  The CPU may reorder stores to
	consistent memory just as it may to normal memory.  Example:
	if it is important for the device to see the first word
	of a descriptor updated before the second, you must do
	something like::

		desc->word0 = address;
		wmb();
		desc->word1 = DESC_VALID;

	in order to get correct behavior on all platforms.

	Also, on some platforms your driver may need to flush CPU write
	buffers in much the same way as it needs to flush write buffers
	found in PCI bridges (such as by reading a register's value
	after writing it); see the sketch at the end of this section.

- Streaming DMA mappings which are usually mapped for one DMA
  transfer, unmapped right after it (unless you use dma_sync_* below)
  and for which hardware can optimize for sequential accesses.

  Think of "streaming" as "asynchronous" or "outside the coherency
  domain".

  Good examples of what to use streaming mappings for are:

	- Networking buffers transmitted/received by a device.
	- Filesystem buffers written/read by a SCSI device.

  The interfaces for using this type of mapping were designed in
  such a way that an implementation can make whatever performance
  optimizations the hardware allows.  To this end, when using
  such mappings you must be explicit about what you want to happen.

Neither type of DMA mapping has alignment restrictions that come from
the underlying bus, although some devices may have such restrictions.
Also, systems with caches that aren't DMA-coherent will work better
when the underlying buffers don't share cache lines with other data.
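
As an example of the write-buffer flushing mentioned above, reading a
register back after writing it might look like this minimal sketch
(mydev_kick(), MYDEV_CTRL, and MYDEV_START_DMA are hypothetical)::

	static void mydev_kick(struct mydev *mp)
	{
		iowrite32(MYDEV_START_DMA, mp->regs + MYDEV_CTRL);
		ioread32(mp->regs + MYDEV_CTRL);	/* flush the posted write */
	}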

Using Consistent DMA mappings
=============================

To allocate and map large (PAGE_SIZE or so) consistent DMA regions,
you should do::

	dma_addr_t dma_handle;

	cpu_addr = dma_alloc_coherent(dev, size, &dma_handle, gfp);

where dev is a ``struct device *``.  This may be called in interrupt
context with the GFP_ATOMIC flag.

Size is the length of the region you want to allocate, in bytes.

This routine will allocate RAM for that region, so it acts similarly to
__get_free_pages() (but takes size instead of a page order).  If your
driver needs regions sized smaller than a page, you may prefer using
the dma_pool interface, described below.

The consistent DMA mapping interfaces will by default return a DMA address
which is 32-bit addressable.  Even if the device indicates (via the DMA mask)
that it may address the upper 32-bits, consistent allocation will only
return > 32-bit addresses for DMA if the consistent DMA mask has been
explicitly changed via dma_set_coherent_mask().  This is true of the
dma_pool interface as well.

dma_alloc_coherent() returns two values: the virtual address which you
can use to access it from the CPU and dma_handle which you pass to the
card.

The CPU virtual address and the DMA address are both
guaranteed to be aligned to the smallest PAGE_SIZE order which
is greater than or equal to the requested size.  This invariant
exists (for example) to guarantee that if you allocate a chunk
which is smaller than or equal to 64 kilobytes, the extent of the
buffer you receive will not cross a 64K boundary.

To unmap and free such a DMA region, you call::

	dma_free_coherent(dev, size, cpu_addr, dma_handle);

where dev, size are the same as in the above call and cpu_addr and
dma_handle are the values dma_alloc_coherent() returned to you.
This function may not be called in interrupt context.

If your driver needs lots of smaller memory regions, you can write
custom code to subdivide pages returned by dma_alloc_coherent(),
or you can use the dma_pool API to do that.  A dma_pool is like
a kmem_cache, but it uses dma_alloc_coherent(), not __get_free_pages().
Also, it understands common hardware constraints for alignment,
like queue heads needing to be aligned on N byte boundaries.

Create a dma_pool like this::

	struct dma_pool *pool;

	pool = dma_pool_create(name, dev, size, align, boundary);

The "name" is for diagnostics (like a kmem_cache name); dev and size
are as above.  The device's hardware alignment requirement for this
type of data is "align" (which is expressed in bytes, and must be a
power of two).  If your device has no boundary crossing restrictions,
pass 0 for boundary; passing 4096 says memory allocated from this pool
must not cross 4KByte boundaries (but at that time it may be better to
use dma_alloc_coherent() directly instead).

Allocate memory from a DMA pool like this::

	cpu_addr = dma_pool_alloc(pool, flags, &dma_handle);

flags are GFP_KERNEL if blocking is permitted (not in_interrupt nor
holding SMP locks), GFP_ATOMIC otherwise.  Like dma_alloc_coherent(),
this returns two values, cpu_addr and dma_handle.

Free memory that was allocated from a dma_pool like this::

	dma_pool_free(pool, cpu_addr, dma_handle);

where pool is what you passed to dma_pool_alloc(), and cpu_addr and
dma_handle are the values dma_pool_alloc() returned.  This function
may be called in interrupt context.

Destroy a dma_pool by calling::

	dma_pool_destroy(pool);

Make sure you've called dma_pool_free() for all memory allocated
from a pool before you destroy the pool.  This function may not
be called in interrupt context.
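
Putting those calls together, a minimal sketch for a pool of 64-byte
descriptors aligned to 16 bytes (the names are illustrative) might be::

	struct dma_pool *pool;
	void *vaddr;
	dma_addr_t handle;

	pool = dma_pool_create("mydev_desc", dev, 64, 16, 0);
	if (!pool)
		goto err;

	vaddr = dma_pool_alloc(pool, GFP_KERNEL, &handle);
	if (!vaddr)
		goto err_destroy;

	/* ... give "handle" to the device, use "vaddr" from the CPU ... */

	dma_pool_free(pool, vaddr, handle);
	dma_pool_destroy(pool);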

DMA Direction
=============

The interfaces described in subsequent portions of this document
take a DMA direction argument, which is an integer and takes on
one of the following values::

	DMA_BIDIRECTIONAL
	DMA_TO_DEVICE
	DMA_FROM_DEVICE
	DMA_NONE

You should provide the exact DMA direction if you know it.

DMA_TO_DEVICE means "from main memory to the device".
DMA_FROM_DEVICE means "from the device to main memory".
It is the direction in which the data moves during the DMA
transfer.

You are _strongly_ encouraged to specify this as precisely
as you possibly can.

If you absolutely cannot know the direction of the DMA transfer,
specify DMA_BIDIRECTIONAL.  It means that the DMA can go in
either direction.  The platform guarantees that you may legally
specify this, and that it will work, but this may be at the
cost of performance, for example.

The value DMA_NONE is to be used for debugging.  One can
hold this in a data structure before you come to know the
precise direction, and this will help catch cases where your
direction tracking logic has failed to set things up properly.

Another advantage of specifying this value precisely (outside of
potential platform-specific optimizations of such) is for debugging.
Some platforms actually have a write permission boolean which DMA
mappings can be marked with, much like page protections in the user
program address space.  Such platforms can and do report errors in the
kernel logs when the DMA controller hardware detects violation of the
permission setting.

Only streaming mappings specify a direction; consistent mappings
implicitly have a direction attribute setting of
DMA_BIDIRECTIONAL.

The SCSI subsystem tells you the direction to use in the
'sc_data_direction' member of the SCSI command your driver is
working on.

For Networking drivers, it's a rather simple affair.  For transmit
packets, map/unmap them with the DMA_TO_DEVICE direction
specifier.  For receive packets, just the opposite, map/unmap them
with the DMA_FROM_DEVICE direction specifier.
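
For example (a sketch; "rx_buf" and RX_BUF_LEN are made-up names)::

	/* Transmit: the device reads the packet from memory. */
	tx_dma = dma_map_single(dev, skb->data, skb->len, DMA_TO_DEVICE);

	/* Receive: the device writes incoming data into the buffer. */
	rx_dma = dma_map_single(dev, rx_buf, RX_BUF_LEN, DMA_FROM_DEVICE);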

Using Streaming DMA mappings
============================

The streaming DMA mapping routines can be called from interrupt
context.  There are two versions of each map/unmap, one which will
map/unmap a single memory region, and one which will map/unmap a
scatterlist.

To map a single region, you do::

	struct device *dev = &my_dev->dev;
	dma_addr_t dma_handle;
	void *addr = buffer->ptr;
	size_t size = buffer->len;

	dma_handle = dma_map_single(dev, addr, size, direction);
	if (dma_mapping_error(dev, dma_handle)) {
		/*
		 * reduce current DMA mapping usage,
		 * delay and try again later or
		 * reset driver.
		 */
		goto map_error_handling;
	}

and to unmap it::

	dma_unmap_single(dev, dma_handle, size, direction);

You should call dma_mapping_error() as dma_map_single() could fail and
return an error.  Doing so will ensure that the mapping code will work
correctly on all DMA implementations without any dependency on the specifics
of the underlying implementation.  Using the returned address without
checking for errors could result in failures ranging from panics to silent
data corruption.  The same applies to dma_map_page() as well.

You should call dma_unmap_single() when the DMA activity is finished, e.g.,
from the interrupt which told you that the DMA transfer is done.

Using CPU pointers like this for single mappings has a disadvantage:
you cannot reference HIGHMEM memory in this way.  Thus, there is a
map/unmap interface pair akin to dma_{map,unmap}_single().  These
interfaces deal with page/offset pairs instead of CPU pointers.
Specifically::

	struct device *dev = &my_dev->dev;
	dma_addr_t dma_handle;
	struct page *page = buffer->page;
	unsigned long offset = buffer->offset;
	size_t size = buffer->len;

	dma_handle = dma_map_page(dev, page, offset, size, direction);
	if (dma_mapping_error(dev, dma_handle)) {
		/*
		 * reduce current DMA mapping usage,
		 * delay and try again later or
		 * reset driver.
		 */
		goto map_error_handling;
	}

	...

	dma_unmap_page(dev, dma_handle, size, direction);

Here, "offset" means byte offset within the given page.

You should call dma_mapping_error() as dma_map_page() could fail and return
an error, as outlined under the dma_map_single() discussion.

You should call dma_unmap_page() when the DMA activity is finished, e.g.,
from the interrupt which told you that the DMA transfer is done.

With scatterlists, you map a region gathered from several regions by::

	int i, count = dma_map_sg(dev, sglist, nents, direction);
	struct scatterlist *sg;

	for_each_sg(sglist, sg, count, i) {
		hw_address[i] = sg_dma_address(sg);
		hw_len[i] = sg_dma_len(sg);
	}

where nents is the number of entries in the sglist.

The implementation is free to merge several consecutive sglist entries
into one (e.g. if DMA mapping is done with PAGE_SIZE granularity, any
consecutive sglist entries can be merged into one provided the first one
ends and the second one starts on a page boundary - in fact this is a huge
advantage for cards which either cannot do scatter-gather or have very
limited number of scatter-gather entries) and returns the actual number
of sg entries it mapped them to.  On failure 0 is returned.

Then you should loop count times (note: this can be less than nents times)
and use sg_dma_address() and sg_dma_len() macros where you previously
accessed sg->address and sg->length as shown above.

To unmap a scatterlist, just call::

	dma_unmap_sg(dev, sglist, nents, direction);

Again, make sure DMA activity has already finished.

.. note::

	The 'nents' argument to the dma_unmap_sg call must be
	the _same_ one you passed into the dma_map_sg call,
	it should _NOT_ be the 'count' value _returned_ from the
	dma_map_sg call.

Every dma_map_{single,sg}() call should have its dma_unmap_{single,sg}()
counterpart, because the DMA address space is a shared resource and
you could render the machine unusable by consuming all DMA addresses.

If you need to use the same streaming DMA region multiple times and touch
the data in between the DMA transfers, the buffer needs to be synced
properly in order for the CPU and device to see the most up-to-date and
correct copy of the DMA buffer.

So, firstly, just map it with dma_map_{single,sg}(), and after each DMA
transfer call either::

	dma_sync_single_for_cpu(dev, dma_handle, size, direction);

or::

	dma_sync_sg_for_cpu(dev, sglist, nents, direction);

as appropriate.

Then, if you wish to let the device get at the DMA area again,
finish accessing the data with the CPU, and then before actually
giving the buffer to the hardware call either::

	dma_sync_single_for_device(dev, dma_handle, size, direction);

or::

	dma_sync_sg_for_device(dev, sglist, nents, direction);

as appropriate.

.. note::

	The 'nents' argument to dma_sync_sg_for_cpu() and
	dma_sync_sg_for_device() must be the same passed to
	dma_map_sg().  It is _NOT_ the count returned by
	dma_map_sg().

After the last DMA transfer call one of the DMA unmap routines
dma_unmap_{single,sg}().  If you don't touch the data from the first
dma_map_*() call till dma_unmap_*(), then you don't have to call the
dma_sync_*() routines at all.

Here is pseudo code which shows a situation in which you would need
to use the dma_sync_*() interfaces::

	my_card_setup_receive_buffer(struct my_card *cp, char *buffer, int len)
	{
		dma_addr_t mapping;

		mapping = dma_map_single(cp->dev, buffer, len, DMA_FROM_DEVICE);
		if (dma_mapping_error(cp->dev, mapping)) {
			/*
			 * reduce current DMA mapping usage,
			 * delay and try again later or
			 * reset driver.
			 */
			goto map_error_handling;
		}

		cp->rx_buf = buffer;
		cp->rx_len = len;
		cp->rx_dma = mapping;

		give_rx_buf_to_card(cp);
	}

	...

	my_card_interrupt_handler(int irq, void *devid, struct pt_regs *regs)
	{
		struct my_card *cp = devid;

		...
		if (read_card_status(cp) == RX_BUF_TRANSFERRED) {
			struct my_card_header *hp;

			/* Examine the header to see if we wish
			 * to accept the data.  But synchronize
			 * the DMA transfer with the CPU first
			 * so that we see updated contents.
			 */
			dma_sync_single_for_cpu(cp->dev, cp->rx_dma,
						cp->rx_len,
						DMA_FROM_DEVICE);

			/* Now it is safe to examine the buffer. */
			hp = (struct my_card_header *) cp->rx_buf;
			if (header_is_ok(hp)) {
				dma_unmap_single(cp->dev, cp->rx_dma, cp->rx_len,
						 DMA_FROM_DEVICE);
				pass_to_upper_layers(cp->rx_buf);
				make_and_setup_new_rx_buf(cp);
			} else {
				/* CPU should not write to
				 * DMA_FROM_DEVICE-mapped area,
				 * so dma_sync_single_for_device() is
				 * not needed here.  It would be required
				 * for DMA_BIDIRECTIONAL mapping if
				 * the memory was modified.
				 */
				give_rx_buf_to_card(cp);
			}
		}
	}

Drivers converted fully to this interface should not use virt_to_bus() any
longer, nor should they use bus_to_virt().  Some drivers have to be changed a
little bit, because there is no longer an equivalent to bus_to_virt() in the
dynamic DMA mapping scheme - you have to always store the DMA addresses
returned by the dma_alloc_coherent(), dma_pool_alloc(), and dma_map_single()
calls (dma_map_sg() stores them in the scatterlist itself if the platform
supports dynamic DMA mapping in hardware) in your driver structures and/or
in the card registers.

All drivers should be using these interfaces with no exceptions.  It
is planned to completely remove virt_to_bus() and bus_to_virt() as
they are entirely deprecated.  Some ports already do not provide these
as it is impossible to correctly support them.

Mauro Carvalho Chehab | 266921b | 2017-05-17 10:27:28 -0300 | [diff] [blame] | 724 | Handling Errors |
| 725 | =============== |
FUJITA Tomonori | 4ae9ca8 | 2010-05-26 14:44:22 -0700 | [diff] [blame] | 726 | |
| 727 | DMA address space is limited on some architectures and an allocation |
| 728 | failure can be determined by: |
| 729 | |
Bjorn Helgaas | 77f2ea2 | 2014-04-30 11:20:53 -0600 | [diff] [blame] | 730 | - checking if dma_alloc_coherent() returns NULL or dma_map_sg returns 0 |
FUJITA Tomonori | 4ae9ca8 | 2010-05-26 14:44:22 -0700 | [diff] [blame] | 731 | |
Bjorn Helgaas | 77f2ea2 | 2014-04-30 11:20:53 -0600 | [diff] [blame] | 732 | - checking the dma_addr_t returned from dma_map_single() and dma_map_page() |
Mauro Carvalho Chehab | 266921b | 2017-05-17 10:27:28 -0300 | [diff] [blame] | 733 | by using dma_mapping_error():: |
FUJITA Tomonori | 4ae9ca8 | 2010-05-26 14:44:22 -0700 | [diff] [blame] | 734 | |
| 735 | dma_addr_t dma_handle; |
| 736 | |
| 737 | dma_handle = dma_map_single(dev, addr, size, direction); |
| 738 | if (dma_mapping_error(dev, dma_handle)) { |
| 739 | /* |
| 740 | * reduce current DMA mapping usage, |
| 741 | * delay and try again later or |
| 742 | * reset driver. |
| 743 | */ |
Shuah Khan | 8d7f62e | 2012-10-18 14:00:58 -0600 | [diff] [blame] | 744 | goto map_error_handling; |
| 745 | } |
| 746 | |
| 747 | - unmap pages that are already mapped, when mapping error occurs in the middle |
| 748 | of a multiple page mapping attempt. These example are applicable to |
| 749 | dma_map_page() as well. |
| 750 | |
Example 1::

	dma_addr_t dma_handle1;
	dma_addr_t dma_handle2;

	dma_handle1 = dma_map_single(dev, addr, size, direction);
	if (dma_mapping_error(dev, dma_handle1)) {
		/*
		 * reduce current DMA mapping usage,
		 * delay and try again later or
		 * reset driver.
		 */
		goto map_error_handling1;
	}
	dma_handle2 = dma_map_single(dev, addr, size, direction);
	if (dma_mapping_error(dev, dma_handle2)) {
		/*
		 * reduce current DMA mapping usage,
		 * delay and try again later or
		 * reset driver.
		 */
		goto map_error_handling2;
	}

	...

	map_error_handling2:
	dma_unmap_single(dev, dma_handle1, size, direction);
	map_error_handling1:

Example 2::

	/*
	 * if buffers are allocated in a loop, unmap all mapped buffers when
	 * a mapping error is detected in the middle
	 */

	int i;
	dma_addr_t dma_addr;
	dma_addr_t array[DMA_BUFFERS];
	int save_index = 0;

	for (i = 0; i < DMA_BUFFERS; i++) {

		...

		dma_addr = dma_map_single(dev, addr, size, direction);
		if (dma_mapping_error(dev, dma_addr)) {
			/*
			 * reduce current DMA mapping usage,
			 * delay and try again later or
			 * reset driver.
			 */
			goto map_error_handling;
		}
		array[i] = dma_addr;
		save_index++;
	}

	...

	map_error_handling:

	for (i = 0; i < save_index; i++) {

		...

		dma_unmap_single(dev, array[i], size, direction);
	}

Networking drivers must call dev_kfree_skb() to free the socket buffer
and return NETDEV_TX_OK if the DMA mapping fails on the transmit hook
(ndo_start_xmit). This means that the socket buffer is just dropped in
the failure case.

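A minimal sketch of that failure path (``struct my_priv`` and its
``dmadev`` member are hypothetical)::

	static netdev_tx_t my_start_xmit(struct sk_buff *skb,
					 struct net_device *netdev)
	{
		struct my_priv *priv = netdev_priv(netdev);
		dma_addr_t mapping;

		mapping = dma_map_single(priv->dmadev, skb->data,
					 skb->len, DMA_TO_DEVICE);
		if (dma_mapping_error(priv->dmadev, mapping)) {
			/* Drop the packet; do not ask the stack to retry. */
			dev_kfree_skb(skb);
			return NETDEV_TX_OK;
		}

		/* ... hand the mapped buffer to the hardware ... */

		return NETDEV_TX_OK;
	}
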
SCSI drivers must return SCSI_MLQUEUE_HOST_BUSY if the DMA mapping
fails in the queuecommand hook. This means that the SCSI subsystem
passes the command to the driver again later.

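Sketched in the same spirit (my_map_command() is a made-up helper that
returns nonzero when a DMA mapping fails)::

	static int my_queuecommand(struct Scsi_Host *shost,
				   struct scsi_cmnd *cmd)
	{
		if (my_map_command(shost, cmd))
			/* The SCSI midlayer will retry the command later. */
			return SCSI_MLQUEUE_HOST_BUSY;

		/* ... issue the command to the hardware ... */

		return 0;
	}
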
Optimizing Unmap State Space Consumption
========================================

On many platforms, dma_unmap_{single,page}() is simply a nop.
Therefore, keeping track of the mapping address and length is a waste
of space. Instead of filling your drivers with ifdefs and the like to
"work around" this (which would defeat the whole purpose of a portable
API), the following facilities are provided.

Instead of describing the macros one by one, we'll transform some
example code.

1) Use DEFINE_DMA_UNMAP_{ADDR,LEN} in state saving structures.
   Example, before::

	struct ring_state {
		struct sk_buff *skb;
		dma_addr_t mapping;
		__u32 len;
	};

   after::

	struct ring_state {
		struct sk_buff *skb;
		DEFINE_DMA_UNMAP_ADDR(mapping);
		DEFINE_DMA_UNMAP_LEN(len);
	};

2) Use dma_unmap_{addr,len}_set() to set these values.
   Example, before::

	ringp->mapping = FOO;
	ringp->len = BAR;

   after::

	dma_unmap_addr_set(ringp, mapping, FOO);
	dma_unmap_len_set(ringp, len, BAR);

3) Use dma_unmap_{addr,len}() to access these values.
   Example, before::

	dma_unmap_single(dev, ringp->mapping, ringp->len,
			 DMA_FROM_DEVICE);

   after::

	dma_unmap_single(dev,
			 dma_unmap_addr(ringp, mapping),
			 dma_unmap_len(ringp, len),
			 DMA_FROM_DEVICE);

It really should be self-explanatory. We treat ADDR and LEN separately
because an implementation may need only the address in order to
perform the unmap operation.

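To see why this saves space, here is roughly how the address macros are
implemented when the platform does not need the state (simplified; the
real definitions live in the DMA mapping headers and vary by kernel
version)::

	#ifdef CONFIG_NEED_DMA_MAP_STATE
	#define DEFINE_DMA_UNMAP_ADDR(ADDR_NAME)	dma_addr_t ADDR_NAME
	#define dma_unmap_addr(PTR, ADDR_NAME)		((PTR)->ADDR_NAME)
	#define dma_unmap_addr_set(PTR, ADDR_NAME, VAL)	(((PTR)->ADDR_NAME) = (VAL))
	#else
	#define DEFINE_DMA_UNMAP_ADDR(ADDR_NAME)
	#define dma_unmap_addr(PTR, ADDR_NAME)		(0)
	#define dma_unmap_addr_set(PTR, ADDR_NAME, VAL)	do { } while (0)
	#endif

In the nop flavor, the structure member and every store to it compile
away entirely, which is exactly the space saving this section is about.
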
Platform Issues
===============

If you are just writing drivers for Linux and do not maintain
an architecture port for the kernel, you can safely skip down
to "Closing".

1) Struct scatterlist requirements.

   You need to enable CONFIG_NEED_SG_DMA_LENGTH if the architecture
   supports IOMMUs (including software IOMMU).

2) ARCH_DMA_MINALIGN

   Architectures must ensure that kmalloc'ed buffers are
   DMA-safe. Drivers and subsystems depend on it. If an architecture
   isn't fully DMA-coherent (i.e. hardware doesn't ensure that data in
   the CPU cache is identical to data in main memory),
   ARCH_DMA_MINALIGN must be set so that the memory allocator
   makes sure that a kmalloc'ed buffer doesn't share a cache line with
   others. See arch/arm/include/asm/cache.h as an example.

   Note that ARCH_DMA_MINALIGN is about DMA memory alignment
   constraints. You don't need to worry about the architecture's data
   alignment constraints (e.g. the alignment constraints for 64-bit
   objects).

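As a concrete illustration, the ARM definition boils down to the L1
cache line size (condensed from arch/arm/include/asm/cache.h)::

	#define L1_CACHE_SHIFT		CONFIG_ARM_L1_CACHE_SHIFT
	#define L1_CACHE_BYTES		(1 << L1_CACHE_SHIFT)

	/*
	 * Memory returned by kmalloc() may be used for DMA, so we must
	 * make sure that all such allocations are cache aligned.
	 */
	#define ARCH_DMA_MINALIGN	L1_CACHE_BYTES
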
Closing
=======

This document, and the API itself, would not be in its current
form without the feedback and suggestions from numerous individuals.
We would like to specifically mention, in no particular order, the
following people::

	Russell King <rmk@arm.linux.org.uk>
	Leo Dagum <dagum@barrel.engr.sgi.com>
	Ralf Baechle <ralf@oss.sgi.com>
	Grant Grundler <grundler@cup.hp.com>
	Jay Estabrook <Jay.Estabrook@compaq.com>
	Thomas Sailer <sailer@ife.ee.ethz.ch>
	Andrea Arcangeli <andrea@suse.de>
	Jens Axboe <jens.axboe@oracle.com>
	David Mosberger-Tang <davidm@hpl.hp.com>