.. SPDX-License-Identifier: GPL-2.0

==============================
How To Write Linux PCI Drivers
==============================

:Authors: - Martin Mares <mj@ucw.cz>
          - Grant Grundler <grundler@parisc-linux.org>

The world of PCI is vast and full of (mostly unpleasant) surprises.
Since each CPU architecture implements different chip-sets and PCI devices
have different requirements (erm, "features"), the result is that the PCI
support in the Linux kernel is not as trivial as one would wish. This short
paper tries to introduce all potential driver authors to Linux APIs for
PCI device drivers.

A more complete resource is the third edition of "Linux Device Drivers"
by Jonathan Corbet, Alessandro Rubini, and Greg Kroah-Hartman.
LDD3 is available for free (under Creative Commons License) from:
https://lwn.net/Kernel/LDD3/.

However, keep in mind that all documents are subject to "bit rot".
Refer to the source code if things are not working as described here.

Please send questions/comments/patches about Linux PCI API to the
"Linux PCI" <linux-pci@atrey.karlin.mff.cuni.cz> mailing list.

Structure of PCI drivers
========================
PCI drivers "discover" PCI devices in a system via pci_register_driver().
Actually, it's the other way around. When the PCI generic code discovers
a new device, the driver with a matching "description" will be notified.
Details on this below.

pci_register_driver() leaves most of the probing for devices to
the PCI layer and supports online insertion/removal of devices [thus
supporting hot-pluggable PCI, CardBus, and Express-Card in a single driver].
The pci_register_driver() call requires passing in a table of function
pointers and thus dictates the high level structure of a driver.

Once the driver knows about a PCI device and takes ownership, the
driver generally needs to perform the following initialization:

  - Enable the device
  - Request MMIO/IOP resources
  - Set the DMA mask size (for both coherent and streaming DMA)
  - Allocate and initialize shared control data (dma_alloc_coherent())
  - Access device configuration space (if needed)
  - Register IRQ handler (request_irq())
  - Initialize non-PCI (i.e. LAN/SCSI/etc parts of the chip)
  - Enable DMA/processing engines
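
The steps above can be sketched as a probe() routine. This is only an
illustration: the ``my_*`` names are hypothetical placeholders, not a
real driver::

	static int my_probe(struct pci_dev *pdev, const struct pci_device_id *id)
	{
		int err;

		/* Enable the device */
		err = pci_enable_device(pdev);
		if (err)
			return err;

		/* Request MMIO/IOP resources */
		err = pci_request_regions(pdev, "my_driver");
		if (err)
			goto err_disable;

		/* ... set DMA masks, allocate shared control data,
		 * register the IRQ handler, initialize the chip, and
		 * finally enable its DMA/processing engines ... */

		return 0;

	err_disable:
		pci_disable_device(pdev);
		return err;
	}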

When done using the device, and perhaps the module needs to be unloaded,
the driver needs to take the following steps:

  - Disable the device from generating IRQs
  - Release the IRQ (free_irq())
  - Stop all DMA activity
  - Release DMA buffers (both streaming and coherent)
  - Unregister from other subsystems (e.g. scsi or netdev)
  - Release MMIO/IOP resources
  - Disable the device

Most of these topics are covered in the following sections.
For the rest look at LDD3 or <linux/pci.h>.

If the PCI subsystem is not configured (CONFIG_PCI is not set), most of
the PCI functions described below are defined as inline functions either
completely empty or just returning an appropriate error code to avoid
lots of ifdefs in the drivers.


pci_register_driver() call
==========================

PCI device drivers call ``pci_register_driver()`` during their
initialization with a pointer to a structure describing the driver
(``struct pci_driver``):

.. kernel-doc:: include/linux/pci.h
   :functions: pci_driver

The ID table is an array of ``struct pci_device_id`` entries ending with an
all-zero entry. Definitions with static const are generally preferred.

.. kernel-doc:: include/linux/mod_devicetable.h
   :functions: pci_device_id

Most drivers only need ``PCI_DEVICE()`` or ``PCI_DEVICE_CLASS()`` to set up
a pci_device_id table.

New PCI IDs may be added to a device driver pci_ids table at runtime
as shown below::

  echo "vendor device subvendor subdevice class class_mask driver_data" > \
  /sys/bus/pci/drivers/{driver}/new_id

All fields are passed in as hexadecimal values (no leading 0x).
The vendor and device fields are mandatory, the others are optional. Users
need to pass only as many optional fields as necessary:

  - subvendor and subdevice fields default to PCI_ANY_ID (FFFFFFFF)
  - class and classmask fields default to 0
  - driver_data defaults to 0UL.

Note that driver_data must match the value used by any of the pci_device_id
entries defined in the driver. This makes the driver_data field mandatory
if all the pci_device_id entries have a non-zero driver_data value.

Once added, the driver probe routine will be invoked for any unclaimed
PCI devices listed in its (newly updated) pci_ids list.

When the driver exits, it just calls pci_unregister_driver() and the PCI layer
automatically calls the remove hook for all devices handled by the driver.


"Attributes" for driver functions/data
--------------------------------------

Please mark the initialization and cleanup functions where appropriate
(the corresponding macros are defined in <linux/init.h>):

        ======  =================================================
        __init  Initialization code. Thrown away after the driver
                initializes.
        __exit  Exit code. Ignored for non-modular drivers.
        ======  =================================================

Tips on when/where to use the above attributes:

  - The module_init()/module_exit() functions (and all
    initialization functions called _only_ from these)
    should be marked __init/__exit.

  - Do not mark the struct pci_driver.

  - Do NOT mark a function if you are not sure which mark to use.
    Better to not mark the function than mark the function wrong.


How to find PCI devices manually
================================

PCI drivers should have a really good reason for not using the
pci_register_driver() interface to search for PCI devices.
The main reason PCI devices are controlled by multiple drivers
is because one PCI device implements several different HW services.
E.g. combined serial/parallel port/floppy controller.

A manual search may be performed using the following constructs:

Searching by vendor and device ID::

	struct pci_dev *dev = NULL;
	while ((dev = pci_get_device(VENDOR_ID, DEVICE_ID, dev)))
		configure_device(dev);

Searching by class ID (iterate in a similar way)::

	pci_get_class(CLASS_ID, dev)

Searching by both vendor/device and subsystem vendor/device ID::

	pci_get_subsys(VENDOR_ID, DEVICE_ID, SUBSYS_VENDOR_ID, SUBSYS_DEVICE_ID, dev)

You can use the constant PCI_ANY_ID as a wildcard replacement for
VENDOR_ID or DEVICE_ID. This allows searching for any device from a
specific vendor, for example.

These functions are hotplug-safe. They increment the reference count on
the pci_dev that they return. You must eventually (possibly at module unload)
decrement the reference count on these devices by calling pci_dev_put().
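
For instance, a hypothetical lookup that keeps the device only briefly would
balance the reference before returning::

	struct pci_dev *dev = pci_get_device(VENDOR_ID, DEVICE_ID, NULL);

	if (dev) {
		/* ... use the device ... */
		pci_dev_put(dev);	/* drop the reference pci_get_device() took */
	}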

Device Initialization Steps
===========================

As noted in the introduction, most PCI drivers need the following steps
for device initialization:

  - Enable the device
  - Request MMIO/IOP resources
  - Set the DMA mask size (for both coherent and streaming DMA)
  - Allocate and initialize shared control data (dma_alloc_coherent())
  - Access device configuration space (if needed)
  - Register IRQ handler (request_irq())
  - Initialize non-PCI (i.e. LAN/SCSI/etc parts of the chip)
  - Enable DMA/processing engines.

The driver can access PCI config space registers at any time.
(Well, almost. When running BIST, config space can go away...but
that will just result in a PCI Bus Master Abort and config reads
will return garbage.)


Enable the PCI device
---------------------
Before touching any device registers, the driver needs to enable
the PCI device by calling pci_enable_device(). This will:

  - wake up the device if it was in suspended state,
  - allocate I/O and memory regions of the device (if BIOS did not),
  - allocate an IRQ (if BIOS did not).

.. note::
   pci_enable_device() can fail! Check the return value.

.. warning::
   OS BUG: we don't check resource allocations before enabling those
   resources. The sequence would make more sense if we called
   pci_request_resources() before calling pci_enable_device().
   Currently, the device drivers can't detect the bug when two
   devices have been allocated the same range. This is not a common
   problem and unlikely to get fixed soon.

   This has been discussed before but not changed as of 2.6.19:
   https://lore.kernel.org/r/20060302180025.GC28895@flint.arm.linux.org.uk/


pci_set_master() will enable DMA by setting the bus master bit
in the PCI_COMMAND register. It also fixes the latency timer value if
it's set to something bogus by the BIOS. pci_clear_master() will
disable DMA by clearing the bus master bit.

If the PCI device can use the PCI Memory-Write-Invalidate transaction,
call pci_set_mwi(). This enables the PCI_COMMAND bit for Mem-Wr-Inval
and also ensures that the cache line size register is set correctly.
Check the return value of pci_set_mwi() as not all architectures
or chip-sets may support Memory-Write-Invalidate. Alternatively,
if Mem-Wr-Inval would be nice to have but is not required, call
pci_try_set_mwi() to have the system do its best effort at enabling
Mem-Wr-Inval.
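
Taken together, a minimal (illustrative, not authoritative) enable sequence
might look like this::

	err = pci_enable_device(pdev);
	if (err)
		return err;		/* pci_enable_device() can fail! */

	pci_set_master(pdev);		/* enable bus mastering (DMA) */

	/* Mem-Wr-Inval is assumed optional for this hypothetical device */
	pci_try_set_mwi(pdev);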


Request MMIO/IOP resources
--------------------------
Memory (MMIO) and I/O port addresses should NOT be read directly
from the PCI device config space. Use the values in the pci_dev structure
as the PCI "bus address" might have been remapped to a "host physical"
address by the arch/chip-set specific kernel support.

See Documentation/driver-api/io-mapping.rst for how to access device registers
or device memory.

The device driver needs to call pci_request_region() to verify
no other device is already using the same address resource.
Conversely, drivers should call pci_release_region() AFTER
calling pci_disable_device().
The idea is to prevent two devices colliding on the same address range.

.. tip::
   See OS BUG comment above. Currently (2.6.19), the driver can only
   determine MMIO and IO Port resource availability _after_ calling
   pci_enable_device().

Generic flavors of pci_request_region() are request_mem_region()
(for MMIO ranges) and request_region() (for IO Port ranges).
Use these for address resources that are not described by "normal" PCI
BARs.
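
As a sketch, a driver whose registers live in BAR 0 (an assumption for this
example) might request and map that region with::

	err = pci_request_region(pdev, 0, "my_driver");
	if (err)
		goto err_disable;

	regs = pci_iomap(pdev, 0, 0);	/* 0 == map the whole of BAR 0 */
	if (!regs) {
		err = -ENOMEM;
		goto err_release;
	}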

Also see pci_request_selected_regions() below.


Set the DMA mask size
---------------------
.. note::
   If anything below doesn't make sense, please refer to
   :doc:`/core-api/dma-api`. This section is just a reminder that
   drivers need to indicate DMA capabilities of the device and is not
   an authoritative source for DMA interfaces.

While all drivers should explicitly indicate the DMA capability
(e.g. 32 or 64 bit) of the PCI bus master, devices with more than
32-bit bus master capability for streaming data need the driver
to "register" this capability by calling pci_set_dma_mask() with
appropriate parameters. In general this allows more efficient DMA
on systems where System RAM exists above 4G _physical_ address.

Drivers for all PCI-X and PCIe compliant devices must call
pci_set_dma_mask() as they are 64-bit DMA devices.

Similarly, drivers must also "register" this capability if the device
can directly address "consistent memory" in System RAM above 4G physical
address by calling pci_set_consistent_dma_mask().
Again, this includes drivers for all PCI-X and PCIe compliant devices.
Many 64-bit "PCI" devices (before PCI-X) and some PCI-X devices are
64-bit DMA capable for payload ("streaming") data but not control
("consistent") data.
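
A common pattern, sketched here for a hypothetical 64-bit capable device, is
to try the 64-bit masks first and fall back to 32 bits::

	if (!pci_set_dma_mask(pdev, DMA_BIT_MASK(64))) {
		pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64));
	} else if (!pci_set_dma_mask(pdev, DMA_BIT_MASK(32))) {
		pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32));
	} else {
		dev_err(&pdev->dev, "no usable DMA mask\n");
		goto err_release;
	}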


Setup shared control data
-------------------------
Once the DMA masks are set, the driver can allocate "consistent" (a.k.a. shared)
memory. See :doc:`/core-api/dma-api` for a full description of
the DMA APIs. This section is just a reminder that it needs to be done
before enabling DMA on the device.
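
For example, allocating a hypothetical descriptor ring (``RING_BYTES`` is a
placeholder size) as consistent memory::

	ring = dma_alloc_coherent(&pdev->dev, RING_BYTES, &ring_dma, GFP_KERNEL);
	if (!ring)
		goto err_release;

	/* "ring" is the CPU virtual address; "ring_dma" is the address
	 * the device must use when it DMAs to/from the ring. */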


Initialize device registers
---------------------------
Some drivers will need specific "capability" fields programmed
or other "vendor specific" registers initialized or reset.
E.g. clearing pending interrupts.


Register IRQ handler
--------------------
While calling request_irq() is the last step described here,
this is often just another intermediate step to initialize a device.
This step can often be deferred until the device is opened for use.

All interrupt handlers for IRQ lines should be registered with IRQF_SHARED
and use the devid to map IRQs to devices (remember that all PCI IRQ lines
can be shared).

request_irq() will associate an interrupt handler and device handle
with an interrupt number. Historically interrupt numbers represent
IRQ lines which run from the PCI device to the Interrupt controller.
With MSI and MSI-X (more below) the interrupt number is a CPU "vector".

request_irq() also enables the interrupt. Make sure the device is
quiesced and does not have any interrupts pending before registering
the interrupt handler.

MSI and MSI-X are PCI capabilities. Both are "Message Signaled Interrupts"
which deliver interrupts to the CPU via a DMA write to a Local APIC.
The fundamental difference between MSI and MSI-X is how multiple
"vectors" get allocated. MSI requires contiguous blocks of vectors
while MSI-X can allocate several individual ones.

MSI capability can be enabled by calling pci_alloc_irq_vectors() with the
PCI_IRQ_MSI and/or PCI_IRQ_MSIX flags before calling request_irq(). This
causes the PCI support to program CPU vector data into the PCI device
capability registers. Many architectures, chip-sets, or BIOSes do NOT
support MSI or MSI-X and a call to pci_alloc_irq_vectors() with just
the PCI_IRQ_MSI and PCI_IRQ_MSIX flags will fail, so try to always
specify PCI_IRQ_LEGACY as well.

Drivers that have different interrupt handlers for MSI/MSI-X and
legacy INTx should choose the right one based on the msi_enabled
and msix_enabled flags in the pci_dev structure after calling
pci_alloc_irq_vectors().
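
A sketch of vector allocation that prefers MSI-X/MSI but falls back to a
legacy INTx line (``my_handler`` and ``my_dev`` are placeholders)::

	nvec = pci_alloc_irq_vectors(pdev, 1, 1,
			PCI_IRQ_MSIX | PCI_IRQ_MSI | PCI_IRQ_LEGACY);
	if (nvec < 0)
		goto err_free_ring;

	err = request_irq(pci_irq_vector(pdev, 0), my_handler,
			  IRQF_SHARED, "my_driver", my_dev);
	if (err)
		goto err_free_vectors;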

There are (at least) two really good reasons for using MSI:

1) MSI is an exclusive interrupt vector by definition.
   This means the interrupt handler doesn't have to verify
   its device caused the interrupt.

2) MSI avoids DMA/IRQ race conditions. DMA to host memory is guaranteed
   to be visible to the host CPU(s) when the MSI is delivered. This
   is important for both data coherency and avoiding stale control data.
   This guarantee allows the driver to omit MMIO reads to flush
   the DMA stream.

See drivers/infiniband/hw/mthca/ or drivers/net/tg3.c for examples
of MSI/MSI-X usage.


PCI device shutdown
===================

When a PCI device driver is being unloaded, most of the following
steps need to be performed:

  - Disable the device from generating IRQs
  - Release the IRQ (free_irq())
  - Stop all DMA activity
  - Release DMA buffers (both streaming and consistent)
  - Unregister from other subsystems (e.g. scsi or netdev)
  - Disable device from responding to MMIO/IO Port addresses
  - Release MMIO/IO Port resource(s)
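
In a hypothetical driver (the ``my_*`` names and ``RING_BYTES`` are
placeholders), these steps map onto a remove() routine in roughly this
order::

	static void my_remove(struct pci_dev *pdev)
	{
		my_hw_disable_irqs(my_dev);	/* device-specific */
		free_irq(pci_irq_vector(pdev, 0), my_dev);
		my_hw_stop_dma(my_dev);		/* device-specific */
		dma_free_coherent(&pdev->dev, RING_BYTES, ring, ring_dma);
		/* unregister from e.g. netdev or scsi here */
		pci_disable_device(pdev);
		pci_release_regions(pdev);
	}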
Grant Grundler | 74da15e | 2006-12-25 01:06:35 -0700 | [diff] [blame] | 373 | |
| 374 | |
Changbin Du | 229b4e0 | 2019-05-14 22:47:24 +0800 | [diff] [blame] | 375 | Stop IRQs on the device |
| 376 | ----------------------- |
Grant Grundler | 74da15e | 2006-12-25 01:06:35 -0700 | [diff] [blame] | 377 | How to do this is chip/device specific. If it's not done, it opens |
| 378 | the possibility of a "screaming interrupt" if (and only if) |
| 379 | the IRQ is shared with another device. |
| 380 | |
| 381 | When the shared IRQ handler is "unhooked", the remaining devices |
| 382 | using the same IRQ line will still need the IRQ enabled. Thus if the |
| 383 | "unhooked" device asserts IRQ line, the system will respond assuming |
| 384 | it was one of the remaining devices asserted the IRQ line. Since none |
| 385 | of the other devices will handle the IRQ, the system will "hang" until |
| 386 | it decides the IRQ isn't going to get handled and masks the IRQ (100,000 |
| 387 | iterations later). Once the shared IRQ is masked, the remaining devices |
| 388 | will stop functioning properly. Not a nice situation. |
| 389 | |
| 390 | This is another reason to use MSI or MSI-X if it's available. |
| 391 | MSI and MSI-X are defined to be exclusive interrupts and thus |
| 392 | are not susceptible to the "screaming interrupt" problem. |
| 393 | |
| 394 | |
Changbin Du | 229b4e0 | 2019-05-14 22:47:24 +0800 | [diff] [blame] | 395 | Release the IRQ |
| 396 | --------------- |
Grant Grundler | 74da15e | 2006-12-25 01:06:35 -0700 | [diff] [blame] | 397 | Once the device is quiesced (no more IRQs), one can call free_irq(). |
| 398 | This function will return control once any pending IRQs are handled, |
| 399 | "unhook" the drivers IRQ handler from that IRQ, and finally release |
| 400 | the IRQ if no one else is using it. |
| 401 | |
| 402 | |
Changbin Du | 229b4e0 | 2019-05-14 22:47:24 +0800 | [diff] [blame] | 403 | Stop all DMA activity |
| 404 | --------------------- |
Grant Grundler | 74da15e | 2006-12-25 01:06:35 -0700 | [diff] [blame] | 405 | It's extremely important to stop all DMA operations BEFORE attempting |
| 406 | to deallocate DMA control data. Failure to do so can result in memory |
| 407 | corruption, hangs, and on some chip-sets a hard crash. |
| 408 | |
| 409 | Stopping DMA after stopping the IRQs can avoid races where the |
| 410 | IRQ handler might restart DMA engines. |
| 411 | |
| 412 | While this step sounds obvious and trivial, several "mature" drivers |
| 413 | didn't get this step right in the past. |
| 414 | |
| 415 | |
Changbin Du | 229b4e0 | 2019-05-14 22:47:24 +0800 | [diff] [blame] | 416 | Release DMA buffers |
| 417 | ------------------- |
Grant Grundler | 74da15e | 2006-12-25 01:06:35 -0700 | [diff] [blame] | 418 | Once DMA is stopped, clean up streaming DMA first. |
| 419 | I.e. unmap data buffers and return buffers to "upstream" |
| 420 | owners if there is one. |
| 421 | |
| 422 | Then clean up "consistent" buffers which contain the control data. |
| 423 | |
Mauro Carvalho Chehab | 985098a | 2020-06-23 09:09:10 +0200 | [diff] [blame] | 424 | See :doc:`/core-api/dma-api` for details on unmapping interfaces. |
Grant Grundler | 74da15e | 2006-12-25 01:06:35 -0700 | [diff] [blame] | 425 | |
| 426 | |
Changbin Du | 229b4e0 | 2019-05-14 22:47:24 +0800 | [diff] [blame] | 427 | Unregister from other subsystems |
| 428 | -------------------------------- |
Grant Grundler | 74da15e | 2006-12-25 01:06:35 -0700 | [diff] [blame] | 429 | Most low level PCI device drivers support some other subsystem |
| 430 | like USB, ALSA, SCSI, NetDev, Infiniband, etc. Make sure your |
| 431 | driver isn't losing resources from that other subsystem. |
| 432 | If this happens, typically the symptom is an Oops (panic) when |
| 433 | the subsystem attempts to call into a driver that has been unloaded. |
| 434 | |
| 435 | |
Changbin Du | 229b4e0 | 2019-05-14 22:47:24 +0800 | [diff] [blame] | 436 | Disable Device from responding to MMIO/IO Port addresses |
| 437 | -------------------------------------------------------- |
Grant Grundler | 74da15e | 2006-12-25 01:06:35 -0700 | [diff] [blame] | 438 | io_unmap() MMIO or IO Port resources and then call pci_disable_device(). |
| 439 | This is the symmetric opposite of pci_enable_device(). |
| 440 | Do not access device registers after calling pci_disable_device(). |
| 441 | |
| 442 | |
Changbin Du | 229b4e0 | 2019-05-14 22:47:24 +0800 | [diff] [blame] | 443 | Release MMIO/IO Port Resource(s) |
| 444 | -------------------------------- |
Grant Grundler | 74da15e | 2006-12-25 01:06:35 -0700 | [diff] [blame] | 445 | Call pci_release_region() to mark the MMIO or IO Port range as available. |
| 446 | Failure to do so usually results in the inability to reload the driver. |
| 447 | |
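For a driver that requested BAR 0 in its probe path, this is simply
(the BAR number must match the earlier pci_request_region() call)::

	pci_release_region(pdev, 0);	/* make BAR 0 available again */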
| 448 | |
Changbin Du | 229b4e0 | 2019-05-14 22:47:24 +0800 | [diff] [blame] | 449 | How to access PCI config space |
| 450 | ============================== |
Grant Grundler | 74da15e | 2006-12-25 01:06:35 -0700 | [diff] [blame] | 451 | |
You can use `pci_(read|write)_config_(byte|word|dword)` to access the config
space of a device represented by `struct pci_dev *`. All these functions return
0 when successful or an error code (`PCIBIOS_...`) which can be translated to a
text string by pcibios_strerror(). Most drivers expect that accesses to valid
PCI devices don't fail.
| 457 | |
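For example, reading and checking a standard header register might look
like this (the error handling is sketched; many drivers omit it for
valid devices)::

	u16 cmd;
	int rc;

	rc = pci_read_config_word(pdev, PCI_COMMAND, &cmd);
	if (rc)
		dev_err(&pdev->dev, "config read failed (%d)\n", rc);
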
If you don't have a struct pci_dev available, you can call
`pci_bus_(read|write)_config_(byte|word|dword)`, passing a
`struct pci_bus *` and a devfn number, to access a given device
and function on that bus.
| 461 | |
Grant Grundler | 74da15e | 2006-12-25 01:06:35 -0700 | [diff] [blame] | 462 | If you access fields in the standard portion of the config header, please |
Linus Torvalds | 1da177e | 2005-04-16 15:20:36 -0700 | [diff] [blame] | 463 | use symbolic names of locations and bits declared in <linux/pci.h>. |
| 464 | |
If you need to access PCI Capability registers, just call
pci_find_capability() for the particular capability and it will find the
corresponding register block for you. For PCI Express Extended
Capabilities, which live above offset 0x100 in config space, use
pci_find_ext_capability() instead.
| 468 | |
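For example, to locate the Power Management capability and read its
control/status register (symbolic names from <linux/pci_regs.h>)::

	u16 pmcsr;
	int pos = pci_find_capability(pdev, PCI_CAP_ID_PM);

	if (pos)
		pci_read_config_word(pdev, pos + PCI_PM_CTRL, &pmcsr);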
| 469 | |
Changbin Du | 229b4e0 | 2019-05-14 22:47:24 +0800 | [diff] [blame] | 470 | Other interesting functions |
| 471 | =========================== |
Linus Torvalds | 1da177e | 2005-04-16 15:20:36 -0700 | [diff] [blame] | 472 | |
Changbin Du | 229b4e0 | 2019-05-14 22:47:24 +0800 | [diff] [blame] | 473 | ============================= ================================================ |
pci_get_domain_bus_and_slot() Find pci_dev corresponding to given domain,
                              bus number and devfn (encoded slot and
                              function number). If the device is found,
                              its reference count is increased.
Linus Torvalds | 1da177e | 2005-04-16 15:20:36 -0700 | [diff] [blame] | 477 | pci_set_power_state() Set PCI Power Management state (0=D0 ... 3=D3) |
| 478 | pci_find_capability() Find specified capability in device's capability |
| 479 | list. |
Linus Torvalds | 1da177e | 2005-04-16 15:20:36 -0700 | [diff] [blame] | 480 | pci_resource_start() Returns bus start address for a given PCI region |
| 481 | pci_resource_end() Returns bus end address for a given PCI region |
| 482 | pci_resource_len() Returns the byte length of a PCI region |
| 483 | pci_set_drvdata() Set private driver data pointer for a pci_dev |
| 484 | pci_get_drvdata() Return private driver data pointer for a pci_dev |
| 485 | pci_set_mwi() Enable Memory-Write-Invalidate transactions. |
| 486 | pci_clear_mwi() Disable Memory-Write-Invalidate transactions. |
Changbin Du | 229b4e0 | 2019-05-14 22:47:24 +0800 | [diff] [blame] | 487 | ============================= ================================================ |
Linus Torvalds | 1da177e | 2005-04-16 15:20:36 -0700 | [diff] [blame] | 488 | |
| 489 | |
Changbin Du | 229b4e0 | 2019-05-14 22:47:24 +0800 | [diff] [blame] | 490 | Miscellaneous hints |
| 491 | =================== |
Grant Grundler | 74da15e | 2006-12-25 01:06:35 -0700 | [diff] [blame] | 492 | |
When displaying PCI device names to the user (for example when a driver wants
to tell the user what card it has found), please use pci_name(pci_dev).
Linus Torvalds | 1da177e | 2005-04-16 15:20:36 -0700 | [diff] [blame] | 495 | |
| 496 | Always refer to the PCI devices by a pointer to the pci_dev structure. |
| 497 | All PCI layer functions use this identification and it's the only |
| 498 | reasonable one. Don't use bus/slot/function numbers except for very |
| 499 | special purposes -- on systems with multiple primary buses their semantics |
| 500 | can be pretty complex. |
| 501 | |
Linus Torvalds | 1da177e | 2005-04-16 15:20:36 -0700 | [diff] [blame] | 502 | Don't try to turn on Fast Back to Back writes in your driver. All devices |
| 503 | on the bus need to be capable of doing it, so this is something which needs |
| 504 | to be handled by platform and generic code, not individual drivers. |
| 505 | |
| 506 | |
Changbin Du | 229b4e0 | 2019-05-14 22:47:24 +0800 | [diff] [blame] | 507 | Vendor and device identifications |
| 508 | ================================= |
Ingo Oeser | 9b860b8 | 2006-04-18 11:20:55 +0200 | [diff] [blame] | 509 | |
Michael S. Tsirkin | 37a9c50 | 2015-03-30 10:32:34 +0200 | [diff] [blame] | 510 | Do not add new device or vendor IDs to include/linux/pci_ids.h unless they |
| 511 | are shared across multiple drivers. You can add private definitions in |
| 512 | your driver if they're helpful, or just use plain hex constants. |
Ingo Oeser | 9b860b8 | 2006-04-18 11:20:55 +0200 | [diff] [blame] | 513 | |
Michael S. Tsirkin | 37a9c50 | 2015-03-30 10:32:34 +0200 | [diff] [blame] | 514 | The device IDs are arbitrary hex numbers (vendor controlled) and normally used |
| 515 | only in a single location, the pci_device_id table. |
Grant Grundler | 74da15e | 2006-12-25 01:06:35 -0700 | [diff] [blame] | 516 | |
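A typical table, using a plain hex constant for a hypothetical device
that has no entry in pci_ids.h::

	static const struct pci_device_id my_pci_ids[] = {
		{ PCI_DEVICE(0x1234, 0x5678) },	/* hypothetical vendor/device */
		{ 0, }
	};
	MODULE_DEVICE_TABLE(pci, my_pci_ids);
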
Alexander A. Klimov | 7ecd4a8 | 2020-06-27 12:30:50 +0200 | [diff] [blame] | 517 | Please DO submit new vendor/device IDs to https://pci-ids.ucw.cz/. |
| 518 | There's a mirror of the pci.ids file at https://github.com/pciutils/pciids. |
Grant Grundler | 74da15e | 2006-12-25 01:06:35 -0700 | [diff] [blame] | 519 | |
| 520 | |
Changbin Du | 229b4e0 | 2019-05-14 22:47:24 +0800 | [diff] [blame] | 521 | Obsolete functions |
| 522 | ================== |
Grant Grundler | 74da15e | 2006-12-25 01:06:35 -0700 | [diff] [blame] | 523 | |
There are several functions which you might come across when trying to
port an old driver to the new PCI interface. They are no longer present
in the kernel as they aren't compatible with hotplug, PCI domains, or
sane locking.
| 528 | |
Changbin Du | 229b4e0 | 2019-05-14 22:47:24 +0800 | [diff] [blame] | 529 | ================= =========================================== |
Grant Grundler | 74da15e | 2006-12-25 01:06:35 -0700 | [diff] [blame] | 530 | pci_find_device() Superseded by pci_get_device() |
| 531 | pci_find_subsys() Superseded by pci_get_subsys() |
Yijing Wang | a37bee7 | 2013-09-02 14:34:40 +0800 | [diff] [blame] | 532 | pci_find_slot() Superseded by pci_get_domain_bus_and_slot() |
| 533 | pci_get_slot() Superseded by pci_get_domain_bus_and_slot() |
Changbin Du | 229b4e0 | 2019-05-14 22:47:24 +0800 | [diff] [blame] | 534 | ================= =========================================== |
Grant Grundler | 74da15e | 2006-12-25 01:06:35 -0700 | [diff] [blame] | 535 | |
| 536 | The alternative is the traditional PCI device driver that walks PCI |
| 537 | device lists. This is still possible but discouraged. |
| 538 | |
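If you must walk the device list, the hotplug-safe way uses
pci_get_device(), which manages reference counts for you (the IDs here
are hypothetical)::

	struct pci_dev *pdev = NULL;

	while ((pdev = pci_get_device(0x1234, 0x5678, pdev)) != NULL) {
		/* pdev's reference is held here; the next
		 * pci_get_device() call drops it automatically. */
	}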
| 539 | |
Changbin Du | 229b4e0 | 2019-05-14 22:47:24 +0800 | [diff] [blame] | 540 | MMIO Space and "Write Posting" |
| 541 | ============================== |
Grant Grundler | 74da15e | 2006-12-25 01:06:35 -0700 | [diff] [blame] | 542 | |
| 543 | Converting a driver from using I/O Port space to using MMIO space |
| 544 | often requires some additional changes. Specifically, "write posting" |
| 545 | needs to be handled. Many drivers (e.g. tg3, acenic, sym53c8xx_2) |
| 546 | already do this. I/O Port space guarantees write transactions reach the PCI |
| 547 | device before the CPU can continue. Writes to MMIO space allow the CPU |
| 548 | to continue before the transaction reaches the PCI device. HW weenies |
| 549 | call this "Write Posting" because the write completion is "posted" to |
| 550 | the CPU before the transaction has reached its destination. |
| 551 | |
| 552 | Thus, timing sensitive code should add readl() where the CPU is |
| 553 | expected to wait before doing other work. The classic "bit banging" |
Changbin Du | 229b4e0 | 2019-05-14 22:47:24 +0800 | [diff] [blame] | 554 | sequence works fine for I/O Port space:: |
Grant Grundler | 74da15e | 2006-12-25 01:06:35 -0700 | [diff] [blame] | 555 | |
	for (i = 8; i--; val >>= 1) {
		outb(val & 1, ioport_reg);	/* write bit */
		udelay(10);
	}
| 560 | |
Changbin Du | 229b4e0 | 2019-05-14 22:47:24 +0800 | [diff] [blame] | 561 | The same sequence for MMIO space should be:: |
Grant Grundler | 74da15e | 2006-12-25 01:06:35 -0700 | [diff] [blame] | 562 | |
| 563 | for (i = 8; --i; val >>= 1) { |
| 564 | writeb(val & 1, mmio_reg); /* write bit */ |
| 565 | readb(safe_mmio_reg); /* flush posted write */ |
| 566 | udelay(10); |
| 567 | } |
| 568 | |
It is important that "safe_mmio_reg" not have any side effects that
interfere with the correct operation of the device.
| 571 | |
| 572 | Another case to watch out for is when resetting a PCI device. Use PCI |
| 573 | Configuration space reads to flush the writel(). This will gracefully |
| 574 | handle the PCI master abort on all platforms if the PCI device is |
| 575 | expected to not respond to a readl(). Most x86 platforms will allow |
| 576 | MMIO reads to master abort (a.k.a. "Soft Fail") and return garbage |
(e.g. ~0). But many RISC platforms will crash (a.k.a. "Hard Fail").
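A reset sequence following this advice might look like this (the
register and bit names are hypothetical)::

	u16 vid;

	/* Start the reset; the device may stop responding to MMIO. */
	writel(MY_RESET_BIT, mmio + MY_CTRL_REG);

	/* Flush the posted write via config space, which master
	 * aborts gracefully on all platforms. */
	pci_read_config_word(pdev, PCI_VENDOR_ID, &vid);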