=========================
Unaligned Memory Accesses
=========================

:Author: Daniel Drake <dsd@gentoo.org>,
:Author: Johannes Berg <johannes@sipsolutions.net>

:With help from: Alan Cox, Avuton Olrich, Heikki Orsila, Jan Engelhardt,
  Kyle McMartin, Kyle Moffett, Randy Dunlap, Robert Hancock, Uli Kunitz,
  Vadim Lobanov


Linux runs on a wide variety of architectures which have varying behaviour
when it comes to memory access. This document presents some details about
unaligned accesses, why you need to write code that doesn't cause them,
and how to write such code!


The definition of an unaligned access
=====================================

Unaligned memory accesses occur when you try to read N bytes of data starting
from an address that is not evenly divisible by N (i.e. addr % N != 0).
For example, reading 4 bytes of data from address 0x10004 is fine, but
reading 4 bytes of data from address 0x10005 would be an unaligned memory
access.

The above may seem a little vague, as memory access can happen in different
ways. The context here is at the machine code level: certain instructions read
or write a number of bytes to or from memory (e.g. movb, movw, movl in x86
assembly). As will become clear, it is relatively easy to spot C statements
which will compile to multiple-byte memory access instructions, namely when
dealing with types such as u16, u32 and u64.
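
As a quick illustration (an illustrative sketch, not code taken from the
kernel; the names buf and x are made up), casting a byte pointer to a wider
type and dereferencing it is exactly the kind of C statement that compiles to
a multi-byte load::

    u8 buf[8];
    u32 x;

    /* A 4-byte load starting at buf + 1: if buf itself happens to be
     * 4-byte aligned, buf + 1 is not evenly divisible by 4, so this is
     * an unaligned access. */
    x = *(u32 *)(buf + 1);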


Natural alignment
=================

The rule mentioned above forms what we refer to as natural alignment:
When accessing N bytes of memory, the base memory address must be evenly
divisible by N, i.e. addr % N == 0.

When writing code, assume the target architecture has natural alignment
requirements.

In reality, only a few architectures require natural alignment on all sizes
of memory access. However, we must consider ALL supported architectures;
writing code that satisfies natural alignment requirements is the easiest way
to achieve full portability.


Why unaligned access is bad
===========================

The effects of performing an unaligned memory access vary from architecture
to architecture. It would be easy to write a whole document on the differences
here; a summary of the common scenarios is presented below:

- Some architectures are able to perform unaligned memory accesses
  transparently, but there is usually a significant performance cost.
- Some architectures raise processor exceptions when unaligned accesses
  happen. The exception handler is able to correct the unaligned access,
  at significant cost to performance.
- Some architectures raise processor exceptions when unaligned accesses
  happen, but the exceptions do not contain enough information for the
  unaligned access to be corrected.
- Some architectures are not capable of unaligned memory access, but will
  silently perform a different memory access to the one that was requested,
  resulting in a subtle code bug that is hard to detect!

It should be obvious from the above that if your code causes unaligned
memory accesses to happen, your code will not work correctly on certain
platforms and will cause performance problems on others.


Code that does not cause unaligned access
=========================================

At first, the concepts above may seem a little hard to relate to actual
coding practice. After all, you don't have a great deal of control over
the memory addresses of particular variables.

Fortunately things are not too complex, as in most cases, the compiler
ensures that things will work for you. For example, take the following
structure::

    struct foo {
        u16 field1;
        u32 field2;
        u8 field3;
    };

Let us assume that an instance of the above structure resides in memory
starting at address 0x10000. With a basic level of understanding, it would
not be unreasonable to expect that accessing field2 would cause an unaligned
access. You'd be expecting field2 to be located at offset 2 bytes into the
structure, i.e. address 0x10002, but that address is not evenly divisible
by 4 (remember, we're reading a 4 byte value here).

Fortunately, the compiler understands the alignment constraints, so in the
above case it would insert 2 bytes of padding in between field1 and field2.
Therefore, for standard structure types you can always rely on the compiler
to pad structures so that accesses to fields are suitably aligned (assuming
you do not cast the field to a type of different length).
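
If you want to see the padding for yourself, a small userspace program along
these lines prints the offsets the compiler chose. It is only an illustrative
sketch: the typedefs stand in for the kernel's u16/u32/u8 types, and the
exact numbers assume a typical target where a u32 must be 4-byte aligned::

    #include <stdio.h>
    #include <stddef.h>
    #include <stdint.h>

    typedef uint16_t u16;
    typedef uint32_t u32;
    typedef uint8_t  u8;

    struct foo {
        u16 field1;
        u32 field2;
        u8 field3;
    };

    int main(void)
    {
        /* Typically prints offsets 0, 4 and 8 and a total size of 12:
         * two bytes of padding before field2 and three after field3. */
        printf("field1 %zu field2 %zu field3 %zu size %zu\n",
               offsetof(struct foo, field1),
               offsetof(struct foo, field2),
               offsetof(struct foo, field3),
               sizeof(struct foo));
        return 0;
    }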

Similarly, you can also rely on the compiler to align variables and function
parameters to a naturally aligned scheme, based on the size of the type of
the variable.

At this point, it should be clear that accessing a single byte (u8 or char)
will never cause an unaligned access, because all memory addresses are evenly
divisible by one.

On a related topic, with the above considerations in mind you may observe
that you could reorder the fields in the structure in order to place fields
where padding would otherwise be inserted, and hence reduce the overall
resident memory size of structure instances. The optimal layout of the
above example is::

    struct foo {
        u32 field2;
        u16 field1;
        u8 field3;
    };

For a natural alignment scheme, the compiler would only have to add a single
byte of padding at the end of the structure. This padding is added in order
to satisfy alignment constraints for arrays of these structures.
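
Continuing the illustrative userspace sketch from above (same stand-in
typedefs), the reordered layout can be checked the same way; it is given a
different struct name here only so the two layouts can coexist in one
example::

    struct foo_reordered {
        u32 field2;
        u16 field1;
        u8 field3;
    };

    /* Typically: field2 at offset 0, field1 at 4, field3 at 6, and
     * sizeof(struct foo_reordered) == 8 -- a single trailing padding
     * byte, versus 12 bytes for the original field order. */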

Another point worth mentioning is the use of __attribute__((packed)) on a
structure type. This GCC-specific attribute tells the compiler never to
insert any padding within structures, useful when you want to use a C struct
to represent some data that comes in a fixed arrangement 'off the wire'.

You might be inclined to believe that usage of this attribute can easily
lead to unaligned accesses when accessing fields that do not satisfy
architectural alignment requirements. However, again, the compiler is aware
of the alignment constraints and will generate extra instructions to perform
the memory access in a way that does not cause unaligned access. Of course,
the extra instructions cause a loss in performance compared to the
non-packed case, so the packed attribute should only be used when avoiding
structure padding is of importance.
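
As a brief illustration (the struct and the field names here are made up,
not taken from kernel code), a packed on-the-wire header might look like
this; the compiler knows that hdr->len can be misaligned and generates a
safe, if slower, access for it::

    struct wire_hdr {
        u8  type;
        u32 len;        /* at offset 1: misaligned, but accessed safely */
        u16 checksum;
    } __attribute__((packed));

    u32 read_len(const struct wire_hdr *hdr)
    {
        return hdr->len;    /* no alignment trap, just extra instructions */
    }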


Code that causes unaligned access
=================================

With the above in mind, let's move on to a real-life example of a function
that can cause an unaligned memory access. The following function, taken
from include/linux/etherdevice.h, is an optimized routine to compare two
ethernet MAC addresses for equality::

    bool ether_addr_equal(const u8 *addr1, const u8 *addr2)
    {
    #ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
        u32 fold = ((*(const u32 *)addr1) ^ (*(const u32 *)addr2)) |
                   ((*(const u16 *)(addr1 + 4)) ^ (*(const u16 *)(addr2 + 4)));

        return fold == 0;
    #else
        const u16 *a = (const u16 *)addr1;
        const u16 *b = (const u16 *)addr2;
        return ((a[0] ^ b[0]) | (a[1] ^ b[1]) | (a[2] ^ b[2])) == 0;
    #endif
    }

When the hardware has efficient unaligned access capability, there is no
issue with this code. But when the hardware isn't able to access memory on
arbitrary boundaries, the reference to a[0] causes 2 bytes (16 bits) to be
read from memory starting at address addr1.

Think about what would happen if addr1 was an odd address such as 0x10003.
(Hint: it'd be an unaligned access.)

Despite the potential unaligned access problems with the above function, it
is included in the kernel anyway, but is understood to only work normally on
16-bit-aligned addresses. It is up to the caller to ensure this alignment or
not use this function at all. This alignment-unsafe function is still useful
as it is a decent optimization for the cases when you can ensure alignment,
which is true almost all of the time in the ethernet networking context.


Here is another example of some code that could cause unaligned accesses::

    void myfunc(u8 *data, u32 value)
    {
        [...]
        *((u32 *) data) = cpu_to_le32(value);
        [...]
    }

This code will cause unaligned accesses every time the data parameter points
to an address that is not evenly divisible by 4.

In summary, the two main scenarios where you may run into unaligned access
problems involve:

1. Casting variables to types of different lengths
2. Pointer arithmetic followed by access to at least 2 bytes of data
   (sketched in the example below)
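
For instance, a parser that walks a byte buffer and pulls multi-byte fields
out of it at arbitrary offsets runs into the second scenario. The function
below is only an illustrative sketch (the 3-byte header and the field names
are made up, not taken from any kernel code)::

    u16 parse_record_len(const u8 *buf)
    {
        /* Skip a 3-byte header, then load a 2-byte length field.
         * Whenever buf is an even address, buf + 3 is odd, so this
         * cast-and-load is an unaligned 16-bit access. */
        return *(const u16 *)(buf + 3);
    }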


Avoiding unaligned accesses
===========================

The easiest way to avoid unaligned access is to use the get_unaligned() and
put_unaligned() macros provided by the <asm/unaligned.h> header file.

Going back to an earlier example of code that potentially causes unaligned
access::

    void myfunc(u8 *data, u32 value)
    {
        [...]
        *((u32 *) data) = cpu_to_le32(value);
        [...]
    }

To avoid the unaligned memory access, you would rewrite it as follows::

    void myfunc(u8 *data, u32 value)
    {
        [...]
        value = cpu_to_le32(value);
        put_unaligned(value, (u32 *) data);
        [...]
    }

The get_unaligned() macro works similarly. Assuming 'data' is a pointer to
memory and you wish to avoid unaligned access, its usage is as follows::

    u32 value = get_unaligned((u32 *) data);

These macros work for memory accesses of any length (not just 32 bits as
in the examples above). Be aware that when compared to standard access of
aligned memory, using these macros to access unaligned memory can be costly in
terms of performance.
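
As a side note, the kernel also provides endianness-aware variants such as
get_unaligned_le32() and put_unaligned_le32() (and their be/16/64
counterparts), which fold the byte-order conversion and the unaligned access
into one call. A sketch of the example above using them might look like
this::

    void myfunc(u8 *data, u32 value)
    {
        [...]
        put_unaligned_le32(value, data);
        [...]
    }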

If use of such macros is not convenient, another option is to use memcpy(),
where the source or destination (or both) are of type u8* or unsigned char*.
Due to the byte-wise nature of this operation, unaligned accesses are avoided.
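
Applied to the same hypothetical myfunc() as above, a memcpy()-based version
might look like the sketch below; since memcpy() cannot assume any particular
alignment of its pointer arguments, no trapping multi-byte access is
generated::

    void myfunc(u8 *data, u32 value)
    {
        [...]
        value = cpu_to_le32(value);
        memcpy(data, &value, sizeof(value));
        [...]
    }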


Alignment vs. Networking
========================

On architectures that require aligned loads, networking requires that the IP
header is aligned on a four-byte boundary to optimise the IP stack. For
regular ethernet hardware, the constant NET_IP_ALIGN is used. On most
architectures this constant has the value 2 because the normal ethernet
header is 14 bytes long, so in order to get proper alignment one needs to
DMA to an address which can be expressed as 4*n + 2. One notable exception
here is powerpc which defines NET_IP_ALIGN to 0 because DMA to unaligned
addresses can be very expensive and dwarf the cost of unaligned loads.
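
In driver receive paths this usually shows up as a small skb_reserve() on the
freshly allocated buffer before it is handed to the hardware. The fragment
below is a rough sketch of the common pattern (dev, pkt_len and the
surrounding error handling belong to a hypothetical driver, not to this
document)::

    skb = netdev_alloc_skb(dev, pkt_len + NET_IP_ALIGN);
    if (!skb)
        return -ENOMEM;

    /* Shift the data pointer by NET_IP_ALIGN (typically 2) so that the
     * 14-byte ethernet header starts at 4*n + 2 and the IP header that
     * follows it is 4-byte aligned. */
    skb_reserve(skb, NET_IP_ALIGN);

Many drivers use the netdev_alloc_skb_ip_align() helper instead, which folds
this reservation into the allocation.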

For some ethernet hardware that cannot DMA to unaligned addresses like
4*n+2, or for non-ethernet hardware, this can be a problem, and it is then
required to copy the incoming frame into an aligned buffer. Because this is
unnecessary on architectures that can do unaligned accesses, the code can be
made dependent on CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS like so::

    #ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
        skb = original skb
    #else
        skb = copy skb
    #endif