Editor's note: This document is _heavily_ cribbed from the Linux Kernel, with
really only the section about "Alignment vs. Networking" removed.

UNALIGNED MEMORY ACCESSES
=========================

Linux runs on a wide variety of architectures which have varying behaviour
when it comes to memory access. This document presents some details about
unaligned accesses, why you need to write code that doesn't cause them,
and how to write such code!


The definition of an unaligned access
=====================================

Unaligned memory accesses occur when you try to read N bytes of data starting
from an address that is not evenly divisible by N (i.e. addr % N != 0).
For example, reading 4 bytes of data from address 0x10004 is fine, but
reading 4 bytes of data from address 0x10005 would be an unaligned memory
access.

The above may seem a little vague, as memory access can happen in different
ways. The context here is at the machine code level: certain instructions read
or write a number of bytes to or from memory (e.g. movb, movw, movl in x86
assembly). As will become clear, it is relatively easy to spot C statements
which will compile to multiple-byte memory access instructions, namely when
dealing with types such as u16, u32 and u64.
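
As a purely illustrative sketch (not taken from any kernel or U-Boot source),
the following fragment shows how an innocent-looking cast turns into a
multi-byte access from an address that is probably misaligned:

	u8 buf[8];
	u32 *p = (u32 *)&buf[1];	/* &buf[1] is rarely divisible by 4 */
	u32 val = *p;			/* compiles to a 4-byte load: an
					   unaligned access on most targets */

Reading buf[1], buf[2], buf[3] and buf[4] individually and combining them
would touch the same bytes without ever issuing a multi-byte access.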


Natural alignment
=================

The rule mentioned above forms what we refer to as natural alignment:
When accessing N bytes of memory, the base memory address must be evenly
divisible by N, i.e. addr % N == 0.
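
A simple check along these lines expresses the rule directly (a hedged
sketch; the Linux kernel provides a ready-made IS_ALIGNED() macro, whose
exact definition is not reproduced here):

	/* Non-zero if 'addr' is naturally aligned for an N-byte access */
	#define ADDR_IS_ALIGNED(addr, N) (((unsigned long)(addr) % (N)) == 0)

	ADDR_IS_ALIGNED(0x10004, 4)	/* 1: 0x10004 % 4 == 0 */
	ADDR_IS_ALIGNED(0x10005, 4)	/* 0: unaligned for a 4-byte access */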

When writing code, assume the target architecture has natural alignment
requirements.

In reality, only a few architectures require natural alignment on all sizes
of memory access. However, we must consider ALL supported architectures;
writing code that satisfies natural alignment requirements is the easiest way
to achieve full portability.


Why unaligned access is bad
===========================

The effects of performing an unaligned memory access vary from architecture
to architecture. It would be easy to write a whole document on the differences
here; a summary of the common scenarios is presented below:

 - Some architectures are able to perform unaligned memory accesses
   transparently, but there is usually a significant performance cost.
 - Some architectures raise processor exceptions when unaligned accesses
   happen. The exception handler is able to correct the unaligned access,
   at significant cost to performance.
 - Some architectures raise processor exceptions when unaligned accesses
   happen, but the exceptions do not contain enough information for the
   unaligned access to be corrected.
 - Some architectures are not capable of unaligned memory access, but will
   silently perform a different memory access to the one that was requested,
   resulting in a subtle code bug that is hard to detect!

It should be obvious from the above that if your code causes unaligned
memory accesses to happen, your code will not work correctly on certain
platforms and will cause performance problems on others.


Code that does not cause unaligned access
=========================================

At first, the concepts above may seem a little hard to relate to actual
coding practice. After all, you don't have a great deal of control over
memory addresses of certain variables, etc.

Fortunately things are not too complex, as in most cases, the compiler
ensures that things will work for you. For example, take the following
structure:

	struct foo {
		u16 field1;
		u32 field2;
		u8 field3;
	};

Let us assume that an instance of the above structure resides in memory
starting at address 0x10000. With a basic level of understanding, it would
not be unreasonable to expect that accessing field2 would cause an unaligned
access. You'd be expecting field2 to be located at offset 2 bytes into the
structure, i.e. address 0x10002, but that address is not evenly divisible
by 4 (remember, we're reading a 4 byte value here).

Fortunately, the compiler understands the alignment constraints, so in the
above case it would insert 2 bytes of padding in between field1 and field2.
Therefore, for standard structure types you can always rely on the compiler
to pad structures so that accesses to fields are suitably aligned (assuming
you do not cast the field to a type of different length).
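
You can check the inserted padding for yourself with offsetof() from
<stddef.h>. The figures below assume a typical 32- or 64-bit target with
natural alignment; they are illustrative rather than guaranteed by the C
standard:

	offsetof(struct foo, field1)	/* 0 */
	offsetof(struct foo, field2)	/* 4: 2 bytes of padding follow field1 */
	offsetof(struct foo, field3)	/* 8 */
	sizeof(struct foo)		/* 12, including 3 bytes of tail padding */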

Similarly, you can also rely on the compiler to align variables and function
parameters to a naturally aligned scheme, based on the size of the type of
the variable.

At this point, it should be clear that accessing a single byte (u8 or char)
will never cause an unaligned access, because all memory addresses are evenly
divisible by one.

On a related topic, with the above considerations in mind you may observe
that you could reorder the fields in the structure in order to place fields
where padding would otherwise be inserted, and hence reduce the overall
resident memory size of structure instances. The optimal layout of the
above example is:

	struct foo {
		u32 field2;
		u16 field1;
		u8 field3;
	};

For a natural alignment scheme, the compiler would only have to add a single
byte of padding at the end of the structure. This padding is added in order
to satisfy alignment constraints for arrays of these structures.
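
Assuming the same typical target as above, the reordering shrinks each
instance from 12 bytes (2 + 2 padding + 4 + 1 + 3 padding) to 8 bytes
(4 + 2 + 1 + 1 padding), which sizeof() will confirm.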

Another point worth mentioning is the use of __attribute__((packed)) on a
structure type. This GCC-specific attribute tells the compiler never to
insert any padding within structures, useful when you want to use a C struct
to represent some data that comes in a fixed arrangement 'off the wire'.

You might be inclined to believe that usage of this attribute can easily
lead to unaligned accesses when accessing fields that do not satisfy
architectural alignment requirements. However, again, the compiler is aware
of the alignment constraints and will generate extra instructions to perform
the memory access in a way that does not cause unaligned access. Of course,
the extra instructions obviously cause a loss in performance compared to the
non-packed case, so the packed attribute should only be used when avoiding
structure padding is of importance.
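
As a brief, hypothetical illustration of the trade-off, consider a packed
variant of the earlier structure (the name and fields are reused purely for
the example):

	struct foo_packed {
		u16 field1;
		u32 field2;	/* offset 2: misaligned, but the compiler
				   emits fix-up or byte-wise code for it */
		u8 field3;
	} __attribute__((packed));

	/* sizeof(struct foo_packed) is 7: no padding anywhere, at the cost
	 * of slower access to field2 on strict-alignment architectures. */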


Code that causes unaligned access
=================================

With the above in mind, let's move on to a real-life example of a function
that can cause an unaligned memory access. The following function, taken
from the Linux Kernel's include/linux/etherdevice.h, is an optimized routine
to compare two ethernet MAC addresses for equality.

bool ether_addr_equal(const u8 *addr1, const u8 *addr2)
{
#ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
	u32 fold = ((*(const u32 *)addr1) ^ (*(const u32 *)addr2)) |
		   ((*(const u16 *)(addr1 + 4)) ^ (*(const u16 *)(addr2 + 4)));

	return fold == 0;
#else
	const u16 *a = (const u16 *)addr1;
	const u16 *b = (const u16 *)addr2;
	return ((a[0] ^ b[0]) | (a[1] ^ b[1]) | (a[2] ^ b[2])) == 0;
#endif
}

In the above function, when the hardware has efficient unaligned access
capability, there is no issue with this code. But when the hardware isn't
able to access memory on arbitrary boundaries, the reference to a[0] causes
2 bytes (16 bits) to be read from memory starting at address addr1.

Think about what would happen if addr1 was an odd address such as 0x10003.
(Hint: it'd be an unaligned access.)

Despite the potential unaligned access problems with the above function, it
is included in the kernel anyway, but is understood to only work normally on
16-bit-aligned addresses. It is up to the caller to ensure this alignment or
not use this function at all. This alignment-unsafe function is still useful,
as it is a decent optimization for the cases when you can ensure alignment,
which is true almost all of the time in an ethernet networking context.


Here is another example of some code that could cause unaligned accesses:
	void myfunc(u8 *data, u32 value)
	{
		[...]
		*((u32 *) data) = cpu_to_le32(value);
		[...]
	}

This code will cause unaligned accesses every time the data parameter points
to an address that is not evenly divisible by 4.

In summary, the 2 main scenarios where you may run into unaligned access
problems involve:
 1. Casting variables to types of different lengths
 2. Pointer arithmetic followed by access to at least 2 bytes of data


Avoiding unaligned accesses
===========================

The easiest way to avoid unaligned access is to use the get_unaligned() and
put_unaligned() macros provided by the <asm/unaligned.h> header file.

Going back to an earlier example of code that potentially causes unaligned
access:

	void myfunc(u8 *data, u32 value)
	{
		[...]
		*((u32 *) data) = cpu_to_le32(value);
		[...]
	}

To avoid the unaligned memory access, you would rewrite it as follows:

	void myfunc(u8 *data, u32 value)
	{
		[...]
		value = cpu_to_le32(value);
		put_unaligned(value, (u32 *) data);
		[...]
	}

The get_unaligned() macro works similarly. Assuming 'data' is a pointer to
memory and you wish to avoid unaligned access, its usage is as follows:

	u32 value = get_unaligned((u32 *) data);

These macros work for memory accesses of any length (not just 32 bits as
in the examples above). Be aware that when compared to standard access of
aligned memory, using these macros to access unaligned memory can be costly in
terms of performance.
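
For example (an illustrative sketch, not lifted from a specific driver), a
16-bit field at an odd offset inside an 'off the wire' buffer can be read
and written the same way; 'buf' and 'out' are assumed to be u8 pointers into
packet data:

	u16 proto = get_unaligned((u16 *)(buf + 1));	/* 2-byte read  */
	put_unaligned(proto, (u16 *)(out + 3));		/* 2-byte write */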

If use of such macros is not convenient, another option is to use memcpy(),
where the source or destination (or both) are of type u8* or unsigned char*.
Due to the byte-wise nature of this operation, unaligned accesses are avoided.
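
For instance, the earlier myfunc() could also be written with memcpy(); this
is a sketch of the same idea rather than the recommended interface:

	void myfunc(u8 *data, u32 value)
	{
		[...]
		value = cpu_to_le32(value);
		memcpy(data, &value, sizeof(value));
		[...]
	}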

--
In the Linux Kernel,
Authors: Daniel Drake <dsd@gentoo.org>,
         Johannes Berg <johannes@sipsolutions.net>
With help from: Alan Cox, Avuton Olrich, Heikki Orsila, Jan Engelhardt,
Kyle McMartin, Kyle Moffett, Randy Dunlap, Robert Hancock, Uli Kunitz,
Vadim Lobanov