=========================
Unaligned Memory Accesses
=========================

:Author: Daniel Drake <dsd@gentoo.org>,
:Author: Johannes Berg <johannes@sipsolutions.net>

:With help from: Alan Cox, Avuton Olrich, Heikki Orsila, Jan Engelhardt,
  Kyle McMartin, Kyle Moffett, Randy Dunlap, Robert Hancock, Uli Kunitz,
  Vadim Lobanov


Linux runs on a wide variety of architectures which have varying behaviour
when it comes to memory access. This document presents some details about
unaligned accesses, why you need to write code that doesn't cause them,
and how to write such code!


The definition of an unaligned access
=====================================

Unaligned memory accesses occur when you try to read N bytes of data starting
from an address that is not evenly divisible by N (i.e. addr % N != 0).
For example, reading 4 bytes of data from address 0x10004 is fine, but
reading 4 bytes of data from address 0x10005 would be an unaligned memory
access.

The above may seem a little vague, as memory access can happen in different
ways. The context here is at the machine code level: certain instructions read
or write a number of bytes to or from memory (e.g. movb, movw, movl in x86
assembly). As will become clear, it is relatively easy to spot C statements
which will compile to multiple-byte memory access instructions, namely when
dealing with types such as u16, u32 and u64.


Natural alignment
=================

The rule mentioned above forms what we refer to as natural alignment:
When accessing N bytes of memory, the base memory address must be evenly
divisible by N, i.e. addr % N == 0.
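
For illustration only (this helper is not from the kernel sources; the kernel
provides the IS_ALIGNED() macro for the same kind of test), the rule can be
written directly in C::

        /* true if addr may be used for a naturally aligned N-byte access */
        static inline bool naturally_aligned(unsigned long addr, size_t n)
        {
                /* e.g. 0x10004 % 4 == 0 (aligned), 0x10005 % 4 == 1 (not) */
                return addr % n == 0;
        }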

When writing code, assume the target architecture has natural alignment
requirements.

In reality, only a few architectures require natural alignment on all sizes
of memory access. However, we must consider ALL supported architectures;
writing code that satisfies natural alignment requirements is the easiest way
to achieve full portability.


Why unaligned access is bad
===========================

The effects of performing an unaligned memory access vary from architecture
to architecture. It would be easy to write a whole document on the differences
here; a summary of the common scenarios is presented below:

 - Some architectures are able to perform unaligned memory accesses
   transparently, but there is usually a significant performance cost.
 - Some architectures raise processor exceptions when unaligned accesses
   happen. The exception handler is able to correct the unaligned access,
   at significant cost to performance.
 - Some architectures raise processor exceptions when unaligned accesses
   happen, but the exceptions do not contain enough information for the
   unaligned access to be corrected.
 - Some architectures are not capable of unaligned memory access, but will
   silently perform a different memory access to the one that was requested,
   resulting in a subtle code bug that is hard to detect!

It should be obvious from the above that if your code causes unaligned
memory accesses to happen, your code will not work correctly on certain
platforms and will cause performance problems on others.


Code that does not cause unaligned access
=========================================

At first, the concepts above may seem a little hard to relate to actual
coding practice. After all, you don't have a great deal of control over
memory addresses of certain variables, etc.

Fortunately things are not too complex, as in most cases, the compiler
ensures that things will work for you. For example, take the following
structure::

        struct foo {
                u16 field1;
                u32 field2;
                u8 field3;
        };

Let us assume that an instance of the above structure resides in memory
starting at address 0x10000. With a basic level of understanding, it would
not be unreasonable to expect that accessing field2 would cause an unaligned
access. You'd be expecting field2 to be located at offset 2 bytes into the
structure, i.e. address 0x10002, but that address is not evenly divisible
by 4 (remember, we're reading a 4 byte value here).

Fortunately, the compiler understands the alignment constraints, so in the
above case it would insert 2 bytes of padding in between field1 and field2.
Therefore, for standard structure types you can always rely on the compiler
to pad structures so that accesses to fields are suitably aligned (assuming
you do not cast the field to a type of different length).
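
The effect of this padding can be observed with offsetof(). The following
compile-time check is only a sketch (it would need to live inside a function,
and it assumes the struct foo layout above on a target with natural alignment
requirements)::

        /* field1 occupies offsets 0-1, the compiler then inserts 2 bytes of
         * padding so that field2 starts on a 4-byte boundary */
        BUILD_BUG_ON(offsetof(struct foo, field2) != 4);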

Similarly, you can also rely on the compiler to align variables and function
parameters to a naturally aligned scheme, based on the size of the type of
the variable.

At this point, it should be clear that accessing a single byte (u8 or char)
will never cause an unaligned access, because all memory addresses are evenly
divisible by one.

On a related topic, with the above considerations in mind you may observe
that you could reorder the fields in the structure in order to place fields
where padding would otherwise be inserted, and hence reduce the overall
resident memory size of structure instances. The optimal layout of the
above example is::

        struct foo {
                u32 field2;
                u16 field1;
                u8 field3;
        };

For a natural alignment scheme, the compiler would only have to add a single
byte of padding at the end of the structure. This padding is added in order
to satisfy alignment constraints for arrays of these structures.
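
Comparing the two layouts makes the saving concrete. The figures below are
what a typical natural-alignment target produces and are illustrative rather
than guaranteed::

        /* original order:  2 + 2 (pad) + 4 + 1 + 3 (tail pad) = 12 bytes
         * reordered order: 4 + 2 + 1 + 1 (tail pad)           =  8 bytes */
        BUILD_BUG_ON(sizeof(struct foo) != 8);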

Another point worth mentioning is the use of __attribute__((packed)) on a
structure type. This GCC-specific attribute tells the compiler never to
insert any padding within structures, which is useful when you want to use
a C struct to represent some data that comes in a fixed arrangement 'off
the wire'.

You might be inclined to believe that usage of this attribute can easily
lead to unaligned accesses when accessing fields that do not satisfy
architectural alignment requirements. However, again, the compiler is aware
of the alignment constraints and will generate extra instructions to perform
the memory access in a way that does not cause unaligned access. Of course,
the extra instructions cause a loss in performance compared to the non-packed
case, so the packed attribute should only be used when avoiding structure
padding is of importance.
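
As a purely illustrative example (the structure and field names below are
made up for this sketch, not taken from the kernel), a header that arrives
'off the wire' with no padding between fields might be described as::

        struct wire_hdr {
                u8  version;            /* offset 0 */
                u32 seq;                /* offset 1: not naturally aligned */
                u16 payload_len;        /* offset 5 */
        } __attribute__((packed));

Accesses to the seq field remain correct on all architectures, but may be
compiled into several narrower loads on platforms without efficient
unaligned access support.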


Code that causes unaligned access
=================================

With the above in mind, let's move onto a real life example of a function
that can cause an unaligned memory access. The following function taken
from include/linux/etherdevice.h is an optimized routine to compare two
ethernet MAC addresses for equality::

  bool ether_addr_equal(const u8 *addr1, const u8 *addr2)
  {
  #ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
        u32 fold = ((*(const u32 *)addr1) ^ (*(const u32 *)addr2)) |
                   ((*(const u16 *)(addr1 + 4)) ^ (*(const u16 *)(addr2 + 4)));

        return fold == 0;
  #else
        const u16 *a = (const u16 *)addr1;
        const u16 *b = (const u16 *)addr2;
        return ((a[0] ^ b[0]) | (a[1] ^ b[1]) | (a[2] ^ b[2])) == 0;
  #endif
  }

In the above function, when the hardware has efficient unaligned access
capability, there is no issue with this code.  But when the hardware isn't
able to access memory on arbitrary boundaries, the reference to a[0] causes
2 bytes (16 bits) to be read from memory starting at address addr1.

Think about what would happen if addr1 was an odd address such as 0x10003.
(Hint: it'd be an unaligned access.)

Despite the potential unaligned access problems with the above function, it
is included in the kernel anyway but is understood to only work normally on
16-bit-aligned addresses. It is up to the caller to ensure this alignment or
not use this function at all. This alignment-unsafe function is still useful
as it is a decent optimization for the cases when you can ensure alignment,
which is true almost all of the time in ethernet networking context.


Here is another example of some code that could cause unaligned accesses::

        void myfunc(u8 *data, u32 value)
        {
                [...]
                *((u32 *) data) = cpu_to_le32(value);
                [...]
        }

This code will cause unaligned accesses every time the data parameter points
to an address that is not evenly divisible by 4.

In summary, the 2 main scenarios where you may run into unaligned access
problems involve:

 1. Casting variables to types of different lengths
 2. Pointer arithmetic followed by access to at least 2 bytes of data
    (a brief sketch of this case follows below)
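
As a brief illustration of scenario 2 (this snippet is only a sketch, not
kernel code), stepping an odd number of bytes into an otherwise aligned
buffer and then reading a multi-byte value is enough to trigger it::

        u16 words[4];
        u8 *p = (u8 *)words;    /* p is at least 2-byte aligned */
        u16 val;

        /* p + 1 is odd, so this 2-byte read is an unaligned access */
        val = *(u16 *)(p + 1);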


Avoiding unaligned accesses
===========================

The easiest way to avoid unaligned access is to use the get_unaligned() and
put_unaligned() macros provided by the <asm/unaligned.h> header file.

Going back to an earlier example of code that potentially causes unaligned
access::

        void myfunc(u8 *data, u32 value)
        {
                [...]
                *((u32 *) data) = cpu_to_le32(value);
                [...]
        }

To avoid the unaligned memory access, you would rewrite it as follows::

        void myfunc(u8 *data, u32 value)
        {
                [...]
                value = cpu_to_le32(value);
                put_unaligned(value, (u32 *) data);
                [...]
        }

The get_unaligned() macro works similarly. Assuming 'data' is a pointer to
memory and you wish to avoid unaligned access, its usage is as follows::

        u32 value = get_unaligned((u32 *) data);

These macros work for memory accesses of any length (not just 32 bits as
in the examples above). Be aware that when compared to standard access of
aligned memory, using these macros to access unaligned memory can be costly in
terms of performance.
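
When the data additionally has a fixed byte order, the byte swap and the
unaligned access can be combined. The following sketch uses the combined
helpers that <asm/unaligned.h> also provides (the exact set of helpers can
vary between kernel versions)::

        put_unaligned_le32(value, data);        /* store as little endian */
        value = get_unaligned_le32(data);       /* load, swapping if needed */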

If use of such macros is not convenient, another option is to use memcpy(),
where the source or destination (or both) are of type u8* or unsigned char*.
Due to the byte-wise nature of this operation, unaligned accesses are avoided.
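
For example, the earlier myfunc() could equally be written using memcpy().
This is a sketch equivalent to the put_unaligned() version above::

        void myfunc(u8 *data, u32 value)
        {
                u32 le_value = cpu_to_le32(value);

                /* the copy is expressed byte-wise, so the compiler is never
                 * forced to emit an unaligned multi-byte store */
                memcpy(data, &le_value, sizeof(le_value));
        }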


Alignment vs. Networking
========================

On architectures that require aligned loads, networking requires that the IP
header is aligned on a four-byte boundary to optimise the IP stack. For
regular ethernet hardware, the constant NET_IP_ALIGN is used. On most
architectures this constant has the value 2 because the normal ethernet
header is 14 bytes long, so in order to get proper alignment one needs to
DMA to an address which can be expressed as 4*n + 2. One notable exception
here is powerpc which defines NET_IP_ALIGN to 0 because DMA to unaligned
addresses can be very expensive and dwarf the cost of unaligned loads.

For some ethernet hardware that cannot DMA to unaligned addresses like
4*n+2 or non-ethernet hardware, this can be a problem, and it is then
required to copy the incoming frame into an aligned buffer. Because this is
unnecessary on architectures that can do unaligned accesses, the code can be
made dependent on CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS like so::

        #ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
                skb = original skb
        #else
                skb = copy skb
        #endif
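
For hardware that can DMA anywhere, drivers commonly obtain the 4*n + 2
placement by reserving NET_IP_ALIGN bytes of headroom in the receive buffer.
A rough sketch of that idiom (the allocation length and variable names are
illustrative)::

        skb = netdev_alloc_skb(dev, pkt_len + NET_IP_ALIGN);
        if (skb)
                /* shift skb->data to a 4*n + 2 address so the IP header
                 * behind the 14-byte ethernet header is 4-byte aligned */
                skb_reserve(skb, NET_IP_ALIGN);

The kernel also provides netdev_alloc_skb_ip_align(), which wraps this
allocate-and-reserve pattern.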