==========================
NAND Error-correction Code
==========================

Introduction
============

Having looked at the linux mtd/nand Hamming software ECC engine driver
I felt there was room for optimisation. I bashed the code for a few hours,
performing tricks like table lookup, removing superfluous code, etc.
After that the speed was increased by 35-40%.
Still I was not too happy as I felt there was additional room for improvement.

Bad! I was hooked.
I decided to annotate my steps in this file. Perhaps it is useful to someone,
or someone learns something from it.


The problem
===========

NAND flash (at least SLC) typically has sectors of 256 bytes.
However, NAND flash is not extremely reliable, so some error detection
(and sometimes correction) is needed.

This is done by means of a Hamming code. I'll try to explain it in
layman's terms (and apologies to all the pros in the field in case I do
not use the right terminology; my coding theory class was almost 30
years ago, and I must admit it was not one of my favourites).

As I said before, the ecc calculation is performed on sectors of 256
bytes. This is done by calculating several parity bits over the rows and
columns. The parity used is even parity, which means that the parity bit
is chosen so that the total number of 1 bits in the data plus the parity
bit is even: the parity bit = 1 if the data contains an odd number of 1
bits, and the parity bit = 0 if it contains an even number of 1 bits
(see Wikipedia if you can't follow this).
Parity is often calculated by means of an exclusive or operation,
sometimes also referred to as xor. In C the operator for xor is ^.
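The xor rule just described can be sketched in a couple of lines of C. This is not part of the original driver; the helper name is mine, purely for illustration:

```c
/* Even parity over four data bits: the parity bit p is chosen so that
 * d3 ^ d2 ^ d1 ^ d0 ^ p == 0, which means p is simply the xor of the
 * data bits.
 */
static unsigned char even_parity4(unsigned char d3, unsigned char d2,
                                  unsigned char d1, unsigned char d0)
{
    return d3 ^ d2 ^ d1 ^ d0;
}
```

So xor directly yields the even-parity bit: 1 for an odd number of 1 bits, 0 for an even number.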

Back to ecc.
Let's give a small figure:

=========  ==== ==== ==== ==== ==== ==== ==== ====   === === === === ====
byte   0:  bit7 bit6 bit5 bit4 bit3 bit2 bit1 bit0   rp0 rp2 rp4 ... rp14
byte   1:  bit7 bit6 bit5 bit4 bit3 bit2 bit1 bit0   rp1 rp2 rp4 ... rp14
byte   2:  bit7 bit6 bit5 bit4 bit3 bit2 bit1 bit0   rp0 rp3 rp4 ... rp14
byte   3:  bit7 bit6 bit5 bit4 bit3 bit2 bit1 bit0   rp1 rp3 rp4 ... rp14
byte   4:  bit7 bit6 bit5 bit4 bit3 bit2 bit1 bit0   rp0 rp2 rp5 ... rp14
...
byte 254:  bit7 bit6 bit5 bit4 bit3 bit2 bit1 bit0   rp0 rp3 rp5 ... rp15
byte 255:  bit7 bit6 bit5 bit4 bit3 bit2 bit1 bit0   rp1 rp3 rp5 ... rp15
           cp1  cp0  cp1  cp0  cp1  cp0  cp1  cp0
           cp3  cp3  cp2  cp2  cp3  cp3  cp2  cp2
           cp5  cp5  cp5  cp5  cp4  cp4  cp4  cp4
=========  ==== ==== ==== ==== ==== ==== ==== ====   === === === === ====

This figure represents a sector of 256 bytes.
cp is my abbreviation for column parity, rp for row parity.

Let's start to explain column parity.

- cp0 is the parity that belongs to all bit0, bit2, bit4, bit6.

  So the sum of all bit0, bit2, bit4 and bit6 values + cp0 itself is even.

- cp1, similarly, is the parity over all bit1, bit3, bit5 and bit7.
- cp2 is the parity over bit0, bit1, bit4 and bit5.
- cp3 is the parity over bit2, bit3, bit6 and bit7.
- cp4 is the parity over bit0, bit1, bit2 and bit3.
- cp5 is the parity over bit4, bit5, bit6 and bit7.

Note that each of cp0 .. cp5 is exactly one bit.
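Since xor already computes the parity of each bit position independently, the six column parities can be sketched as one xor fold over the sector plus the six bit masks implied by the definitions above. This is an illustration only, not the driver's code; the function names and the `cp[]` array are mine:

```c
/* xor-fold a byte down to its even-parity bit (1 = odd number of 1s). */
static unsigned char byte_parity(unsigned char b)
{
    b ^= b >> 4;
    b ^= b >> 2;
    b ^= b >> 1;
    return b & 1;
}

/* Column parities of a 256-byte sector: first xor all bytes together
 * (xor is bitwise, so this keeps per-column parity), then mask out the
 * bit positions each cp covers and fold. Masks: cp0 covers bits
 * 0/2/4/6 (0x55), cp1 bits 1/3/5/7 (0xaa), cp2 bits 0/1/4/5 (0x33),
 * cp3 bits 2/3/6/7 (0xcc), cp4 bits 0..3 (0x0f), cp5 bits 4..7 (0xf0).
 */
static void column_parity(const unsigned char *buf, unsigned char cp[6])
{
    unsigned char par = 0;
    int i;

    for (i = 0; i < 256; i++)
        par ^= buf[i];

    cp[0] = byte_parity(par & 0x55);
    cp[1] = byte_parity(par & 0xaa);
    cp[2] = byte_parity(par & 0x33);
    cp[3] = byte_parity(par & 0xcc);
    cp[4] = byte_parity(par & 0x0f);
    cp[5] = byte_parity(par & 0xf0);
}
```

The same six masks reappear later in the ecc code generation.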

Row parity actually works almost the same.

- rp0 is the parity of all even bytes (0, 2, 4, 6, ... 252, 254)
- rp1 is the parity of all odd bytes (1, 3, 5, 7, ..., 253, 255)
- rp2 is the parity of all bytes 0, 1, 4, 5, 8, 9, ...
  (so handle two bytes, then skip two bytes).
- rp3 covers the half rp2 does not cover (bytes 2, 3, 6, 7, 10, 11, ...)
- for rp4 the rule is cover 4 bytes, skip 4 bytes, cover 4 bytes, skip 4 etc.

  So rp4 calculates parity over bytes 0, 1, 2, 3, 8, 9, 10, 11, 16, ...
- and rp5 covers the other half, so bytes 4, 5, 6, 7, 12, 13, 14, 15, 20, ...

The story now becomes quite boring. I guess you get the idea.

- rp6 covers 8 bytes then skips 8 etc.
- rp7 skips 8 bytes then covers 8 etc.
- rp8 covers 16 bytes then skips 16 etc.
- rp9 skips 16 bytes then covers 16 etc.
- rp10 covers 32 bytes then skips 32 etc.
- rp11 skips 32 bytes then covers 32 etc.
- rp12 covers 64 bytes then skips 64 etc.
- rp13 skips 64 bytes then covers 64 etc.
- rp14 covers 128 bytes then skips 128.
- rp15 skips 128 bytes then covers 128.
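The cover/skip pattern has a compact description: bit k of the byte index selects between rp(2k) (bit clear) and rp(2k+1) (bit set), so every byte contributes to exactly 8 of the 16 rp bits. A small helper (my own, purely illustrative) that says whether a given rp bit covers a given byte index:

```c
/* Does row parity bit rp (0..15) cover byte index i (0..255)?
 * rp/2 picks which bit of i decides, rp&1 picks which half is covered.
 */
static int rp_covers(int rp, int i)
{
    int k = rp / 2;

    return ((i >> k) & 1) == (rp & 1);
}
```

For instance, rp4 (cover 4, skip 4) covers byte 2 but not byte 4, while rp5 covers bytes 4..7, 12..15, and so on.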

In the end the parity bits are grouped together in three bytes as
follows:

=====  ===== ===== ===== ===== ===== ===== ===== =====
ECC    Bit 7 Bit 6 Bit 5 Bit 4 Bit 3 Bit 2 Bit 1 Bit 0
=====  ===== ===== ===== ===== ===== ===== ===== =====
ECC 0   rp07  rp06  rp05  rp04  rp03  rp02  rp01  rp00
ECC 1   rp15  rp14  rp13  rp12  rp11  rp10  rp09  rp08
ECC 2   cp5   cp4   cp3   cp2   cp1   cp0      1     1
=====  ===== ===== ===== ===== ===== ===== ===== =====

I detected after writing this that ST application note AN1823
(http://www.st.com/stonline/) gives a much
nicer picture (but they use line parity as the term where I use row
parity). Oh well, I'm graphically challenged, so suffer with me for a
moment :-)

And I could not reuse the ST picture anyway for copyright reasons.


Attempt 0
=========

Implementing the parity calculation is pretty simple.
In C pseudocode::

  for (i = 0; i < 256; i++)
  {
    if (i & 0x01)
       rp1 = bit7 ^ bit6 ^ bit5 ^ bit4 ^ bit3 ^ bit2 ^ bit1 ^ bit0 ^ rp1;
    else
       rp0 = bit7 ^ bit6 ^ bit5 ^ bit4 ^ bit3 ^ bit2 ^ bit1 ^ bit0 ^ rp0;
    if (i & 0x02)
       rp3 = bit7 ^ bit6 ^ bit5 ^ bit4 ^ bit3 ^ bit2 ^ bit1 ^ bit0 ^ rp3;
    else
       rp2 = bit7 ^ bit6 ^ bit5 ^ bit4 ^ bit3 ^ bit2 ^ bit1 ^ bit0 ^ rp2;
    if (i & 0x04)
      rp5 = bit7 ^ bit6 ^ bit5 ^ bit4 ^ bit3 ^ bit2 ^ bit1 ^ bit0 ^ rp5;
    else
      rp4 = bit7 ^ bit6 ^ bit5 ^ bit4 ^ bit3 ^ bit2 ^ bit1 ^ bit0 ^ rp4;
    if (i & 0x08)
      rp7 = bit7 ^ bit6 ^ bit5 ^ bit4 ^ bit3 ^ bit2 ^ bit1 ^ bit0 ^ rp7;
    else
      rp6 = bit7 ^ bit6 ^ bit5 ^ bit4 ^ bit3 ^ bit2 ^ bit1 ^ bit0 ^ rp6;
    if (i & 0x10)
      rp9 = bit7 ^ bit6 ^ bit5 ^ bit4 ^ bit3 ^ bit2 ^ bit1 ^ bit0 ^ rp9;
    else
      rp8 = bit7 ^ bit6 ^ bit5 ^ bit4 ^ bit3 ^ bit2 ^ bit1 ^ bit0 ^ rp8;
    if (i & 0x20)
      rp11 = bit7 ^ bit6 ^ bit5 ^ bit4 ^ bit3 ^ bit2 ^ bit1 ^ bit0 ^ rp11;
    else
      rp10 = bit7 ^ bit6 ^ bit5 ^ bit4 ^ bit3 ^ bit2 ^ bit1 ^ bit0 ^ rp10;
    if (i & 0x40)
      rp13 = bit7 ^ bit6 ^ bit5 ^ bit4 ^ bit3 ^ bit2 ^ bit1 ^ bit0 ^ rp13;
    else
      rp12 = bit7 ^ bit6 ^ bit5 ^ bit4 ^ bit3 ^ bit2 ^ bit1 ^ bit0 ^ rp12;
    if (i & 0x80)
      rp15 = bit7 ^ bit6 ^ bit5 ^ bit4 ^ bit3 ^ bit2 ^ bit1 ^ bit0 ^ rp15;
    else
      rp14 = bit7 ^ bit6 ^ bit5 ^ bit4 ^ bit3 ^ bit2 ^ bit1 ^ bit0 ^ rp14;
    cp0 = bit6 ^ bit4 ^ bit2 ^ bit0 ^ cp0;
    cp1 = bit7 ^ bit5 ^ bit3 ^ bit1 ^ cp1;
    cp2 = bit5 ^ bit4 ^ bit1 ^ bit0 ^ cp2;
    cp3 = bit7 ^ bit6 ^ bit3 ^ bit2 ^ cp3;
    cp4 = bit3 ^ bit2 ^ bit1 ^ bit0 ^ cp4;
    cp5 = bit7 ^ bit6 ^ bit5 ^ bit4 ^ cp5;
  }


Analysis 0
==========

C does have bitwise operators but not really operators to do the above
efficiently (and most hardware has no such instructions either).
Therefore without implementing this it was clear that the code above was
not going to bring me a Nobel prize :-)

Fortunately the exclusive or operation is commutative, so we can combine
the values in any order. So instead of calculating all the bits
individually, let us try to rearrange things.
For the column parity this is easy. We can just xor the bytes and in the
end filter out the relevant bits. This is pretty nice as it will bring
all cp calculation out of the for loop.

Similarly we can first xor the bytes for the various rows.
This leads to:


Attempt 1
=========

::

  const char parity[256] = {
      0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0,
      1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1,
      1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1,
      0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0,
      1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1,
      0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0,
      0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0,
      1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1,
      1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1,
      0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0,
      0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0,
      1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1,
      0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0,
      1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1,
      1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1,
      0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0
  };

  void ecc1(const unsigned char *buf, unsigned char *code)
  {
      int i;
      const unsigned char *bp = buf;
      unsigned char cur;
      unsigned char rp0, rp1, rp2, rp3, rp4, rp5, rp6, rp7;
      unsigned char rp8, rp9, rp10, rp11, rp12, rp13, rp14, rp15;
      unsigned char par;

      par = 0;
      rp0 = 0; rp1 = 0; rp2 = 0; rp3 = 0;
      rp4 = 0; rp5 = 0; rp6 = 0; rp7 = 0;
      rp8 = 0; rp9 = 0; rp10 = 0; rp11 = 0;
      rp12 = 0; rp13 = 0; rp14 = 0; rp15 = 0;

      for (i = 0; i < 256; i++)
      {
          cur = *bp++;
          par ^= cur;
          if (i & 0x01) rp1 ^= cur; else rp0 ^= cur;
          if (i & 0x02) rp3 ^= cur; else rp2 ^= cur;
          if (i & 0x04) rp5 ^= cur; else rp4 ^= cur;
          if (i & 0x08) rp7 ^= cur; else rp6 ^= cur;
          if (i & 0x10) rp9 ^= cur; else rp8 ^= cur;
          if (i & 0x20) rp11 ^= cur; else rp10 ^= cur;
          if (i & 0x40) rp13 ^= cur; else rp12 ^= cur;
          if (i & 0x80) rp15 ^= cur; else rp14 ^= cur;
      }
      code[0] =
          (parity[rp7] << 7) |
          (parity[rp6] << 6) |
          (parity[rp5] << 5) |
          (parity[rp4] << 4) |
          (parity[rp3] << 3) |
          (parity[rp2] << 2) |
          (parity[rp1] << 1) |
          (parity[rp0]);
      code[1] =
          (parity[rp15] << 7) |
          (parity[rp14] << 6) |
          (parity[rp13] << 5) |
          (parity[rp12] << 4) |
          (parity[rp11] << 3) |
          (parity[rp10] << 2) |
          (parity[rp9]  << 1) |
          (parity[rp8]);
      code[2] =
          (parity[par & 0xf0] << 7) |
          (parity[par & 0x0f] << 6) |
          (parity[par & 0xcc] << 5) |
          (parity[par & 0x33] << 4) |
          (parity[par & 0xaa] << 3) |
          (parity[par & 0x55] << 2);
      code[0] = ~code[0];
      code[1] = ~code[1];
      code[2] = ~code[2];
  }

Still pretty straightforward. The last three invert statements are there to
give a checksum of 0xff 0xff 0xff for an empty flash. In an empty flash
all data is 0xff, so the checksum then matches.

I also introduced the parity lookup. I expected this to be the fastest
way to calculate the parity, but I will investigate alternatives later
on.


Analysis 1
==========

The code works, but is not terribly efficient. On my system it took
almost 4 times as much time as the linux driver code. But hey, if it was
*that* easy this would have been done long before.
No pain, no gain.

Fortunately there is plenty of room for improvement.

In step 1 we moved from bit-wise calculation to byte-wise calculation.
However in C we can also use the unsigned long data type and virtually
every modern microprocessor supports 32 bit operations, so why not try
to write our code in such a way that we process data in 32 bit chunks.

Of course this means some modification as the row parity is byte by
byte. A quick analysis:
for the column parity we use the par variable. When extending to 32 bits
we can in the end easily calculate rp0 and rp1 from it.
(because par now consists of 4 bytes, contributing to rp1, rp0, rp1, rp0
respectively, from MSB to LSB)
also rp2 and rp3 can be easily retrieved from par as rp3 covers the
first two MSBs and rp2 covers the last two LSBs.

Note that of course now the loop is executed only 64 times (256/4).
And note that care must be taken wrt byte ordering. The way bytes are
ordered in a long is machine dependent, and might affect us.
Anyway, if there is an issue: this code is developed on x86 (to be
precise: a DELL PC with a D920 Intel CPU)

And of course the performance might depend on alignment, but I expect
that the I/O buffers in the nand driver are aligned properly (and
otherwise that should be fixed to get maximum performance).

Let's give it a try...


Attempt 2
=========

::

  extern const char parity[256];

  void ecc2(const unsigned char *buf, unsigned char *code)
  {
      int i;
      const unsigned long *bp = (unsigned long *)buf;
      unsigned long cur;
      unsigned long rp0, rp1, rp2, rp3, rp4, rp5, rp6, rp7;
      unsigned long rp8, rp9, rp10, rp11, rp12, rp13, rp14, rp15;
      unsigned long par;

      par = 0;
      rp0 = 0; rp1 = 0; rp2 = 0; rp3 = 0;
      rp4 = 0; rp5 = 0; rp6 = 0; rp7 = 0;
      rp8 = 0; rp9 = 0; rp10 = 0; rp11 = 0;
      rp12 = 0; rp13 = 0; rp14 = 0; rp15 = 0;

      for (i = 0; i < 64; i++)
      {
          cur = *bp++;
          par ^= cur;
          if (i & 0x01) rp5 ^= cur; else rp4 ^= cur;
          if (i & 0x02) rp7 ^= cur; else rp6 ^= cur;
          if (i & 0x04) rp9 ^= cur; else rp8 ^= cur;
          if (i & 0x08) rp11 ^= cur; else rp10 ^= cur;
          if (i & 0x10) rp13 ^= cur; else rp12 ^= cur;
          if (i & 0x20) rp15 ^= cur; else rp14 ^= cur;
      }
      /*
         we need to adapt the code generation for the fact that rp vars are now
         long; also the column parity calculation needs to be changed.
         we'll bring rp4 to rp15 back to single byte entities by shifting and
         xoring
      */
      rp4 ^= (rp4 >> 16); rp4 ^= (rp4 >> 8); rp4 &= 0xff;
      rp5 ^= (rp5 >> 16); rp5 ^= (rp5 >> 8); rp5 &= 0xff;
      rp6 ^= (rp6 >> 16); rp6 ^= (rp6 >> 8); rp6 &= 0xff;
      rp7 ^= (rp7 >> 16); rp7 ^= (rp7 >> 8); rp7 &= 0xff;
      rp8 ^= (rp8 >> 16); rp8 ^= (rp8 >> 8); rp8 &= 0xff;
      rp9 ^= (rp9 >> 16); rp9 ^= (rp9 >> 8); rp9 &= 0xff;
      rp10 ^= (rp10 >> 16); rp10 ^= (rp10 >> 8); rp10 &= 0xff;
      rp11 ^= (rp11 >> 16); rp11 ^= (rp11 >> 8); rp11 &= 0xff;
      rp12 ^= (rp12 >> 16); rp12 ^= (rp12 >> 8); rp12 &= 0xff;
      rp13 ^= (rp13 >> 16); rp13 ^= (rp13 >> 8); rp13 &= 0xff;
      rp14 ^= (rp14 >> 16); rp14 ^= (rp14 >> 8); rp14 &= 0xff;
      rp15 ^= (rp15 >> 16); rp15 ^= (rp15 >> 8); rp15 &= 0xff;
      rp3 = (par >> 16); rp3 ^= (rp3 >> 8); rp3 &= 0xff;
      rp2 = par & 0xffff; rp2 ^= (rp2 >> 8); rp2 &= 0xff;
      par ^= (par >> 16);
      rp1 = (par >> 8); rp1 &= 0xff;
      rp0 = (par & 0xff);
      par ^= (par >> 8); par &= 0xff;

      code[0] =
          (parity[rp7] << 7) |
          (parity[rp6] << 6) |
          (parity[rp5] << 5) |
          (parity[rp4] << 4) |
          (parity[rp3] << 3) |
          (parity[rp2] << 2) |
          (parity[rp1] << 1) |
          (parity[rp0]);
      code[1] =
          (parity[rp15] << 7) |
          (parity[rp14] << 6) |
          (parity[rp13] << 5) |
          (parity[rp12] << 4) |
          (parity[rp11] << 3) |
          (parity[rp10] << 2) |
          (parity[rp9]  << 1) |
          (parity[rp8]);
      code[2] =
          (parity[par & 0xf0] << 7) |
          (parity[par & 0x0f] << 6) |
          (parity[par & 0xcc] << 5) |
          (parity[par & 0x33] << 4) |
          (parity[par & 0xaa] << 3) |
          (parity[par & 0x55] << 2);
      code[0] = ~code[0];
      code[1] = ~code[1];
      code[2] = ~code[2];
  }

The parity array is not shown any more. Note also that for these
examples I kinda deviated from my regular programming style by allowing
multiple statements on a line, not using { } in then and else blocks
with only a single statement, and by using operators like ^=.


Analysis 2
==========

The code (of course) works, and hurray: we are a little bit faster than
the linux driver code (about 15%). But wait, don't cheer too quickly.
There is more to be gained.
If we look at e.g. rp14 and rp15 we see that we either xor our data with
rp14 or with rp15. However we also have par which goes over all data.
This means there is no need to calculate rp14 as it can be calculated from
rp15 through rp14 = par ^ rp15, because par = rp14 ^ rp15.
(or if desired we can avoid calculating rp15 and calculate it from
rp14). That is why some places refer to inverse parity.
Of course the same thing holds for rp4/5, rp6/7, rp8/9, rp10/11 and rp12/13.
Effectively this means we can eliminate the else clause from the if
statements. Also we can optimise the calculation in the end a little bit
by going from long to byte first. Actually we can even avoid the table
lookups.
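The rp14 = par ^ rp15 identity is easy to convince yourself of with a toy example. The helper below is mine, shown on an 8-byte "sector"; the argument for 256 bytes is identical, since par is the xor of both halves:

```c
/* xor all bytes in buf[from..to) together; with from/to chosen as the
 * two halves this gives the "covers half / skips half" row parities,
 * and over the whole range it gives par.
 */
static unsigned char xor_range(const unsigned char *buf, int from, int to)
{
    unsigned char x = 0;
    int i;

    for (i = from; i < to; i++)
        x ^= buf[i];
    return x;
}
```

With rp14 as the xor of the first half and rp15 as the xor of the second half, par ^ rp15 reproduces rp14 (and vice versa), so only one of each pair needs to be computed in the loop.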

Attempt 3
=========

Replaced::

          if (i & 0x01) rp5 ^= cur; else rp4 ^= cur;
          if (i & 0x02) rp7 ^= cur; else rp6 ^= cur;
          if (i & 0x04) rp9 ^= cur; else rp8 ^= cur;
          if (i & 0x08) rp11 ^= cur; else rp10 ^= cur;
          if (i & 0x10) rp13 ^= cur; else rp12 ^= cur;
          if (i & 0x20) rp15 ^= cur; else rp14 ^= cur;

with::

          if (i & 0x01) rp5 ^= cur;
          if (i & 0x02) rp7 ^= cur;
          if (i & 0x04) rp9 ^= cur;
          if (i & 0x08) rp11 ^= cur;
          if (i & 0x10) rp13 ^= cur;
          if (i & 0x20) rp15 ^= cur;

and outside the loop added::

          rp4  = par ^ rp5;
          rp6  = par ^ rp7;
          rp8  = par ^ rp9;
          rp10 = par ^ rp11;
          rp12 = par ^ rp13;
          rp14 = par ^ rp15;

And after that the code takes about 30% more time, although the number of
statements is reduced. This is also reflected in the assembly code.


Analysis 3
==========

Very weird. Guess it has to do with caching or instruction parallelism
or so. I also tried on an eeePC (Celeron, clocked at 900 MHz). Interesting
observation was that this one was only 30% slower (according to time) at
executing the code than my 3 GHz D920 processor.

Well, it was expected not to be easy so maybe instead move to a
different track: let's move back to the code from attempt 2 and do some
loop unrolling. This will eliminate a few if statements. I'll try
different amounts of unrolling to see what works best.


Attempt 4
=========

Unrolled the loop 1, 2, 3 and 4 times.
For 4 the code starts with::

    for (i = 0; i < 4; i++)
    {
        cur = *bp++;
        par ^= cur;
        rp4 ^= cur;
        rp6 ^= cur;
        rp8 ^= cur;
        rp10 ^= cur;
        if (i & 0x1) rp13 ^= cur; else rp12 ^= cur;
        if (i & 0x2) rp15 ^= cur; else rp14 ^= cur;
        cur = *bp++;
        par ^= cur;
        rp5 ^= cur;
        rp6 ^= cur;
        ...


Analysis 4
==========

Unrolling once gains about 15%.

Unrolling twice keeps the gain at about 15%.

Unrolling three times gives a gain of 30% compared to attempt 2.

Unrolling four times gives a marginal improvement compared to unrolling
three times.

I decided to proceed with a four times unrolled loop anyway. It was my gut
feeling that in the next steps I would obtain additional gain from it.

The next step was triggered by the fact that par contains the xor of all
bytes and rp4 and rp5 each contain the xor of half of the bytes.
So in effect par = rp4 ^ rp5. But as xor is commutative we can also say
that rp5 = par ^ rp4. So no need to keep both rp4 and rp5 around. We can
eliminate rp5 (or rp4, but I already foresaw another optimisation).
The same holds for rp6/7, rp8/9, rp10/11, rp12/13 and rp14/15.

Attempt 5
=========

Effectively all odd-numbered rp assignments in the loop were removed.
This included the else clause of the if statements.
Of course after the loop we need to correct things by adding code like::

    rp5 = par ^ rp4;

Also the initial assignments (rp5 = 0; etc.) could be removed.
Along the line I also removed the initialisation of rp0/1/2/3.


Analysis 5
==========

Measurements showed this was a good move. The run-time roughly halved
compared with attempt 4 with 4 times unrolling, and we only require 1/3rd
of the processor time compared to the current code in the linux kernel.

However, still I thought there was more. I didn't like all the if
statements. Why not keep a running parity and only keep the last if
statement? Time for yet another version!


Attempt 6
=========

The code within the for loop was changed to::

    for (i = 0; i < 4; i++)
    {
        cur = *bp++; tmppar  = cur; rp4 ^= cur;
        cur = *bp++; tmppar ^= cur; rp6 ^= tmppar;
        cur = *bp++; tmppar ^= cur; rp4 ^= cur;
        cur = *bp++; tmppar ^= cur; rp8 ^= tmppar;

        cur = *bp++; tmppar ^= cur; rp4 ^= cur; rp6 ^= cur;
        cur = *bp++; tmppar ^= cur; rp6 ^= cur;
        cur = *bp++; tmppar ^= cur; rp4 ^= cur;
        cur = *bp++; tmppar ^= cur; rp10 ^= tmppar;

        cur = *bp++; tmppar ^= cur; rp4 ^= cur; rp6 ^= cur; rp8 ^= cur;
        cur = *bp++; tmppar ^= cur; rp6 ^= cur; rp8 ^= cur;
        cur = *bp++; tmppar ^= cur; rp4 ^= cur; rp8 ^= cur;
        cur = *bp++; tmppar ^= cur; rp8 ^= cur;

        cur = *bp++; tmppar ^= cur; rp4 ^= cur; rp6 ^= cur;
        cur = *bp++; tmppar ^= cur; rp6 ^= cur;
        cur = *bp++; tmppar ^= cur; rp4 ^= cur;
        cur = *bp++; tmppar ^= cur;

        par ^= tmppar;
        if ((i & 0x1) == 0) rp12 ^= tmppar;
        if ((i & 0x2) == 0) rp14 ^= tmppar;
    }

As you can see tmppar is used to accumulate the parity within a for
iteration. In the last three statements it is added to par and, if needed,
to rp12 and rp14.

While making the changes I also found that I could exploit the fact that
tmppar contains the running parity for this iteration. So instead of
having::

    rp4 ^= cur; rp6 ^= cur;

I removed the rp6 ^= cur; statement and did rp6 ^= tmppar; on the next
statement. A similar change was done for rp8 and rp10.


Analysis 6
==========

Measuring this code again showed a big gain. When executing the original
linux code 1 million times, this took about 1 second on my system
(using time to measure the performance). After this iteration I was back
to 0.075 sec. Actually I had to decide to start measuring over 10
million iterations in order not to lose too much accuracy. This one
definitely seemed to be the jackpot!

There is a little bit more room for improvement though. There are three
places with statements::

        rp4 ^= cur; rp6 ^= cur;

It seems more efficient to also maintain a variable rp4_6 in the
loop; this eliminates 3 statements per loop. Of course after the loop we
need to correct by adding::

        rp4 ^= rp4_6;
        rp6 ^= rp4_6;

Furthermore there are 4 sequential assignments to rp8. This can be
encoded slightly more efficiently by saving tmppar before those 4 lines
and later doing rp8 = rp8 ^ tmppar ^ notrp8;
(where notrp8 is the value of rp8 before those 4 lines).
Again a use of the commutative property of xor.
Time for a new test!


Attempt 7
=========

The new code now looks like::

    for (i = 0; i < 4; i++)
    {
        cur = *bp++; tmppar  = cur; rp4 ^= cur;
        cur = *bp++; tmppar ^= cur; rp6 ^= tmppar;
        cur = *bp++; tmppar ^= cur; rp4 ^= cur;
        cur = *bp++; tmppar ^= cur; rp8 ^= tmppar;

        cur = *bp++; tmppar ^= cur; rp4_6 ^= cur;
        cur = *bp++; tmppar ^= cur; rp6 ^= cur;
        cur = *bp++; tmppar ^= cur; rp4 ^= cur;
        cur = *bp++; tmppar ^= cur; rp10 ^= tmppar;

        notrp8 = tmppar;
        cur = *bp++; tmppar ^= cur; rp4_6 ^= cur;
        cur = *bp++; tmppar ^= cur; rp6 ^= cur;
        cur = *bp++; tmppar ^= cur; rp4 ^= cur;
        cur = *bp++; tmppar ^= cur;
        rp8 = rp8 ^ tmppar ^ notrp8;

        cur = *bp++; tmppar ^= cur; rp4_6 ^= cur;
        cur = *bp++; tmppar ^= cur; rp6 ^= cur;
        cur = *bp++; tmppar ^= cur; rp4 ^= cur;
        cur = *bp++; tmppar ^= cur;

        par ^= tmppar;
        if ((i & 0x1) == 0) rp12 ^= tmppar;
        if ((i & 0x2) == 0) rp14 ^= tmppar;
    }
    rp4 ^= rp4_6;
    rp6 ^= rp4_6;


Not a big change, but every penny counts :-)


Analysis 7
==========

Actually this made things worse. Not very much, but I don't want to move
in the wrong direction. Maybe something to investigate later. Could
have to do with caching again.

Guess that is all there is to win within the loop. Maybe unrolling one
more time will help. I'll keep the optimisations from 7 for now.


Attempt 8
=========

Unrolled the loop one more time.


Analysis 8
==========

This makes things worse. Let's stick with attempt 6 and continue from there.
Although it seems that the code within the loop cannot be optimised
further, there is still room to optimise the generation of the ecc codes.
We can simply calculate the total parity. If this is 0 then rp4 = rp5
etc. If the parity is 1, then rp4 = !rp5.

But if rp4 = rp5 we do not need rp5 etc. We can just write the even bits
in the result byte and then do something like::

    code[0] |= (code[0] << 1);

Let's test this.


Attempt 9
=========

Changed the code, but again this slightly degrades performance. Tried all
kinds of other things, like having dedicated parity arrays to avoid the
shift after parity[rp7] << 7. No gain.
Changed the lookup using the parity array to use shift operators instead,
e.g. replacing parity[rp7] << 7 with::

        rp7 ^= (rp7 << 4);
        rp7 ^= (rp7 << 2);
        rp7 ^= (rp7 << 1);
        rp7 &= 0x80;

No gain.

The only marginal change was inverting the parity bits, so we can remove
the last three invert statements.

Ah well, pity this does not deliver more. Then again 10 million
iterations using the linux driver code take between 13 and 13.5
seconds, whereas my code now takes about 0.73 seconds for those 10
million iterations. So basically I've improved the performance by a
factor of 18 on my system. Not that bad. Of course on different hardware
you will get different results. No warranties!

But of course there is no such thing as a free lunch. The code size almost
tripled (from 562 bytes to 1434 bytes). Then again, it is not that much.


Correcting errors
=================

For correcting errors I again used the ST application note as a starter,
but I also peeked at the existing code.

The algorithm itself is pretty straightforward. Just xor the given and
the calculated ecc. If all bytes are 0 there is no problem. If 11 bits
are 1 we have one correctable bit error. If there is 1 bit 1, we have an
error in the given ecc code.
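A sketch of this algorithm under the bit layout from the earlier tables: the odd rp bits of the syndrome spell out the faulty byte address and cp1/cp3/cp5 spell out the faulty bit. All names here are mine, not the existing driver's, and the bit-count loop is deliberately naive (the text above notes that table lookups are faster):

```c
/* naive population count; table lookups are faster, as noted above */
static int count_bits(unsigned int v)
{
    int n = 0;

    while (v) {
        n += v & 1;
        v >>= 1;
    }
    return n;
}

/* returns 0: no error, 1: corrected (or error in the ecc bytes
 * themselves), -1: uncorrectable
 */
static int correct_sector(unsigned char *buf,
                          const unsigned char *stored,
                          const unsigned char *calculated)
{
    unsigned char s0 = stored[0] ^ calculated[0];
    unsigned char s1 = stored[1] ^ calculated[1];
    unsigned char s2 = stored[2] ^ calculated[2];
    int nbits = count_bits(s0) + count_bits(s1) + count_bits(s2);
    int byte, bit;

    if (nbits == 0)
        return 0;
    if (nbits == 11) {
        /* single bit error: one of each rp and cp pair flipped, so
         * 8 + 3 = 11 syndrome bits are set; the odd rp bits give the
         * byte address, cp1/cp3/cp5 the bit address
         */
        byte = ((s0 >> 1) & 1) | (((s0 >> 3) & 1) << 1) |
               (((s0 >> 5) & 1) << 2) | (((s0 >> 7) & 1) << 3) |
               (((s1 >> 1) & 1) << 4) | (((s1 >> 3) & 1) << 5) |
               (((s1 >> 5) & 1) << 6) | (((s1 >> 7) & 1) << 7);
        bit  = ((s2 >> 3) & 1) | (((s2 >> 5) & 1) << 1) |
               (((s2 >> 7) & 1) << 2);
        buf[byte] ^= 1 << bit;
        return 1;
    }
    if (nbits == 1)        /* error in the stored ecc itself */
        return 1;
    return -1;             /* more than one bit error */
}
```

Note that the inversion of the stored ecc bytes cancels out in the xor, since both the stored and the freshly calculated ecc are inverted the same way.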

It proved to be fastest to do some table lookups. The performance gain
introduced by this is about a factor of 2 on my system when a repair had to
be done, and 1% or so if no repair had to be done.

Code size increased from 330 bytes to 686 bytes for this function
(gcc 4.2, -O3).


Conclusion
==========

The gain when calculating the ecc is tremendous. On my development hardware
a speedup of a factor of 18 for ecc calculation was achieved. On a test on an
embedded system with a MIPS core a factor of 7 was obtained.

On a test with a Linksys NSLU2 (ARMv5TE processor) the speedup was a factor
of 5 (big endian mode, gcc 4.1.2, -O3).

For correction not much gain could be obtained (as bitflips are rare). Then
again there are also many fewer cycles spent there.

It seems there is not much more gain possible in this, at least when
programmed in C. Of course it might be possible to squeeze something more
out of it with an assembler program, but due to pipeline behaviour etc.
this is very tricky (at least for Intel hw).

Author: Frans Meulenbroeks

Copyright (C) 2008 Koninklijke Philips Electronics NV.