.. SPDX-License-Identifier: GPL-2.0

=========================================================
Neterion's (Formerly S2io) Xframe I/II PCI-X 10GbE driver
=========================================================

Release notes for Neterion's (Formerly S2io) Xframe I/II PCI-X 10GbE driver.

.. Contents
  - 1. Introduction
  - 2. Identifying the adapter/interface
  - 3. Features supported
  - 4. Command line parameters
  - 5. Performance suggestions
  - 6. Support


1. Introduction
===============
This Linux driver supports Neterion's Xframe I PCI-X 1.0 and
Xframe II PCI-X 2.0 adapters. It supports several features
such as jumbo frames, MSI/MSI-X, checksum offloads, TSO, UFO, and so on.
See below for a complete list of features.

All features are supported for both IPv4 and IPv6.

2. Identifying the adapter/interface
====================================

a. Insert the adapter(s) in your system.
b. Build and load the driver::

        # insmod s2io.ko

c. View log messages::

        # dmesg | tail -40

You will see messages similar to::

        eth3: Neterion Xframe I 10GbE adapter (rev 3), Version 2.0.9.1, Intr type INTA
        eth4: Neterion Xframe II 10GbE adapter (rev 2), Version 2.0.9.1, Intr type INTA
        eth4: Device is on 64 bit 133MHz PCIX(M1) bus

The above messages identify the adapter type (Xframe I/II), adapter revision,
driver version, interface name (eth3, eth4), and interrupt type (INTA, MSI,
MSI-X). For Xframe II, the PCI/PCI-X bus width and frequency are displayed
as well.

To associate an interface with a physical adapter use "ethtool -p <ethX>".
The corresponding adapter's LED will blink multiple times.
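
For example, to blink the LED of the adapter that owns eth3 (the interface
name is only an illustration; substitute your own)::

        # ethtool -p eth3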

3. Features supported
=====================
a. Jumbo frames. Xframe I/II supports MTU up to 9600 bytes, modifiable
   using the ip command (see the example after this list).

b. Offloads. Supports checksum offload (TCP/UDP/IP) on transmit
   and receive, and TSO.

c. Multi-buffer receive mode. Scattering of packets across multiple
   buffers. Currently the driver supports 2-buffer mode, which yields
   a significant performance improvement on certain platforms (SGI Altix,
   IBM xSeries).

d. MSI/MSI-X. Can be enabled on platforms which support this feature
   (IA64, Xeon), resulting in a noticeable performance improvement (up to 7%
   on certain platforms).

e. Statistics. Comprehensive MAC-level and software statistics displayed
   using the "ethtool -S" option (see below).

f. Multi-FIFO/Ring. Supports up to 8 transmit queues and receive rings,
   with multiple steering options.
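
For example, to raise the MTU for jumbo frames (item a) and to dump the
adapter statistics (item e); the interface name eth3 and the MTU value are
illustrative, pick values appropriate to your setup::

        # ip link set dev eth3 mtu 9000
        # ethtool -S eth3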

4. Command line parameters
==========================

a. tx_fifo_num
        Number of transmit queues

        Valid range: 1-8

        Default: 1

b. rx_ring_num
        Number of receive rings

        Valid range: 1-8

        Default: 1

c. tx_fifo_len
        Size of each transmit queue

        Valid range: Total length of all queues should not exceed 8192

        Default: 4096

d. rx_ring_sz
        Size of each receive ring (in 4K blocks)

        Valid range: Limited by memory on system

        Default: 30

e. intr_type
        Specifies interrupt type. Possible values 0 (INTA), 2 (MSI-X)

        Valid values: 0, 2

        Default: 2
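
These parameters can be passed when loading the module. A minimal sketch,
with illustrative values rather than recommendations::

        # modprobe s2io tx_fifo_num=2 rx_ring_num=2 intr_type=2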

5. Performance suggestions
==========================

General:

a. Set MTU to maximum (9000 for switch setup, 9600 in back-to-back configuration).
b. Set TCP window size to an optimal value.

   For instance, for MTU=1500 a value of 210K has been observed to result in
   good performance::

        # sysctl -w net.ipv4.tcp_rmem="210000 210000 210000"
        # sysctl -w net.ipv4.tcp_wmem="210000 210000 210000"

   For MTU=9000, a TCP window size of 10 MB is recommended::

        # sysctl -w net.ipv4.tcp_rmem="10000000 10000000 10000000"
        # sysctl -w net.ipv4.tcp_wmem="10000000 10000000 10000000"

Transmit performance:

a. By default, the driver respects BIOS settings for PCI bus parameters.
   However, you may want to experiment with the PCI bus parameters
   max-split-transactions (MOST) and MMRBC (use the setpci command).

   A MOST value of 2 has been found optimal for Opterons and a value of 3
   for Itanium.

   It could be different for your hardware.

   Set MMRBC to 4K (but see the note below on the AMD 8131 chipset).

   For example, you can set these registers as follows.

   For Opteron::

        # setpci -d 17d5:* 62=1d

   For Itanium::

        # setpci -d 17d5:* 62=3d

   For a detailed description of the PCI registers, please see the Xframe
   User Guide.

b. Ensure Transmit Checksum offload is enabled. Use ethtool to set/verify this
   parameter.
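
   For example (the interface name is illustrative; "ethtool -k <ethX>"
   shows the current offload settings)::

        # ethtool -K eth3 tx on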

c. Turn on TSO (using "ethtool -K")::

        # ethtool -K <ethX> tso on

Receive performance:

a. By default, the driver respects BIOS settings for PCI bus parameters.
   However, you may want to set the PCI latency timer to 248::

        # setpci -d 17d5:* LATENCY_TIMER=f8

   For a detailed description of the PCI registers, please see the Xframe
   User Guide.

b. Use 2-buffer mode. This results in a large performance boost on
   certain platforms (e.g. SGI Altix, IBM xSeries).

c. Ensure Receive Checksum offload is enabled. Use the "ethtool -K <ethX>"
   command to set/verify this option.
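
   For example (again with an illustrative interface name)::

        # ethtool -K eth3 rx on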

d. Enable the NAPI feature (in the kernel configuration: Device Drivers --->
   Network device support ---> Ethernet (10000 Mbit) ---> S2IO 10Gbe Xframe
   NIC) to bring down CPU utilization.
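
   To check whether the driver and its options are enabled in the running
   kernel, one way is to grep the installed kernel config (the path varies
   by distribution)::

        # grep -i s2io /boot/config-$(uname -r)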

.. note::

   For AMD Opteron platforms with the 8131 chipset, MMRBC=1 and MOST=1 are
   recommended as safe parameters.

   For more information, please review the AMD 8131 errata at
   http://vip.amd.com/us-en/assets/content_type/white_papers_and_tech_docs/
   26310_AMD-8131_HyperTransport_PCI-X_Tunnel_Revision_Guide_rev_3_18.pdf

6. Support
==========

For further support please contact your 10GbE Xframe NIC vendor (IBM, HP,
SGI, etc.).