.. SPDX-License-Identifier: GPL-2.0

======================
Hyper-V network driver
======================

Compatibility
=============

This driver is compatible with Windows Server 2012 R2, 2016 and
Windows 10.

Features
========

Checksum offload
----------------
The netvsc driver supports checksum offload as long as the
Hyper-V host version does. Windows Server 2016 and Azure
support checksum offload for TCP and UDP for both IPv4 and
IPv6. Windows Server 2012 only supports checksum offload for TCP.
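
For example, the current offload settings on eth0 can be checked and,
where the host allows it, changed with ethtool (illustrative commands; the
feature names printed by ethtool vary with kernel version)::

    ethtool -k eth0 | grep checksum
    ethtool -K eth0 tx on rx on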

Receive Side Scaling
--------------------
Hyper-V supports receive side scaling. For TCP & UDP, packets can
be distributed among available queues based on IP address and port
number.

For TCP & UDP, the hash level can be switched between L3 and L4 with the
ethtool command. TCP/UDP over IPv4 and IPv6 can be set independently. The
default hash level is L4. Switching the TX hash level is currently only
allowed from within the guests.

On Azure, fragmented UDP packets have a high loss rate with L4
hashing. Using L3 hashing is recommended in this case.

For example, for UDP over IPv4 on eth0:

To include UDP port numbers in hashing::

    ethtool -N eth0 rx-flow-hash udp4 sdfn

To exclude UDP port numbers in hashing::

    ethtool -N eth0 rx-flow-hash udp4 sd

To show UDP hash level::

    ethtool -n eth0 rx-flow-hash udp4
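
The number of channels in use and the RSS indirection table can also be
inspected with ethtool (shown for illustration; what the host exposes may
vary)::

    ethtool -l eth0
    ethtool -x eth0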

Generic Receive Offload, aka GRO
--------------------------------
The driver supports GRO and it is enabled by default. GRO coalesces
like packets and significantly reduces CPU usage under heavy Rx
load.
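
GRO can be checked and toggled per device with ethtool, for example::

    ethtool -k eth0 | grep generic-receive-offload
    ethtool -K eth0 gro off
    ethtool -K eth0 gro on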

Large Receive Offload (LRO), or Receive Side Coalescing (RSC)
-------------------------------------------------------------
The driver supports LRO/RSC in the vSwitch feature. It reduces the per-packet
processing overhead by coalescing multiple TCP segments when possible. The
feature is enabled by default on VMs running on Windows Server 2019 and
later. It may be changed with the ethtool command::

    ethtool -K eth0 lro on
    ethtool -K eth0 lro off

SR-IOV support
--------------
Hyper-V supports SR-IOV as a hardware acceleration option. If SR-IOV
is enabled in both the vSwitch and the guest configuration, then the
Virtual Function (VF) device is passed to the guest as a PCI
device. In this case, both a synthetic (netvsc) and VF device are
visible in the guest OS and both NICs have the same MAC address.

The VF is enslaved by the netvsc device. The netvsc driver will transparently
switch the data path to the VF when it is available and up.
Network state (addresses, firewall, etc.) should be applied only to the
netvsc device; the slave device should not be accessed directly in
most cases. The exception is when a special queue discipline or
flow direction is desired; these should be applied directly to the
VF slave device.
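
For instance, the paired devices can be identified by their shared MAC
address, and a queue discipline can be attached directly to the VF when
needed (illustrative commands; the interface name and the fq qdisc are
placeholders)::

    ip -brief link show
    tc qdisc add dev <vf-interface> root fq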

Receive Buffer
--------------
Packets are received into a receive area which is created when the device
is probed. The receive area is broken into MTU-sized chunks and each may
contain one or more packets. The number of receive sections may be changed
via ethtool Rx ring parameters.
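
For example, the current ring parameters can be shown and the number of
receive sections changed with ethtool (the value below is only illustrative;
the supported range is reported by the -g output)::

    ethtool -g eth0
    ethtool -G eth0 rx 1024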

There is a similar send buffer which is used to aggregate packets
for sending. The send area is broken into chunks, typically of 6144
bytes, and each section may contain one or more packets. Small
packets are usually transmitted via copy to the send buffer. However,
if the buffer is temporarily exhausted, or the packet to be transmitted is
an LSO packet, the driver will provide the host with pointers to the data
from the SKB. This attempts to achieve a balance between the overhead of
data copy and the impact of remapping VM memory to be accessible by the
host.

XDP support
-----------
XDP (eXpress Data Path) is a feature that runs eBPF bytecode at the early
stage when packets arrive at the NIC. The goal is to increase performance
for packet processing, reducing the overhead of SKB allocation and other
upper network layers.

hv_netvsc supports XDP in native mode, and transparently sets the XDP
program on the associated VF NIC as well.

Setting / unsetting the XDP program on the synthetic NIC (netvsc) propagates
to the VF NIC automatically. Setting / unsetting the XDP program on the VF
NIC directly is not recommended; it is not propagated to the synthetic NIC
and may be overwritten by the setting on the synthetic NIC.

An XDP program cannot run with LRO (RSC) enabled, so you need to disable LRO
before running XDP::

    ethtool -K eth0 lro off

XDP_REDIRECT action is not yet supported.
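
As an illustration, a compiled XDP object can be attached to and detached
from the synthetic NIC with iproute2 (prog.o and the section name "xdp" are
placeholders for an actual program)::

    ip link set dev eth0 xdp obj prog.o sec xdp
    ip link set dev eth0 xdp off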