- May 17, 2017 (16 commits)
-
Submitted by Leon Romanovsky
There is an inline function to test whether the carrier is present, which makes the open-coded solution redundant.

Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Acked-by: Yuval Mintz <Yuval.Mintz@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Eric Dumazet
BBR congestion control depends on pacing, and pacing is currently handled by the sch_fq packet scheduler for performance reasons, and also because implementing pacing with FQ was convenient to truly avoid bursts.

However there are many cases where this packet scheduler constraint is not practical:
- Many Linux hosts are not focused on handling thousands of TCP flows in the most efficient way.
- Some routers use fq_codel or other AQM, but still would like to use BBR for the few TCP flows they initiate/terminate.

This patch implements an automatic fallback to internal pacing. Pacing is requested either by BBR or by use of the SO_MAX_PACING_RATE option. If sch_fq happens to be in the egress path, pacing is delegated to the qdisc, otherwise pacing is done by TCP itself.

One advantage of pacing from the TCP stack is getting more precise rtt estimations, and less work done from TX completion, since TCP Small Queues limits are not generally hit. Setups with a single TX queue but many cpus might even benefit from this.

Note that unlike sch_fq, we do not take into account header sizes. Taking care of these headers would add additional complexity for no practical difference in behavior.

Some performance numbers, using 800 TCP_STREAM flows rate limited to ~48 Mbit per second on a 40Gbit NIC.

If MQ+pfifo_fast is used on the NIC:

$ sar -n DEV 1 5 | grep eth
14:48:44 eth0 725743.00 2932134.00 46776.76 4335184.68 0.00 0.00 1.00
14:48:45 eth0 725349.00 2932112.00 46751.86 4335158.90 0.00 0.00 0.00
14:48:46 eth0 725101.00 2931153.00 46735.07 4333748.63 0.00 0.00 0.00
14:48:47 eth0 725099.00 2931161.00 46735.11 4333760.44 0.00 0.00 1.00
14:48:48 eth0 725160.00 2931731.00 46738.88 4334606.07 0.00 0.00 0.00
Average:  eth0 725290.40 2931658.20 46747.54 4334491.74 0.00 0.00 0.40

$ vmstat 1 5
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b swpd free buff cache si so bi bo in cs us sy id wa st
 4  0 0 259825920 45644 2708324 0 0 21 2 247 98 0 0 100 0 0
 4  0 0 259823744 45644 2708356 0 0 0 0 2400825 159843 0 19 81 0 0
 0  0 0 259824208 45644 2708072 0 0 0 0 2407351 159929 0 19 81 0 0
 1  0 0 259824592 45644 2708128 0 0 0 0 2405183 160386 0 19 80 0 0
 1  0 0 259824272 45644 2707868 0 0 0 32 2396361 158037 0 19 81 0 0

Now use MQ+FQ:

lpaa23:~# echo fq >/proc/sys/net/core/default_qdisc
lpaa23:~# tc qdisc replace dev eth0 root mq

$ sar -n DEV 1 5 | grep eth
14:49:57 eth0 678614.00 2727930.00 43739.13 4033279.14 0.00 0.00 0.00
14:49:58 eth0 677620.00 2723971.00 43674.69 4027429.62 0.00 0.00 1.00
14:49:59 eth0 676396.00 2719050.00 43596.83 4020125.02 0.00 0.00 0.00
14:50:00 eth0 675197.00 2714173.00 43518.62 4012938.90 0.00 0.00 1.00
14:50:01 eth0 676388.00 2719063.00 43595.47 4020171.64 0.00 0.00 0.00
Average:  eth0 676843.00 2720837.40 43624.95 4022788.86 0.00 0.00 0.40

$ vmstat 1 5
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b swpd free buff cache si so bi bo in cs us sy id wa st
 2  0 0 259832240 46008 2710912 0 0 21 2 223 192 0 1 99 0 0
 1  0 0 259832896 46008 2710744 0 0 0 0 1702206 198078 0 17 82 0 0
 0  0 0 259830272 46008 2710596 0 0 0 0 1696340 197756 1 17 83 0 0
 4  0 0 259829168 46024 2710584 0 0 16 0 1688472 197158 1 17 82 0 0
 3  0 0 259830224 46024 2710408 0 0 0 0 1692450 197212 0 18 82 0 0

As expected, the number of interrupts per second is very different.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
Cc: Neal Cardwell <ncardwell@google.com>
Cc: Yuchung Cheng <ycheng@google.com>
Cc: Van Jacobson <vanj@google.com>
Cc: Jerry Chu <hkchu@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
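For illustration only, a minimal userspace sketch of how pacing is requested via SO_MAX_PACING_RATE, the interface the patch falls back on when sch_fq is not in the egress path; the rate value and the fallback define are assumptions for the example, not part of the patch.

#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

#ifndef SO_MAX_PACING_RATE
#define SO_MAX_PACING_RATE 47   /* older libc headers may lack the define */
#endif

int main(void)
{
	int fd = socket(AF_INET, SOCK_STREAM, 0);
	if (fd < 0) {
		perror("socket");
		return 1;
	}

	/* ~48 Mbit/s expressed in bytes per second, as in the tests above */
	unsigned int rate = 48 * 1000 * 1000 / 8;

	if (setsockopt(fd, SOL_SOCKET, SO_MAX_PACING_RATE, &rate, sizeof(rate)))
		perror("setsockopt(SO_MAX_PACING_RATE)");
	else
		printf("pacing capped at %u bytes/sec\n", rate);

	close(fd);
	return 0;
}

With sch_fq installed the qdisc keeps doing the pacing; without it, the TCP stack now honors the same request internally.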
-
Submitted by David S. Miller
Paolo Abeni says:

====================
udp: scalability improvements

This patch series implements an idea suggested by Eric Dumazet to reduce the contention on the udp sk_receive_queue lock when the socket is under flood.

An ancillary queue is added to the udp socket, and the socket always tries first to read packets from that queue. If it's empty, we splice the content of sk_receive_queue into the ancillary queue.

The first patch introduces some helpers to keep the udp code small, and the following two implement the ancillary queue strategy. The code is split to hopefully help the reviewing process.

The measured overall gain under udp flood is up to 30%, depending on the numa layout and the number of ingress queues used by the relevant nic.

The performance numbers have been gathered using pktgen as the sender, with 64 byte packets and random src ports, on a host b2b connected to the dut via a 10Gbs link. The receiver used the udp_sink program by Jesper [1] and an h/w l4 rx hash on the ingress nic, so that the number of ingress nic rx queues hit by the udp traffic could be controlled via ethtool -L. The udp_sink program was bound to the first idle cpu, to get more stable numbers.

On a single numa node receiver:

nic rx queues   vanilla     patched kernel
1               1820 kpps   1900 kpps
2               1950 kpps   2500 kpps
16              1670 kpps   2120 kpps

When using a single nic rx queue, busy polling was also enabled; otherwise, in the above scenario, the bh processing becomes the bottleneck and produces large artifacts in the measured performance (e.g. improving the udp_sink run time decreases the overall tput, since more action from the scheduler comes into play).

[1] https://github.com/netoptimizer/network-testing/blob/master/src/udp_sink.c

v1 -> v2:
Patches 1/3 and 2/3 are unchanged; in patch 3/3 the rx_queue_lock_held param of udp_rmem_release() is now a bool.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Paolo Abeni
On packet reception, when we are forced to splice the sk_receive_queue, we can keep the related lock held, so that we avoid re-acquiring it if fwd memory scheduling is required.

v1 -> v2: the rx_queue_lock_held param in udp_rmem_release() is now a bool

Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Paolo Abeni
Under udp flood the sk_receive_queue spinlock is heavily contended. This patch tries to reduce the contention on such lock by adding a second receive queue to the udp sockets; recvmsg() looks first in that queue and, only if it is empty, tries to fetch the data from sk_receive_queue. The latter is spliced into the newly added queue every time the receive path has to acquire the sk_receive_queue lock.

The accounting of forward-allocated memory is still protected by the sk_receive_queue lock, so udp_rmem_release() needs to acquire both locks when the forward deficit is flushed.

In specific scenarios we can end up acquiring and releasing the sk_receive_queue lock multiple times; that will be covered by the next patch.

Suggested-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
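A standalone sketch of the two-queue idea, with hypothetical names (plain C, a pthread mutex standing in for the spinlock; not the kernel's sk_buff queue code): the producer always appends under the contended lock, while the single reader drains a private queue and only takes the contended lock once per refill to splice the whole backlog across.

#include <pthread.h>
#include <stddef.h>

struct pkt { struct pkt *next; /* payload omitted */ };

struct sock_queues {
	pthread_mutex_t rx_lock;        /* stands in for the sk_receive_queue lock */
	struct pkt *rx_head, *rx_tail;  /* contended producer-side queue           */
	struct pkt *reader_head;        /* private to the single reader            */
};

static void rx_enqueue(struct sock_queues *q, struct pkt *p)   /* producer path */
{
	p->next = NULL;
	pthread_mutex_lock(&q->rx_lock);
	if (q->rx_tail)
		q->rx_tail->next = p;
	else
		q->rx_head = p;
	q->rx_tail = p;
	pthread_mutex_unlock(&q->rx_lock);
}

static struct pkt *rx_dequeue(struct sock_queues *q)           /* recvmsg() path */
{
	if (!q->reader_head) {
		/* one lock round trip splices the entire backlog */
		pthread_mutex_lock(&q->rx_lock);
		q->reader_head = q->rx_head;
		q->rx_head = q->rx_tail = NULL;
		pthread_mutex_unlock(&q->rx_lock);
	}

	struct pkt *p = q->reader_head;
	if (p)
		q->reader_head = p->next;
	return p;
}

The forward-allocated memory accounting mentioned above is exactly what this sketch leaves out; it is the part that still forces udp_rmem_release() to take both locks when the deficit is flushed.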
-
Submitted by Paolo Abeni
And update __sk_queue_drop_skb() to work on the specified queue. This will help the udp protocol use an additional private rx queue in a later patch.

Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by David S. Miller
Jakub Kicinski says:

====================
nfp: LSO, checksum and XDP datapath updates

This series introduces a number of refinements to standard features like LSO and checksum offload. Three major features are support for CHECKSUM_COMPLETE, refinement of TSO handling and another small speed up for XDP TX. The series also switches from depending on some app FW<>driver ABI versions to heavier use of capabilities.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Jakub Kicinski
Given that our rings are always a power of 2, we can simplify the calculation of the number of completed TX descriptors by using masking instead of an if statement based on whether the index has wrapped or not.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Jakub Kicinski
We have a number of places where we calculate the descriptor index based on a value which may have overflowed. Create a macro for masking with the ring size.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
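A standalone sketch of the masking idea, with made-up names rather than the driver's macro: with a power-of-2 ring, indices can run freely and unsigned arithmetic plus a mask absorbs the wrap, so no "has it wrapped?" branch is needed.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define RING_CNT 512u                        /* must be a power of 2 */
#define RING_IDX(i) ((i) & (RING_CNT - 1u))  /* slot for a free-running index */

/* completed descriptors between two read pointers that live modulo RING_CNT */
static uint32_t ring_completed(uint32_t new_rd, uint32_t old_rd)
{
	return (new_rd - old_rd) & (RING_CNT - 1u);
}

int main(void)
{
	assert(RING_IDX(RING_CNT + 3u) == 3u);              /* wrap in slot lookup       */
	assert(ring_completed(5u, RING_CNT - 10u) == 15u);  /* wrap in completion count  */
	puts("ok");
	return 0;
}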
-
Submitted by Jakub Kicinski
Since the XDP TX ring holds "spare" RX buffers anyway, we don't have to rush the completion. We can wait until the ring fills up completely before trying to reclaim buffers. If the RX poll has ended and no buffer has been queued for XDP TX, we have no guarantee we will see another interrupt, so run the reclaim there as well, to make sure TX statistics won't become stale. This should help us reclaim more buffers per single queue controller register read.

Note that the XDP completion is very trivial, it only adds up the sizes of transmitted frames for statistics, so the latency spike should be acceptable. In case the user sets the ring sizes to something crazy, limit the completion to 2k entries.

The check whether the ring is empty at the beginning of xdp_complete() is no longer needed - the callers will perform it.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
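A simplified sketch of the deferred-completion policy described above (hypothetical names, not the driver's xdp_complete()): reclaim only when the ring is completely full, and cap one pass so an oversized ring cannot stall the poll loop.

#include <stdbool.h>
#include <stdint.h>

#define XDP_COMPLETE_BATCH 2048u        /* cap per completion pass */

struct xdp_tx_ring {
	uint32_t cnt;                   /* ring size, power of 2        */
	uint32_t wr_p;                  /* producer index, free running */
	uint32_t rd_p;                  /* consumer index, free running */
	uint64_t tx_bytes;              /* stats the completion updates */
};

static bool xdp_ring_full(const struct xdp_tx_ring *r)
{
	return r->wr_p - r->rd_p >= r->cnt;
}

/* Called when the ring is full, or at the end of an RX poll that queued
 * nothing for XDP TX, so the statistics cannot go stale waiting for an IRQ. */
static uint32_t xdp_reclaim(struct xdp_tx_ring *r, uint32_t hw_done, uint32_t frame_len)
{
	uint32_t todo = hw_done - r->rd_p;

	if (todo > XDP_COMPLETE_BATCH)
		todo = XDP_COMPLETE_BATCH;

	/* the real completion sums per-frame sizes; a fixed len keeps the sketch short */
	r->tx_bytes += (uint64_t)todo * frame_len;
	r->rd_p += todo;
	return todo;
}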
-
Submitted by Jakub Kicinski
Introduce the NFP_NET_CFG_CTRL_CSUM_COMPLETE capability and implement parsing of CHECKSUM_COMPLETE metadata.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Edwin Peer <edwin.peer@netronome.com>
Reviewed-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
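A kernel-context sketch of what consuming such metadata amounts to (the helper name is invented; only the skb fields reflect the generic kernel contract): with CHECKSUM_COMPLETE the driver hands the stack a checksum computed over the whole packet instead of claiming per-protocol validation.

#include <linux/skbuff.h>

/* hypothetical helper: hw_csum is the checksum the NIC computed over the
 * received packet, as delivered to the driver in prepended metadata */
static void rx_set_csum_complete(struct sk_buff *skb, __wsum hw_csum)
{
	skb->csum = hw_csum;                 /* checksum of the whole packet            */
	skb->ip_summed = CHECKSUM_COMPLETE;  /* the stack folds and verifies it itself  */
}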
-
Submitted by Edwin Peer
ABI version 4 introduced metadata chaining. Using the ABI version to signal metadata chaining precludes firmware that advertises new capabilities which rely on prepended metadata from working on older kernels. Capability bits are thus better suited to signalling the chained metadata format.

A new version of the RSS capability is introduced to distinguish between the differing metadata formats for ABI versions other than 4.

Signed-off-by: Edwin Peer <edwin.peer@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Jakub Kicinski
Even if the capabilities for RSS and IRQ moderation are present, we may not have initialized them for the control vNIC. Depend on the selected features mask (ctrl) rather than the capabilities (cap) to determine which features should be enabled.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
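In other words, as a generic sketch with invented feature names (not the nfp driver's code): gate what gets programmed on the intersection of what the hardware advertises and what was actually selected.

#include <stdint.h>

#define FEAT_RSS     (1u << 0)
#define FEAT_IRQ_MOD (1u << 1)

/* cap: everything the device could do; ctrl: what this vNIC actually enabled */
static uint32_t features_to_program(uint32_t cap, uint32_t ctrl)
{
	/* before the fix the check was effectively "cap & FEAT_x", which
	 * touched state that was never initialized for the control vNIC */
	return cap & ctrl & (FEAT_RSS | FEAT_IRQ_MOD);
}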
-
Submitted by Edwin Peer
Firmware advertising the LSO2 capability exploits driver-provided L3 and L4 offsets in order to avoid parsing packet headers in the TX path. The vlan field in struct nfp_net_tx_desc is repurposed, making TXVLAN a configuration mutually exclusive with LSO2.

Signed-off-by: Edwin Peer <edwin.peer@netronome.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Edwin Peer
The l4_offset field referred to by NFD is confusingly named. It is not the offset of the L4 transport header, but rather of the L4 payload. The LSO2 capability supported by alternative device firmware requires the actual L4 offset, thus the rename seems prudent.

Signed-off-by: Edwin Peer <edwin.peer@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Jakub Kicinski
We advertise TSO to the stack but leave it disabled by default. Make sure it's not only disabled in the netdev features but also on the device itself.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- May 16, 2017 (24 commits)
-
Submitted by linzhang
Signed-off-by: linzhang <xiaolou4617@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Haiyang Zhang
The cleanup function is updated to cover duplicate config info in files included via the "source" keyword in the Ubuntu network config.

Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Shannon Nelson
On the SPARC platform we need to use the DMA_ATTR_WEAK_ORDERING attribute in our Rx path DMA mapping in order to get the expected performance out of the receive path. Adding it to the Tx path has little effect, so that's not a part of this patch.

Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com>
Reviewed-by: Tushar Dave <tushar.n.dave@oracle.com>
Reviewed-by: Tom Saeger <tom.saeger@oracle.com>
Acked-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
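A kernel-context sketch of the shape of the change (simplified, not the driver's actual code; the function name is invented): the only functional difference is passing DMA_ATTR_WEAK_ORDERING when mapping Rx buffers, which allows the SPARC IOMMU to relax ordering for those transfers.

#include <linux/dma-mapping.h>

static dma_addr_t rx_map_page(struct device *dev, struct page *page,
			      unsigned int offset, unsigned int len)
{
	/* identical to a plain dma_map_page(), plus the weak-ordering hint */
	return dma_map_page_attrs(dev, page, offset, len,
				  DMA_FROM_DEVICE, DMA_ATTR_WEAK_ORDERING);
}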
-
Submitted by David S. Miller
Iyappan Subramanian says:

====================
drivers: net: xgene: Add ethtool stats and bug fixes

This patch set,
- adds ethtool extended statistics support
- addresses errata workarounds
- fixes bugs related to statistics

v2: Address review comments from v1
- Adds lock to protect mdio-xgene indirect MAC access
- Refactors xgene-enet indirect MAC read/write functions
- Uses mdio-xgene MAC access routines, if the xgene-enet port uses the same HW

v1:
- Initial version

Signed-off-by: Iyappan Subramanian <isubramanian@apm.com>
Signed-off-by: Quan Nguyen <qnguyen@apm.com>
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Iyappan Subramanian
The prefetch buffer cleanup code was called twice, causing EDAC to report errors during reboot.

[ 1130.972475] xgene-edac 78800000.edac: IOB bridge agent (BA) transaction error
[ 1130.979584] xgene-edac 78800000.edac: IOB BA write response error
[ 1130.985648] xgene-edac 78800000.edac: IOB BA write access at 0x00.00000000 ()
[ 1130.993612] xgene-edac 78800000.edac: IOB BA requestor ID 0x00002400
[ 1131.000242] xgene-edac 78800000.edac: IOB bridge agent (BA) transaction error
...

This patch fixes the errors by:
- removing the redundant prefetch buffer cleanup from port_ops->shutdown()
- moving port_ops->shutdown() after delete_rings()

Signed-off-by: Iyappan Subramanian <isubramanian@apm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Quan Nguyen
This patch adds a workaround for HW errata 10GE_10 and ENET_15: "HW statistic counters value are duplicated".
- RFCS duplicates the RALN counter
- RFLR duplicates the RUND counter
- TFCS duplicates the TFRG counter
- RALN should be interpreted as 0 in 10G mode

Signed-off-by: Quan Nguyen <qnguyen@apm.com>
Signed-off-by: Iyappan Subramanian <isubramanian@apm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Quan Nguyen
This patch adds a statistic counter for frames recovered from HW errata 10GE_8 and ENET_11: "HW reports Length error for valid 64 byte frames with len <46 bytes".

Signed-off-by: Quan Nguyen <qnguyen@apm.com>
Signed-off-by: Iyappan Subramanian <isubramanian@apm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Quan Nguyen
This patch adds a workaround for HW errata 10GE_4: "XGENET_ICM_ECM_DROP_COUNT_REG_0 reg not clear on read".

Signed-off-by: Quan Nguyen <qnguyen@apm.com>
Signed-off-by: Iyappan Subramanian <isubramanian@apm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Iyappan Subramanian
This patch adds rx_overrun and tx_underrun ethtool statistic counters.

Signed-off-by: Quan Nguyen <qnguyen@apm.com>
Signed-off-by: Iyappan Subramanian <isubramanian@apm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Quan Nguyen
This patch adds extended ethtool statistics support.

Signed-off-by: Quan Nguyen <qnguyen@apm.com>
Signed-off-by: Iyappan Subramanian <isubramanian@apm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Quan Nguyen
This patch cleans up unused macros to improve readability.

Signed-off-by: Quan Nguyen <qnguyen@apm.com>
Signed-off-by: Iyappan Subramanian <isubramanian@apm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Quan Nguyen
This patch fixes the tx error counters and adds more rx error counters.

Signed-off-by: Quan Nguyen <qnguyen@apm.com>
Signed-off-by: Iyappan Subramanian <isubramanian@apm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Quan Nguyen
Commit 5944701d ("net: remove useless memset's in drivers get_stats64") makes pdata->stats redundant. This patch removes pdata->stats and updates the get_stats64() callback accordingly.

Signed-off-by: Quan Nguyen <qnguyen@apm.com>
Signed-off-by: Iyappan Subramanian <isubramanian@apm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Quan Nguyen
This patch switches to using the rgmii mdio MAC access routines if available, as they share the same HW.

Signed-off-by: Quan Nguyen <qnguyen@apm.com>
Signed-off-by: Iyappan Subramanian <isubramanian@apm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Quan Nguyen
This patch:
- refactors the MAC access routines
- adds a lock to protect indirect MAC access

Signed-off-by: Quan Nguyen <qnguyen@apm.com>
Signed-off-by: Iyappan Subramanian <isubramanian@apm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Iyappan Subramanian
This patch:
- refactors the MAC read/write functions
- adds a lock to protect indirect MAC access

Signed-off-by: Iyappan Subramanian <isubramanian@apm.com>
Signed-off-by: Quan Nguyen <qnguyen@apm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
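A standalone sketch of why the lock matters (register names invented, a pthread mutex standing in for the kernel spinlock): indirect MAC access is a multi-step sequence, and two contexts interleaving those steps would read each other's results, so the whole sequence must be serialized.

#include <pthread.h>
#include <stdint.h>

static pthread_mutex_t mac_lock = PTHREAD_MUTEX_INITIALIZER;
static volatile uint32_t addr_reg, cmd_reg, data_reg;   /* stand-ins for MMIO registers */

static uint32_t mac_indirect_read(uint32_t reg)
{
	uint32_t val;

	pthread_mutex_lock(&mac_lock);
	addr_reg = reg;     /* step 1: select the target register */
	cmd_reg  = 1;       /* step 2: issue the read command     */
	val = data_reg;     /* step 3: fetch the result           */
	pthread_mutex_unlock(&mac_lock);

	return val;
}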
-
Submitted by Linus Torvalds, from git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
Pull networking fixes from David Miller:

 1) Track alignment in BPF verifier so that legitimate programs won't be rejected on !CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS architectures.

 2) Make tail calls work properly in arm64 BPF JIT, from Daniel Borkmann.

 3) Make the configuration and semantics of Generic XDP make more sense, and don't allow both generic XDP and a driver-specific instance to be active at the same time. Also from Daniel.

 4) Don't crash on resume in xen-netfront, from Vitaly Kuznetsov.

 5) Fix use-after-free in VRF driver, from Gao Feng.

 6) Use netdev_alloc_skb_ip_align() to avoid unaligned IP headers in qca_spi driver, from Stefan Wahren.

 7) Always run cleanup routines in BPF samples when we get SIGTERM, from Andy Gospodarek.

 8) The mdio phy code should bring PHYs out of reset using the shared GPIO lines before invoking bus->reset(). From Florian Fainelli.

 9) Some USB descriptor access endian fixes in various drivers from Johan Hovold.

10) Handle PAUSE advertisements properly in mlx5 driver, from Gal Pressman.

11) Fix reversed test in mlx5e_setup_tc(), from Saeed Mahameed.

12) Cure netdev leak in AF_PACKET when using timestamping via control messages. From Douglas Caetano dos Santos.

13) netcp doesn't support HWTSTAMP_FILTER_ALL, reject it. From Miroslav Lichvar.

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (52 commits)
  ldmvsw: stop the clean timer at beginning of remove
  ldmvsw: unregistering netdev before disable hardware
  net: netcp: fix check of requested timestamping filter
  ipv6: avoid dad-failures for addresses with NODAD
  qed: Fix uninitialized data in aRFS infrastructure
  mdio: mux: fix device_node_continue.cocci warnings
  net/packet: fix missing net_device reference release
  net/mlx4_core: Use min3 to select number of MSI-X vectors
  macvlan: Fix performance issues with vlan tagged packets
  net: stmmac: use correct pointer when printing normal descriptor ring
  net/mlx5: Use underlay QPN from the root name space
  net/mlx5e: IPoIB, Only support regular RQ for now
  net/mlx5e: Fix setup TC ndo
  net/mlx5e: Fix ethtool pause support and advertise reporting
  net/mlx5e: Use the correct pause values for ethtool advertising
  vmxnet3: ensure that adapter is in proper state during force_close
  sfc: revert changes to NIC revision numbers
  net: ch9200: add missing USB-descriptor endianness conversions
  net: irda: irda-usb: fix firmware name on big-endian hosts
  net: dsa: mv88e6xxx: add default case to switch
  ...
-
Submitted by Linus Torvalds, from git://git.samba.org/sfrench/cifs-2.6
Pull cifs fixes from Steve French:
 "A set of minor cifs fixes"

* 'for-next' of git://git.samba.org/sfrench/cifs-2.6:
  [CIFS] Minor cleanup of xattr query function
  fs: cifs: transport: Use time_after for time comparison
  SMB2: Fix share type handling
  cifs: cifsacl: Use a temporary ops variable to reduce code length
  Don't delay freeing mids when blocked on slow socket write of request
  CIFS: silence lockdep splat in cifs_relock_file()
-
Submitted by David S. Miller
Shannon Nelson says:

====================
ldmvsw: port removal stability

Under heavy reboot stress testing we found a couple of timing issues when removing the device that could cause the kernel great heartburn, addressed by these two patches.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Shannon Nelson
Stop the clean timer earlier to be sure there's no asynchronous interference while stopping the port.

Orabug: 25748241

Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Thomas Tai
When running the LDom binding/unbinding test, the kernel may panic in ldmvsw_open(). It is most likely that, because we're removing the ldc connection before unregistering the netdev in vsw_port_remove(), we set up a window of time where one process could be removing the device while another is trying to UP the device. This also sometimes causes a vio handshake error due to opening a device without closing it completely.

We should unregister the netdev before we disable the "hardware".

Orabug: 25980913, 25925306

Signed-off-by: Thomas Tai <thomas.tai@oracle.com>
Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Miroslav Lichvar
The driver doesn't support timestamping of all received packets and should return an error when trying to enable the HWTSTAMP_FILTER_ALL filter.

Cc: WingMan Kwok <w-kwok2@ti.com>
Cc: Richard Cochran <richardcochran@gmail.com>
Signed-off-by: Miroslav Lichvar <mlichvar@redhat.com>
Acked-by: Richard Cochran <richardcochran@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
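For illustration, a userspace view of what changes (the interface name "eth0" and the error handling are assumptions; SIOCSHWTSTAMP and struct hwtstamp_config are the standard kernel interface): an application asking for HWTSTAMP_FILTER_ALL on a netcp device now gets a clean error instead of a silently unhonored configuration, and can fall back to a narrower filter such as HWTSTAMP_FILTER_PTP_V2_EVENT.

#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/net_tstamp.h>
#include <linux/sockios.h>
#include <unistd.h>

int main(void)
{
	struct hwtstamp_config cfg = {
		.tx_type   = HWTSTAMP_TX_OFF,
		.rx_filter = HWTSTAMP_FILTER_ALL,   /* "timestamp everything" */
	};
	struct ifreq ifr;
	int fd = socket(AF_INET, SOCK_DGRAM, 0);

	if (fd < 0) {
		perror("socket");
		return 1;
	}

	memset(&ifr, 0, sizeof(ifr));
	strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);
	ifr.ifr_data = (void *)&cfg;

	if (ioctl(fd, SIOCSHWTSTAMP, &ifr))
		perror("SIOCSHWTSTAMP (driver rejected the filter)");
	else
		printf("rx_filter accepted: %d\n", cfg.rx_filter);

	close(fd);
	return 0;
}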
-
Submitted by David S. Miller, from git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux
Saeed Mahameed says:

====================
Mellanox, mlx5 fixes 2017-05-12

This series contains some mlx5 fixes for net.

Please pull and let me know if there's any problem.

For -stable:
  ("net/mlx5e: Fix ethtool pause support and advertise reporting") kernels >= 4.8
  ("net/mlx5e: Use the correct pause values for ethtool advertising") kernels >= 4.8

v1->v2:
  Dropped the statistics spinlock patch, it needs some extra work.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Mahesh Bandewar
Every address gets added with the TENTATIVE flag, even addresses with the IFA_F_NODAD flag, and dad-work is scheduled for them. During this DAD process we realize it's an address with NODAD and complete the process without sending any probe. However, the TENTATIVE flag stays on the address for some time, long enough to cause misinterpretation when we receive an NS. While processing the NS, if the address has the TENTATIVE flag, we mark it DADFAILED and end up with DADFAILED on an address that was originally configured as NODAD.

We can't avoid scheduling dad_work for addresses with NODAD, but we can avoid adding the TENTATIVE flag to avoid this racy situation.

Signed-off-by: Mahesh Bandewar <maheshb@google.com>
Acked-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-