- 18 October 2016, 1 commit
-
-
Submitted by Jarod Wilson

e100: min_mtu 68, max_mtu 1500
- remove e100_change_mtu entirely, it is identical to the old eth_change_mtu
  and no longer serves a purpose. No need to set min_mtu or max_mtu
  explicitly, as ether_setup() will already set them to 68 and 1500.
e1000: min_mtu 46, max_mtu 16110
e1000e: min_mtu 68, max_mtu varies based on adapter
fm10k: min_mtu 68, max_mtu 15342
- remove fm10k_change_mtu entirely, it does nothing now
i40e: min_mtu 68, max_mtu 9706
i40evf: min_mtu 68, max_mtu 9706
igb: min_mtu 68, max_mtu 9216
- There are two different "max" frame sizes claimed and both are checked in
  the driver; the larger value wasn't relevant though, so I've set max_mtu
  to the smaller of the two values here to retain identical behavior.
igbvf: min_mtu 68, max_mtu 9216
- Same issue as igb, duplicated
ixgb: min_mtu 68, max_mtu 16114
- Also remove the pointless old == new check, as that's done in dev_set_mtu
ixgbe: min_mtu 68, max_mtu 9710
ixgbevf: min_mtu 68, max_mtu dependent on hardware/firmware
- Some hw can only handle up to max_mtu 1504 on a vf, others 9710

CC: netdev@vger.kernel.org
CC: intel-wired-lan@lists.osuosl.org
CC: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Jarod Wilson <jarod@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
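For the e1000 entry above, the probe-time change amounts to publishing the
limits on the net_device and letting dev_set_mtu() do the range check in the
core. A minimal sketch, assuming the usual kernel constants and e1000's
MAX_JUMBO_FRAME_SIZE of 16128; not necessarily the literal hunk:

    /* MTU range: 46 - 16110 */
    netdev->min_mtu = ETH_ZLEN - ETH_HLEN;                 /* 60 - 14 = 46 */
    netdev->max_mtu = MAX_JUMBO_FRAME_SIZE -
                      (ETH_HLEN + ETH_FCS_LEN);            /* 16128 - 18 = 16110 */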
-
- 07 April 2016, 1 commit
-
-
Submitted by Stefan Assmann

Calling dev_close() causes IFF_UP to be cleared, which will remove the
interface's routes and some addresses. That's probably not what the user
intended when running the offline selftest. Besides, this does not happen if
the interface is brought down before the test, so the current behaviour is
inconsistent. Instead, call the net_device_ops ndo_stop function directly and
avoid touching IFF_UP at all.

Signed-off-by: Stefan Assmann <sassmann@kpanic.de>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 06 April 2016, 2 commits
-
-
Submitted by Alexander Duyck

The 82544 has code that adds one additional descriptor per data buffer.
However, we weren't taking that into account when determining the descriptors
needed for the next transmit at the end of the xmit_frame path. This change
takes that into account by doubling the number of descriptors needed for the
82544, so that we can avoid a potential issue where we could hang the Tx ring
by loading frames with xmit_more enabled and then stopping the ring without
writing the tail. In addition, it adds a few more descriptors to account for
some additional workarounds that have been added over time.

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Alexander Duyck

The current code path is capable of grossly overestimating the number of
descriptors needed to transmit a new frame. This specifically occurs if the
skb contains a number of 4K pages. The issue is that the logic for
determining the descriptors needed is ((S) >> (X)) + 1. When X is 12, it
means that we were indicating that we required 2 descriptors for each 4K page
when we only needed one. This change corrects this by instead adding
(1 << (X)) - 1 to the S value instead of adding 1 after the fact. This way we
get an accurate descriptor-needed count, as we are essentially doing a
DIV_ROUND_UP().

Reported-by: Ivan Suzdal <isuzdal@mirantis.com>
Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
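The off-by-one is easy to see in isolation. A self-contained sketch in plain
C (the 4K-per-descriptor shift of 12 is an assumption standing in for the
driver's real macro, not a copy of it):

    #include <stdio.h>

    #define SHIFT 12  /* assumed: 4K of data per descriptor */

    static unsigned int txd_count_old(unsigned int len)
    {
        return (len >> SHIFT) + 1;  /* overestimates exact multiples of 4K */
    }

    static unsigned int txd_count_new(unsigned int len)
    {
        /* equivalent to DIV_ROUND_UP(len, 1 << SHIFT) */
        return (len + (1u << SHIFT) - 1) >> SHIFT;
    }

    int main(void)
    {
        unsigned int len = 4096;  /* one full 4K page */

        printf("old: %u, new: %u\n", txd_count_old(len), txd_count_new(len));
        /* prints "old: 2, new: 1" */
        return 0;
    }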
-
- 13 December 2015, 8 commits
-
-
Submitted by Janusz Wolak

Signed-off-by: Janusz Wolak <januszvdm@gmail.com>
Acked-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Jean Sacren

By using a goto statement we can share the same exit path and keep code
duplication to a minimum.

Signed-off-by: Jean Sacren <sakiwit@gmail.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Jean Sacren

For historical reasons, 'phy_data' has never been included in the kernel doc.
Fix it so that the requirement is fulfilled.

Signed-off-by: Jean Sacren <sakiwit@gmail.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Jean Sacren

Use 'That' to replace 'The' so that the comment makes sense.

Signed-off-by: Jean Sacren <sakiwit@gmail.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Jean Sacren

The checking logic needed some clean-up work, so we rewrite it by checking
for break first. With that change in place, we can even move the second check
for the goto statement outside of the loop. As this is merely a cleanup, no
functional change is involved. The questionable 'tmp != 0xFF' is
intentionally left alone.

Mark Rustad and Alexander Duyck contributed to this patch.

CC: Mark Rustad <mark.d.rustad@intel.com>
CC: Alex Duyck <aduyck@mirantis.com>
Signed-off-by: Jean Sacren <sakiwit@gmail.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Janusz Wolak

Signed-off-by: Janusz Wolak <januszvdm@gmail.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Dmitriy Vyukov

e1000_clean_tx_irq cleans buffers and sets tx_ring->next_to_clean, then
e1000_xmit_frame reuses the cleaned buffers. But there are no memory barriers
when buffers get recycled, so the recycled buffers can be corrupted. Use
smp_store_release to update tx_ring->next_to_clean and smp_load_acquire to
read tx_ring->next_to_clean to properly hand off buffers from
e1000_clean_tx_irq to e1000_xmit_frame.

The data race was found with KernelThreadSanitizer (KTSAN).

Signed-off-by: Dmitry Vyukov <dvyukov@google.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
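The hand-off described here boils down to a release store on the cleanup side
paired with an acquire load on the transmit side. A minimal kernel-style
sketch with the driver's bookkeeping elided (the function names below are
illustrative, not the actual driver code):

    /* cleanup side (e1000_clean_tx_irq): free the buffers for entries up to
     * index i, then publish the new index with release semantics */
    static void tx_publish_clean_index(struct e1000_tx_ring *ring, unsigned int i)
    {
        smp_store_release(&ring->next_to_clean, i);
    }

    /* transmit side (e1000_xmit_frame): acquire-load the index so all the
     * frees above are visible before the buffers are reused */
    static unsigned int tx_read_clean_index(struct e1000_tx_ring *ring)
    {
        return smp_load_acquire(&ring->next_to_clean);
    }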
-
Submitted by Joern Engel

This code was responsible for ~150ms scheduler latencies.

Signed-off-by: Joern Engel <joern@logfs.org>
Signed-off-by: Spencer Baugh <sbaugh@catern.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 16 October 2015, 2 commits
-
-
Submitted by Jesse Brandeburg

As per Eric Dumazet's previous patches (see commit 24d2e4a5, "tg3: use
napi_complete_done()"), quoting verbatim:

  Using napi_complete_done() instead of napi_complete() allows us to use
  /sys/class/net/ethX/gro_flush_timeout

  GRO layer can aggregate more packets if the flush is delayed a bit,
  without having to set too big coalescing parameters that impact
  latencies.

</end quote>

Tested configuration: low latency via
  ethtool -C ethx adaptive-rx off rx-usecs 10 adaptive-tx off tx-usecs 15
workload: streaming rx using netperf TCP_MAERTS

igb:

  MIGRATED TCP MAERTS TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.0.0.1 () port 0 AF_INET : demo
  ...
  Interim result: 941.48 10^6bits/s over 1.000 seconds ending at 1440193171.589

  Alignment      Offset         Bytes    Bytes       Recvs   Bytes    Sends
  Local  Remote  Local  Remote  Xfered   Per                 Per
  Recv   Send    Recv   Send             Recv (avg)          Send (avg)
      8       8      0      0 1176930056  1475.36    797726  16384.00  71905

  MIGRATED TCP MAERTS TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.0.0.1 () port 0 AF_INET : demo
  ...
  Interim result: 941.49 10^6bits/s over 0.997 seconds ending at 1440193142.763

  Alignment      Offset         Bytes    Bytes       Recvs   Bytes    Sends
  Local  Remote  Local  Remote  Xfered   Per                 Per
  Recv   Send    Recv   Send             Recv (avg)          Send (avg)
      8       8      0      0 1175182320 50476.00     23282  16384.00  71816

i40e:
  Hard to test because the traffic is incoming so fast (24Gb/s) that GRO
  always receives 87kB, even at the highest interrupt rate.

Other drivers were only compile tested.

Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
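The API change itself is a one-liner in each driver's poll routine. A minimal
sketch of the shape it takes (the function and helper names here are
illustrative, not the literal e1000/igb code):

    static int example_poll(struct napi_struct *napi, int budget)
    {
        int work_done = example_clean_rx(napi, budget);  /* hypothetical helper */

        if (work_done < budget) {
            /* was napi_complete(napi); passing the work count lets the
             * core honor /sys/class/net/ethX/gro_flush_timeout */
            napi_complete_done(napi, work_done);
            example_irq_enable(napi);  /* hypothetical helper */
        }

        return work_done;
    }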
-
Submitted by Ivan Vecera

Many drivers needlessly initialize the n_priv_flags, n_stats, testinfo_len,
eedump_len and regdump_len fields in their .get_drvinfo() ethtool op. It's
not necessary, as these fields are filled in by ethtool_get_drvinfo().

v2: removed unused variable
v3: removed another unused variable

Signed-off-by: Ivan Vecera <ivecera@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
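After this cleanup a typical .get_drvinfo implementation only fills in the
string fields. A hedged sketch (driver name and version strings are
placeholders):

    static void example_get_drvinfo(struct net_device *netdev,
                                    struct ethtool_drvinfo *drvinfo)
    {
        strlcpy(drvinfo->driver, "example", sizeof(drvinfo->driver));
        strlcpy(drvinfo->version, "1.0-k", sizeof(drvinfo->version));
        strlcpy(drvinfo->bus_info, pci_name(to_pci_dev(netdev->dev.parent)),
                sizeof(drvinfo->bus_info));
        /* n_stats, testinfo_len, regdump_len, etc. are filled in by
         * ethtool_get_drvinfo() in the core */
    }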
-
- 23 September 2015, 1 commit
-
-
Submitted by Francois Romieu

The device probe method e1000_probe calls e1000_init_eeprom_params itself, so
there's no reason to call it again from e1000_do_write_eeprom or
e1000_do_read_eeprom.

The sentence above assumes that e1000_init_eeprom_params is effective.
e1000_init_eeprom_params depends mostly on hw->mac_type, and e1000_probe
bails out early if it can't set mac_type (see e1000_init_hw_struct, then
e1000_set_mac_type), QED.

Btw, if effective, the removed paths would have been deadlock prone when
e1000_eeprom_spi was set:

  -> e1000_write_eeprom (takes e1000_eeprom_lock)
     -> e1000_do_write_eeprom
        -> e1000_init_eeprom_params
           -> e1000_read_eeprom (takes e1000_eeprom_lock)

(same narrative with e1000_read_eeprom -> e1000_do_read_eeprom etc.)

As a final note, the candidate deadlock above can't happen in e1000_probe due
to the way eeprom->word_size is set / tested.

Signed-off-by: Francois Romieu <romieu@fr.zoreil.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 12 May 2015, 1 commit
-
-
Submitted by Alexander Duyck

Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 09 April 2015, 1 commit
-
-
Submitted by Alexander Duyck

This change replaces calls to rmb with dma_rmb in the case where we want to
order all follow-on descriptor reads after the check for the descriptor
status bit.

Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
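The shape of the barrier swap in a receive-cleanup loop, as a hedged fragment
(the descriptor field and flag names are e1000-style but illustrative here):

    /* stop once the hardware hasn't written this descriptor yet */
    if (!(rx_desc->status & E1000_RXD_STAT_DD))
        break;

    /* dma_rmb() is enough here: it orders the reads of the rest of the
     * descriptor after the status check, more cheaply than a full rmb() */
    dma_rmb();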
-
- 09 March 2015, 1 commit
-
-
Submitted by Joe Perches

To test a checkpatch spelling patch, I ran codespell against
drivers/net/ethernet/:

  $ git ls-files drivers/net/ethernet/ | \
    while read file ; do \
      codespell -w $file; \
    done

I removed a false positive in e1000_hw.h

Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 06 March 2015, 2 commits
-
-
Submitted by Sabrina Dubroca

There is a race condition between e1000_change_mtu's cleanups and netpoll
when we change the MTU across the jumbo size.

Changing the MTU frees all the rx buffers:

  e1000_change_mtu -> e1000_down -> e1000_clean_all_rx_rings ->
    e1000_clean_rx_ring

Then, close to the end of e1000_change_mtu:

  pr_info -> ... -> netpoll_poll_dev -> e1000_clean -> e1000_clean_rx_irq ->
    e1000_alloc_rx_buffers -> e1000_alloc_frag

And when we come back to do the rest of the MTU change:

  e1000_up -> e1000_configure -> e1000_configure_rx ->
    e1000_alloc_jumbo_rx_buffers

alloc_jumbo finds the buffers already != NULL, since data (shared with page
in e1000_rx_buffer->rxbuf) has been re-alloc'd, but it's garbage, or at least
not what is expected when in jumbo state.

This results in an unusable adapter (packets don't get through) and a NULL
pointer dereference on the next call to e1000_clean_rx_ring (other mtu
change, link down, shutdown):

  BUG: unable to handle kernel NULL pointer dereference at (null)
  IP: [<ffffffff81194d6e>] put_compound_page+0x7e/0x330
  [...]
  Call Trace:
   [<ffffffff81195445>] put_page+0x55/0x60
   [<ffffffff815d9f44>] e1000_clean_rx_ring+0x134/0x200
   [<ffffffff815da055>] e1000_clean_all_rx_rings+0x45/0x60
   [<ffffffff815df5e0>] e1000_down+0x1c0/0x1d0
   [<ffffffff811e2260>] ? deactivate_slab+0x7f0/0x840
   [<ffffffff815e21bc>] e1000_change_mtu+0xdc/0x170
   [<ffffffff81647050>] dev_set_mtu+0xa0/0x140
   [<ffffffff81664218>] do_setlink+0x218/0xac0
   [<ffffffff814459e9>] ? nla_parse+0xb9/0x120
   [<ffffffff816652d0>] rtnl_newlink+0x6d0/0x890
   [<ffffffff8104f000>] ? kvm_clock_read+0x20/0x40
   [<ffffffff810a2068>] ? sched_clock_cpu+0xa8/0x100
   [<ffffffff81663802>] rtnetlink_rcv_msg+0x92/0x260

By setting the allocator to a dummy version, netpoll can't mess up our rx
buffers. The allocator is set back to a sane value in e1000_configure_rx.

Fixes: edbbb3ca ("e1000: implement jumbo receive with partial descriptors")
Signed-off-by: Sabrina Dubroca <sd@queasysnail.net>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Eliezer Tamir

When bringing down an interface, netif_carrier_off() should be one of the
first things we do, since this prevents the stack from queuing more packets
to the interface. This operation is very fast and should make the device
behave much more nicely when trying to bring down an interface under load.
Also, this would Do The Right Thing (TM) if the device has some sort of
fail-over teaming, and redirect traffic to the other interface.

Move netif_carrier_off as early as possible.

Signed-off-by: Eliezer Tamir <eliezer.tamir@linux.intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
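Since the ordering is the whole point, here is a hedged sketch of a down path
(the structure and helper names are illustrative, not the literal e1000_down):

    static void example_down(struct example_adapter *adapter)
    {
        struct net_device *netdev = adapter->netdev;

        netif_carrier_off(netdev);  /* first: stop the stack queuing packets */
        netif_tx_disable(netdev);   /* then quiesce the Tx queues */
        /* ... disable interrupts, reset the hardware, free Tx/Rx rings ... */
    }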
-
- 23 January 2015, 2 commits
-
-
Submitted by Florian Westphal

Don't update the Tx tail descriptor if the queue hasn't been stopped and we
know at least one more skb will be sent right away.

Signed-off-by: Florian Westphal <fw@strlen.de>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
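A hedged sketch of the idea at the end of the xmit path. Current kernels
expose the hint via netdev_xmit_more() whereas the 2015 code read
skb->xmit_more, and the register/field names below are e1000-flavoured but
illustrative rather than the actual hunk:

    static void example_maybe_ring_doorbell(struct e1000_adapter *adapter,
                                            struct e1000_tx_ring *tx_ring)
    {
        /* defer the tail write while more skbs are coming and the queue is
         * still running; the last skb in the batch flushes the whole lot */
        if (!netdev_xmit_more() ||
            netif_xmit_stopped(netdev_get_tx_queue(adapter->netdev, 0)))
            writel(tx_ring->next_to_use,
                   adapter->hw.hw_addr + tx_ring->tdt);
    }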
-
Submitted by Asaf Vertz

To be future-proof and for better readability, the time comparisons are
modified to use time_after_eq() instead of plain, error-prone math.

Signed-off-by: Asaf Vertz <asaf.vertz@tandemg.com>
Acked-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
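The same check written both ways, as a hedged illustration (the deadline
variable is hypothetical; time_after_eq() comes from <linux/jiffies.h>):

    #include <linux/jiffies.h>

    static bool deadline_reached_old(unsigned long deadline)
    {
        return jiffies >= deadline;  /* plain math: breaks on jiffies wraparound */
    }

    static bool deadline_reached_new(unsigned long deadline)
    {
        return time_after_eq(jiffies, deadline);  /* wraparound-safe */
    }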
-
- 14 January 2015, 1 commit
-
-
Submitted by Jiri Pirko

The same macros are used for rx as well, so rename them.

Signed-off-by: Jiri Pirko <jiri@resnulli.us>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 11 December 2014, 1 commit
-
-
Submitted by Alexander Duyck

This change replaces calls to netdev_alloc_skb_ip_align with napi_alloc_skb.
The advantage of napi_alloc_skb is currently the fact that the page
allocation doesn't make use of any irq disable calls. There are a few spots
where I couldn't replace the calls, as the buffer allocation routine is
called as a part of init, which is outside of the softirq context.

Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
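The substitution itself is mechanical. A hedged before/after fragment
(variable names are illustrative, and the napi_alloc_skb() variant is only
valid from the NAPI poll / softirq path):

    /* before: generic allocation, may disable irqs for the page refill */
    skb = netdev_alloc_skb_ip_align(netdev, bufsz);

    /* after: per-NAPI allocation, softirq context only */
    skb = napi_alloc_skb(&adapter->napi, bufsz);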
-
- 09 December 2014, 1 commit
-
-
Submitted by Alexander Duyck

Update the Intel Ethernet drivers to use eth_skb_pad() and skb_put_padto
instead of doing their own implementations of the function. Also, this cleans
up two other spots where skb_pad was called but the length and tail pointers
were being manipulated directly instead of just having the padding length
added via __skb_put.

Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
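A hedged sketch of the resulting call-site shape in an xmit routine (names
are illustrative; eth_skb_pad() pads runts to ETH_ZLEN and frees the skb if
padding fails):

    static netdev_tx_t example_xmit_frame(struct sk_buff *skb,
                                          struct net_device *netdev)
    {
        if (eth_skb_pad(skb))
            return NETDEV_TX_OK;  /* skb already freed on failure */

        /* ... map buffers and fill the Tx descriptors ... */
        return NETDEV_TX_OK;
    }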
-
- 30 October 2014, 1 commit
-
-
Submitted by Francesco Ruggeri

VMware's e1000 implementation does not seem to support unicast filtering.
This can be observed by configuring a macvlan interface on eth0 in a VM in
VMware Fusion 5.0.5 and trying to use that interface instead of eth0.

Tested on 3.16.

Signed-off-by: Francesco Ruggeri <fruggeri@arista.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 12 September 2014, 7 commits
-
-
Submitted by Florian Westphal

napi_gro_frags allows skb re-use in case GRO can merge payload pages into an
skb on the GRO lists.

netperf TCP_STREAM, kvm-e1000 emulation, mtu 9k:

       Size   Size    Size     Time     Throughput
       bytes  bytes   bytes    secs.    10^6bits/sec
  old: 87380  16384   16384    30.00    8985.78
  new: 87380  16384   16384    30.00    9907.05

Signed-off-by: Florian Westphal <fw@strlen.de>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
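A hedged sketch of the receive-side pattern (the real e1000_clean_jumbo_rx_irq
also handles descriptors, DMA unmapping and error cases; the helper name here
is illustrative):

    static void example_receive_page(struct napi_struct *napi,
                                     struct page *page, unsigned int len)
    {
        struct sk_buff *skb = napi_get_frags(napi);

        if (unlikely(!skb))
            return;  /* drop: no skb available from the napi cache */

        /* attach the payload page; GRO may merge it into an skb already on
         * the GRO list and hand this skb straight back for re-use */
        skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page, 0, len,
                        PAGE_SIZE);

        napi_gro_frags(napi);
    }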
-
Submitted by Florian Westphal

Instead of preallocating Rx skbs, allocate them right before sending the
inbound packet up the stack.

e1000-kvm, mtu1500, netperf TCP_STREAM:

       Size   Size    Size     Time     Throughput
       bytes  bytes   bytes    secs.    10^6bits/sec
  old: 87380  16384   16384    60.00    4532.40
  new: 87380  16384   16384    60.00    4599.05

Signed-off-by: Florian Westphal <fw@strlen.de>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Florian Westphal

And remove *page; it's only used for Rx.

Signed-off-by: Florian Westphal <fw@strlen.de>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Florian Westphal

e1000 uses the same metadata struct for Rx and Tx, but Tx and Rx have
different requirements. For Rx, we only need to store a buffer and a DMA
address. A follow-up patch will remove the skb for Rx, bringing
rx_buffer_info down to 16 bytes on x86_64. [ buffer_info is 48 bytes ]

Signed-off-by: Florian Westphal <fw@strlen.de>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
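The end state hinted at here, an Rx-only metadata struct, looks roughly like
the sketch below (the field layout is an assumption based on the descriptions
in this log, not a copy of e1000.h):

    struct example_rx_buffer {
        union {
            struct page *page;  /* jumbo receive path */
            u8 *data;           /* normal receive path */
        } rxbuf;
        dma_addr_t dma;         /* with the skb gone: 16 bytes on x86_64 */
    };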
-
Submitted by Florian Westphal

Currently we unmap the DMA range, then copy to a new skb. Change this so we
can keep the mapping in case the data is copied.

Signed-off-by: Florian Westphal <fw@strlen.de>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Florian Westphal

It's the same in both handlers.

Signed-off-by: Florian Westphal <fw@strlen.de>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Florian Westphal

... and make it static.

Signed-off-by: Florian Westphal <fw@strlen.de>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 06 September 2014, 1 commit
-
-
Fixed many errors/warnings and checks in e1000_ethtool.c reported by
checkpatch.pl. Suggestions from Joe Perches and Alexander Duyck were applied
as well.

Signed-off-by: Krzysztof Majzerowicz-Jaszcz <cristos@vipserv.org>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 26 August 2014, 1 commit
-
-
Submitted by Vlad Yasevich

This device claims TSO and checksum support for vlans. It also allows a user
to control vlan acceleration offloading. As such, it is possible to turn off
vlan acceleration and configure a vlan which will continue to support TSO. In
such a situation, the packet passed down to the device will contain a vlan
header and skb->protocol will be set to ETH_P_8021Q. The device assumes that
skb->protocol contains the network protocol value and uses that value to set
up TSO and checksum information. This results in corrupted frames sent on the
wire.

This patch extracts the protocol value correctly and corrects TSO for
non-accelerated traffic.

CC: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
CC: Jesse Brandeburg <jesse.brandeburg@intel.com>
CC: Bruce Allan <bruce.w.allan@intel.com>
CC: Carolyn Wyborny <carolyn.wyborny@intel.com>
CC: Don Skidmore <donald.c.skidmore@intel.com>
CC: Greg Rose <gregory.v.rose@intel.com>
CC: Alex Duyck <alexander.h.duyck@intel.com>
CC: John Ronciak <john.ronciak@intel.com>
CC: Mitch Williams <mitch.a.williams@intel.com>
CC: Linux NICS <linux.nics@intel.com>
CC: e1000-devel@lists.sourceforge.net
Signed-off-by: Vladislav Yasevich <vyasevic@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
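A hedged sketch of how the protocol can be resolved correctly in the xmit
path: vlan_get_protocol() from <linux/if_vlan.h> looks past an in-band vlan
header, and the helper it feeds here is hypothetical, standing in for the
driver's TSO/checksum setup:

    #include <linux/if_vlan.h>

    static netdev_tx_t example_xmit_frame(struct sk_buff *skb,
                                          struct net_device *netdev)
    {
        /* skb->protocol is ETH_P_8021Q when acceleration is off;
         * resolve the real L3 protocol instead */
        __be16 protocol = vlan_get_protocol(skb);

        example_tso(skb, protocol);  /* hypothetical TSO/checksum setup */
        /* ... */
        return NETDEV_TX_OK;
    }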
-
- 13 August 2014, 1 commit
-
-
Submitted by Benoit Taine

We should prefer `struct pci_device_id` over `DEFINE_PCI_DEVICE_TABLE` to
meet kernel coding style guidelines. This issue was reported by checkpatch.

A simplified version of the semantic patch that makes this change is as
follows (http://coccinelle.lip6.fr/):

  // <smpl>
  @@
  identifier i;
  declarer name DEFINE_PCI_DEVICE_TABLE;
  initializer z;
  @@

  - DEFINE_PCI_DEVICE_TABLE(i)
  + const struct pci_device_id i[]
  = z;
  // </smpl>

[bhelgaas: add semantic patch]
Signed-off-by: Benoit Taine <benoit.taine@lip6.fr>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
-
- 21 July 2014, 1 commit
-
-
Submitted by Fabian Frederick

Signed-off-by: Fabian Frederick <fabf@skynet.be>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 07 June 2014, 1 commit
-
-
Submitted by Jiri Pirko

Signed-off-by: Jiri Pirko <jiri@resnulli.us>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 04 June 2014, 2 commits
-
-
Submitted by Manuel Schölling

To be future-proof and for better readability, the time comparisons are
modified to use time_after() instead of plain, error-prone math.

Signed-off-by: Manuel Schölling <manuel.schoelling@gmx.de>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Yongjian Xu

There is no case in which skb->len would be 0 or 'negative'. Remove the
check.

Signed-off-by: Yongjian Xu <xuyongjiande@gmail.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-