- 20 Feb 2017, 9 commits
-
-
By Florian Fainelli
There are a number of function calls, originating from user space (typically through the Ethernet driver), that can crash the kernel by dereferencing phydev->drv, which will be NULL once we unbind the driver from the PHY. There are still functional issues that prevent an unbind and then rebind from working, but these will be addressed separately.

Suggested-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
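A minimal sketch of the guard pattern this describes, with a hypothetical entry-point name (the actual patch adds such checks to the user-space-reachable PHY library calls):

```c
#include <linux/errno.h>
#include <linux/phy.h>

static int example_phy_op(struct phy_device *phydev)
{
    /* After an unbind, phydev itself survives but phydev->drv is
     * cleared; dereferencing it past this point would oops. */
    if (!phydev->drv)
        return -EIO;

    /* phydev->drv->features, callbacks, etc. are safe to use now */
    return 0;
}
```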
-
By Florian Fainelli
The PHY library does not deal very well with bind and unbind events. The first thing we would see is that the PHY state machine workqueue was not properly canceled, so we would crash while dereferencing phydev->drv since there is no driver attached anymore.

Suggested-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By david.wu
Add constants and callback functions for the dwmac on RK3328 SoCs. As can be seen, the base structure is the same; only the registers and the bits in them have moved slightly.

Signed-off-by: david.wu <david.wu@rock-chips.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Jason Wang
We already have counters for sent/received packets and sent/received bytes. Do a batched update to reduce the number of u64_stats_update_begin()/end() calls, and take care not to bother with the stats update when called speculatively.

Cc: Willem de Bruijn <willemb@google.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
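A sketch of the batching idea with hypothetical names, not the actual tun/virtio code: one u64_stats_update_begin()/end() pair covers the whole batch instead of one pair per packet, and a speculative (empty) call skips the update entirely:

```c
#include <linux/types.h>
#include <linux/u64_stats_sync.h>

struct example_stats {
    struct u64_stats_sync syncp;
    u64 packets;
    u64 bytes;
};

static void example_stats_add_batch(struct example_stats *stats,
                                    unsigned int packets,
                                    unsigned int bytes)
{
    if (!packets)   /* speculative call: nothing to account */
        return;

    u64_stats_update_begin(&stats->syncp);
    stats->packets += packets;
    stats->bytes += bytes;
    u64_stats_update_end(&stats->syncp);
}
```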
-
By Eric Dumazet
1) In the case where rate == priv->pkt_rate_low == priv->pkt_rate_high, mlx4_en_auto_moderation() does a divide by zero.

2) We want to properly change the moderation parameters if rx_frames was changed (like in "ethtool -C eth0 rx-frames 16").

Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Thomas Falcon
After sending device capability queries and requests to the vNIC server, an interrupt is triggered and the responses are written to the driver's CRQ response buffer. Since the interrupt can be triggered before all responses are written and visible to the partition, there is a danger that the interrupt handler or tasklet can terminate before all responses are read, resulting in a failure to initialize the device. To avoid this scenario, when capability commands are sent, we set a flag that is checked in the following interrupt tasklet that handles the capability responses from the server. Once all responses have been handled, the flag is cleared and the tasklet is allowed to terminate.

Signed-off-by: Thomas Falcon <tlfalcon@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Thomas Falcon
Two different counters were being used for capability requests and queries. These commands are not called at the same time, so there is no reason a single counter cannot be used.

Signed-off-by: Thomas Falcon <tlfalcon@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Thomas Falcon
Create a tasklet to process queued commands or messages received from firmware instead of processing them in the interrupt handler. Note that this handler does not process network traffic, but communications related to resource allocation and device settings.

Signed-off-by: Thomas Falcon <tlfalcon@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
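A sketch of the deferral pattern with hypothetical names (using the pre-5.9 tasklet API that was current in 2017): the hard IRQ handler only schedules the tasklet, and firmware messages are drained later in softirq context:

```c
#include <linux/interrupt.h>

struct example_adapter {
    struct tasklet_struct tasklet;
};

static void example_crq_tasklet(unsigned long data)
{
    /* struct example_adapter *adapter = (void *)data;
     * drain and handle queued firmware messages here */
}

static irqreturn_t example_crq_irq(int irq, void *instance)
{
    struct example_adapter *adapter = instance;

    tasklet_schedule(&adapter->tasklet);
    return IRQ_HANDLED;
}

/* during probe:
 * tasklet_init(&adapter->tasklet, example_crq_tasklet,
 *              (unsigned long)adapter);
 */
```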
-
By Arun Easi
This adds the backbone required for the various HW initializations that are necessary for the FCoE driver (qedf) for the QLogic FastLinQ 4xxxx line of adapters: FW notification, resource initializations, etc.

Signed-off-by: Arun Easi <arun.easi@cavium.com>
Signed-off-by: Yuval Mintz <yuval.mintz@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 18 Feb 2017, 17 commits
-
-
By Paolo Abeni
Since commit 0c1d70af ("net: use dst_cache for vxlan device"), vxlan_fill_metadata_dst() calls vxlan_get_route() passing a NULL dst_cache pointer, so the latter should explicitly check for a valid dst_cache pointer. Unfortunately, commit d71785ff ("net: add dst_cache to ovs vxlan lwtunnel") removed said check. As a result it is possible to trigger a NULL pointer access by calling vxlan_fill_metadata_dst(), e.g. with:

  ovs-vsctl add-br ovs-br0
  ovs-vsctl add-port ovs-br0 vxlan0 -- set interface vxlan0 \
      type=vxlan options:remote_ip=192.168.1.1 \
      options:key=1234 options:dst_port=4789 ofport_request=10
  ip address add dev ovs-br0 172.16.1.2/24
  ovs-vsctl set Bridge ovs-br0 ipfix=@i -- --id=@i create IPFIX \
      targets=\"172.16.1.1:1234\" sampling=1
  iperf -c 172.16.1.1 -u -l 1000 -b 10M -t 1 -p 1234

This commit addresses the issue by passing to vxlan_get_route() the dst_cache already available in the lwt info processed by vxlan_fill_metadata_dst().

Fixes: d71785ff ("net: add dst_cache to ovs vxlan lwtunnel")
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Acked-by: Jiri Benc <jbenc@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Peter Dunning
efx_start_all() can return without initialising queues when a reset is pending. This means that when netif_device_attach() is called, the kernel can start sending traffic without an initialised TX queue to send to. Avoid this by not calling netif_device_attach() if there is a pending reset.

Fixes: e283546c ("sfc:On MCDI timeout, issue an FLR (and mark MCDI to fail-fast)")
Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Bert Kenward
If the hw doesn't think they exist, we should defer to its authority.

Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Jon Cooper
On EF10, hardware filter IDs are 13 bits, but in some places we store 32-bit "full filter IDs" in which higher-order bits encode the filter match-priority. This could cause a filter to have a full filter ID of 0xffff, which is also the value EFX_EF10_FILTER_ID_INVALID that we use in 16-bit "short" filter IDs (without match-priority bits). This would occur if the hardware filter ID was 0x1fff and the match-priority was 7. Unfortunately, some code that checks for EFX_EF10_FILTER_ID_INVALID can be called on full filter IDs, and will WARN_ON if this ever happens. So, since we have plenty of spare bits in the full filter ID, this patch shifts the priority bits left one bit when constructing the full filter IDs, ensuring that the 0x2000 bit of a full filter ID will always be 0 and thus no full filter ID can ever equal EFX_EF10_FILTER_ID_INVALID. This patch also replaces open-coded full<->short filter ID conversions with calls to functions, thus keeping the definition of the full filter ID format in one place.

Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
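A sketch of the described encoding; the names and exact masks are illustrative, reconstructed from the description above (13-bit hardware IDs, priority shifted up past bit 13 so bit 0x2000 is always clear):

```c
#include <linux/types.h>

#define EXAMPLE_FILTER_ID_INVALID 0xffff

static inline u32 example_full_filter_id(unsigned int pri, u16 hw_id)
{
    /* hw_id fits in 13 bits; priority starts at bit 14, so bit 13
     * (0x2000) is always 0 and no full ID can equal 0xffff */
    return (pri << 14) | hw_id;
}

static inline u16 example_short_filter_id(u32 full_id)
{
    return full_id & 0x1fff;    /* recover the 13-bit hardware ID */
}
```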
-
By Arnd Bergmann
I got a warning about broken code on ARM64 with 64K pages:

  drivers/net/vmxnet3/vmxnet3_drv.c: In function 'vmxnet3_rq_init':
  drivers/net/vmxnet3/vmxnet3_drv.c:1679:29: error: large integer implicitly truncated to unsigned type [-Werror=overflow]
     rq->buf_info[0][i].len = PAGE_SIZE;

'len' here is a 16-bit integer, so this clearly won't work. I don't think this driver is used much on anything other than x86, so there is no need to fix this properly and we can work around it with a Kconfig dependency to forbid known-broken configurations. qemu in theory supports it on other architectures too, but presumably only for compatibility with x86 guests that also run on vmware. CONFIG_PAGE_SIZE_64KB is used on hexagon, mips, sh and tile; the other symbols are architecture-specific names for the same thing.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
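An illustration (not the driver code itself) of the truncation the warning reports, assuming a 64K-page kernel:

```c
#include <linux/mm.h>
#include <linux/types.h>

struct example_buf_info {
    u16 len;    /* 16-bit, like the field the warning points at */
};

static void example_init(struct example_buf_info *bi)
{
    bi->len = PAGE_SIZE;    /* 0x10000 truncates to 0 with 64K pages */
}
```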
-
By Valentin Longchamp
It is required to build it as a module.

Signed-off-by: Valentin Longchamp <valentin.longchamp@keymile.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Simon Horman
Use PCI_DEVICE_ID_NETRONOME_NFP*, defined in linux/pci_ids.h, rather than replicating the same values in the NFP driver.

Signed-off-by: Simon Horman <simon.horman@netronome.com>
Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
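A sketch of the pattern; the NFP6000 entry is shown as one example of using the shared pci_ids.h constants instead of driver-local defines:

```c
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/pci_ids.h>

static const struct pci_device_id example_pci_ids[] = {
    /* vendor/device IDs come from linux/pci_ids.h, not the driver */
    { PCI_VDEVICE(NETRONOME, PCI_DEVICE_ID_NETRONOME_NFP6000) },
    { 0, }
};
MODULE_DEVICE_TABLE(pci, example_pci_ids);
```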
-
By Simon Xiao
Return the correct tx_errors stats in netvsc.

Reviewed-by: Haiyang Zhang <haiyangz@microsoft.com>
Signed-off-by: Simon Xiao <sixiao@microsoft.com>
Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Philippe Reynes
The ethtool API {get|set}_settings is deprecated. Move this driver to the new API {get|set}_link_ksettings. As I don't have the hardware, I'd be very pleased if someone could test this patch.

Signed-off-by: Philippe Reynes <tremyfr@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
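Several commits in this batch make this same conversion. A sketch of the migration pattern with placeholder names: the driver fills struct ethtool_link_ksettings instead of the legacy struct ethtool_cmd, and wires up the new ethtool_ops member:

```c
#include <linux/ethtool.h>
#include <linux/netdevice.h>

static int example_get_link_ksettings(struct net_device *dev,
                                      struct ethtool_link_ksettings *cmd)
{
    /* values illustrative; a real driver reads them from hardware */
    cmd->base.speed = SPEED_100;
    cmd->base.duplex = DUPLEX_FULL;
    cmd->base.port = PORT_MII;
    return 0;
}

static const struct ethtool_ops example_ethtool_ops = {
    /* .get_settings / .set_settings (deprecated) are replaced by: */
    .get_link_ksettings = example_get_link_ksettings,
};
```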
-
By Philippe Reynes
The ethtool API {get|set}_settings is deprecated. Move this driver to the new API {get|set}_link_ksettings. As I don't have the hardware, I'd be very pleased if someone could test this patch.

Signed-off-by: Philippe Reynes <tremyfr@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Tobias Klauser
After commit 34a5102c ("net: bgmac: allocate struct bgmac just once & don't copy it"), the mac_addr member of struct bgmac is no longer necessary to pass the MAC address to bgmac_enet_probe(); instead it can be stored directly in netdev->dev_addr. Also use eth_hw_addr_random() instead of eth_random_addr() in case a random MAC is needed. This makes sure netdev->addr_assign_type is properly set.

Signed-off-by: Tobias Klauser <tklauser@distanz.ch>
Acked-by: Jon Mason <jon.mason@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
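A sketch of the idiom with a hypothetical helper name: eth_hw_addr_random() both randomizes dev_addr and sets addr_assign_type to NET_ADDR_RANDOM, unlike an open-coded eth_random_addr(netdev->dev_addr):

```c
#include <linux/etherdevice.h>

static void example_assign_mac(struct net_device *netdev, const u8 *mac)
{
    if (mac && is_valid_ether_addr(mac))
        ether_addr_copy(netdev->dev_addr, mac);
    else
        eth_hw_addr_random(netdev);  /* also sets addr_assign_type */
}
```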
-
By Tobias Klauser
Use eth_hw_addr_random() to set a random dev_addr and update addr_assign_type instead of open-coding it.

Signed-off-by: Tobias Klauser <tklauser@distanz.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Dan Carpenter
This should be >= instead of > here. It means that we don't increment the free count enough, so it becomes off by one.

Fixes: 9ad1a374 ("dpaa_eth: add support for DPAA Ethernet")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Jisheng Zhang
The mvneta_eth_tool_ops is only used internally in the mvneta driver, so make it static.

Signed-off-by: Jisheng Zhang <jszhang@marvell.com>
Acked-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Philippe Reynes
The ethtool API {get|set}_settings is deprecated. Move this driver to the new API {get|set}_link_ksettings. As I don't have the hardware, I'd be very pleased if someone could test this patch.

Signed-off-by: Philippe Reynes <tremyfr@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Ivan Khoronzhuk
The ale is a property of cpsw, so change dev to cpsw->dev, aka pdev->dev, to be consistent.

Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Philippe Reynes
The ethtool API {get|set}_settings is deprecated. Move this driver to the new API {get|set}_link_ksettings. As I don't have the hardware, I'd be very pleased if someone could test this patch.

Signed-off-by: Philippe Reynes <tremyfr@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 17 Feb 2017, 4 commits
-
-
By Eric Dumazet
All Rx and Tx netdev interrupts are handled by mlx4_en_rx_irq() and mlx4_en_tx_irq() respectively, which simply schedule a NAPI. But mlx4_eq_int() also fires a tasklet to service all items that were queued via mlx4_add_cq_to_tasklet(); this handler was not called unless a user CQE was handled. This is very confusing, as "mpstat -I SCPU ..." shows a huge number of tasklet invocations. This patch saves this overhead by carefully firing the tasklet directly from mlx4_add_cq_to_tasklet(), removing four atomic operations per IRQ.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Tariq Toukan <tariqt@mellanox.com>
Cc: Saeed Mahameed <saeedm@mellanox.com>
Acked-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
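A sketch of the scheduling change with hypothetical types (the real code lives in mlx4's EQ/CQ handling): schedule the tasklet at the point where a CQ is queued, and only when the list transitions from empty to non-empty:

```c
#include <linux/interrupt.h>
#include <linux/list.h>
#include <linux/spinlock.h>

struct cq_tasklet_ctx {
    struct list_head list;
    spinlock_t lock;
    struct tasklet_struct task;
};

struct example_cq {
    struct list_head tasklet_ctx;
};

static void example_add_cq_to_tasklet(struct example_cq *cq,
                                      struct cq_tasklet_ctx *ctx)
{
    unsigned long flags;
    bool kick;

    spin_lock_irqsave(&ctx->lock, flags);
    kick = list_empty(&ctx->list);      /* first entry arms the run */
    list_add_tail(&cq->tasklet_ctx, &ctx->list);
    spin_unlock_irqrestore(&ctx->lock, flags);

    if (kick)
        tasklet_schedule(&ctx->task);   /* no per-IRQ rescheduling */
}
```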
-
By Ganesh Goudar
Remove the variable rxq_info and the redundant assignment to it.

Signed-off-by: Ganesh Goudar <ganeshgr@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Ganesh Goudar
Signed-off-by: Ganesh Goudar <ganeshgr@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Arjun V
Make the maximum number of supported tc u32 links equal to the maximum number of filters supported by hardware.

Signed-off-by: Arjun V <arjun@chelsio.com>
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Signed-off-by: Casey Leedom <leedom@chelsio.com>
Signed-off-by: Ganesh Goudar <ganeshgr@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 16 Feb 2017, 10 commits
-
-
By Alexander Duyck
This patch makes it so that we don't need to bother with clearing the memory out for the descriptor rings. The general idea is to only free buffers associated with buffers in use, which are located between the next_to_clean and next_to_use or next_to_alloc values. Everything outside of those regions can be safely ignored since they should have no buffers associated with them. The advantage to doing things this way is that it should speed up bring-up and tear-down of the rings. Specifically, we can avoid the 512 or more cycles required to memset the rings in tear-down. In the bring-up phase we then clear the memory as part of initialization. The general idea is that the clearing in initialization can act as a prefetch of sorts for the buffer info structures, so they are in the local CPU cache when we go to populate them. This should help to improve the overall time needed to perform a suspend/resume.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
By Alexander Duyck
This patch adds build_skb support to the Rx path. There are several advantages to this change:

1. It avoids the memcpy and skb->head allocation for small packets, which improves performance by about 5% in my tests.
2. It avoids the memcpy, skb->head allocation, and eth_get_headlen for larger packets, improving performance by about 10% in my tests.
3. For VXLAN packets it allows the full header to be in skb->data, which improves performance by as much as 30% in some of my tests.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
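A sketch of the build_skb pattern this refers to; the headroom value is illustrative, not the driver's exact constant: wrap an skb around the already-DMA-mapped buffer instead of allocating skb->head and copying headers into it:

```c
#include <linux/skbuff.h>

static struct sk_buff *example_build_rx_skb(void *va, unsigned int size,
                                            unsigned int truesize)
{
    struct sk_buff *skb = build_skb(va, truesize);

    if (unlikely(!skb))
        return NULL;

    skb_reserve(skb, NET_SKB_PAD + NET_IP_ALIGN);   /* headroom */
    __skb_put(skb, size);                           /* received frame */
    return skb;
}
```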
-
By Alexander Duyck
Since there are potential drawbacks to the new Rx allocation approach, I thought it best to add a "chicken bit" so that we can turn the feature off in the event that a problem is found. It also provides a means of validating the legacy Rx path in the event that we are forced to fall back. At some point in the future, when we are convinced we don't need it anymore, we might be able to drop the legacy-rx flag.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
By Alexander Duyck
This patch adds support for providing a buffer with headroom and tailroom to allow for shared info, NET_SKB_PAD, and NET_IP_ALIGN. Combined with the DMA changes, this lets us start using build_skb to build frames around an incoming Rx buffer instead of having to memcpy the headers.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
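A sketch of the buffer layout math this implies, with a hypothetical macro name: headroom up front, skb_shared_info tailroom at the end, so the buffer is directly usable by build_skb():

```c
#include <linux/skbuff.h>

#define EXAMPLE_SKB_PAD (NET_SKB_PAD + NET_IP_ALIGN)

static unsigned int example_rx_truesize(unsigned int buf_len)
{
    /* headroom + data, then room for the shared info tail */
    return SKB_DATA_ALIGN(EXAMPLE_SKB_PAD + buf_len) +
           SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
}
```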
-
By Alexander Duyck
We are going to be expanding the number of Rx paths in the driver. Instead of duplicating all that code, I am pulling it apart into separate functions so that we don't have so much code duplication.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
By Alexander Duyck
This change makes it so that we use the length of the packet instead of the DD status bit to determine if a new descriptor is ready to be processed. The obvious advantage is that it cuts down on reads, as we don't really even need the DD bit if going from 0 to a non-zero value on size is enough to inform us that the packet has been completed.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
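A sketch of the idea with a hypothetical descriptor layout (the real driver reads its own write-back format): a non-zero written-back length signals completion, replacing the DD bit test, with a dma_rmb() to order the remaining descriptor reads:

```c
#include <asm/barrier.h>
#include <linux/kernel.h>
#include <linux/types.h>

union example_rx_desc {
    struct {
        __le16 length;
        /* ... remaining write-back fields ... */
    } wb;
};

static bool example_rx_desc_done(union example_rx_desc *rx_desc,
                                 unsigned int *size)
{
    *size = le16_to_cpu(rx_desc->wb.length);
    if (!*size)     /* still 0: not written back yet */
        return false;

    dma_rmb();      /* read length before the other fields */
    return true;
}
```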
-
By Alexander Duyck
In order to support build_skb with jumbo frames, it will be necessary to use 3K buffers for the Rx path with 8K pages backing them. This is needed on architectures that implement 4K pages because we can't support 2K buffers plus padding in a 4K page. In the case of systems that support page sizes larger than 4K, the 3K attribute will only be applied to FCoE, as we can fall back to using just 2K buffers and adding the padding.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
By Alexander Duyck
Batch the page count updates instead of doing them one at a time. By doing this we can improve overall performance, as the atomic increment operations can be expensive due to the fact that on x86 they are locked operations, which can cause stalls. By doing bulk updates we can consolidate the stall, which should help to improve overall receive performance.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Acked-by: John Fastabend <john.r.fastabend@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
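A sketch of the batching technique with hypothetical field names: take a large reference once, hand out references from a local bias counter, and only touch the atomic page refcount again when the bias is spent:

```c
#include <linux/kernel.h>
#include <linux/mm.h>

struct example_rx_buffer {
    struct page *page;
    unsigned int pagecnt_bias;
};

static void example_recharge_page_refs(struct example_rx_buffer *buf)
{
    /* one locked atomic op now covers many future page reuses */
    page_ref_add(buf->page, USHRT_MAX - 1);
    buf->pagecnt_bias = USHRT_MAX;
}
```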
-
By Alexander Duyck
This patch adds support for DMA_ATTR_SKIP_CPU_SYNC and DMA_ATTR_WEAK_ORDERING. By enabling both of these for the Rx path, we are able to see performance improvements on architectures that implement either one, due to the fact that page mapping and unmapping only has to sync what is actually being used instead of the entire buffer. In addition, enabling the weak ordering attribute provides a performance improvement on architectures that can associate a memory ordering with a DMA buffer, such as SPARC.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
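A sketch of how such an Rx page mapping call looks with both attributes; the helper name is illustrative and error checking via dma_mapping_error() is omitted:

```c
#include <linux/dma-mapping.h>
#include <linux/mm.h>

static dma_addr_t example_map_rx_page(struct device *dev,
                                      struct page *page)
{
    /* skip the implicit CPU sync at map time and allow weak
     * ordering on architectures that support it */
    return dma_map_page_attrs(dev, page, 0, PAGE_SIZE,
                              DMA_FROM_DEVICE,
                              DMA_ATTR_SKIP_CPU_SYNC |
                              DMA_ATTR_WEAK_ORDERING);
}
```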
-
By Alexander Duyck
On some platforms, syncing a buffer for DMA is expensive. Rather than sync the whole 2K receive buffer, only synchronise the length of the frame, which will typically be the MTU, or a much smaller TCP ACK.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
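A sketch with illustrative parameter names: sync only the bytes the NIC actually wrote rather than the full 2K buffer:

```c
#include <linux/dma-mapping.h>

static void example_sync_rx_frame(struct device *dev, dma_addr_t dma,
                                  unsigned int page_offset,
                                  unsigned int frame_len)
{
    /* frame_len is the received length, not the buffer size */
    dma_sync_single_range_for_cpu(dev, dma, page_offset,
                                  frame_len, DMA_FROM_DEVICE);
}
```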
-