- 26 Jan 2018, 40 commits
-
Submitted by Emil Tantilov
Based on commit ec718254 ("ixgbe: Improve performance and reduce size of ixgbe_tx_map"). This change is meant to both improve the performance and reduce the size of ixgbevf_tx_map(). Expand the work done in the main loop by pushing first into tx_buffer. This allows us to pull in the dma_mapping_error check, the tx_buffer value assignment, and the initial DMA value assignment to the Tx descriptor. Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com> Tested-by: Krishneil Singh <krishneil.k.singh@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Emil Tantilov
Based on commit d2bead57 ("igb: Clear Rx buffer_info in configure instead of clean"). This change makes it so that instead of going through the entire ring on Rx cleanup we only go through the region that was designated to be cleaned up, and stop when we reach the region where new allocations should start. In addition we can avoid having to perform a memset on the Rx buffer_info structures until we are about to start using the ring again. Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com> Tested-by: Krishneil Singh <krishneil.k.singh@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Emil Tantilov
We already had placeholders for failed page and buffer allocations. Added alloc_rx_page and made sure the stats are properly updated and exposed in ethtool. Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com> Tested-by: Krishneil Singh <krishneil.k.singh@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Emil Tantilov
Based on commit bd4171a5 ("igb: update code to better handle incrementing page count"). Update the driver code so that we do bulk updates of the page reference count instead of just incrementing it by one reference at a time. The advantage to doing this is that we cut down on atomic operations, which in turn should give us a slight improvement in cycles per packet. In addition, if we eventually move this over to using build_skb the gains will be more noticeable. Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com> Tested-by: Krishneil Singh <krishneil.k.singh@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
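A minimal sketch of the bulk reference-count idea (the struct and helper names below are illustrative, not the actual ixgbevf fields): one page_ref_add() buys a large batch of references up front, and a locally tracked bias hands them out without any further atomic operations.

    #include <linux/kernel.h>
    #include <linux/page_ref.h>

    struct rx_page_buf {                /* illustrative buffer bookkeeping */
        struct page *page;
        u16 pagecnt_bias;               /* references we still own locally */
    };

    /* called once after the page is allocated (the allocation holds one ref) */
    static void rx_page_init_refs(struct rx_page_buf *buf)
    {
        page_ref_add(buf->page, USHRT_MAX - 1);
        buf->pagecnt_bias = USHRT_MAX;
    }

    /* called each time a fragment of the page is handed to the stack */
    static void rx_page_give_ref(struct rx_page_buf *buf)
    {
        /* no atomic op here: just spend one locally owned reference */
        buf->pagecnt_bias--;

        /* refill in bulk only when the local allowance runs out */
        if (unlikely(!buf->pagecnt_bias)) {
            page_ref_add(buf->page, USHRT_MAX);
            buf->pagecnt_bias = USHRT_MAX;
        }
    }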
-
Submitted by Emil Tantilov
Based on commit 5be59554 ("igb: update driver to make use of DMA_ATTR_SKIP_CPU_SYNC") and commit 7bd17592 ("igb: Add support for DMA_ATTR_WEAK_ORDERING"). Convert the calls to dma_map/unmap_page() to the attributes version and add DMA_ATTR_SKIP_CPU_SYNC/WEAK_ORDERING, which should help improve performance on some platforms. Move the sync_for_cpu call before we perform a prefetch, to avoid invalidating the first 128 bytes of the packet on architectures where that call may invalidate the cache. Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com> Tested-by: Krishneil Singh <krishneil.k.singh@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
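A minimal sketch of attribute-based mapping of a receive page; only the dma_*_attrs() calls and the two DMA_ATTR_* flags are the real API, the wrapper functions are illustrative.

    #include <linux/dma-mapping.h>
    #include <linux/mm.h>

    #define RX_DMA_ATTR (DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING)

    /* map the whole receive page once; the CPU sync is skipped here and
     * done explicitly (and only for the received bytes) in the Rx path
     */
    static dma_addr_t rx_map_page(struct device *dev, struct page *page)
    {
        return dma_map_page_attrs(dev, page, 0, PAGE_SIZE,
                                  DMA_FROM_DEVICE, RX_DMA_ATTR);
    }

    static void rx_unmap_page(struct device *dev, dma_addr_t dma)
    {
        dma_unmap_page_attrs(dev, dma, PAGE_SIZE, DMA_FROM_DEVICE,
                             RX_DMA_ATTR);
    }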
-
Submitted by Emil Tantilov
Based on commit 7ec0116c ("igb: Use length to determine if descriptor is done"). This change makes it so that we use the length of the packet instead of the DD status bit to determine if a new descriptor is ready to be processed. The obvious advantage is that it cuts down on reads, as we don't really need the DD bit: the size going from zero to a non-zero value is enough to inform us that the packet has been completed. In addition we only reset the Rx descriptor length for descriptor zero when resetting a ring, instead of having to do a memset with 0 over the entire ring. By doing this we can save some time on initialization. Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com> Tested-by: Krishneil Singh <krishneil.k.singh@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
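A sketch of the "non-zero length means done" check; the descriptor layout here is simplified and illustrative. A write-back length of zero means the descriptor has not been completed yet, and a read barrier is still needed before touching the remaining write-back fields.

    #include <asm/barrier.h>
    #include <asm/byteorder.h>
    #include <linux/types.h>

    struct rx_wb_desc {                 /* simplified write-back layout */
        __le16 length;
        /* ... status, VLAN, RSS hash, etc. ... */
    };

    static bool rx_desc_done(const struct rx_wb_desc *desc)
    {
        if (!le16_to_cpu(desc->length))
            return false;

        /* keep us from reading any other descriptor fields until we
         * know the descriptor has been written back
         */
        dma_rmb();
        return true;
    }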
-
Submitted by Emil Tantilov
Based on commit 64f2525c ("igb: Only DMA sync frame length"). On some architectures syncing a buffer for DMA may be expensive. Instead of the entire 2K receive buffer, only synchronize the length of the frame, which will typically be the MTU or smaller. Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com> Tested-by: Krishneil Singh <krishneil.k.singh@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
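A minimal sketch of syncing only the received bytes, assuming the buffer's page offset and the frame length are known at this point; the wrapper is illustrative, dma_sync_single_range_for_cpu() is the real API.

    #include <linux/dma-mapping.h>

    static void rx_sync_frame(struct device *dev, dma_addr_t dma,
                              unsigned int page_offset, unsigned int frame_len)
    {
        /* sync only frame_len bytes instead of the full 2K buffer */
        dma_sync_single_range_for_cpu(dev, dma, page_offset, frame_len,
                                      DMA_FROM_DEVICE);
    }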
-
Submitted by Emil Tantilov
Introduce ixgbevf_can_reuse_page(), similar to the change in ixgbe from commit af43da0d ("ixgbe: Add function for checking to see if we can reuse page"). Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com> Tested-by: Krishneil Singh <krishneil.k.singh@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by David Ahern
Message sends to the local broadcast address (255.255.255.255) require uc_index or sk_bound_dev_if to be set to an egress device. However, responses are only received if the socket is bound to the device. This is overly constraining for processes running in an L3 domain. This patch allows a socket bound to the VRF device to send to the local broadcast address by using IP_UNICAST_IF to set the egress interface, with packet receipt handled by the VRF binding. Similar to IP_MULTICAST_IF, relax the constraint on setting IP_UNICAST_IF if a socket is bound to an L3 master device. In this case allow uc_index to be set to an enslaved device if sk_bound_dev_if is an L3 master device and is the master device for that ifindex. In udp and raw sendmsg, allow uc_index to override the oif if the uc_index master device is the oif (i.e., the oif is an L3 master and the index is an L3 slave). Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
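A hypothetical userspace sketch of the resulting usage pattern: the socket is bound to an assumed VRF device "vrf-blue" for receive, and IP_UNICAST_IF selects an assumed enslaved interface "eth1" for the broadcast transmit (error handling omitted for brevity).

    #include <arpa/inet.h>
    #include <net/if.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>

    static int open_vrf_broadcast_socket(void)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        unsigned int idx = htonl(if_nametoindex("eth1"));   /* assumed slave */
        int one = 1;

        /* receive side: bind the socket to the L3 master (VRF) device */
        setsockopt(fd, SOL_SOCKET, SO_BINDTODEVICE, "vrf-blue",
                   strlen("vrf-blue") + 1);

        /* transmit side: pick the enslaved egress interface; the index is
         * passed in network byte order, as with IP_MULTICAST_IF
         */
        setsockopt(fd, IPPROTO_IP, IP_UNICAST_IF, &idx, sizeof(idx));
        setsockopt(fd, SOL_SOCKET, SO_BROADCAST, &one, sizeof(one));
        return fd;
    }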
-
Submitted by David S. Miller
William Tu says: ==================== net: erspan: add support for openvswitch The first patch refactors the erspan header definitions. Originally, the erspan fields are defined as a group in a __be16 field, using a mask and offset to access each field. This is more costly due to the ntohs/htons calls, and error-prone. The first patch changes it to use bitfields. The second patch creates erspan.h in UAPI and moves the definition of 'struct erspan_metadata' into it for openvswitch to use later. The final patch introduces the new OVS tunnel key attribute, OVS_TUNNEL_KEY_ATTR_ERSPAN_OPTS, to program both v1 and v2 erspan tunnels for openvswitch. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by William Tu
The patch adds support for openvswitch to configure erspan v1 and v2. The OVS_TUNNEL_KEY_ATTR_ERSPAN_OPTS attr is added to uapi as a binary blob to support all ERSPAN v1 and v2 fields. Note that the previous commit "openvswitch: Add erspan tunnel support." was reverted since it was not designed properly. Signed-off-by: William Tu <u9012063@gmail.com> Acked-by: Pravin B Shelar <pshelar@ovn.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by William Tu
The patch adds a new uapi header file, erspan.h, and moves 'struct erspan_metadata' from the internal erspan.h into it. Signed-off-by: William Tu <u9012063@gmail.com> Acked-by: Pravin B Shelar <pshelar@ovn.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by William Tu
Originally the erspan fields are defined as a group in a __be16 field, using a mask and offset to access each field. This is more costly due to the ntohs/htons calls. The patch changes it to use bitfields. Signed-off-by: William Tu <u9012063@gmail.com> Acked-by: Pravin B Shelar <pshelar@ovn.org> Signed-off-by: David S. Miller <davem@davemloft.net>
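An illustrative before/after of the two layout styles; the struct and field names are made up for the sketch and are not the actual erspan header definitions.

    #include <asm/byteorder.h>
    #include <linux/types.h>

    /* old style: one big-endian halfword accessed with masks and shifts,
     * so every read or write needs an ntohs()/htons()
     */
    struct hdr_old {
        __be16 ver_vlan;                /* ver:4 | vlan:12 */
    };
    #define HDR_OLD_VER(x)  ((ntohs((x)->ver_vlan) & 0xf000) >> 12)

    /* new style: endian-aware bitfields, no byte swapping per access */
    struct hdr_new {
    #if defined(__LITTLE_ENDIAN_BITFIELD)
        __u8    vlan_upper:4,
                ver:4;
        __u8    vlan:8;
    #elif defined(__BIG_ENDIAN_BITFIELD)
        __u8    ver:4,
                vlan_upper:4;
        __u8    vlan:8;
    #endif
    };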
-
Submitted by David S. Miller
Jakub Kicinski says: ==================== use tc_cls_can_offload_and_chain0() throughout the drivers This set makes all drivers use a new tc_cls_can_offload_and_chain0() helper, which will set extack in case the TC hw offload flag is disabled. I chose to keep the new helper, which also looks at the chain, but renamed it more appropriately. The rationale is that most drivers don't accept chains other than 0, and since we have to pass extack to the helper we may as well pass the entire struct tc_cls_common_offload and perform the most common checks. This code makes the assumption that type_data in the callback can be interpreted as struct tc_cls_common_offload, i.e. the real offload structure has the common part as its first member. This allows us to make the check once for all classifier types if the driver supports more than one. v1: - drop the type validation in nfp and netdevsim. v2: - reorder checks in patch 1; - split other changes from patch 1; - add the i40e patch in; - add one more test case - for chain 0 extack. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Jakub Kicinski
Make sure netdevsim doesn't allow offload of chains other than 0, and that it reports the expected extack message. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Jakub Kicinski
Drivers should not report errors when offload is not forced. Check stdout and stderr for familiar messages both with no skip flags and with skip_hw. Check for add, replace, and destroy. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Simon Horman <simon.horman@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Jakub Kicinski
Make use of tc_cls_can_offload_and_chain0() to set the extack msg in case the ethtool tc offload flag is not set or the chain is unsupported. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Simon Horman <simon.horman@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Jakub Kicinski
Make use of tc_cls_can_offload_and_chain0() to set the extack msg in case the ethtool tc offload flag is not set or the chain is unsupported. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Jakub Kicinski
Make use of tc_cls_can_offload_and_chain0() to set the extack msg in case the ethtool tc offload flag is not set or the chain is unsupported. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Simon Horman <simon.horman@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Jakub Kicinski
Make use of tc_cls_can_offload_and_chain0() to set the extack msg in case the ethtool tc offload flag is not set or the chain is unsupported. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Simon Horman <simon.horman@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Jakub Kicinski
Make use of tc_cls_can_offload_and_chain0() to set the extack msg in case the ethtool tc offload flag is not set or the chain is unsupported. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Simon Horman <simon.horman@netronome.com> Acked-by: Or Gerlitz <ogerlitz@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Jakub Kicinski
Make use of tc_cls_can_offload_and_chain0() to set the extack msg in case the ethtool tc offload flag is not set or the chain is unsupported. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Simon Horman <simon.horman@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Jakub Kicinski
Make use of tc_cls_can_offload_and_chain0() to set the extack msg in case the ethtool tc offload flag is not set or the chain is unsupported. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Simon Horman <simon.horman@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Jakub Kicinski
Make use of tc_cls_can_offload_and_chain0() to set the extack msg in case the ethtool tc offload flag is not set or the chain is unsupported. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Simon Horman <simon.horman@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Jakub Kicinski
Very few upstream drivers (mlxsw) seem to allow offload of chains other than 0. Save driver developers some typing and add a helper for checking both whether ethtool's TC offload flag is on and whether the chain is 0. This helper will set the extack appropriately in both error cases. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Simon Horman <simon.horman@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
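A sketch of how a driver-side block callback might use the helper under the 4.16-era TC offload API; the foo_* names and the flower handler are hypothetical, only tc_cls_can_offload_and_chain0() and the TC setup types are real.

    #include <linux/errno.h>
    #include <linux/netdevice.h>
    #include <net/pkt_cls.h>

    struct foo_priv {                   /* hypothetical driver state */
        struct net_device *netdev;
    };

    static int foo_setup_flower(struct foo_priv *priv,
                                struct tc_cls_flower_offload *f);

    static int foo_setup_tc_block_cb(enum tc_setup_type type, void *type_data,
                                     void *cb_priv)
    {
        struct foo_priv *priv = cb_priv;

        /* one call covers both checks (hw-tc-offload flag on, chain == 0)
         * and fills extack with a readable reason when it fails
         */
        if (!tc_cls_can_offload_and_chain0(priv->netdev, type_data))
            return -EOPNOTSUPP;

        switch (type) {
        case TC_SETUP_CLSFLOWER:
            return foo_setup_flower(priv, type_data);
        default:
            return -EOPNOTSUPP;
        }
    }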
-
Submitted by Rohit Visavalia
Issue found by checkpatch. Signed-off-by: Rohit Visavalia <rohit.visavalia@softnautics.com> Acked-by: Michal Kalderon <michal.kalderon@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Rohit Visavalia
Resolved the warning: "networking block comments don't use an empty /* line, use /* Comment...". Issue found by checkpatch. Signed-off-by: Rohit Visavalia <rohit.visavalia@softnautics.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by David S. Miller
Merge branch 'for-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/bluetooth/bluetooth-next Johan Hedberg says: ==================== pull request: bluetooth-next 2018-01-25 Here's one last bluetooth-next pull request for the 4.16 kernel: - Improved support for Intel controllers - New set_parity method for serdev (agreed with the maintainers to be taken through bluetooth-next) - Fix error path in hci_bcm (missing call to serdev close) - New ID for the BCM4343A0 UART controller Please let me know if there are any issues pulling. Thanks. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Ganesh Goudar
t4_wr_mbox_meat_timeout() can be called from both softirq context and process context, hence protect the mbox with spin_lock_bh() instead of a plain spin_lock(). Signed-off-by: Ganesh Goudar <ganeshgr@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
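A minimal sketch of the general locking rule this follows (names are illustrative): a lock that can be taken from both process context and softirq context must disable bottom halves while held, otherwise a softirq arriving on the same CPU could try to take the lock the interrupted code already holds and deadlock.

    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(mbox_lock);

    static void mbox_access(void)
    {
        /* _bh variant: bottom halves stay disabled for the critical
         * section, so a softirq cannot re-enter on this CPU
         */
        spin_lock_bh(&mbox_lock);
        /* ... program the mailbox ... */
        spin_unlock_bh(&mbox_lock);
    }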
-
Submitted by David Ahern
IPv6 allows routes to be installed when the device is not administratively up, and worse, it does not mark the route as LINKDOWN. IPv4 does not allow this, and there is really no reason for IPv6 to allow it, so check the flags and deny the route if the device is admin down. Signed-off-by: David Ahern <dsahern@gmail.com> Reviewed-by: Ido Schimmel <idosch@mellanox.com> Reviewed-by: Roopa Prabhu <roopa@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
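A sketch of the kind of check added at route-insert time, written as a standalone illustration rather than the exact fib6 code path:

    #include <linux/errno.h>
    #include <linux/netdevice.h>
    #include <linux/netlink.h>

    static int check_nexthop_dev_up(const struct net_device *dev,
                                    struct netlink_ext_ack *extack)
    {
        /* refuse the route if the nexthop device is administratively down */
        if (!(dev->flags & IFF_UP)) {
            NL_SET_ERR_MSG(extack, "Nexthop device is not up");
            return -ENETDOWN;
        }
        return 0;
    }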
-
Submitted by David S. Miller
Ursula Braun says: ==================== net/smc: more socket closing improvements These patches improve the smc behavior for abnormal socket closing. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Ursula Braun
If a problem is detected for at least one connection of a link group, the whole link group and all its connections are terminated. This patch adds a check for a healthy link group when trying to reserve a work request, and checks for healthy connections before starting a tx worker. Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Ursula Braun
If a new connection with a new rmb is added to a link group, its memory region is registered. If a link group is terminated, a pending registration requires a wake up. Also consolidate the setting of the tx_flag peer_conn_abort in smc_lgr_terminate(). Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Ursula Braun
Once a link group is created successfully, it stays alive for a certain time to service further connections that may be created. If one of the initialization steps for a new link group fails, the link group should not be reused by subsequent connections. Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Ursula Braun
If ib_post_send() fails, terminate all connections of this link group. Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Ursula Braun
A state transition from closing state SMC_PEERFINCLOSEWAIT to closing state SMC_APPFINCLOSEWAIT is not allowed. Once a closing indication from the peer has been received, the socket reaches state SMC_CLOSED. Receiving a peer_conn_abort just changes the state of the socket into one of the states SMC_PROCESSABORT or SMC_CLOSED; sending a peer_conn_abort occurs in smc_close_active() for state SMC_PROCESSABORT only. Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Ursula Braun
If an SMC socket is aborted, the tx worker should be cancelled. Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by David S. Miller
Edward Cree says: ==================== sfc: support PTP on 8000 and X2000 series NICs Starting from the 8000 series (Medford 1), SFC NICs can timestamp TX packets sent through an ordinary DMA queue, rather than through a special control-plane operation as on the 7000 series. Patches 2-8 implement support for this. The X2000 series (Medford 2) changes the format of timestamps from seconds + (2^27)ths of a second to seconds + quarter nanoseconds, and also changes the shift of the frequency adjustment for increased precision. Patches 9-12 implement support for these changes. Patch #1 is an unrelated fix for NAPI budget handling, needed in order for the TX completion changes in the later patches to apply cleanly. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
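A small arithmetic sketch converting the two minor-tick formats described above to nanoseconds (purely illustrative, not sfc driver code; the function names are made up):

    #include <linux/time64.h>
    #include <linux/types.h>

    /* Medford 1 style: minor ticks are 1/2^27 of a second */
    static u32 minor_s27_to_ns(u32 minor)
    {
        return (u32)(((u64)minor * NSEC_PER_SEC) >> 27);
    }

    /* Medford 2 (X2000) style: minor ticks are quarter nanoseconds */
    static u32 minor_qns_to_ns(u32 minor)
    {
        return minor >> 2;
    }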
-
Submitted by Laurence Evans
Support the increased-precision frequency adjustment format (FP44) used by Medford2 adapters. Signed-off-by: Laurence Evans <levans@solarflare.com> Signed-off-by: Edward Cree <ecree@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Edward Cree
The time_format that we stash in the PTP data structure is never referenced, so we can remove it. Instead, store the information needed to interpret sync event timestamps. Also rolls in a couple of other related minor PTP fixes. Based on patches by Bert Kenward <bkenward@solarflare.com> and Laurence Evans <levans@solarflare.com>. Signed-off-by: Edward Cree <ecree@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-