- 04 March 2016, 1 commit
-
-
Submitted by John Fastabend

I added this check in setup_tc to multiple drivers:

    if (handle != TC_H_ROOT || tc->type != TC_SETUP_MQPRIO)

Unfortunately, restricting to TC_H_ROOT like this breaks the old instantiation of mqprio used to set up a hardware qdisc. This patch relaxes the test to check only the type, making it equivalent to the check before I broke it. With this the old instantiation continues to work.

A good smoke test is to set up mqprio with:

    # tc qdisc add dev eth4 root mqprio num_tc 8 \
        map 0 1 2 3 4 5 6 7 \
        queues 0@0 1@1 2@2 3@3 4@4 5@5 6@6 7@7

Fixes: e4c6734e ("net: rework ndo tc op to consume additional qdisc handle parameter")
Reported-by: Singh Krishneil <krishneil.k.singh@intel.com>
Reported-by: Jake Keller <jacob.e.keller@intel.com>
CC: Murali Karicheri <m-karicheri2@ti.com>
CC: Shradha Shah <sshah@solarflare.com>
CC: Or Gerlitz <ogerlitz@mellanox.com>
CC: Ariel Elior <ariel.elior@qlogic.com>
CC: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
CC: Bruce Allan <bruce.w.allan@intel.com>
CC: Jesse Brandeburg <jesse.brandeburg@intel.com>
CC: Don Skidmore <donald.c.skidmore@intel.com>
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
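
For illustration, a minimal sketch of the relaxed driver-side check (function and helper names here are hypothetical, not the exact driver code; the `tc` union member follows the 4.5-era tc_to_netdev layout):

    static int example_setup_tc(struct net_device *dev, u32 handle,
                                __be16 proto, struct tc_to_netdev *tc)
    {
            /* Over-strict version (rejected the legacy mqprio instantiation):
             *   if (handle != TC_H_ROOT || tc->type != TC_SETUP_MQPRIO)
             *           return -EINVAL;
             * Relaxed version: only the offload type is checked. */
            if (tc->type != TC_SETUP_MQPRIO)
                    return -EINVAL;

            return example_setup_mqprio(dev, tc->tc);   /* hypothetical helper */
    }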
-
- 17 February 2016, 2 commits
-
-
Submitted by John Fastabend

This patch updates setup_tc so we can pass additional parameters into the ndo op in a generic way. To do this we provide a structured union and a type flag. This lets each classifier and qdisc provide its own set of attributes without having to add new ndo ops or grow the signature of the callback.

Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Acked-by: Jamal Hadi Salim <jhs@mojatatu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
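
The operand introduced here looks roughly like the following sketch of the 4.5-era layout (later kernels extended the union, so exact members vary):

    /* Type discriminator plus union: each qdisc/classifier adds its own
     * member instead of a new ndo op or a wider callback signature. */
    struct tc_to_netdev {
            unsigned int type;              /* e.g. TC_SETUP_MQPRIO */
            union {
                    u8 tc;                  /* mqprio: number of traffic classes */
                    /* further offload types add their own pointers here */
            };
    };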
-
Submitted by John Fastabend

The ndo_setup_tc() op was added to support drivers offloading TX qdiscs, but only mqprio support was ever added, so we only ever passed the number of traffic classes to the driver. This patch generalizes the ndo_setup_tc op so that a handle can be provided to indicate whether the offload is for ingress or egress, or potentially even for child qdiscs.

CC: Murali Karicheri <m-karicheri2@ti.com>
CC: Shradha Shah <sshah@solarflare.com>
CC: Or Gerlitz <ogerlitz@mellanox.com>
CC: Ariel Elior <ariel.elior@qlogic.com>
CC: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
CC: Bruce Allan <bruce.w.allan@intel.com>
CC: Jesse Brandeburg <jesse.brandeburg@intel.com>
CC: Don Skidmore <donald.c.skidmore@intel.com>
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Acked-by: Jamal Hadi Salim <jhs@mojatatu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 02 December 2015, 1 commit
-
-
Submitted by Bert Kenward

The Solarflare 8000 series NIC will use a new TSO scheme. The current driver refuses to load if its expected TSO scheme is not available. Remove that check and instead make the TSO version a per-queue parameter.

Signed-off-by: Bert Kenward <bkenward@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 03 November 2015, 1 commit
-
-
Submitted by Martin Habets

When the IP stack passes SKBs, the sfc driver puts them into two different TX queues (called partners): one for checksummed packets and one for unchecksummed packets. If the SKB has xmit_more set, the driver will delay pushing the work to the NIC. When it later does decide to push the buffers, this patch ensures it also pushes the partner queue if that queue has any delayed work too. Before this fix the work in the partner queue could be left behind for a long time and cause a netdev watchdog.

Fixes: 70b33fb0 ("sfc: add support for skb->xmit_more")
Reported-by: Jianlin Shi <jishi@redhat.com>
Signed-off-by: Martin Habets <mhabets@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
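
A minimal sketch of the idea, with illustrative (not actual sfc) helper and field names:

    /* Before ringing the doorbell for this queue, also push any work that
     * xmit_more left pending on the partner queue, so it cannot sit there
     * until the netdev watchdog fires. */
    static void example_push_with_partner(struct efx_tx_queue *tx_queue)
    {
            struct efx_tx_queue *partner = example_partner_queue(tx_queue);

            if (partner && partner->xmit_more_available) {
                    partner->xmit_more_available = false;
                    example_nic_push_buffers(partner);
            }
            tx_queue->xmit_more_available = false;
            example_nic_push_buffers(tx_queue);
    }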
-
- 09 July 2015, 1 commit
-
-
Submitted by Peter Dunning

The BQL limit is updated each time we call netdev_tx_completed_queue(). Without this patch the BQL limit was updated for every TX event we saw, so the limit only grew enough to cover the data completed across two events, because the first event would not show that enough traffic had been processed between them. This was OK when interrupt moderation was off, but not when it was on, as more data has to be completed in a single interrupt. This patch changes the driver to report completions to BQL only once all the TX events in the interrupt have been processed.

Signed-off-by: Shradha Shah <sshah@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
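
Schematically, the change amounts to something like this (names other than netdev_tx_completed_queue() are illustrative):

    /* Accumulate completions over all TX events handled in one interrupt
     * and report them to BQL once, so the BQL limit can grow to cover the
     * full amount completed per interrupt under moderation. */
    unsigned int pkts = 0, bytes = 0;

    while (example_more_tx_events(channel))
            example_process_tx_event(channel, &pkts, &bytes);

    netdev_tx_completed_queue(tx_queue->core_txq, pkts, bytes);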
-
- 23 October 2014, 1 commit
-
-
Submitted by Jon Cooper

write_count and insert_count can wrap around, making a '>' comparison between them invalid.

Fixes: 70b33fb0 ("sfc: add support for skb->xmit_more")
Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
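
The wrap-safe pattern is to test the unsigned difference of the counters rather than comparing the raw values, e.g.:

    /* Valid even after insert_count or write_count wraps past UINT_MAX;
     * a direct "insert_count > write_count" is not. */
    static inline bool example_descs_unwritten(unsigned int insert_count,
                                               unsigned int write_count)
    {
            return insert_count - write_count != 0;
    }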
-
- 18 October 2014, 1 commit
-
-
Submitted by Edward Cree

Don't ring the doorbell, and don't do PIO. This will also prevent TX push, because there will be more than one buffer waiting when the doorbell is rung.

Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 10 September 2014, 1 commit
-
-
Submitted by Rick Jones

Convert the normal transmit completion path from dev_kfree_skb_any() to dev_consume_skb_any() to help keep dropped-packet profiling meaningful.

Signed-off-by: Rick Jones <rick.jones2@hp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
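
The distinction, in a two-line sketch:

    /* Successful completion: the skb was delivered, not dropped. */
    dev_consume_skb_any(skb);       /* instead of dev_kfree_skb_any(skb) */

    /* Error/drop paths keep dev_kfree_skb_any(), which counts as a drop. */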
-
- 30 July 2014, 1 commit
-
-
Submitted by Ben Hutchings

__iowrite64_copy() isn't quite the same as efx_memcpy_64(), but it looks close enough:

- The length is in units of qwords, not bytes
- It never byte-swaps, but that doesn't make a difference now as PIO is only enabled for x86_64
- It doesn't include any memory barriers, but that's OK as there is a barrier just before pushing the doorbell
- mlx4_en uses it for the same purpose

Compile-tested only.

Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Acked-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
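
A hedged usage sketch (piobuf, data and len are placeholders): the length argument is in quadwords, so a byte count must be converted at the call site, and the source should be padded to a multiple of 8 bytes.

    #include <linux/io.h>
    #include <linux/kernel.h>

    /* Copy len bytes (rounded up to whole qwords) into the PIO buffer. */
    __iowrite64_copy(piobuf, data, DIV_ROUND_UP(len, 8));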
-
- 18 July 2014, 1 commit
-
-
Submitted by Andrew Rybchenko

Implement per-channel software TX and RX packet counters, accessed as ethtool statistics. This allows cross-checking against the MAC statistics.

Signed-off-by: Shradha Shah <sshah@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 12 June 2014, 1 commit
-
-
Submitted by Jon Cooper

Fixes: ee45fd92 ("sfc: Use TX PIO for sufficiently small packets")

The Linux net driver uses memcpy_toio() to copy into the PIO buffers. Even on a 64-bit machine this causes 32-bit accesses to a write-combined memory region. There are hardware limitations that mean only naturally aligned 64-bit accesses are safe in all cases. Because the region is write-combined, two 32-bit accesses may be coalesced into a single 64-bit access that is not 64-bit aligned.

The solution was to open-code the memory copy routines using pointers and to enable PIO only for x86_64 machines. This has not been tested on platforms other than x86_64, because the patch disables the PIO feature elsewhere; it has been compile-tested on x86 to ensure that works.

The WARN_ON_ONCE() code in the previous version of this patch has been moved into the internal sfc debug driver, as the assertion was unnecessary in the upstream kernel code.

This bug fix applies to the v3.13 and v3.14 stable branches.

Signed-off-by: Shradha Shah <sshah@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
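
The open-coded copy boils down to something like this sketch (assuming an 8-byte-aligned destination and a length that is a multiple of 8, which the driver arranges; this is not the exact driver routine):

    /* x86_64-only: every store is a naturally aligned 64-bit access, so the
     * write-combining logic can never produce a misaligned 64-bit write. */
    static void example_memcpy_64(void __iomem *dest, const void *src,
                                  size_t len)
    {
            const u64 *s = src;
            u64 __iomem *d = dest;

            for (; len >= 8; len -= 8)
                    __raw_writeq(*s++, d++);
    }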
-
- 13 February 2014, 2 commits
-
-
Submitted by Ben Hutchings

Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: Shradha Shah <sshah@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Ben Hutchings

If CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS is defined then NET_IP_ALIGN will be defined as 0, so this macro is redundant.

Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: Shradha Shah <sshah@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 07 February 2014, 1 commit
-
-
Submitted by Paul Gortmaker

Commit ee45fd92 ("sfc: Use TX PIO for sufficiently small packets") introduced the following warning:

    drivers/net/ethernet/sfc/tx.c: In function 'efx_enqueue_skb':
    drivers/net/ethernet/sfc/tx.c:432:1: warning: label 'finish_packet' defined but not used

Stick the label inside the same #ifdef that the code which uses it is under. Note that this is only seen on arches that do not set ARCH_HAS_IOREMAP_WC, such as arm, mips, sparc, ..., as the others enable the write-combining code and hence use the label.

Cc: Jon Cooper <jcooper@solarflare.com>
Cc: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Acked-by: Shradha Shah <sshah@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
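
The shape of the fix, sketched (EFX_USE_PIO guards the driver's PIO code; the surrounding function names are illustrative):

    #ifdef EFX_USE_PIO
            if (example_try_pio(tx_queue, skb))
                    goto finish_packet;
    #endif
            /* ... normal DMA descriptor path ... */
    #ifdef EFX_USE_PIO
    finish_packet:          /* label compiled out together with its only user */
    #endif
            example_tx_may_push(tx_queue);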
-
- 01 November 2013, 1 commit
-
-
Submitted by Alexandre Rames

When using firmware-assisted TSO, we use a single DMA mapping for the linear area of a TSO skb. We still have to segment the super-packet and insert a descriptor containing the original headers before each segment of payload, so we can unmap the linear area only after the last segment is completed. The unmapping information for the linear area is therefore associated with the last header descriptor.

We calculate the DMA address to unmap from using the mapping length and the invariant that the end of the DMA mapping matches the end of the data referenced by the last descriptor. But this invariant is broken when there is TCP payload in the linear area. Fix this by adding and using an explicit dma_offset field.

Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
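
Conceptually, the unmap address is then derived from an explicit offset rather than inferred from the end of the data; a sketch (field names follow the description above but are illustrative):

    /* Unmap the whole linear-area mapping: walk back from the address the
     * last header descriptor records by the stored offset. */
    dma_unmap_single(&pci_dev->dev,
                     buffer->dma_addr - buffer->dma_offset,
                     buffer->unmap_len, DMA_TO_DEVICE);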
-
- 21 September 2013, 5 commits
-
-
Submitted by Jon Cooper

Sufficiently small linear packets can be copied into the PIO buffer with a single call to memcpy_toio(). Non-linear packets require an intermediate cache-line-sized buffer.

[bwh: I wrote the first version of this, but Jon did the hard work to handle non-linear packets.]

Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
-
Submitted by Ben Hutchings

Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
-
Submitted by Ben Hutchings

Try to allocate a segment of PIO buffer to each TX channel. If allocation fails, log an error but continue.

PIO buffers must be mapped separately from the NIC registers, with write-combining enabled. Where the host page size is 4K, we could potentially map each VI's registers and PIO buffer separately. However, this would add significant complexity, and we also need to support architectures such as POWER which have a greater page size. So make a single contiguous write-combining mapping after the uncacheable mapping, aligned to the host page size, and link PIO buffers there. Where necessary, allocate additional VIs within the write-combining mapping purely for access to PIO buffers.

Link all TX buffers to TX queues and the additional VIs in efx_ef10_dimension_resources(), and in efx_ef10_init_nic() after an MC reboot.

Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
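
The write-combining mapping itself is the standard ioremap_wc() pattern; a sketch (the offset and size names are illustrative):

    /* Uncacheable mapping for registers first, then a separate
     * write-combining mapping, page-aligned, for the PIO buffers. */
    wc_membase = ioremap_wc(membase_phys + uc_mem_map_size, wc_mem_map_size);
    if (!wc_membase)
            return -ENOMEM;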
-
Submitted by Ben Hutchings

Segmentation remains in the driver, but we generate option descriptors describing the required packet editing rather than making our own copies. Reduce tso_state::ipv4_id to 16 bits, so it doesn't overflow into the TCP_FLAGS field of the option descriptor.

Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
-
Submitted by Ben Hutchings

Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
-
- 30 August 2013, 2 commits
-
-
Submitted by Ben Hutchings

Update the dates for files that have been added to in 2012-2013. Drop the 'Solarstorm' brand name that's still lingering here.

Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
-
Submitted by Ben Hutchings

The TX path firmware for EF10 supports 'option descriptors' to control offloads and various other features. Add a flag and a field for these in struct efx_tx_buffer, and don't treat them as DMA descriptors on completion.

Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
-
- 28 August 2013, 1 commit
-
-
Submitted by Ben Hutchings

Add a counter for TX merged completion events. This is implemented in the common TX path, because the NIC event handlers only know how many descriptors were completed, not how many packets.

Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
-
- 22 August 2013, 2 commits
-
-
Submitted by Ben Hutchings

Currently efx_stop_datapath() will try to flush our DMA queues (if DMA is enabled), then finalise software and hardware state for each queue. However, for EF10 we must ask the MC to finalise each queue, which implicitly starts flushing it, and then wait for the flush events. We therefore need to delegate more of this to the NIC type.

Combine all the hardware operations into a new NIC-type operation efx_nic_type::fini_dmaq, and call this before tearing down the software state and buffers for all the DMA queues.

Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
-
Submitted by Ben Hutchings

Most call sites for efx_nic_alloc_buffer() are part of the probe or reconfiguration paths and can allocate with GFP_KERNEL. A few others should use GFP_NOIO (I think). Only one is in atomic context and must use the current GFP_ATOMIC.

Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
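
A sketch of what passing the allocation context down looks like (an approximation of the helper, not its exact body):

    /* Callers now choose the GFP flags appropriate to their context. */
    int efx_nic_alloc_buffer(struct efx_nic *efx, struct efx_buffer *buffer,
                             unsigned int len, gfp_t gfp_flags)
    {
            buffer->addr = dma_alloc_coherent(&efx->pci_dev->dev, len,
                                              &buffer->dma_addr, gfp_flags);
            if (!buffer->addr)
                    return -ENOMEM;
            buffer->len = len;
            memset(buffer->addr, 0, len);
            return 0;
    }

Probe and reconfigure call sites then pass GFP_KERNEL, while the single atomic caller keeps GFP_ATOMIC.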
-
- 19 September 2012, 1 commit
-
-
Submitted by Stuart Hodgson

Add PTP IEEE-1588 support and make it accessible via the PHC subsystem. This work is based on prior code by Andrew Jackson.

Signed-off-by: Stuart Hodgson <smhodgson@solarflare.com>
[bwh:
 - Add byte order conversion in efx_ptp_send_times()
 - Simplify conversion of PPS event times
 - Add the built-in vs module check to CONFIG_SFC_PTP dependencies]
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
-
- 25 August 2012, 5 commits
-
-
Submitted by Ben Hutchings

Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
-
Submitted by Ben Hutchings

We only use tso_state::full_packet_space to calculate the IPv4 tot_len or IPv6 payload_len, not to set tso_state::packet_space. Replace it with an ip_base_len field holding the value of tot_len or payload_len before the TCP payload is included, which is much more useful when constructing the new headers.

Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
-
Submitted by Ben Hutchings

TSO header buffers contain a control structure immediately followed by the packet headers, and are kept on a free list when not in use. This complicates buffer management and tends to result in cache read misses when we recycle such buffers (particularly if DMA-coherent memory requires caches to be disabled).

Replace the free list with a simple mapping by descriptor index. We know that there is always a payload descriptor between any two descriptors with TSO header buffers, so we can allocate only one such buffer for each two descriptors.

While we're at it, use a standard error code for allocation failure, not -1.

Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
-
Submitted by Ben Hutchings

We now have a definite upper bound on the number of descriptors per skb; use that to stop the queue when the next packet might not fit.

Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
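
In outline (field and constant names are illustrative, not the exact driver code):

    /* After queuing this skb: if a worst-case next skb might not fit,
     * stop the queue; the completion path restarts it once enough
     * descriptors have been freed. */
    fill_level = tx_queue->insert_count - tx_queue->old_read_count;
    if (fill_level + EXAMPLE_MAX_DESCS_PER_SKB > tx_queue->ring_size)
            netif_tx_stop_queue(tx_queue->core_txq);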
-
Submitted by Ben Hutchings

Add a flags field to struct efx_tx_buffer, replacing the continuation and map_single booleans. Since a single descriptor cannot be both a TSO header and the last descriptor for an skb, unionise efx_tx_buffer::{skb,tsoh} and add flags for the validity of these fields. Clear all flags in free buffers (whereas previously the continuation flag would be set).

Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
-
- 02 August 2012, 1 commit
-
-
Submitted by Ben Hutchings

Currently an skb requiring TSO may not fit within a minimum-size TX queue. The TX queue selected for the skb may stall and trigger the TX watchdog repeatedly (since the problem skb will be retried after the TX reset). This issue is designated as CVE-2012-3412.

Set the maximum number of TSO segments for our devices to 100. This should make no difference to behaviour unless the actual MSS is less than about 700. Increase the minimum TX queue size accordingly to allow for two worst-case skbs, so that there will definitely be space to add an skb after we wake a queue.

To avoid invalidating existing configurations, change efx_ethtool_set_ringparam() to fix up values that are too small rather than returning -EINVAL.

Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
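
The cap uses the per-device gso_max_segs limit on struct net_device; roughly:

    /* Guarantee a worst-case TSO skb fits in the minimum-size TX ring. */
    #define EFX_TSO_MAX_SEGS 100

    net_dev->gso_max_segs = EFX_TSO_MAX_SEGS;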
-
- 17 July 2012, 3 commits
-
-
Submitted by Ben Hutchings

There is nothing in the VLAN driver or core VLAN support that invalidates the TCP and IP header offsets.

Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
-
Submitted by Ben Hutchings

tso_state::packet_space is always set in tso_start_packet(); the value set in tso_start() is not used, and is also incorrect.

Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
-
Submitted by Ben Hutchings

Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
-
- 23 February 2012, 1 commit
-
-
Submitted by Ben Hutchings

Fix some indentation and line continuations.

Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
-
- 14 February 2012, 1 commit
-
-
Submitted by Ben Hutchings

The 'page size' for PCIe DMA, i.e. the alignment of boundaries at which DMA must be broken, is 4KB. Name this value EFX_PAGE_SIZE and use it in efx_max_tx_len(). Redefine EFX_BUF_SIZE as EFX_PAGE_SIZE, since its value is also a result of that requirement, and use it in efx_init_special_buffer().

Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
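
As a sketch of how the named constant is used (a simplified stand-in for efx_max_tx_len(), not the exact function):

    /* PCIe DMA must be broken at 4KB boundaries. */
    #define EFX_PAGE_SIZE   4096U

    /* Longest contiguous run a single TX descriptor may cover from
     * dma_addr without crossing an EFX_PAGE_SIZE boundary. */
    static inline unsigned int example_max_tx_len(dma_addr_t dma_addr)
    {
            return EFX_PAGE_SIZE - (dma_addr & (EFX_PAGE_SIZE - 1));
    }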
-
- 27 January 2012, 1 commit
-
-
Submitted by Ben Hutchings

The out-of-tree version of the sfc driver used to run a self-test on each device before registering it. Although this was never included in-tree, some functions still have checks for this special case, which is not really possible here.

Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
-
- 04 December 2011, 1 commit
-
-
Submitted by Thomas Meyer

The advantage of kcalloc is that it prevents integer overflows that could result from multiplying the number of elements by the element size, and it is also a bit nicer to read.

The semantic patch that makes this change is available at https://lkml.org/lkml/2011/11/25/107

Signed-off-by: Thomas Meyer <thomas@m3y3r.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
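
The pattern, for reference (variable names are illustrative):

    /* kcalloc() returns NULL if entries * sizeof(*buffer) would overflow,
     * whereas kmalloc(entries * sizeof(*buffer), ...) would silently wrap
     * and under-allocate. */
    buffer = kcalloc(entries, sizeof(*buffer), GFP_KERNEL);
    if (!buffer)
            return -ENOMEM;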
-