1. 30 Aug 2013, 2 commits
  2. 28 Aug 2013, 4 commits
  3. 23 Aug 2013, 2 commits
    • sfc: Cleanup Falcon-arch simple MAC filter state · 964e6135
      Committed by Ben Hutchings
      On Falcon we implement MAC filtering requested by the stack using the
      MAC wrapper's single unicast filter and multicast hash filter.  Siena
      is very similar, though MAC configuration is mediated by the MC.
      
      Since MCDI operations may sleep, reconfiguration is deferred from
      ndo_set_rx_mode to a work item.  However, it still updates the private
      variables describing the filter state synchronously.  Contrary to
      comments, the later use of these variables is not protected using the
      address lock, resulting in race conditions.
      
      Move the state update to a new function
      efx_farch_filter_sync_rx_mode() and make the Falcon-arch MAC
      configuration functions call that, so that its use is consistently
      serialised by the mac_lock.
      
      Invert and rename the promiscuous flag to the more accurate
      unicast_filter, and comment that both this and multicast_hash are
      not used on EF10.
      Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
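      A minimal user-space C sketch of the locking pattern described in the
      commit above, assuming hypothetical names (struct filter_state,
      filter_sync_rx_mode, mac_reconfigure_work are not the driver's own):
      the state update and the hardware write both happen under the same
      lock, so no path touches the shared state without it.

      #include <pthread.h>
      #include <stdbool.h>
      #include <stdint.h>

      /* Hypothetical model of the shared filter state guarded by mac_lock. */
      struct filter_state {
          pthread_mutex_t mac_lock;
          bool unicast_filter;        /* replaces the inverted 'promiscuous' flag */
          uint32_t multicast_hash[8]; /* multicast hash filter bits */
      };

      /* Called only with mac_lock held: recompute the filter state that the
       * MAC configuration code will then push to hardware. */
      static void filter_sync_rx_mode(struct filter_state *st)
      {
          st->unicast_filter = true;
          st->multicast_hash[0] |= 1u;
      }

      /* Deferred work item: state update and hardware write happen back to
       * back under the lock, so readers never see a half-updated state,
       * unlike an update done synchronously in the set_rx_mode path. */
      static void mac_reconfigure_work(struct filter_state *st)
      {
          pthread_mutex_lock(&st->mac_lock);
          filter_sync_rx_mode(st);
          /* ... write unicast_filter / multicast_hash to the MAC here ... */
          pthread_mutex_unlock(&st->mac_lock);
      }

      int main(void)
      {
          struct filter_state st = { .unicast_filter = false };
          pthread_mutex_init(&st.mac_lock, NULL);
          mac_reconfigure_work(&st);
          return 0;
      }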
    • sfc: Make most filter operations NIC-type-specific · add72477
      Committed by Ben Hutchings
      Aside from accelerated RFS, there is almost nothing that can be shared
      between the filter table implementations for the Falcon architecture
      and EF10.
      
      Move the few shared functions into efx.c and rx.c and the rest into
      farch.c.  Introduce efx_nic_type operations for the implementation and
      inline wrapper functions that call these.
      Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
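      A self-contained C sketch of the dispatch pattern this commit
      introduces, with hypothetical names (struct nic_type_ops,
      farch_filter_insert): per-NIC-type behaviour lives in a table of
      function pointers, and a thin inline wrapper keeps call sites generic.

      #include <stdio.h>

      struct efx_dev;  /* hypothetical device structure */

      /* Hypothetical per-NIC-type operations table. */
      struct nic_type_ops {
          int (*filter_insert)(struct efx_dev *dev, unsigned int id);
      };

      struct efx_dev {
          const struct nic_type_ops *ops;
      };

      /* Inline wrapper: callers stay generic while the NIC type chosen at
       * probe time supplies the implementation. */
      static inline int efx_filter_insert(struct efx_dev *dev, unsigned int id)
      {
          return dev->ops->filter_insert(dev, id);
      }

      /* A Falcon-architecture implementation would live in its own file. */
      static int farch_filter_insert(struct efx_dev *dev, unsigned int id)
      {
          (void)dev;
          printf("farch: insert filter %u\n", id);
          return 0;
      }

      static const struct nic_type_ops falcon_ops = {
          .filter_insert = farch_filter_insert,
      };

      int main(void)
      {
          struct efx_dev dev = { .ops = &falcon_ops };
          return efx_filter_insert(&dev, 42);
      }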
  4. 22 Aug 2013, 8 commits
    • sfc: Do not assume efx_nic_type::ev_fini is idempotent · be3fc09c
      Committed by Ben Hutchings
      efx_fini_eventq() needs to be idempotent but EF10 firmware is
      picky about queue states.
      Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
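      One common way to stop relying on a teardown path being idempotent is
      to guard it with a state flag, sketched below in plain C with
      hypothetical names (struct eventq, eventq_fini); the hardware call
      then runs at most once even if fini is invoked twice.

      #include <stdbool.h>
      #include <stdio.h>

      struct eventq {
          bool initialised;   /* set by init, cleared by fini */
      };

      static void hw_fini_evq(struct eventq *q)
      {
          /* Model of the firmware call that must not be issued twice. */
          printf("tearing down event queue %p\n", (void *)q);
      }

      static void eventq_fini(struct eventq *q)
      {
          if (!q->initialised)        /* second call is a harmless no-op */
              return;
          hw_fini_evq(q);
          q->initialised = false;
      }

      int main(void)
      {
          struct eventq q = { .initialised = true };
          eventq_fini(&q);
          eventq_fini(&q);            /* safe: the guard makes fini idempotent */
          return 0;
      }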
    • sfc: Get rid of per-NIC-type phys_addr_channels and mem_map_size · b105798f
      Committed by Ben Hutchings
      EF10 functions don't have a fixed BAR size, and the minimum is not
      large enough for all the queues we might want to allocate.  We have to
      find out the BAR size at run-time, and therefore phys_addr_channels
      and mem_map_size cannot be defined per-NIC-type.
      
      Change efx_nic_type::mem_map_size to a function pointer which is
      called to find the wanted memory map size (before probe).
      
      Replace efx_nic_type::phys_addr_channels with efx_nic::max_channels,
      to be initialised by the probe function.
      Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
    • sfc: Move and rename Falcon/Siena common NIC operations · 86094f7f
      Committed by Ben Hutchings
      Add efx_nic_type operations for the many efx_nic functions that need
      to be implemented differently on EF10.  For now, change most of the
      existing efx_nic_*() functions into inline wrappers.  As a later step,
      we may be able to improve branch prediction for operations used on the
      fast path by copying the pointers into each queue/channel structure.
      
      Move the Falcon/Siena implementations to new file farch.c and rename
      the functions and static data to use a prefix of 'efx_farch_'.
      
      Move efx_may_push_tx_desc() to nic.h, as the EF10 TX code will also
      use it.
      Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
    • sfc: Refactor queue teardown sequence to allow for EF10 flush behaviour · e42c3d85
      Committed by Ben Hutchings
      Currently efx_stop_datapath() will try to flush our DMA queues (if DMA
      is enabled), then finalise software and hardware state for each queue.
      However, for EF10 we must ask the MC to finalise each queue, which
      implicitly starts flushing it, and then wait for the flush events.
      We therefore need to delegate more of this to the NIC type.
      
      Combine all the hardware operations into a new NIC-type operation
      efx_nic_type::fini_dmaq, and call this before tearing down the
      software state and buffers for all the DMA queues.
      Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
    • sfc: Remove bogus call to efx_release_tx_buffers() · 501a248c
      Committed by Ben Hutchings
      efx_unregister_netdev() should not call efx_release_tx_buffers()
      directly, as it is already done when closing the device:
      efx_net_stop() -> efx_stop_all() -> efx_stop_datapath() ->
      efx_fini_tx_queue() -> efx_release_tx_buffers().
      
      (This was presumably a workaround for a race between efx_stop_all()
      and the data path that has since been properly fixed.)
      Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
    • sfc: Stop RX refill before flushing RX queues · d8aec745
      Committed by Ben Hutchings
      rx_queue::enabled guards refill, so rename it to reflect that.  Clear
      it at the start of the queue teardown process rather than waiting for
      the RX queue to be flushed.
      Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
    • sfc: Rework IRQ enable/disable · d8291187
      Committed by Ben Hutchings
      There are many problems with the current efx_stop_interrupts() and
      efx_start_interrupts():
      
      1. On Siena, it is unsafe to disable the master IRQ enable bit
      (DRV_INT_EN_KER) while any IRQ sources are enabled.
      
      2. On EF10 there is no master IRQ enable bit, so we cannot expect to
      defer IRQs without tearing down event queues.  (Though I don't think
      we will need to keep any event queues around while the device is down,
      as we do for VFDI on Siena.)
      
      3. synchronize_irq() only waits for a running IRQ handler to finish,
      not for any propagation through IRQ controllers.  Therefore an IRQ may
      still be received and handled after efx_stop_interrupts() returns.
      IRQ handlers can then race with channel reallocation.
      
      To fix this:
      
      a. Introduce a software IRQ enable flag.  So long as this is clear,
      IRQ handlers will only acknowledge IRQs and not touch the channel
      structures.
      
      b. Define a new struct efx_msi_context as the context for MSIs.  This
      is never reallocated and is sufficient to find the software enable
      flag and the channel structure.  It also includes the channel/IRQ
      name, which was previously separated out as it must also not be
      reallocated.
      
      c. Split efx_{start,stop}_interrupts() into
      efx_{,soft_}{enable,disable}_interrupts().  The 'soft' functions
      don't touch the hardware master enable flag (if it exists) and don't
      reinitialise or tear down channels with the keep_eventq flag set.
      Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
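      A user-space C model of points (a) and (b) above, with hypothetical
      names (struct msi_context, msi_handler): the IRQ context object is
      never reallocated and carries only what the handler needs, and the
      handler bails out early while the software enable flag is clear.

      #include <stdbool.h>
      #include <stddef.h>
      #include <stdio.h>

      struct channel;  /* per-channel state that may be reallocated */

      /* Model of the MSI context: allocated once, never reallocated, so it
       * is always safe for the interrupt handler to dereference. */
      struct msi_context {
          char name[32];              /* IRQ name must also stay stable */
          bool irq_soft_enabled;      /* software IRQ enable flag */
          struct channel *channel;    /* may be swapped while soft-disabled */
      };

      /* Model of the MSI handler: acknowledge and return unless enabled. */
      static void msi_handler(struct msi_context *ctx)
      {
          if (!ctx->irq_soft_enabled) {
              /* only acknowledge the IRQ; do not touch channel structures */
              return;
          }
          printf("%s: servicing channel %p\n", ctx->name, (void *)ctx->channel);
      }

      int main(void)
      {
          struct msi_context ctx = { .name = "eth0-rx-0", .irq_soft_enabled = false };
          msi_handler(&ctx);          /* ignored: channels may be reallocated now */
          ctx.irq_soft_enabled = true;
          msi_handler(&ctx);          /* serviced normally */
          return 0;
      }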
    • sfc: Remove efx_process_channel_now() · 514bedbc
      Committed by Ben Hutchings
      efx_process_channel_now() is unneeded since self-tests can rely on
      normal NAPI polling.  Remove it and all calls to it.
      
      efx_channel::work_pending and efx_channel_processed() are also
      unneeded (the latter being the same as efx_nic_eventq_read_ack()).
      Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
  5. 25 Jun 2013, 2 commits
    • sfc: Fix IRQ cleanup in case of a probe failure · 1899c111
      Committed by Ben Hutchings
      The lifetime of an irq_cpu_rmap is odd: we have to allocate it before
      installing IRQ handlers and free it before removing the IRQ handlers.
      As a result of this asymmetry, it was omitted from some failure paths.
      
      On another failure path, we could try to remove IRQ handlers we
      had not yet installed.
      
      Move the irq_cpu_rmap allocation and freeing alongside IRQ handler
      installation and removal, in efx_nic_{init,fini}_interrupts().
      Count the number of IRQ handlers successfully installed and only
      remove those on the failure path.
      Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
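      A hedged C sketch of the error-path pattern described above, with
      hypothetical names (fake_request_irq stands in for request_irq):
      installation counts how many handlers were actually set up, and the
      failure path tears down only that many.

      #include <stdio.h>

      #define N_IRQS 4

      /* Stand-ins for request_irq()/free_irq(); the third request fails. */
      static int fake_request_irq(int i) { return (i == 2) ? -1 : 0; }
      static void fake_free_irq(int i)   { printf("freeing irq %d\n", i); }

      static int install_irq_handlers(int *n_installed)
      {
          int i;

          *n_installed = 0;
          for (i = 0; i < N_IRQS; i++) {
              if (fake_request_irq(i) != 0)
                  goto fail;
              (*n_installed)++;       /* remember how far we got */
          }
          return 0;

      fail:
          /* Unwind only the handlers that were actually installed. */
          while (--i >= 0)
              fake_free_irq(i);
          *n_installed = 0;
          return -1;
      }

      int main(void)
      {
          int n = 0;
          return install_irq_handlers(&n) ? 1 : 0;
      }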
    • sfc: Fix EEH with legacy interrupts. · b28405b0
      Committed by Alexandre Rames
      PCI legacy interrupts are level-triggered, and we cannot mask them
      on an isolated device.  Instead, disable the IRQ at the controller
      until we have recovered.
      Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
  6. 20 Jun 2013, 1 commit
  7. 29 May 2013, 1 commit
  8. 15 May 2013, 2 commits
    • sfc: Reduce RX scatter buffer size, and reduce alignment if appropriate · 950c54df
      Committed by Ben Hutchings
      efx_start_datapath() asserts that we can fit 2 RX scatter buffers plus
      a software structure, each appropriately aligned, into a single page.
      Where L1_CACHE_BYTES == 256 and PAGE_SIZE == 4096, which is the case
      on s390, this assertion fails.
      
      The current scatter buffer size is also not a multiple of 64 or 128,
      which are more common cache line sizes.  If we can make both the start
      and end of a scatter buffer cache-aligned, this will reduce the need
      for read-modify-write operations on inter-processor links.
      
      Fix the alignment by reducing EFX_RX_USR_BUF_SIZE to 2048 - 256 ==
      1792.  (We could use 2048 - L1_CACHE_BYTES, but EFX_RX_USR_BUF_SIZE
      also affects user-level networking where a larger amount of
      housekeeping data may be needed.  Although this version of the driver
      does not support user-level networking, I prefer to keep scattering
      behaviour consistent with the out-of-tree version.)
      
      This still doesn't fix the s390 build because like most architectures
      it has NET_IP_ALIGN == 2.  When NET_IP_ALIGN != 0 we cannot achieve
      cache line alignment at either the start or end of a scatter buffer,
      so there is actually no point in padding the buffers to a multiple of
      the cache line size.  All we need is 4-byte alignment of the network
      header, so do that.
      
      Adjust the assertions accordingly.
      Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
      Reported-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
      Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
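      The arithmetic behind the new buffer size, checked by a tiny C program
      under the values stated in the commit (PAGE_SIZE 4096, L1_CACHE_BYTES
      256, buffer size 2048 - 256 = 1792); struct rx_page_state is a
      hypothetical stand-in for the per-page software structure. Two aligned
      scatter buffers plus that structure must fit in one page.

      #include <assert.h>
      #include <stdio.h>

      #define PAGE_SIZE       4096
      #define L1_CACHE_BYTES  256                       /* worst case, e.g. s390 */
      #define EFX_RX_USR_BUF_SIZE (2048 - 256)          /* 1792 */
      #define ALIGN_UP(x, a)  (((x) + (a) - 1) & ~((a) - 1))

      struct rx_page_state { unsigned long dma_addr; }; /* hypothetical */

      int main(void)
      {
          /* Each buffer is padded out to a whole number of cache lines. */
          unsigned int buf_stride = ALIGN_UP(EFX_RX_USR_BUF_SIZE, L1_CACHE_BYTES);
          unsigned int state_size = ALIGN_UP(sizeof(struct rx_page_state),
                                             L1_CACHE_BYTES);
          unsigned int used = state_size + 2 * buf_stride;

          printf("stride %u, state %u, used %u of %u\n",
                 buf_stride, state_size, used, PAGE_SIZE);
          assert(used <= PAGE_SIZE);   /* 256 + 2 * 1792 = 3840 <= 4096 */
          return 0;
      }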
    • sfc: Delete EFX_PAGE_IP_ALIGN, equivalent to NET_IP_ALIGN · c14ff2ea
      Committed by Ben Hutchings
      The two architectures that define CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
      (powerpc and x86) now both define NET_IP_ALIGN as 0, so there is no
      need for this optimisation any more.
      Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  9. 18 Mar 2013, 1 commit
  10. 13 Mar 2013, 1 commit
  11. 08 Mar 2013, 7 commits
    • sfc: allocate more RX buffers per page · 1648a23f
      Committed by Daniel Pieczko
      Allocating 2 buffers per page is insanely inefficient when MTU is 1500
      and PAGE_SIZE is 64K (as it usually is on POWER).  Allocate as many as
      we can fit, and choose the refill batch size at run-time so that we
      still always use a whole page at once.
      
      [bwh: Fix loop condition to allow for compound pages; rebase]
      Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
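      A small C sketch of the run-time sizing described above, under assumed
      example values (the page, buffer and per-page-structure sizes below
      are illustrative, not the driver's): many buffers fit in a 64K page,
      and the refill batch is derived so each refill consumes whole pages.

      #include <stdio.h>

      /* Assumed values for illustration only. */
      #define PAGE_SIZE   65536u      /* typical on POWER */
      #define BUF_SIZE    2048u       /* per-buffer DMA area, already aligned */
      #define PAGE_STATE  64u         /* per-page software structure */

      int main(void)
      {
          /* Fit as many RX buffers into one page as possible... */
          unsigned int bufs_per_page = (PAGE_SIZE - PAGE_STATE) / BUF_SIZE;
          /* ...and refill in batches that always use whole pages at once. */
          unsigned int refill_batch = bufs_per_page * 4;  /* e.g. 4 pages */

          printf("%u buffers per page, refill batch of %u\n",
                 bufs_per_page, refill_batch);
          return 0;
      }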
    • sfc: reuse pages to avoid DMA mapping/unmapping costs · 2768935a
      Committed by Daniel Pieczko
      On POWER systems, DMA mapping/unmapping operations are very expensive.
      These changes reduce these costs by trying to reuse DMA mapped pages.
      
      After all the buffers associated with a page have been processed and
      passed up, the page is placed into a ring (if there is room).  For
      each page that is required for a refill operation, a page in the ring
      is examined to determine if its page count has fallen to 1, ie. the
      kernel has released its reference to these packets.  If this is the
      case, the page can be immediately added back into the RX descriptor
      ring, without having to re-map it for DMA.
      
      If the kernel is still holding a reference to this page, it is removed
      from the ring and unmapped for DMA.  Then a new page, which can
      immediately be used by RX buffers in the descriptor ring, is allocated
      and DMA mapped.
      
      The time a page needs to spend in the recycle ring before the kernel
      has released its page references is based on the number of buffers
      that use this page.  As large pages can hold more RX buffers, the RX
      recycle ring can be shorter.  This reduces memory usage on POWER
      systems, while maintaining the performance gain achieved by recycling
      pages, following the driver change to pack more than two RX buffers
      into large pages.
      
      When an IOMMU is not present, the recycle ring can be small to reduce
      memory usage, since DMA mapping operations are inexpensive.
      
      With a small recycle ring, attempting to refill the descriptor queue
      with more buffers than the equivalent size of the recycle ring could
      ultimately lead to memory leaks if page entries in the recycle ring
      were overwritten.  To prevent this, the check to see if the recycle
      ring is full is changed to check if the next entry to be written is
      NULL.
      
      [bwh: Combine and rebase several commits so this is complete
       before the following buffer-packing changes.  Remove module
       parameter.]
      Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
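      A user-space C model of the recycle ring described above, with
      hypothetical names (struct page_model, recycle_page,
      get_page_for_refill) and malloc/free standing in for page allocation
      and DMA mapping: a page whose reference count has dropped back to 1 is
      reused directly, otherwise it is given up, and a full ring is detected
      by finding a non-NULL entry at the write position.

      #include <stdio.h>
      #include <stdlib.h>

      #define RING_SIZE 8             /* small ring; larger with an IOMMU */

      struct page_model {
          int refcount;               /* 1 == only the driver references it */
      };

      static struct page_model *ring[RING_SIZE];
      static unsigned int ring_head;  /* next slot to write */
      static unsigned int ring_tail;  /* next slot to read */

      /* Offer a used page back to the ring once its buffers are done. */
      static void recycle_page(struct page_model *page)
      {
          unsigned int slot = ring_head % RING_SIZE;

          if (ring[slot] != NULL) {   /* ring full: never overwrite (leak) */
              free(page);
              return;
          }
          ring[slot] = page;
          ring_head++;
      }

      /* Get a page for refill: reuse one if the kernel has let go of it. */
      static struct page_model *get_page_for_refill(void)
      {
          unsigned int slot = ring_tail % RING_SIZE;
          struct page_model *page = ring[slot];

          if (page != NULL) {
              ring[slot] = NULL;
              ring_tail++;
              if (page->refcount == 1)
                  return page;        /* still DMA-mapped in the real driver */
              free(page);             /* kernel still holds it: give it up */
          }
          /* Fall back to a fresh page (which would then be DMA-mapped). */
          page = calloc(1, sizeof(*page));
          if (page)
              page->refcount = 1;
          return page;
      }

      int main(void)
      {
          struct page_model *p = get_page_for_refill();  /* fresh page */
          recycle_page(p);                               /* into the ring */
          p = get_page_for_refill();                     /* reused page */
          free(p);
          return 0;
      }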
    • sfc: Enable RX DMA scattering where possible · 85740cdf
      Committed by Ben Hutchings
      Enable RX DMA scattering iff an RX buffer large enough for the current
      MTU will not fit into a single page and the NIC supports DMA
      scattering for kernel-mode RX queues.
      
      On Falcon and Siena, the RX_USR_BUF_SIZE field is used as the DMA
      limit for all RX queues with scatter enabled.  Set it to 1824,
      matching what Onload uses now.
      
      Maintain a statistic for frames truncated due to lack of descriptors
      (rx_nodesc_trunc).  This is distinct from rx_frm_trunc which may be
      incremented when scattering is disabled and implies an over-length
      frame.
      
      Whenever an MTU change causes scattering to be turned on or off,
      update filters that point to the PF queues, but leave others
      unchanged, as VF drivers assume scattering is off.
      
      Add n_frags parameters to various functions, and make them iterate:
      - efx_rx_packet()
      - efx_recycle_rx_buffers()
      - efx_rx_mk_skb()
      - efx_rx_deliver()
      
      Make efx_handle_rx_event() responsible for updating
      efx_rx_queue::removed_count.
      
      Change the RX pipeline state to a starting ring index and number of
      fragments, and make __efx_rx_packet() responsible for clearing it.
      
      Based on earlier versions by David Riddoch and Jon Cooper.
      Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
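      A minimal C model of carrying a (start index, fragment count) pair
      through the RX path, with hypothetical names (struct rx_pipeline,
      deliver_packet): the event handler records how many descriptors the
      scattered frame used, and delivery walks that many ring entries and
      then clears the state.

      #include <stdio.h>

      #define RING_SIZE 16

      /* Hypothetical RX pipeline state: where the scattered frame starts in
       * the descriptor ring and how many fragments it occupies. */
      struct rx_pipeline {
          unsigned int index;         /* starting ring index */
          unsigned int n_frags;       /* 0 means "nothing pending" */
      };

      static const unsigned int frag_len[RING_SIZE] = { 1792, 1792, 420 };

      /* Deliver one frame made of n_frags ring entries, then clear state. */
      static void deliver_packet(struct rx_pipeline *st)
      {
          unsigned int i, total = 0;

          for (i = 0; i < st->n_frags; i++)
              total += frag_len[(st->index + i) % RING_SIZE];
          printf("delivered %u-fragment frame, %u bytes\n", st->n_frags, total);
          st->n_frags = 0;            /* pipeline state cleared after delivery */
      }

      int main(void)
      {
          struct rx_pipeline st = { .index = 0, .n_frags = 3 };
          deliver_packet(&st);
          return 0;
      }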
    • sfc: Wrap __efx_rx_packet() with efx_rx_flush_packet() · ff734ef4
      Committed by Ben Hutchings
      The pipeline mechanism will need to change a bit for scattered
      packets.  Add a wrapper to insulate efx_process_channel() from this.
      Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
    • sfc: Properly distinguish RX buffer and DMA lengths · 272baeeb
      Committed by Ben Hutchings
      Replace efx_nic::rx_buffer_len with efx_nic::rx_dma_len, the maximum
      RX DMA length.
      Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
    • sfc: Add AER and EEH support for Siena · 626950db
      Committed by Alexandre Rames
      The Linux side of EEH is triggered by MMIO reads, but this
      driver's data path does not issue any MMIO reads (except in
      legacy interrupt mode).  Therefore add a monitor function
      to poll EEH periodically.
      
      When preparing to reset the device based on our own error
      detection, also poll EEH and defer to its recovery mechanism
      if appropriate.
      
      [bwh: Use a separate condition for the initial link poll; fix some
       style errors]
      Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
    • sfc: Remove rx_alloc_method SKB · 97d48a10
      Committed by Alexandre Rames
      [bwh: Remove more dead code, and make efx_ptp_rx() pull the data it
       needs into the header area.]
      Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
  12. 26 Feb 2013, 1 commit
    • sfc: Detach net device when stopping queues for reconfiguration · 29c69a48
      Committed by Ben Hutchings
      We must only ever stop TX queues when they are full or the net device
      is not 'ready' so far as the net core, and specifically the watchdog,
      is concerned.  Otherwise, the watchdog may fire *immediately* if no
      packets have been added to the queue in the last 5 seconds.
      
      The device is ready if all the following are true:
      
      (a) It has a qdisc
      (b) It is marked present
      (c) It is running
      (d) The link is reported up
      
      (a) and (c) are normally true, and must not be changed by a driver.
      (d) is under our control, but fake link changes may disturb userland.
      This leaves (b).  We already mark the device absent during reset
      and self-test, but we need to do the same during MTU changes and ring
      reallocation.  We don't need to do this when the device is brought
      down because then (c) is already false.
      Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
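      A user-space C model of condition (b) above, with hypothetical names
      (struct net_dev_model, change_mtu, watchdog); in the real kernel the
      'present' flag corresponds to what netif_device_detach() clears and
      netif_device_attach() sets around the reconfiguration.

      #include <stdbool.h>
      #include <stdio.h>
      #include <time.h>

      struct net_dev_model {
          bool present;               /* model of the device's 'present' bit */
          time_t last_tx;             /* when a packet was last queued */
      };

      static void watchdog(struct net_dev_model *dev)
      {
          /* The watchdog only fires for devices that are marked present. */
          if (dev->present && time(NULL) - dev->last_tx > 5)
              printf("watchdog: TX timed out!\n");
      }

      static void change_mtu(struct net_dev_model *dev)
      {
          dev->present = false;       /* detach: queues stopped, watchdog quiet */
          /* ... stop the datapath, resize rings, restart the datapath ... */
          dev->present = true;        /* attach again once queues can run */
      }

      int main(void)
      {
          struct net_dev_model dev = { .present = true, .last_tx = time(NULL) };
          change_mtu(&dev);           /* no spurious watchdog during the change */
          watchdog(&dev);
          return 0;
      }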
  13. 08 Dec 2012, 1 commit
  14. 04 Dec 2012, 1 commit
  15. 01 Dec 2012, 2 commits
  16. 06 Oct 2012, 1 commit
  17. 19 Sep 2012, 1 commit
  18. 08 Sep 2012, 2 commits