1. 14 Jul 2018, 2 commits
  2. 30 Jun 2018, 2 commits
  3. 13 Jun 2018, 2 commits
    • nfp: remove phys_port_name on flower's vNIC · fe06a64e
      Authored by Jakub Kicinski
      .ndo_get_phys_port_name was recently extended to support multi-vNIC
      FWs.  These are firmwares which can have more than one vNIC per PF
      without an associated port (e.g. Adaptive Buffer Management FW),
      therefore we need a way of distinguishing the vNICs.  Unfortunately,
      it's too late to make flower use the same naming.  Flower users may
      depend on .ndo_get_phys_port_name returning -EOPNOTSUPP; for example,
      the name udev gave the PF vNIC before the change was just the bare
      PCI device-based name, and after the change it would have 'nn0'
      appended.
      
      To ensure flower's vNIC doesn't have a phys_port_name attribute, add
      a flag to the vNIC struct and set it in the flower code (see the
      sketch after this entry).  New projects will not set the flag and
      will adhere to the naming scheme from the start.
      
      Fixes: 51c1df83 ("nfp: assign vNIC id as phys_port_name of vNICs which are not ports")
      Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com>
      Reviewed-by: Simon Horman <simon.horman@netronome.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      fe06a64e
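      
      A minimal sketch of the shape of this fix, assuming illustrative
      identifiers (vnic_no_name, nfp_net_get_phys_port_name and nn->id are
      assumptions here, not necessarily the driver's actual names):
      
          struct nfp_net {
                  /* ... existing vNIC state ... */
                  unsigned int vnic_no_name:1; /* flower: keep legacy udev name */
          };
      
          static int nfp_net_get_phys_port_name(struct net_device *netdev,
                                                char *name, size_t len)
          {
                  struct nfp_net *nn = netdev_priv(netdev);
      
                  /* Flower's vNIC opts out, so udev keeps the bare
                   * PCI device-based name it used before the change.
                   */
                  if (nn->vnic_no_name)
                          return -EOPNOTSUPP;
      
                  if (snprintf(name, len, "n%d", nn->id) >= len)
                          return -EINVAL;
                  return 0;
          }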
    • nfp: include all ring counters in interface stats · 29f534c4
      Authored by Jakub Kicinski
      We are gathering software statistics on a per-ring basis, and the
      .ndo_get_stats64 handler adds the rings up.  Unfortunately, we
      currently only add up the active rings, which means that if the user
      decreases the number of active rings, the statistics from the
      deactivated rings are no longer counted and the total interface
      statistics may go backwards.
      
      Always sum all possible rings; the stats are allocated statically
      for the maximum number of rings, so we don't have to worry about
      them being removed.  We could add the stats up when the user changes
      the ring count, but that seems unnecessary.  Adding up the inactive
      rings will be very quick, since no datapath will be touching them
      (see the sketch after this entry).
      
      Fixes: 164d1e9e ("nfp: add support for ethtool .set_channels")
      Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      29f534c4
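      
      A minimal sketch of the summing loop, assuming field names
      (max_r_vecs, r_vecs, rx_sync and the per-ring counters) that are
      illustrative rather than the driver's exact identifiers:
      
          static void nfp_net_get_stats64(struct net_device *netdev,
                                          struct rtnl_link_stats64 *stats)
          {
                  struct nfp_net *nn = netdev_priv(netdev);
                  unsigned int r, start;
      
                  /* Iterate to the static maximum, not the active ring
                   * count, so counters from deactivated rings keep
                   * contributing and totals never go backwards.
                   */
                  for (r = 0; r < nn->max_r_vecs; r++) {
                          struct nfp_net_r_vector *r_vec = &nn->r_vecs[r];
                          u64 pkts, bytes;
      
                          do {
                                  start = u64_stats_fetch_begin(&r_vec->rx_sync);
                                  pkts = r_vec->rx_pkts;
                                  bytes = r_vec->rx_bytes;
                          } while (u64_stats_fetch_retry(&r_vec->rx_sync, start));
      
                          stats->rx_packets += pkts;
                          stats->rx_bytes += bytes;
                  }
          }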
  4. 29 May 2018, 2 commits
  5. 24 May 2018, 1 commit
  6. 19 Apr 2018, 1 commit
  7. 04 Apr 2018, 1 commit
  8. 30 Mar 2018, 1 commit
  9. 08 Feb 2018, 3 commits
  10. 23 Jan 2018, 1 commit
  11. 20 Jan 2018, 5 commits
  12. 15 Jan 2018, 1 commit
  13. 11 Jan 2018, 1 commit
    • nfp: always unmask aux interrupts at init · fc233650
      Authored by Jakub Kicinski
      The link state and exception interrupts may be masked when we probe.
      The firmware should in theory prevent sending (and automasking)
      those interrupts while the device is disabled, but if my reading of
      the FW code is correct, there are firmwares out there with race
      conditions in this area.  The interrupts may also be masked if the
      previous driver which used the device was malfunctioning and we
      didn't load the FW (there is no other good way to comprehensively
      reset the PF).
      
      Note that the FW unmasks the data interrupts by itself when the vNIC
      is enabled; no such helpful operation is performed for the LSC/EXN
      interrupts.
      
      Always unmask the auxiliary interrupts after request_irq().  On the
      remove path, add the missing PCI write flush before free_irq() (see
      the sketch after this entry).
      
      Fixes: 4c352362 ("net: add driver for Netronome NFP4000/NFP6000 NIC VFs")
      Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      fc233650
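      
      A minimal sketch of both halves of the fix, assuming the driver's
      config-BAR accessors and symbols (nn_writeb(), nn_readb(),
      NFP_NET_CFG_ICR, NFP_NET_CFG_ICR_UNMASKED) behave as their names
      suggest; the read-back as a posted-write flush is the standard PCI
      idiom, not necessarily the exact code of the patch:
      
          static int nfp_net_aux_irq_request(struct nfp_net *nn,
                                             unsigned int vec_idx,
                                             unsigned int irq,
                                             irq_handler_t handler,
                                             const char *name)
          {
                  int err;
      
                  err = request_irq(irq, handler, 0, name, nn);
                  if (err)
                          return err;
      
                  /* FW only auto-unmasks the data interrupts; LSC/EXN may
                   * still be masked by a racy FW or a malfunctioning
                   * previous driver, so unmask them explicitly.
                   */
                  nn_writeb(nn, NFP_NET_CFG_ICR(vec_idx),
                            NFP_NET_CFG_ICR_UNMASKED);
                  return 0;
          }
      
          static void nfp_net_aux_irq_free(struct nfp_net *nn,
                                           unsigned int vec_idx,
                                           unsigned int irq)
          {
                  /* Read back to flush posted PCI writes before the IRQ
                   * is freed.
                   */
                  nn_readb(nn, NFP_NET_CFG_ICR(vec_idx));
                  free_irq(irq, nn);
          }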
  14. 10 Jan 2018, 3 commits
  15. 06 Jan 2018, 2 commits
  16. 18 Dec 2017, 1 commit
  17. 03 Dec 2017, 2 commits
  18. 16 Nov 2017, 1 commit
    • mm: remove __GFP_COLD · 453f85d4
      Authored by Mel Gorman
      As the page free path makes no distinction between cache-hot and
      cache-cold pages, there is no real useful ordering of pages in the
      free list that allocation requests can take advantage of.  Judging
      from the users of __GFP_COLD, it is likely that a number of them are
      the result of copying other sites instead of actually measuring the
      impact.  Remove the __GFP_COLD parameter, which simplifies a number
      of paths in the page allocator.
      
      This is potentially controversial, but bear in mind that the size of
      the per-cpu pagelists versus modern cache sizes means that the whole
      per-cpu list can often fit in the L3 cache.  Hence, there is only a
      potential benefit for microbenchmarks that alloc/free pages in a
      tight loop.  THP is even worse off: it has little or no chance of
      getting a cache-hot page, as the per-cpu list is bypassed and the
      zeroing of multiple pages will thrash the cache anyway.
      
      The truncate microbenchmarks are not shown, as this patch affects
      the allocation path and not the free path.  A page fault
      microbenchmark was tested, but it showed no significant difference,
      which is not surprising given that the __GFP_COLD branches are a
      minuscule percentage of the fault path (a before/after illustration
      of a call site follows this entry).
      
      Link: http://lkml.kernel.org/r/20171018075952.10627-9-mgorman@techsingularity.net
      Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      453f85d4
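      
      The call-site impact is mechanical; a hypothetical driver allocation
      before and after the removal (illustrative code, not a specific call
      site from the patch):
      
          /* Before: the caller asked for a "cache cold" page explicitly. */
          page = alloc_page(GFP_KERNEL | __GFP_COLD);
      
          /* After: the modifier is gone; a plain allocation is equivalent,
           * since the allocator no longer orders free pages by hotness.
           */
          page = alloc_page(GFP_KERNEL);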
  19. 05 Nov 2017, 2 commits
  20. 02 Nov 2017, 2 commits
  21. 27 Oct 2017, 1 commit
  22. 11 Oct 2017, 1 commit
  23. 27 Sep 2017, 2 commits
    • bpf, nfp: add meta data support · 65d88fd0
      Authored by Daniel Borkmann
      Implement support for transferring XDP meta data into the skb for
      the nfp driver: before calling into the program, xdp.data_meta
      points to xdp.data, and on program return with a pass verdict we
      call into skb_metadata_set() (see the sketch after this entry).
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: John Fastabend <john.fastabend@gmail.com>
      Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      65d88fd0
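      
      A simplified sketch of the driver-side pattern the commit describes
      (variable names are illustrative; bpf_prog_run_xdp() and
      skb_metadata_set() are the kernel helpers named above):
      
          xdp.data_hard_start = frame_start;
          xdp.data = pkt_start;
          xdp.data_meta = xdp.data;   /* meta supported, initially empty */
          xdp.data_end = pkt_start + pkt_len;
      
          act = bpf_prog_run_xdp(xdp_prog, &xdp);
      
          if (act == XDP_PASS) {
                  unsigned int meta_len = xdp.data - xdp.data_meta;
      
                  /* ... build the skb around xdp.data ... */
                  if (meta_len)
                          skb_metadata_set(skb, meta_len);
          }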
    • bpf: add meta pointer for direct access · de8f3a83
      Authored by Daniel Borkmann
      This work enables generic transfer of metadata from XDP into the
      skb.  The basic idea is that we can make use of the fact that the
      resulting skb must be linear and already comes with a larger
      headroom for supporting bpf_xdp_adjust_head(), which mangles
      xdp->data.  Here, we base our work on a similar principle and
      introduce a small helper bpf_xdp_adjust_meta() for adjusting a new
      pointer called xdp->data_meta.  Thus, the packet has a flexible and
      programmable room for meta data, followed by the actual packet data.
      struct xdp_buff is therefore laid out such that we first point to
      data_hard_start, then to data_meta, directly prepended to data,
      followed by data_end marking the end of the packet.
      bpf_xdp_adjust_head() takes into account whether meta data is
      already prepended and, if so, memmove()s it along with the given
      offset, provided there's enough room.
      
      xdp->data_meta is optional and programs are not required to use it.
      The rationale is that when we process the packet in XDP (e.g. as a
      DoS filter), we can push further meta data along with it for the
      XDP_PASS case, and give the guarantee that a clsact ingress BPF
      program on the same device can pick this up for further
      post-processing.  Since we work with an skb there, we can also set
      skb->mark, skb->priority or other skb meta data out of BPF; having
      this scratch space generic and programmable allows for more
      flexibility than defining a direct 1:1 transfer of potentially new
      XDP members into the skb (it's also more efficient, as we don't
      need to initialize/handle each such new member).  The facility also
      works together with GRO aggregation.  The scratch space at the head
      of the packet can be a multiple of 4 bytes, up to 32 bytes large.
      Drivers not yet supporting xdp->data_meta can simply be set up with
      xdp->data_meta as xdp->data + 1, as bpf_xdp_adjust_meta() will
      detect this and bail out, such that the subsequent match against
      xdp->data for later access is guaranteed to fail.
      
      The verifier treats xdp->data_meta/xdp->data the same way as we
      treat xdp->data/xdp->data_end pointer comparisons.  The requirement
      for doing the compare against xdp->data is that it hasn't been
      modified from the original address we got from ctx access.  It may,
      however, already have a range marking from prior successful
      xdp->data/xdp->data_end pointer comparisons (see the program sketch
      after this entry).
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: John Fastabend <john.fastabend@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      de8f3a83
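      
      A sketch of how a program might use the helper, with a hypothetical
      struct meta_info (bpf_xdp_adjust_meta() and the ctx fields are as
      described above; everything else is illustrative):
      
          #include <linux/bpf.h>
          #include <bpf/bpf_helpers.h>
      
          struct meta_info {
                  __u32 mark;
          };
      
          SEC("xdp")
          int xdp_prepend_meta(struct xdp_md *ctx)
          {
                  struct meta_info *meta;
                  void *data;
      
                  /* Grow the meta area in front of the packet; the size
                   * must be a multiple of 4 bytes, at most 32 bytes.
                   */
                  if (bpf_xdp_adjust_meta(ctx, -(int)sizeof(*meta)))
                          return XDP_ABORTED;
      
                  data = (void *)(long)ctx->data;
                  meta = (void *)(long)ctx->data_meta;
      
                  /* The verifier demands this bounds check against
                   * ctx->data before the meta area may be accessed.
                   */
                  if ((void *)(meta + 1) > data)
                          return XDP_ABORTED;
      
                  meta->mark = 42; /* e.g. picked up by a clsact program */
                  return XDP_PASS;
          }
      
          char _license[] SEC("license") = "GPL";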