1. 04 May 2017 (22 commits)
  2. 03 May 2017 (1 commit)
    • qed*: Fix issues in the ptp filter config implementation. · 8d3f87d8
      Authored by sudarsana.kalluru@cavium.com
      The PTP hardware filter configuration that the driver performs for a
      given user-requested config is incorrect for some of the PTP modes.
      The following changes are needed in the config-filter implementation
      (a standalone sketch of the register writes follows this entry):
       1. NIG_REG_TX_PTP_EN register - bits 0/1/2 respectively enable
          TimeSync, "V1 frame format support" and "V2 frame format support"
          on the TX side. Set the associated bits based on the user request.
       2. The ptp4l application fails to operate in Peer Delay mode. Two
          changes are needed to fix this:
          a. The driver should enable (set to 0) the DA #1-related bits for
             the IPv4, IPv6 and MAC destination addresses in these registers:
               NIG_REG_TX_LLH_PTP_RULE_MASK
               NIG_REG_LLH_PTP_RULE_MASK
          b. NIG_REG_LLH_PTP_PARAM_MASK/NIG_REG_TX_LLH_PTP_PARAM_MASK should
             be set to 0x0 in all modes.
      Signed-off-by: Sudarsana Reddy Kalluru <Sudarsana.Kalluru@cavium.com>
      Signed-off-by: Yuval Mintz <Yuval.Mintz@cavium.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      8d3f87d8
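
      A standalone C sketch of the register programming described above.
      The reg_wr() helper, the bit macros and the 0x3ffe mask value are
      illustrative stand-ins, not the qed driver's real accessors or
      constants; only the register names come from the commit message.

      #include <stdint.h>
      #include <stdio.h>

      /* Illustrative stand-in for the driver's register-write accessor. */
      static void reg_wr(const char *reg, uint32_t val)
      {
              printf("%-30s <= 0x%08x\n", reg, (unsigned int)val);
      }

      /* NIG_REG_TX_PTP_EN bits 0/1/2: TimeSync / V1 / V2 (per the text). */
      #define PTP_EN_TIMESYNC (1u << 0)
      #define PTP_EN_V1       (1u << 1)
      #define PTP_EN_V2       (1u << 2)

      static void ptp_config_filters(int want_v1, int want_v2)
      {
              uint32_t tx_en = PTP_EN_TIMESYNC;

              if (want_v1)
                      tx_en |= PTP_EN_V1;
              if (want_v2)
                      tx_en |= PTP_EN_V2;
              /* Change 1: set the TX-side enable bits per the user request. */
              reg_wr("NIG_REG_TX_PTP_EN", tx_en);

              /*
               * Change 2a: clear (enable) the DA #1-related bits for the
               * IPv4/IPv6/MAC destination addresses in both rule masks;
               * 0x3ffe is a hypothetical resulting mask value.
               */
              reg_wr("NIG_REG_LLH_PTP_RULE_MASK", 0x3ffe);
              reg_wr("NIG_REG_TX_LLH_PTP_RULE_MASK", 0x3ffe);

              /* Change 2b: the param masks are 0x0 in all modes. */
              reg_wr("NIG_REG_LLH_PTP_PARAM_MASK", 0x0);
              reg_wr("NIG_REG_TX_LLH_PTP_PARAM_MASK", 0x0);
      }

      int main(void)
      {
              ptp_config_filters(/*want_v1=*/0, /*want_v2=*/1);
              return 0;
      }
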
  3. 02 May 2017 (3 commits)
  4. 01 May 2017 (4 commits)
    • qed: output the DPM status and WID count · 20b1bd96
      Authored by Ram Amrani
      Report to the RDMA driver whether DPM mode is enabled or disabled in
      the HW and, if it is enabled, how many WIDs it supports (a hedged
      sketch of such an interface follows this entry).
      Signed-off-by: Ram Amrani <Ram.Amrani@cavium.com>
      Signed-off-by: Yuval Mintz <Yuval.Mintz@cavium.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      20b1bd96
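
      A hedged sketch of what such an output interface could look like. The
      struct and function names (rdma_caps, query_dpm_caps) and the WID
      count are hypothetical illustrations, not qed's actual API; only the
      pairing of a DPM-enabled flag with a WID count comes from the commit
      message.

      #include <stdbool.h>
      #include <stdio.h>

      /* Hypothetical capability block handed from qed to the RDMA driver. */
      struct rdma_caps {
              bool dpm_enabled;       /* is DPM mode enabled in the HW? */
              unsigned int wid_count; /* WIDs supported; valid when enabled */
      };

      /* Illustrative query; real code would read the HW/PF state instead. */
      static void query_dpm_caps(struct rdma_caps *caps)
      {
              caps->dpm_enabled = true;                     /* pretend HW says on */
              caps->wid_count = caps->dpm_enabled ? 16 : 0; /* made-up count */
      }

      int main(void)
      {
              struct rdma_caps caps;

              query_dpm_caps(&caps);
              printf("DPM %s, WIDs: %u\n",
                     caps.dpm_enabled ? "enabled" : "disabled", caps.wid_count);
              return 0;
      }
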
    • xdp: propagate extended ack to XDP setup · ddf9f970
      Authored by Jakub Kicinski
      Drivers usually have a number of restrictions for running XDP -
      the most common being buffer sizes, LRO and the number of rings.
      Even though some drivers try to be helpful and print error messages,
      experience shows that users rarely consult the kernel logs on
      netlink errors.  Try to use the new extended ack mechanism to carry
      the message back to user space (a simplified sketch follows this
      entry).
      Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      Acked-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      ddf9f970
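
      A simplified standalone sketch of the idea: rather than only printing
      to the kernel log, the driver's XDP setup path records a
      human-readable message in the extended-ack object that netlink
      carries back to user space. The xdp_extack struct and the driver
      limit below are stand-ins for the kernel's struct netlink_ext_ack
      and a real driver's restrictions.

      #include <errno.h>
      #include <stdio.h>

      /* Simplified stand-in for the kernel's struct netlink_ext_ack. */
      struct xdp_extack {
              const char *msg;
      };

      /* Hypothetical driver limit, for illustration only. */
      #define DRV_MAX_XDP_MTU 3072

      static int drv_xdp_setup(int mtu, int lro_enabled,
                               struct xdp_extack *extack)
      {
              if (mtu > DRV_MAX_XDP_MTU) {
                      extack->msg = "MTU too large for XDP";
                      return -EINVAL;
              }
              if (lro_enabled) {
                      extack->msg = "LRO must be disabled before enabling XDP";
                      return -EINVAL;
              }
              return 0;
      }

      int main(void)
      {
              struct xdp_extack extack = { 0 };

              if (drv_xdp_setup(/*mtu=*/9000, /*lro_enabled=*/0, &extack))
                      /* netlink would carry this string back to the user */
                      fprintf(stderr, "XDP setup failed: %s\n", extack.msg);
              return 0;
      }
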
    • netlink: add NULL-friendly helper for setting extended ACK message · 45d9b378
      Authored by Jakub Kicinski
      As we propagate extended ack reporting throughout various paths in
      the kernel, it may be that the same function is called with the
      extended ack parameter passed as NULL.  One place where that happens
      is in drivers which have a centralized reconfiguration function
      called both from ndo callbacks and from ethtool_ops.  Add a new
      helper for setting the error message in such conditions (see the
      sketch after this entry).
      
      The existing helper is left as is to encourage propagating the ext
      ack fully wherever possible.  It also makes it clear in the code
      which messages may be lost due to the ext ack being NULL.
      Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      Acked-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      45d9b378
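
      A minimal sketch of the two flavours of setter. The macro names here
      are hypothetical stand-ins (the kernel's real helpers live in
      include/linux/netlink.h); the point is the NULL check in the second
      variant, which lets a shared reconfiguration path run whether or not
      the caller had an extack to pass.

      #include <stdio.h>

      /* Simplified stand-in for the kernel's struct netlink_ext_ack. */
      struct ext_ack {
              const char *msg;
      };

      /* Strict setter: assumes a non-NULL extack (the common case). */
      #define SET_ERR_MSG(extack, s) ((extack)->msg = (s))

      /*
       * NULL-friendly variant (hypothetical name): safe for centralized
       * reconfiguration helpers reached both from paths that carry an
       * extack and from ones (e.g. ethtool_ops) that do not.  A NULL
       * extack simply means the message is dropped.
       */
      #define SET_ERR_MSG_SAFE(extack, s)          \
              do {                                 \
                      if (extack)                  \
                              (extack)->msg = (s); \
              } while (0)

      static int reconfigure(struct ext_ack *extack)
      {
              SET_ERR_MSG_SAFE(extack, "ring count not supported");
              return -1;
      }

      int main(void)
      {
              struct ext_ack ea = { 0 };

              reconfigure(NULL); /* ethtool-style caller: message is lost */
              reconfigure(&ea);  /* netlink caller: message reaches userspace */
              printf("%s\n", ea.msg);
              return 0;
      }
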
    • mm, zone_device: Replace {get, put}_zone_device_page() with a single reference to fix pmem crash · 71389703
      Authored by Dan Williams
      The x86 conversion to the generic GUP code included a small change which causes
      crashes and data corruption in the pmem code - not good.
      
      The root cause is that the /dev/pmem driver code implicitly relies on
      the x86 get_user_pages() implementation doing a get_page() on the page
      refcount, because get_page() does a get_zone_device_page(), which
      properly refcounts pmem's separate page struct arrays that are not
      present among the regular page structures. (The pmem driver does this
      because it can cover huge memory areas.)
      
      But the x86 conversion to the generic GUP code changed the get_page() to
      page_cache_get_speculative() which is faster but doesn't do the
      get_zone_device_page() call the pmem code relies on.
      
      One way to solve the regression would be to change the generic GUP code to use
      get_page(), but that would slow things down a bit and punish other generic-GUP
      using architectures for an x86-ism they did not care about. (Arguably the pmem
      driver was probably not working reliably for them: but nvdimm is an Intel
      feature, so non-x86 exposure is probably still limited.)
      
      So restructure the pmem code's interface with the MM instead: get rid
      of the get/put_zone_device_page() distinction, integrate
      put_zone_device_page() into __put_page() and restructure the pmem
      completion-wait and teardown machinery:
      
      Kirill points out that the calls to {get,put}_dev_pagemap() can be
      removed from the mm fast path if we take a single get_dev_pagemap()
      reference to signify that the page is alive and use the final put of the
      page to drop that reference.
      
      This does require some care to make sure that any waits for the
      percpu_ref to drop to zero occur *after* devm_memremap_pages_release(),
      since it now maintains its own elevated reference.
      
      This speeds things up while also making the pmem refcounting more
      robust going forward (a toy model of the new lifetime scheme follows
      this entry).
      Suggested-by: Kirill Shutemov <kirill.shutemov@linux.intel.com>
      Tested-by: Kirill Shutemov <kirill.shutemov@linux.intel.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Jérôme Glisse <jglisse@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-mm@kvack.org
      Link: http://lkml.kernel.org/r/149339998297.24933.1129582806028305912.stgit@dwillia2-desk3.amr.corp.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      71389703
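
      A toy model of the new lifetime scheme, assuming plain counters in
      place of the kernel's percpu_ref: a single get_dev_pagemap()-style
      reference marks the pages alive, the fast path never touches the
      pagemap refcount, and the final page put drops the reference that
      teardown waits on.

      #include <assert.h>
      #include <stdio.h>

      /* Toy stand-in for struct dev_pagemap; the percpu_ref is an int. */
      struct pagemap {
              int refs;
      };

      static void get_pagemap(struct pagemap *p) { p->refs++; }
      static void put_pagemap(struct pagemap *p) { p->refs--; }

      /*
       * New scheme: the final put of the last ZONE_DEVICE page drops the
       * single "pages are alive" reference, instead of every page get/put
       * touching the pagemap refcount on the GUP fast path.
       */
      static void final_page_put(struct pagemap *p)
      {
              put_pagemap(p);
      }

      int main(void)
      {
              struct pagemap pgmap = { .refs = 0 };

              get_pagemap(&pgmap);    /* setup: one ref marks pages alive */
              /* ... GUP fast path runs without touching pgmap.refs ... */
              final_page_put(&pgmap); /* last page freed: the ref drops */

              /* teardown may only wait for zero after the release ran */
              assert(pgmap.refs == 0);
              printf("safe to tear down the pmem mapping\n");
              return 0;
      }
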
  5. 30 April 2017 (1 commit)
    • net/mlx5e: Update neighbour 'used' state using HW flow rules counters · f6dfb4c3
      Authored by Hadar Hen Zion
      When IP tunnel encapsulation rules are offloaded, the kernel can't see
      the traffic of the offloaded flow.  The neighbour for the IP tunnel
      destination of the offloaded flow can mistakenly become STALE and be
      deleted by the kernel, since its 'used' value is never updated.
      
      To make sure that a neighbour which is used by the HW won't become
      STALE, we proactively update the neighbour's 'used' value every
      DELAY_PROBE_TIME period, whenever packets were matched and counted by
      the HW for one of the tunnel encap flows related to this neighbour
      (see the sketch after this entry).
      
      The periodic task that updates the used neighbours is scheduled when a
      tunnel encap rule is successfully offloaded into HW and keeps re-scheduling
      itself as long as the representor's neighbours list isn't empty.
      
      Add, remove, lookup and status-change operations done on the
      representor's neighbours list or the neighbour hash entry's encaps
      list are all serialized by the RTNL lock.
      Signed-off-by: Hadar Hen Zion <hadarh@mellanox.com>
      Reviewed-by: Or Gerlitz <ogerlitz@mellanox.com>
      Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
      f6dfb4c3
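
      A standalone sketch of the periodic 'used' refresh described above.
      The names (neigh_entry, hw_flow_counter, neigh_used_update) are
      illustrative, not mlx5e's real symbols; the logic - compare the HW
      flow counter against its last snapshot and refresh 'used' only when
      it moved - follows the commit text.

      #include <stdint.h>
      #include <stdio.h>

      /* Simplified neighbour entry with offloaded tunnel encap flows. */
      struct neigh_entry {
              const char *name;
              uint64_t last_hw_packets; /* HW flow-counter value last seen */
              uint64_t used_jiffies;    /* stand-in for the 'used' stamp */
      };

      /* Illustrative stubs for the HW counter query and the clock. */
      static uint64_t hw_flow_counter(const struct neigh_entry *n)
      {
              (void)n;
              return 42; /* pretend the HW matched packets since last time */
      }

      static uint64_t jiffies_now(void) { return 1000; }

      /*
       * Runs every DELAY_PROBE_TIME: if the HW counted new packets for any
       * encap flow behind this neighbour, refresh 'used' so the entry does
       * not go STALE while traffic still flows through the offloaded path.
       */
      static void neigh_used_update(struct neigh_entry *n)
      {
              uint64_t pkts = hw_flow_counter(n);

              if (pkts != n->last_hw_packets) {
                      n->last_hw_packets = pkts;
                      n->used_jiffies = jiffies_now();
                      printf("%s: refreshed 'used'\n", n->name);
              }
      }

      int main(void)
      {
              struct neigh_entry n = { .name = "tunnel-peer" };

              /* the real task re-schedules itself while the list is non-empty */
              neigh_used_update(&n);
              return 0;
      }
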
  6. 29 April 2017 (1 commit)
  7. 28 April 2017 (6 commits)
  8. 27 April 2017 (2 commits)