1. 24 Sep 2016, 1 commit
  2. 23 Jun 2016, 5 commits
  3. 18 May 2016, 1 commit
    • IB/core: Do not require CAP_NET_ADMIN for packet sniffing · e3b6d8cf
      Authored by Christoph Lameter
      In the Ethernet/TCP world, CAP_NET_RAW is sufficient to allow a program
      to listen to all incoming packets on a specific interface, and the
      higher CAP_NET_ADMIN is required to set the interface into promiscuous
      mode.  We want to emulate that same basic division of privilege in the
      RDMA stack, so when dealing with Raw Ethernet QPs, allow apps with
      CAP_NET_RAW to listen to all incoming flows (and direct them as they see
      fit in their own listen stream).  Do not require CAP_NET_ADMIN just to
      listen to traffic that is already incoming.  Reserve CAP_NET_ADMIN for
      attempts to set promiscuous mode; the split is sketched below.
      Signed-off-by: Christoph Lameter <cl@linux.com>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
      e3b6d8cf
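      A minimal sketch of that privilege split, assuming a hypothetical
      helper name and call site; capable(), CAP_NET_RAW, CAP_NET_ADMIN,
      and EPERM are the real kernel identifiers:

      #include <linux/capability.h>
      #include <linux/errno.h>

      /* Hypothetical helper: the division of privilege for Raw Ethernet
       * QP flow rules described in the commit message. */
      static int raw_eth_flow_perm(bool set_promiscuous)
      {
              /* Capturing traffic not addressed to us (promiscuous-style)
               * keeps the stronger requirement. */
              if (set_promiscuous && !capable(CAP_NET_ADMIN))
                      return -EPERM;

              /* Merely listening to flows already arriving only needs
               * CAP_NET_RAW, mirroring raw Ethernet/TCP sockets. */
              if (!capable(CAP_NET_RAW))
                      return -EPERM;

              return 0;
      }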
  4. 14 May 2016, 2 commits
  5. 22 Mar 2016, 2 commits
  6. 03 Mar 2016, 1 commit
    • IB/{core, mlx5}: Fix input len in vendor part of create_qp/srq · 3d943c9d
      Authored by Majd Dibbiny
      Currently, the inlen field of the vendor's part of the command
      doesn't match the command buffer. This happens because the inlen
      still accommodates ib_uverbs_cmd_hdr, which has already been
      deducted from the in buffer. This is problematic since the vendor
      function can be called either from the legacy verb (where the input
      length mismatches the actual length) or from the extended verb
      (where the length matches). The vendor has no idea which function
      called it and therefore has no way to know how the length variable
      should be treated.
      
      Fix this by aligning inlen to the correct length; the accounting is
      sketched below.
      
      All vendor drivers either assumed that inlen >= sizeof(vendor_uhw_cmd)
      or failed incorrectly (mlx5); the latter is fixed in this patch.
      
      Fixes: cfb5e088 ('IB/mlx5: Add CQE version 1 support to user QPs and SRQs')
      Signed-off-by: Majd Dibbiny <majd@mellanox.com>
      Reviewed-by: Matan Barak <matanb@mellanox.com>
      Reviewed-by: Haggai Eran <haggaie@mellanox.com>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
      3d943c9d
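      A sketch of the corrected accounting under that reading of the bug;
      struct ib_uverbs_cmd_hdr is the real uapi header, while the function
      and parameter names are illustrative:

      #include <linux/types.h>
      #include <rdma/ib_user_verbs.h>  /* struct ib_uverbs_cmd_hdr */

      /* On the legacy path, in_len counts the generic header plus the
       * common command struct plus the vendor payload; the vendor should
       * see only its own payload, so both prefixes are deducted. */
      static size_t vendor_inlen(size_t in_len, size_t common_cmd_size)
      {
              return in_len - sizeof(struct ib_uverbs_cmd_hdr)
                            - common_cmd_size;
      }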
  7. 02 Mar 2016, 1 commit
  8. 01 Mar 2016, 1 commit
    • IB/core: Add don't trap flag to flow creation · a3100a78
      Authored by Marina Varshaver
      The don't-trap flag (IB_FLOW_ATTR_FLAGS_DONT_TRAP) indicates that a
      QP will receive traffic, but will not steal it.
      
      When a packet matches a flow steering rule that was created with
      the don't-trap flag, the QPs assigned to this rule will get the
      packet, but matching will continue on to other equal/lower-priority
      rules, letting the QPs assigned to those rules receive the packet
      too.
      
      If a don't-trap rule and other rules have the same priority and
      match the same packet, the behavior is undefined.
      
      The don't-trap flag can't be set with the default rule types
      (IB_FLOW_ATTR_ALL_DEFAULT, IB_FLOW_ATTR_MC_DEFAULT): default rules
      have no rules after them, so don't-trap has no meaning there. A
      sketch of this validation follows the sign-offs below.
      Signed-off-by: Marina Varshaver <marinav@mellanox.com>
      Reviewed-by: Matan Barak <matanb@mellanox.com>
      Reviewed-by: Yishai Hadas <yishaih@mellanox.com>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
      a3100a78
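      A minimal sketch of that validation, assuming a hypothetical helper
      name; the enum values and the type/flags fields of struct
      ib_flow_attr are real:

      #include <rdma/ib_verbs.h>

      /* Hypothetical check: DONT_TRAP only makes sense where matching
       * can continue to further rules. */
      static bool dont_trap_flags_valid(const struct ib_flow_attr *attr)
      {
              if (!(attr->flags & IB_FLOW_ATTR_FLAGS_DONT_TRAP))
                      return true;

              /* Default rules have no rules after them, so continuing
               * the match is meaningless; reject the combination. */
              return attr->type != IB_FLOW_ATTR_ALL_DEFAULT &&
                     attr->type != IB_FLOW_ATTR_MC_DEFAULT;
      }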
  9. 24 Dec 2015, 3 commits
  10. 23 Dec 2015, 1 commit
  11. 08 Dec 2015, 2 commits
  12. 22 Oct 2015, 5 commits
  13. 08 Oct 2015, 1 commit
    • IB: split struct ib_send_wr · e622f2f4
      Authored by Christoph Hellwig
      This patch splits up struct ib_send_wr so that every non-trivial verb
      uses its own structure which embeds struct ib_send_wr (the pattern is
      sketched below). This dramatically shrinks the size of a WR for the
      most common operations:
      
      sizeof(struct ib_send_wr) (old):	96
      
      sizeof(struct ib_send_wr):		48
      sizeof(struct ib_rdma_wr):		64
      sizeof(struct ib_atomic_wr):		96
      sizeof(struct ib_ud_wr):		88
      sizeof(struct ib_fast_reg_wr):		88
      sizeof(struct ib_bind_mw_wr):		96
      sizeof(struct ib_sig_handover_wr):	80
      
      And with Sagi's pending MR rework the fast registration WR will also be
      down to a reasonable size:
      
      sizeof(struct ib_fastreg_wr):		64
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Bart Van Assche <bart.vanassche@sandisk.com> [srp, srpt]
      Reviewed-by: Chuck Lever <chuck.lever@oracle.com> [sunrpc]
      Tested-by: Haggai Eran <haggaie@mellanox.com>
      Tested-by: Sagi Grimberg <sagig@mellanox.com>
      Tested-by: Steve Wise <swise@opengridcomputing.com>
      e622f2f4
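      The embedding pattern, illustrated with the RDMA WR; this is roughly
      the shape the patch adds (abridged; the full definitions live in
      include/rdma/ib_verbs.h):

      struct ib_rdma_wr {
              struct ib_send_wr wr;      /* common head, first member */
              u64               remote_addr;
              u32               rkey;
      };

      /* Drivers downcast from the common WR on the posting path. */
      static inline struct ib_rdma_wr *rdma_wr(struct ib_send_wr *wr)
      {
              return container_of(wr, struct ib_rdma_wr, wr);
      }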
  14. 04 Sep 2015, 1 commit
  15. 31 Aug 2015, 4 commits
  16. 13 Jun 2015, 4 commits
  17. 19 Feb 2015, 2 commits
  18. 18 Feb 2015, 1 commit
  19. 06 Feb 2015, 1 commit
  20. 16 Dec 2014, 1 commit
    • IB/core: Implement support for MMU notifiers regarding on demand paging regions · 882214e2
      Authored by Haggai Eran
      * Add an interval tree implementation for ODP umems. Create an
        interval tree for each ucontext (including a count of the number of
        ODP MRs in this context, semaphore, etc.), and register ODP umems in
        the interval tree.
      * Add MMU notifiers handling functions, using the interval tree to
        notify only the relevant umems and underlying MRs.
      * Register to receive MMU notifier events from the MM subsystem upon
        ODP MR registration (and unregister accordingly).
      * Add a completion object to synchronize the destruction of ODP umems.
      * Add a mechanism to abort page faults when there is a concurrent
        invalidation.
      
      The way we synchronize between concurrent invalidations and page
      faults is by keeping a counter of currently running invalidations and
      a sequence number that is incremented whenever an invalidation is
      caught. The page-fault code checks the counter and also verifies that
      the sequence number hasn't progressed before it updates the umem's
      page tables. This is similar to what the kvm module does; the
      handshake is sketched below.
      
      In order to prevent the case where we register a umem in the middle of
      an ongoing notifier, we also keep a per-ucontext counter of the total
      number of active mmu notifiers. We only enable new umems once all the
      running notifiers complete.
      Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
      Signed-off-by: Shachar Raindel <raindel@mellanox.com>
      Signed-off-by: Haggai Eran <haggaie@mellanox.com>
      Signed-off-by: Yuval Dagan <yuvalda@mellanox.com>
      Signed-off-by: Roland Dreier <roland@purestorage.com>
      882214e2
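      A hedged sketch of that kvm-style handshake; the struct, field, and
      function names here are hypothetical, not the ones the patch uses:

      #include <linux/atomic.h>

      struct odp_mn_state {
              atomic_t      notifier_count; /* invalidations in flight */
              unsigned long notifier_seq;   /* bumped per invalidation */
      };

      /* The fault path samples notifier_seq, builds its page list, then
       * calls this under the umem lock before committing the page tables;
       * a true result means the fault must be retried. */
      static bool odp_fault_must_retry(struct odp_mn_state *s,
                                       unsigned long saved_seq)
      {
              if (atomic_read(&s->notifier_count))
                      return true;  /* an invalidation is running now */
              if (s->notifier_seq != saved_seq)
                      return true;  /* one completed since we sampled */
              return false;
      }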