1. 09 Dec 2016, 1 commit
  2. 03 Dec 2016, 1 commit
  3. 30 Nov 2016, 1 commit
    • mlx4: give precise rx/tx bytes/packets counters · 40931b85
      Authored by Eric Dumazet
      mlx4 stats are chaotic because a deferred work queue is responsible
      for updating them every 250 ms.
      
      Even sampling stats once per second with "sar -n DEV 1" gives
      variations like the following:
      
      lpaa23:~# sar -n DEV 1 10 | grep eth0 | cut -c1-65
      07:39:22         eth0 146877.00 3265554.00   9467.15 4828168.50
      07:39:23         eth0 146587.00 3260329.00   9448.15 4820445.98
      07:39:24         eth0 146894.00 3259989.00   9468.55 4819943.26
      07:39:25         eth0 110368.00 2454497.00   7113.95 3629012.17  <<>>
      07:39:26         eth0 146563.00 3257502.00   9447.25 4816266.23
      07:39:27         eth0 145678.00 3258292.00   9389.79 4817414.39
      07:39:28         eth0 145268.00 3253171.00   9363.85 4809852.46
      07:39:29         eth0 146439.00 3262185.00   9438.97 4823172.48
      07:39:30         eth0 146758.00 3264175.00   9459.94 4826124.13
      07:39:31         eth0 146843.00 3256903.00   9465.44 4815381.97
      Average:         eth0 142827.50 3179259.70   9206.30 4700578.16
      
      This patch allows the rx/tx bytes/packets counters to be folded from
      the per-ring counters at the time stats are requested.
      
      We can now fetch stats every 1 ms if we want to check NIC behavior
      over a small time window. It also makes anomalies easier to detect.
      
      lpaa23:~# sar -n DEV 1 10 | grep eth0 | cut -c1-65
      07:42:50         eth0 142915.00 3177696.00   9212.06 4698270.42
      07:42:51         eth0 143741.00 3200232.00   9265.15 4731593.02
      07:42:52         eth0 142781.00 3171600.00   9202.92 4689260.16
      07:42:53         eth0 143835.00 3192932.00   9271.80 4720761.39
      07:42:54         eth0 141922.00 3165174.00   9147.64 4679759.21
      07:42:55         eth0 142993.00 3207038.00   9216.78 4741653.05
      07:42:56         eth0 141394.06 3154335.64   9113.85 4663731.73
      07:42:57         eth0 141850.00 3161202.00   9144.48 4673866.07
      07:42:58         eth0 143439.00 3180736.00   9246.05 4702755.35
      07:42:59         eth0 143501.00 3210992.00   9249.99 4747501.84
      Average:         eth0 142835.66 3182165.93   9206.98 4704874.08
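      
      A minimal sketch of folding per-ring counters at stats time (struct
      and field names are illustrative, not the driver's exact ones):
      
      	/* Sketch: sum the per-ring counters when .ndo_get_stats64 runs,
      	 * instead of reporting a snapshot that is up to 250 ms old. */
      	static void fold_stats_sketch(struct en_priv_sketch *priv,
      				      struct rtnl_link_stats64 *s)
      	{
      		int i;
      
      		for (i = 0; i < priv->rx_ring_num; i++) {
      			s->rx_packets += READ_ONCE(priv->rx_ring[i]->packets);
      			s->rx_bytes   += READ_ONCE(priv->rx_ring[i]->bytes);
      		}
      		for (i = 0; i < priv->tx_ring_num; i++) {
      			s->tx_packets += READ_ONCE(priv->tx_ring[i]->packets);
      			s->tx_bytes   += READ_ONCE(priv->tx_ring[i]->bytes);
      		}
      	}
      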
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Tariq Toukan <tariqt@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  4. 29 Nov 2016, 1 commit
    • Revert "net/mlx4_en: Avoid unregister_netdev at shutdown flow" · b4353708
      Authored by Tariq Toukan
      This reverts commit 9d769311.
      
      Using unregister_netdev in the shutdown flow prevents the netdev's
      ndos from being called and its freed resources from being accessed.
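      
      A minimal sketch of the restored pattern (names are illustrative; the
      actual mlx4 shutdown path is more involved):
      
      	/* Sketch: unregister the netdev from the PCI shutdown callback so
      	 * no ndo can later run against a half-torn-down device. */
      	static void shutdown_sketch(struct pci_dev *pdev)
      	{
      		struct net_device *dev = pci_get_drvdata(pdev);
      
      		rtnl_lock();
      		unregister_netdevice(dev);	/* rtnl must be held */
      		rtnl_unlock();
      	}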
      
      This fixes crashes like the following:
       Call Trace:
        [<ffffffff81587a6e>] dev_get_phys_port_id+0x1e/0x30
        [<ffffffff815a36ce>] rtnl_fill_ifinfo+0x4be/0xff0
        [<ffffffff815a53f3>] rtmsg_ifinfo_build_skb+0x73/0xe0
        [<ffffffff815a5476>] rtmsg_ifinfo.part.27+0x16/0x50
        [<ffffffff815a54c8>] rtmsg_ifinfo+0x18/0x20
        [<ffffffff8158a6c6>] netdev_state_change+0x46/0x50
        [<ffffffff815a5e78>] linkwatch_do_dev+0x38/0x50
        [<ffffffff815a6165>] __linkwatch_run_queue+0xf5/0x170
        [<ffffffff815a6205>] linkwatch_event+0x25/0x30
        [<ffffffff81099a82>] process_one_work+0x152/0x400
        [<ffffffff8109a325>] worker_thread+0x125/0x4b0
        [<ffffffff8109a200>] ? rescuer_thread+0x350/0x350
        [<ffffffff8109fc6a>] kthread+0xca/0xe0
        [<ffffffff8109fba0>] ? kthread_park+0x60/0x60
        [<ffffffff816a1285>] ret_from_fork+0x25/0x30
      
      Fixes: 9d769311 ("net/mlx4_en: Avoid unregister_netdev at shutdown flow")
      Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
      Reported-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
      Reported-by: Steve Wise <swise@opengridcomputing.com>
      Cc: Jiri Pirko <jiri@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  5. 28 Nov 2016, 1 commit
  6. 25 Nov 2016, 1 commit
    • mlx4: reorganize struct mlx4_en_tx_ring · e3f42f84
      Authored by Eric Dumazet
      The goal is to reorganize this critical structure to increase performance.
      
      ndo_start_xmit() should only dirty one cache line, and access as few
      cache lines as possible.
      
      Add an sp_ (Slow Path) prefix to fields that are not used in the fast
      path, to make the split explicit.
      
      After this patch, pahole reports a much better layout: all the fields
      needed by ndo_start_xmit() are packed into two cache lines instead
      of seven or eight:
      
      struct mlx4_en_tx_ring {
      	u32                        last_nr_txbb;         /*     0   0x4 */
      	u32                        cons;                 /*   0x4   0x4 */
      	long unsigned int          wake_queue;           /*   0x8   0x8 */
      	struct netdev_queue *      tx_queue;             /*  0x10   0x8 */
      	u32                        (*free_tx_desc)(struct mlx4_en_priv *, struct mlx4_en_tx_ring *, int, u8, u64, int); /*  0x18   0x8 */
      	struct mlx4_en_rx_ring *   recycle_ring;         /*  0x20   0x8 */
      
      	/* XXX 24 bytes hole, try to pack */
      
      	/* --- cacheline 1 boundary (64 bytes) --- */
      	u32                        prod;                 /*  0x40   0x4 */
      	unsigned int               tx_dropped;           /*  0x44   0x4 */
      	long unsigned int          bytes;                /*  0x48   0x8 */
      	long unsigned int          packets;              /*  0x50   0x8 */
      	long unsigned int          tx_csum;              /*  0x58   0x8 */
      	long unsigned int          tso_packets;          /*  0x60   0x8 */
      	long unsigned int          xmit_more;            /*  0x68   0x8 */
      	struct mlx4_bf             bf;                   /*  0x70  0x18 */
      	/* --- cacheline 2 boundary (128 bytes) was 8 bytes ago --- */
      	__be32                     doorbell_qpn;         /*  0x88   0x4 */
      	__be32                     mr_key;               /*  0x8c   0x4 */
      	u32                        size;                 /*  0x90   0x4 */
      	u32                        size_mask;            /*  0x94   0x4 */
      	u32                        full_size;            /*  0x98   0x4 */
      	u32                        buf_size;             /*  0x9c   0x4 */
      	void *                     buf;                  /*  0xa0   0x8 */
      	struct mlx4_en_tx_info *   tx_info;              /*  0xa8   0x8 */
      	int                        qpn;                  /*  0xb0   0x4 */
      	u8                         queue_index;          /*  0xb4   0x1 */
      	bool                       bf_enabled;           /*  0xb5   0x1 */
      	bool                       bf_alloced;           /*  0xb6   0x1 */
      	u8                         hwtstamp_tx_type;     /*  0xb7   0x1 */
      	u8 *                       bounce_buf;           /*  0xb8   0x8 */
      	/* --- cacheline 3 boundary (192 bytes) --- */
      	long unsigned int          queue_stopped;        /*  0xc0   0x8 */
      	struct mlx4_hwq_resources  sp_wqres;             /*  0xc8  0x58 */
      	/* --- cacheline 4 boundary (256 bytes) was 32 bytes ago --- */
      	struct mlx4_qp             sp_qp;                /* 0x120  0x30 */
      	/* --- cacheline 5 boundary (320 bytes) was 16 bytes ago --- */
      	struct mlx4_qp_context     sp_context;           /* 0x150  0xf8 */
      	/* --- cacheline 9 boundary (576 bytes) was 8 bytes ago --- */
      	cpumask_t                  sp_affinity_mask;     /* 0x248  0x20 */
      	enum mlx4_qp_state         sp_qp_state;          /* 0x268   0x4 */
      	u16                        sp_stride;            /* 0x26c   0x2 */
      	u16                        sp_cqn;               /* 0x26e   0x2 */
      
      	/* size: 640, cachelines: 10, members: 36 */
      	/* sum members: 600, holes: 1, sum holes: 24 */
      	/* padding: 16 */
      };
      
      Instead of this silly placement:
      
      struct mlx4_en_tx_ring {
      	u32                        last_nr_txbb;         /*     0   0x4 */
      	u32                        cons;                 /*   0x4   0x4 */
      	long unsigned int          wake_queue;           /*   0x8   0x8 */
      
      	/* XXX 48 bytes hole, try to pack */
      
      	/* --- cacheline 1 boundary (64 bytes) --- */
      	u32                        prod;                 /*  0x40   0x4 */
      
      	/* XXX 4 bytes hole, try to pack */
      
      	long unsigned int          bytes;                /*  0x48   0x8 */
      	long unsigned int          packets;              /*  0x50   0x8 */
      	long unsigned int          tx_csum;              /*  0x58   0x8 */
      	long unsigned int          tso_packets;          /*  0x60   0x8 */
      	long unsigned int          xmit_more;            /*  0x68   0x8 */
      	unsigned int               tx_dropped;           /*  0x70   0x4 */
      
      	/* XXX 4 bytes hole, try to pack */
      
      	struct mlx4_bf             bf;                   /*  0x78  0x18 */
      	/* --- cacheline 2 boundary (128 bytes) was 16 bytes ago --- */
      	long unsigned int          queue_stopped;        /*  0x90   0x8 */
      	cpumask_t                  affinity_mask;        /*  0x98  0x10 */
      	struct mlx4_qp             qp;                   /*  0xa8  0x30 */
      	/* --- cacheline 3 boundary (192 bytes) was 24 bytes ago --- */
      	struct mlx4_hwq_resources  wqres;                /*  0xd8  0x58 */
      	/* --- cacheline 4 boundary (256 bytes) was 48 bytes ago --- */
      	u32                        size;                 /* 0x130   0x4 */
      	u32                        size_mask;            /* 0x134   0x4 */
      	u16                        stride;               /* 0x138   0x2 */
      
      	/* XXX 2 bytes hole, try to pack */
      
      	u32                        full_size;            /* 0x13c   0x4 */
      	/* --- cacheline 5 boundary (320 bytes) --- */
      	u16                        cqn;                  /* 0x140   0x2 */
      
      	/* XXX 2 bytes hole, try to pack */
      
      	u32                        buf_size;             /* 0x144   0x4 */
      	__be32                     doorbell_qpn;         /* 0x148   0x4 */
      	__be32                     mr_key;               /* 0x14c   0x4 */
      	void *                     buf;                  /* 0x150   0x8 */
      	struct mlx4_en_tx_info *   tx_info;              /* 0x158   0x8 */
      	struct mlx4_en_rx_ring *   recycle_ring;         /* 0x160   0x8 */
      	u32                        (*free_tx_desc)(struct mlx4_en_priv *, struct mlx4_en_tx_ring *, int, u8, u64, int); /* 0x168   0x8 */
      	u8 *                       bounce_buf;           /* 0x170   0x8 */
      	struct mlx4_qp_context     context;              /* 0x178  0xf8 */
      	/* --- cacheline 9 boundary (576 bytes) was 48 bytes ago --- */
      	int                        qpn;                  /* 0x270   0x4 */
      	enum mlx4_qp_state         qp_state;             /* 0x274   0x4 */
      	u8                         queue_index;          /* 0x278   0x1 */
      	bool                       bf_enabled;           /* 0x279   0x1 */
      	bool                       bf_alloced;           /* 0x27a   0x1 */
      
      	/* XXX 5 bytes hole, try to pack */
      
      	/* --- cacheline 10 boundary (640 bytes) --- */
      	struct netdev_queue *      tx_queue;             /* 0x280   0x8 */
      	int                        hwtstamp_tx_type;     /* 0x288   0x4 */
      
      	/* size: 704, cachelines: 11, members: 36 */
      	/* sum members: 587, holes: 6, sum holes: 65 */
      	/* padding: 52 */
      };
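      
      The underlying technique, sketched with assumed field names (the
      24-byte hole before prod in the new layout above is consistent with
      the standard kernel tool for this, ____cacheline_aligned_in_smp):
      
      	/* Sketch: completion-path fields first, xmit-path fields forced
      	 * onto their own cache line, slow-path (sp_*) fields last. */
      	struct tx_ring_sketch {
      		/* cache line 0: dirtied in tx completion */
      		u32		last_nr_txbb;
      		u32		cons;
      
      		/* cache line 1: dirtied in ndo_start_xmit() */
      		u32		prod ____cacheline_aligned_in_smp;
      		unsigned long	bytes;
      		unsigned long	packets;
      
      		/* slow path: anything not touched per packet */
      		u16		sp_stride;
      		u16		sp_cqn;
      	};
      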
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  7. 24 Nov 2016, 1 commit
  8. 13 Nov 2016, 1 commit
  9. 10 Nov 2016, 1 commit
  10. 03 Nov 2016, 2 commits
    • net/mlx4_en: Add ethtool statistics for XDP cases · 15fca2c8
      Authored by Tariq Toukan
      XDP statistics are reported in ethtool, in total and per ring,
      as follows:
      - xdp_drop: the number of packets dropped by xdp.
      - xdp_tx: the number of packets forwarded by xdp.
      - xdp_tx_full: the number of times an xdp forward failed
      	due to a full tx xdp ring.
      
      In addition, packets that are dropped or forwarded by XDP are no
      longer accounted in the ring's rx_packets/rx_bytes, so those counters
      reflect only traffic passed to the stack.
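      
      A minimal sketch of the rx-path accounting (counter fields and the
      xmit helper are illustrative, not the driver's exact code):
      
      	switch (act) {
      	case XDP_TX:
      		if (mlx4_xmit_frame_sketch(ring, frame) == 0)
      			ring->xdp_tx++;		/* forwarded by xdp */
      		else
      			ring->xdp_tx_full++;	/* tx xdp ring was full */
      		break;
      	case XDP_DROP:
      		ring->xdp_drop++;		/* dropped by xdp */
      		break;
      	default: /* XDP_PASS */
      		ring->packets++;		/* only stack-bound traffic */
      		ring->bytes += length;
      		break;
      	}
      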
      Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net/mlx4_en: Refactor the XDP forwarding rings scheme · 67f8b1dc
      Authored by Tariq Toukan
      Manage the two types of TX rings separately: regular ones and XDP
      ones. When an XDP program is set, do not borrow regular TX rings and
      convert them into XDP ones; allocate new ones instead, unless we hit
      the max number of rings.
      This means that on systems with fewer cores we will not consume the
      existing TX rings for XDP while we are still within the TX ring limit.
      
      XDP TX ring counters are not shown in ethtool statistics.
      Instead, XDP counters will be added to the respective RX rings
      in a downstream patch.
      
      This has no performance implications.
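      
      A minimal sketch of the implied ring accounting (names and the
      MAX_TX_RINGS bound are assumptions):
      
      	/* Sketch: XDP TX rings are added on top of the regular TX rings,
      	 * one per RX ring, subject to a global cap. Illustrative only. */
      	int regular_tx_rings = num_tx_rings_p_up * num_up;
      	int xdp_tx_rings     = xdp_prog ? num_rx_rings : 0;
      
      	if (regular_tx_rings + xdp_tx_rings > MAX_TX_RINGS)
      		return -EINVAL;		/* hit the max number of rings */
      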
      Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  11. 30 Oct 2016, 3 commits
  12. 18 Oct 2016, 1 commit
  13. 24 Sep 2016, 3 commits
  14. 12 Sep 2016, 1 commit
  15. 07 Sep 2016, 1 commit
    • net/mlx4_en: protect ring->xdp_prog with rcu_read_lock · 326fe02d
      Authored by Brenden Blanco
      Depending on the preempt mode, the bpf_prog stored in xdp_prog may be
      freed despite the use of call_rcu inside bpf_prog_put. The situation is
      possible when running in PREEMPT_RCU=y mode, for instance, since the rcu
      callback for destroying the bpf prog can run even during the bh handling
      in the mlx4 rx path.
      
      Several options were considered before this patch was settled on:
      
      Add a napi_synchronize loop in mlx4_xdp_set, which would occur after all
      of the rings are updated with the new program.
      This approach has the disadvantage that as the number of rings
      increases, the speed of update will slow down significantly due to
      napi_synchronize's msleep(1).
      
      Add a new rcu_head in bpf_prog_aux, to be used by a new bpf_prog_put_bh.
      The action of the bpf_prog_put_bh would be to then call bpf_prog_put
      later. Those drivers that consume a bpf prog in a bh context (like mlx4)
      would then use the bpf_prog_put_bh instead when the ring is up. This has
      the problem of complexity, in maintaining proper refcnts and rcu lists,
      and would likely be harder to review. In addition, this approach to
      freeing must be exclusive with other frees of the bpf prog, for instance
      a _bh prog must not be referenced from a prog array that is consumed by
      a non-_bh prog.
      
      The placement of rcu_read_lock in this patch is functionally the same as
      putting an rcu_read_lock in napi_poll. Actually doing so could be a
      potentially controversial change, but would bring the implementation in
      line with sk_busy_loop (though of course the nature of those two paths
      is substantially different), and would also avoid future copy/paste
      problems with future supporters of XDP. Still, this patch does not take
      that opinionated option.
      
      Testing was done with kernels in either PREEMPT_RCU=y or
      CONFIG_PREEMPT_VOLUNTARY=y+PREEMPT_RCU=n modes, with neither exhibiting
      any drawback. With PREEMPT_RCU=n, the extra call to rcu_read_lock did
      not show up in the perf report whatsoever, and with PREEMPT_RCU=y the
      overhead of rcu_read_lock (according to perf) was the same before/after.
      In the rx path, rcu_read_lock is eventually called for every packet
      from netif_receive_skb_internal, so the napi poll call's rcu_read_lock
      is easily amortized.
      
      v2:
      Remove extra rcu_read_lock in mlx4_en_process_rx_cq body
      Annotate xdp_prog with __rcu, and convert all usages to rcu_assign or
      rcu_dereference[_protected] as appropriate.
      Add explicit mutex lock around rcu_assign instead of xchg loop.
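      
      The reader-side pattern the patch adopts in the rx path, sketched
      (surrounding code simplified; only the ring->xdp_prog field is from
      the patch itself):
      
      	/* Sketch: the program is dereferenced under rcu_read_lock, so
      	 * bpf_prog_put's call_rcu cannot free it while the rx path is
      	 * still running it. */
      	rcu_read_lock();
      	xdp_prog = rcu_dereference(ring->xdp_prog);
      	if (xdp_prog) {
      		u32 act = bpf_prog_run_xdp(xdp_prog, &xdp);
      
      		if (act != XDP_PASS)
      			goto consumed;	/* dropped or forwarded */
      	}
      	/* ... build the skb and pass the frame up the stack ... */
      consumed:
      	rcu_read_unlock();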
      
      Fixes: d576acf0 ("net/mlx4_en: add page recycle to prepare rx ring for tx support")
      Acked-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <alexei.starovoitov@gmail.com>
      Signed-off-by: Brenden Blanco <bblanco@plumgrid.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  16. 20 Jul 2016, 5 commits
    • net/mlx4_en: add xdp forwarding and data write support · 9ecc2d86
      Authored by Brenden Blanco
      A user will now be able to loop packets back out of the same port
      using a bpf program attached to the xdp hook. Updates to the packet
      contents from the bpf program are also supported.
      
      For the packet write feature to work, the rx buffers are now mapped as
      bidirectional when the page is allocated. This occurs only when the xdp
      hook is active.
      
      When the program returns a TX action, enqueue the packet directly to a
      dedicated tx ring, so as to completely avoid any locking. This requires
      the tx ring to be allocated 1:1 for each rx ring, as well as the tx
      completion running in the same softirq.
      
      Upon tx completion, this dedicated tx ring recycles pages directly
      back to the original rx ring without unmapping them. In a steady-state
      tx/drop workload, effectively zero page allocs/frees occur.
      
      In order to separate out the paths between free and recycle, a
      free_tx_desc func pointer is introduced that is optionally updated
      whenever recycle_ring is activated. By default the original free
      function is always initialized.
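      
      A minimal sketch of the bidirectional mapping this relies on (variable
      names assumed):
      
      	/* Sketch: rx pages must be mapped DMA_BIDIRECTIONAL when an XDP
      	 * program may transmit them back out of the port. */
      	enum dma_data_direction dir =
      		xdp_active ? DMA_BIDIRECTIONAL : DMA_FROM_DEVICE;
      	dma_addr_t dma = dma_map_page(dev, page, 0, PAGE_SIZE, dir);
      
      	if (dma_mapping_error(dev, dma))
      		return -ENOMEM;
      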
      Signed-off-by: Brenden Blanco <bblanco@plumgrid.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net/mlx4_en: add page recycle to prepare rx ring for tx support · d576acf0
      Authored by Brenden Blanco
      The mlx4 driver by default allocates order-3 pages for the ring to
      consume in multiple fragments. When the device has an xdp program, this
      behavior will prevent tx actions since the page must be re-mapped in
      TODEVICE mode, which cannot be done if the page is still shared.
      
      Start by making the allocator configurable based on whether xdp is
      running, such that order-0 pages are always used and never shared.
      
      Since this will stress the page allocator, add a simple page cache to
      each rx ring. Pages in the cache are left dma-mapped, and in drop-only
      stress tests the page allocator is eliminated from the perf report.
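      
      A minimal sketch of such a cache (size and names are assumptions):
      
      	/* Sketch: a small stack of dma-mapped pages, refilled by tx
      	 * completion and drained by the rx allocator. */
      	struct page_cache_sketch {
      		int index;
      		struct { struct page *page; dma_addr_t dma; } buf[128];
      	};
      
      	static bool cache_put(struct page_cache_sketch *c,
      			      struct page *page, dma_addr_t dma)
      	{
      		if (c->index >= 128)
      			return false;	/* full: fall back to page free */
      		c->buf[c->index].page = page;
      		c->buf[c->index].dma  = dma;
      		c->index++;
      		return true;
      	}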
      
      Note that setting an xdp program will now require the rings to be
      reconfigured.
      
      Before:
       26.91%  ksoftirqd/0  [mlx4_en]         [k] mlx4_en_process_rx_cq
       17.88%  ksoftirqd/0  [mlx4_en]         [k] mlx4_en_alloc_frags
        6.00%  ksoftirqd/0  [mlx4_en]         [k] mlx4_en_free_frag
        4.49%  ksoftirqd/0  [kernel.vmlinux]  [k] get_page_from_freelist
        3.21%  swapper      [kernel.vmlinux]  [k] intel_idle
        2.73%  ksoftirqd/0  [kernel.vmlinux]  [k] bpf_map_lookup_elem
        2.57%  swapper      [mlx4_en]         [k] mlx4_en_process_rx_cq
      
      After:
       31.72%  swapper      [kernel.vmlinux]       [k] intel_idle
        8.79%  swapper      [mlx4_en]              [k] mlx4_en_process_rx_cq
        7.54%  swapper      [kernel.vmlinux]       [k] poll_idle
        6.36%  swapper      [mlx4_core]            [k] mlx4_eq_int
        4.21%  swapper      [kernel.vmlinux]       [k] tasklet_action
        4.03%  swapper      [kernel.vmlinux]       [k] cpuidle_enter_state
        3.43%  swapper      [mlx4_en]              [k] mlx4_en_prepare_rx_desc
        2.18%  swapper      [kernel.vmlinux]       [k] native_irq_return_iret
        1.37%  swapper      [kernel.vmlinux]       [k] menu_select
        1.09%  swapper      [kernel.vmlinux]       [k] bpf_map_lookup_elem
      Signed-off-by: Brenden Blanco <bblanco@plumgrid.com>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net/mlx4_en: add support for fast rx drop bpf program · 47a38e15
      Authored by Brenden Blanco
      Add support for the BPF_PROG_TYPE_XDP hook in mlx4 driver.
      
      In tc/socket bpf programs, helpers linearize skb fragments as needed
      when the program touches the packet data. However, in the pursuit of
      speed, XDP programs will not be allowed to use these slower functions,
      especially if that involves allocating an skb.
      
      Therefore, disallow MTU settings that would produce a multi-fragment
      packet that XDP programs would fail to access. Future enhancements could
      be done to increase the allowable MTU.
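      
      A minimal sketch of the restriction (the bound helper is hypothetical):
      
      	/* Sketch: reject program attach if the MTU would make frames span
      	 * multiple fragments that an XDP program could not access. */
      	if (prog && dev->mtu > max_xdp_mtu_sketch(priv))
      		return -EOPNOTSUPP;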
      
      The xdp program is present as a per-ring data structure, but as of yet
      it is not possible to set it at that granularity through any ndo.
      Signed-off-by: Brenden Blanco <bblanco@plumgrid.com>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net/mlx4_en: Add resilience in low memory systems · ec25bc04
      Authored by Eugenia Emantayev
      This patch fixes the loss of the Ethernet port on low-memory systems,
      where the driver frees its resources and then fails to allocate new
      ones. The issue could happen while changing the number of channels,
      the ring size, or the timestamp configuration.
      This fix is necessary because vmap use was removed from the code.
      When vmap was in use, the driver could allocate non-contiguous memory
      and make it contiguous with vmap. Now it can fail to allocate a large
      chunk of contiguous memory and lose the port.
      The code now tries to allocate the new resources first, and frees the
      old resources only upon success.
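      
      A minimal sketch of the allocate-before-free pattern (helper names are
      hypothetical):
      
      	/* Sketch: build replacement rings first; only tear down the old
      	 * ones once allocation has succeeded, so failure keeps the port. */
      	new_rings = alloc_rings_sketch(priv, new_params);
      	if (IS_ERR(new_rings))
      		return PTR_ERR(new_rings);	/* old resources untouched */
      
      	free_rings_sketch(priv, priv->rings);
      	priv->rings = new_rings;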
      
      Fixes: 73898db0 ('net/mlx4: Avoid wrong virtual mappings')
      Signed-off-by: Eugenia Emantayev <eugenia@mellanox.com>
      Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net/mlx4_en: Move filters cleanup to a proper location · 30f56e3c
      Authored by Eugenia Emantayev
      Filter cleanup should be done once, before destroying the net device,
      since the filter list is contained in the private data.
      
      Fixes: 1eb8c695 ('net/mlx4_en: Add accelerated RFS support')
      Signed-off-by: Eugenia Emantayev <eugenia@mellanox.com>
      Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  17. 24 Jun 2016, 1 commit
  18. 23 Jun 2016, 2 commits
  19. 18 Jun 2016, 1 commit
  20. 17 Jun 2016, 1 commit
  21. 10 Jun 2016, 1 commit
  22. 26 May 2016, 4 commits
  23. 06 May 2016, 1 commit
    • net/mlx4: Avoid wrong virtual mappings · 73898db0
      Authored by Haggai Abramovsky
      The dma_alloc_coherent() function returns a virtual address which can
      be used for coherent access to the underlying memory.  On some
      architectures, like arm64, undefined behavior results if this memory is
      also accessed via virtual mappings that are not coherent.  Because of
      their undefined nature, operations like virt_to_page() return garbage
      when passed virtual addresses obtained from dma_alloc_coherent().  Any
      subsequent mappings via vmap() of the garbage page values are unusable
      and result in bad things like bus errors (synchronous aborts in ARM64
      speak).
      
      The mlx4 driver contains code that does the equivalent of
      vmap(virt_to_page(dma_alloc_coherent())); this results in an oops when
      the device is opened.
      
      Prevent the Ethernet driver from running this problematic code by
      forcing it to allocate contiguous memory. As for the Infiniband
      driver, we first try to allocate contiguous memory, but on failure
      fall back to working with fragmented memory.
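      
      A minimal sketch of the try-contiguous-then-fall-back approach used
      for the Infiniband side (the fallback helper is hypothetical):
      
      	/* Sketch: prefer one coherent, contiguous allocation; only the IB
      	 * path may fall back to fragmented buffers. */
      	buf = dma_alloc_coherent(dev, size, &dma_handle, GFP_KERNEL);
      	if (!buf && allow_fragments)
      		buf = alloc_fragmented_sketch(dev, size, &dma_handle);
      	if (!buf)
      		return -ENOMEM;
      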
      Signed-off-by: Haggai Abramovsky <hagaya@mellanox.com>
      Signed-off-by: Yishai Hadas <yishaih@mellanox.com>
      Reported-by: David Daney <david.daney@cavium.com>
      Tested-by: Sinan Kaya <okaya@codeaurora.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  24. 05 May 2016, 2 commits
  25. 22 Apr 2016, 1 commit
  26. 04 Mar 2016, 1 commit
    • net: relax setup_tc ndo op handle restriction · 5eb4dce3
      Authored by John Fastabend
      I added this check in setup_tc to multiple drivers:
      
       if (handle != TC_H_ROOT || tc->type != TC_SETUP_MQPRIO)
      
      Unfortunately, restricting to TC_H_ROOT like this breaks the old
      instantiation of mqprio to set up a hardware qdisc. This patch
      relaxes the test to only check the type, making it equivalent
      to the check before I broke it. With this, the old instantiation
      continues to work.
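      
      The change, sketched (return value illustrative):
      
      	/* Before: rejected anything that was not a root mqprio request. */
      	if (handle != TC_H_ROOT || tc->type != TC_SETUP_MQPRIO)
      		return -EINVAL;
      
      	/* After: only the offload type is checked. */
      	if (tc->type != TC_SETUP_MQPRIO)
      		return -EINVAL;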
      
      A good smoke test is to set up mqprio with:
      
      # tc qdisc add dev eth4 root mqprio num_tc 8 \
        map 0 1 2 3 4 5 6 7 \
        queues 0@0 1@1 2@2 3@3 4@4 5@5 6@6 7@7
      
      Fixes: e4c6734e ("net: rework ndo tc op to consume additional qdisc handle parameter")
      Reported-by: Singh Krishneil <krishneil.k.singh@intel.com>
      Reported-by: Jake Keller <jacob.e.keller@intel.com>
      CC: Murali Karicheri <m-karicheri2@ti.com>
      CC: Shradha Shah <sshah@solarflare.com>
      CC: Or Gerlitz <ogerlitz@mellanox.com>
      CC: Ariel Elior <ariel.elior@qlogic.com>
      CC: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      CC: Bruce Allan <bruce.w.allan@intel.com>
      CC: Jesse Brandeburg <jesse.brandeburg@intel.com>
      CC: Don Skidmore <donald.c.skidmore@intel.com>
      Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>