1. 01 Mar 2016: 3 commits
  2. 13 Feb 2016: 1 commit
  3. 03 Feb 2016: 4 commits
  4. 22 Jan 2016: 11 commits
  5. 20 Jan 2016: 3 commits
  6. 18 Jan 2016: 1 commit
    • net/mlx5_core: Fix trimming down IRQ number · 0b6e26ce
      Authored by Doron Tsur
      With several ConnectX-4 cards installed in a server, the kernel API may
      return an irqn > 255, which we mistakenly trim to 8 bits.
      
      This causes EQ creation failure with the following stack trace:
      [<ffffffff812a11f4>] dump_stack+0x48/0x64
      [<ffffffff810ace21>] __setup_irq+0x3a1/0x4f0
      [<ffffffff810ad7e0>] request_threaded_irq+0x120/0x180
      [<ffffffffa0923660>] ? mlx5_eq_int+0x450/0x450 [mlx5_core]
      [<ffffffffa0922f64>] mlx5_create_map_eq+0x1e4/0x2b0 [mlx5_core]
      [<ffffffffa091de01>] alloc_comp_eqs+0xb1/0x180 [mlx5_core]
      [<ffffffffa091ea99>] mlx5_dev_init+0x5e9/0x6e0 [mlx5_core]
      [<ffffffffa091ec29>] init_one+0x99/0x1c0 [mlx5_core]
      [<ffffffff812e2afc>] local_pci_probe+0x4c/0xa0
      
      Fix this by changing the irqn type from u8 to unsigned int, so that
      values above 255 are preserved (see the sketch after this commit entry).
      
      Fixes: 61d0e73e ('net/mlx5_core: Use the the real irqn in eq->irqn')
      Reported-by: Jiri Pirko <jiri@mellanox.com>
      Signed-off-by: Doron Tsur <doront@mellanox.com>
      Signed-off-by: Matan Barak <matanb@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
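      A minimal C sketch (plain userspace code, not the actual mlx5 driver
      source) of the truncation described above: an interrupt number greater
      than 255 stored in a u8 field silently wraps, while an unsigned int
      field preserves it. The value 300 is an assumed example irqn.

      #include <stdio.h>

      int main(void)
      {
              unsigned int kernel_irqn = 300;        /* assumed irqn returned by the kernel API */
              unsigned char irqn_u8 = kernel_irqn;   /* old u8 field: silently wraps to 44 */
              unsigned int irqn_fixed = kernel_irqn; /* widened field: full value preserved */

              printf("u8: %u, unsigned int: %u\n", irqn_u8, irqn_fixed);
              return 0;
      }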
  7. 12 Jan 2016: 1 commit
  8. 24 Dec 2015: 15 commits
  9. 09 Dec 2015: 1 commit
    • IB/mlx5: Postpone remove_keys under knowledge of coming preemption · ab5cdc31
      Authored by Leon Romanovsky
      The remove_keys() logic is performed as a garbage collection task. Such
      a task is intended to run only when no other active processes are
      running.
      
      need_resched() returns true if there are user tasks to be activated in
      the near future.
      
      In that case, we don't execute remove_keys() and instead postpone the
      garbage collection work to the next cycle, in order to free CPU
      resources for other tasks (see the sketch after this commit entry).
      
      A possible scenario to trigger this behavior:
      1. Allocate many MRs to fill the cache above its limit.
      2. Wait a short while to let the system settle.
      3. Start CPU-intensive operations on a multi-node cluster.
      4. Expect performance degradation during the MR cache shrink operation.
      
      Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
      Signed-off-by: Eli Cohen <eli@mellanox.com>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
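      A hedged kernel-style C sketch of the postponement pattern described
      above; it is not the literal mlx5_ib patch. need_resched(),
      queue_delayed_work(), to_delayed_work() and container_of() are real
      kernel APIs, while struct mr_cache_entry, remove_keys() and
      GC_RETRY_DELAY are assumed placeholder names.

      #include <linux/kernel.h>
      #include <linux/sched.h>
      #include <linux/workqueue.h>

      #define GC_RETRY_DELAY (300 * HZ)          /* assumed retry interval */

      struct mr_cache_entry {
              struct workqueue_struct *wq;       /* cache work queue */
              struct delayed_work dwork;         /* garbage collection work item */
              /* ... cached MR bookkeeping ... */
      };

      static void remove_keys(struct mr_cache_entry *ent)
      {
              /* placeholder: shrink the cached MRs for this entry */
      }

      static void mr_cache_gc_work(struct work_struct *work)
      {
              struct mr_cache_entry *ent =
                      container_of(to_delayed_work(work), struct mr_cache_entry, dwork);

              if (need_resched()) {
                      /* User tasks will run soon: skip this GC cycle and retry later. */
                      queue_delayed_work(ent->wq, &ent->dwork, GC_RETRY_DELAY);
                      return;
              }

              remove_keys(ent);                  /* run the cache shrink now */
      }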