1. 25 December 2013, 1 commit
  2. 21 December 2013, 1 commit
  3. 20 December 2013, 2 commits
  4. 19 December 2013, 5 commits
    • target/file: Update hw_max_sectors based on current block_size · 95cadace
      Authored by Nicholas Bellinger
      This patch allows FILEIO to update hw_max_sectors based on the current
      max_bytes_per_io.  This is required because vfs_[writev,readv]() can accept
      a maximum of 2048 iovecs per call, so the enforced hw_max_sectors really
      needs to be calculated based on block_size.
      
      This addresses a >= v3.5 bug where block_size=512 rejected I/O requests
      larger than 1M, because FD_MAX_SECTORS was hardcoded to 2048 with only
      the block_size=4096 case in mind.
      
      (v2: Use max_bytes_per_io instead of ->update_hw_max_sectors)
      Reported-by: Henrik Goldman <hg@x-formation.com>
      Cc: <stable@vger.kernel.org> #3.5+
      Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
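
      A minimal user-space sketch of the arithmetic this commit describes. The
      macro names below are illustrative, not the ones used in target_core_file.c;
      the only facts carried over are the 2048-iovec ceiling of vfs_[writev,readv]()
      and the idea of deriving the sector limit from block_size.

        #include <stdio.h>

        /*
         * vfs_[writev,readv]() accept at most 2048 iovecs per call, so the largest
         * single I/O FILEIO can issue is bounded in bytes, not in sectors.  The
         * sector limit therefore has to be derived from block_size.
         */
        #define MAX_IOV_PER_CALL   2048
        #define PAGE_SIZE_BYTES    4096
        #define MAX_BYTES_PER_IO   (MAX_IOV_PER_CALL * PAGE_SIZE_BYTES)  /* 8 MiB */

        static unsigned int hw_max_sectors(unsigned int block_size)
        {
                return MAX_BYTES_PER_IO / block_size;
        }

        int main(void)
        {
                /* block_size=512 now allows 16384 sectors (8 MiB) instead of a
                 * hardcoded 2048 sectors (1 MiB) tuned for block_size=4096. */
                printf("512:  %u sectors\n", hw_max_sectors(512));
                printf("4096: %u sectors\n", hw_max_sectors(4096));
                return 0;
        }
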
    • iser-target: Move INIT_WORK setup into isert_create_device_ib_res · 2853c2b6
      Authored by Nicholas Bellinger
      This patch moves the INIT_WORK setup for cq_desc->cq_[rx,tx]_work into
      isert_create_device_ib_res(), instead of performing it on every
      invocation of isert_cq_[rx,tx]_callback().
      
      This also fixes an 'INFO: trying to register non-static key' warning
      when cancel_work_sync() is called before INIT_WORK has set up the
      struct work_struct.
      Reported-by: Or Gerlitz <ogerlitz@mellanox.com>
      Cc: <stable@vger.kernel.org> #3.12+
      Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
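
      A sketch of the workqueue pattern the commit moves to; the struct and
      function names are placeholders, not the real isert ones. The point is that
      INIT_WORK() runs exactly once, at resource-creation time, so a later
      cancel_work_sync() never operates on an uninitialised work_struct.

        #include <linux/workqueue.h>

        struct cq_desc {                      /* placeholder for the real CQ descriptor */
                struct work_struct rx_work;
                struct work_struct tx_work;
        };

        static void rx_work_fn(struct work_struct *w) { /* drain RX completions */ }
        static void tx_work_fn(struct work_struct *w) { /* drain TX completions */ }

        /* After the patch: initialise once, when per-device resources are created. */
        static void create_device_resources(struct cq_desc *desc)
        {
                INIT_WORK(&desc->rx_work, rx_work_fn);
                INIT_WORK(&desc->tx_work, tx_work_fn);
        }

        /* Before the patch the callback re-ran INIT_WORK() itself, so a
         * cancel_work_sync() issued before the first completion saw a
         * work_struct whose lockdep key had never been registered. */
        static void cq_rx_callback(struct cq_desc *desc)
        {
                schedule_work(&desc->rx_work);
        }
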
    • iscsi-target: Fix incorrect np->np_thread NULL assignment · db6077fd
      Authored by Nicholas Bellinger
      When shutting down a target there is a race condition between
      iscsit_del_np() and __iscsi_target_login_thread().
      The latter sets the thread pointer to NULL, and the former
      tries to issue kthread_stop() on that pointer without any
      synchronization.
      
      This patch moves the np->np_thread NULL assignment into
      iscsit_del_np(), after kthread_stop() has completed. It also
      removes the signal_pending() + np_state check, and only
      exits when kthread_should_stop() is true.
      Reported-by: Hannes Reinecke <hare@suse.de>
      Cc: <stable@vger.kernel.org> #3.12+
      Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
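
      A sketch of the shutdown ordering the commit establishes, with placeholder
      names standing in for the real iscsi_np code: the login thread exits only on
      kthread_should_stop(), and the thread pointer is cleared by the teardown
      path, after kthread_stop() has returned.

        #include <linux/kthread.h>

        struct portal {                         /* placeholder for struct iscsi_np */
                struct task_struct *login_thread;
        };

        static int login_thread_fn(void *data)
        {
                while (!kthread_should_stop()) {
                        /* accept and process one login attempt ... */
                }
                /* Do not clear portal->login_thread here: the teardown path
                 * still needs the pointer for kthread_stop(). */
                return 0;
        }

        static void portal_del(struct portal *np)
        {
                if (np->login_thread) {
                        kthread_stop(np->login_thread);  /* waits for thread exit */
                        np->login_thread = NULL;         /* safe only after stop */
                }
        }
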
    • net_dma: mark broken · 77873803
      Authored by Dan Williams
      net_dma can cause data to be copied to a stale mapping if a
      copy-on-write fault occurs during dma.  The application sees missing
      data.
      
      The following trace is produced by modifying the kernel to WARN whenever
      copy-on-write is triggered on a page that is undergoing dma:
      
       WARNING: CPU: 24 PID: 2529 at lib/dma-debug.c:485 debug_dma_assert_idle+0xd2/0x120()
       ioatdma 0000:00:04.0: DMA-API: cpu touching an active dma mapped page [pfn=0x16bcd9]
       Modules linked in: iTCO_wdt iTCO_vendor_support ioatdma lpc_ich pcspkr dca
       CPU: 24 PID: 2529 Comm: linbug Tainted: G        W    3.13.0-rc1+ #353
        00000000000001e5 ffff88016f45f688 ffffffff81751041 ffff88017ab0ef70
        ffff88016f45f6d8 ffff88016f45f6c8 ffffffff8104ed9c ffffffff810f3646
        ffff8801768f4840 0000000000000282 ffff88016f6cca10 00007fa2bb699349
       Call Trace:
        [<ffffffff81751041>] dump_stack+0x46/0x58
        [<ffffffff8104ed9c>] warn_slowpath_common+0x8c/0xc0
        [<ffffffff810f3646>] ? ftrace_pid_func+0x26/0x30
        [<ffffffff8104ee86>] warn_slowpath_fmt+0x46/0x50
        [<ffffffff8139c062>] debug_dma_assert_idle+0xd2/0x120
        [<ffffffff81154a40>] do_wp_page+0xd0/0x790
        [<ffffffff811582ac>] handle_mm_fault+0x51c/0xde0
        [<ffffffff813830b9>] ? copy_user_enhanced_fast_string+0x9/0x20
        [<ffffffff8175fc2c>] __do_page_fault+0x19c/0x530
        [<ffffffff8175c196>] ? _raw_spin_lock_bh+0x16/0x40
        [<ffffffff810f3539>] ? trace_clock_local+0x9/0x10
        [<ffffffff810fa1f4>] ? rb_reserve_next_event+0x64/0x310
        [<ffffffffa0014c00>] ? ioat2_dma_prep_memcpy_lock+0x60/0x130 [ioatdma]
        [<ffffffff8175ffce>] do_page_fault+0xe/0x10
        [<ffffffff8175c862>] page_fault+0x22/0x30
        [<ffffffff81643991>] ? __kfree_skb+0x51/0xd0
        [<ffffffff813830b9>] ? copy_user_enhanced_fast_string+0x9/0x20
        [<ffffffff81388ea2>] ? memcpy_toiovec+0x52/0xa0
        [<ffffffff8164770f>] skb_copy_datagram_iovec+0x5f/0x2a0
        [<ffffffff8169d0f4>] tcp_rcv_established+0x674/0x7f0
        [<ffffffff816a68c5>] tcp_v4_do_rcv+0x2e5/0x4a0
        [..]
       ---[ end trace e30e3b01191b7617 ]---
       Mapped at:
        [<ffffffff8139c169>] debug_dma_map_page+0xb9/0x160
        [<ffffffff8142bf47>] dma_async_memcpy_pg_to_pg+0x127/0x210
        [<ffffffff8142cce9>] dma_memcpy_pg_to_iovec+0x119/0x1f0
        [<ffffffff81669d3c>] dma_skb_copy_datagram_iovec+0x11c/0x2b0
        [<ffffffff8169d1ca>] tcp_rcv_established+0x74a/0x7f0:
      
      ...the problem is that the receive path falls back to cpu-copy in
      several locations and this trace is just one of the areas.  A few
      options were considered to fix this:
      
      1/ sync all dma whenever a cpu copy branch is taken
      
      2/ modify the page fault handler to hold off while dma is in-flight
      
      Option 1 adds yet more cpu overhead to an "offload" that struggles to compete
      with cpu-copy.  Option 2 adds checks for behavior that is already documented as
      broken when using get_user_pages().  At a minimum a debug mode is warranted to
      catch and flag these violations of the dma-api vs get_user_pages().
      
      Thanks to David for his reproducer.
      
      Cc: <stable@vger.kernel.org>
      Cc: Dave Jiang <dave.jiang@intel.com>
      Cc: Vinod Koul <vinod.koul@intel.com>
      Cc: Alexander Duyck <alexander.h.duyck@intel.com>
      Reported-by: David Whipple <whipple@securedatainnovations.ch>
      Acked-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
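
      The trace above already shows the kind of debug hook the last paragraph
      argues for. The snippet below is a simplified, hypothetical rendering of it,
      not the actual do_wp_page() code: before the copy-on-write handler replaces
      a page, it asserts that no DMA mapping is still active against that page.

        #include <linux/dma-debug.h>
        #include <linux/mm.h>

        static void cow_replace_page(struct page *old_page)
        {
                /* Flag dma-api vs get_user_pages() violations: if a DMA engine
                 * still owns old_page, anything it writes after the COW lands in
                 * the stale mapping and the application never sees it. */
                debug_dma_assert_idle(old_page);

                /* ... allocate the new page, copy the contents, switch the PTE ... */
        }
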
    • dma: pl330: ensure DMA descriptors are zero-initialised · 0baf8f6a
      Authored by Will Deacon
      I see the following splat with 3.13-rc1 when attempting to perform DMA:
      
      [  253.004516] Alignment trap: not handling instruction e1902f9f at [<c0204b40>]
      [  253.004583] Unhandled fault: alignment exception (0x221) at 0xdfdfdfd7
      [  253.004646] Internal error: : 221 [#1] PREEMPT SMP ARM
      [  253.004691] Modules linked in: dmatest(+) [last unloaded: dmatest]
      [  253.004798] CPU: 0 PID: 671 Comm: kthreadd Not tainted 3.13.0-rc1+ #2
      [  253.004864] task: df9b0900 ti: df03e000 task.ti: df03e000
      [  253.004937] PC is at dmaengine_unmap_put+0x14/0x34
      [  253.005010] LR is at pl330_tasklet+0x3c8/0x550
      [  253.005087] pc : [<c0204b44>]    lr : [<c0207478>]    psr: a00e0193
      [  253.005087] sp : df03fe48  ip : 00000000  fp : df03bf18
      [  253.005178] r10: bf00e108  r9 : 00000001  r8 : 00000000
      [  253.005245] r7 : df837040  r6 : dfb41800  r5 : df837048  r4 : df837000
      [  253.005316] r3 : dfdfdfcf  r2 : dfb41f80  r1 : df837048  r0 : dfdfdfd7
      [  253.005384] Flags: NzCv  IRQs off  FIQs on  Mode SVC_32  ISA ARM  Segment kernel
      [  253.005459] Control: 30c5387d  Table: 9fb9ba80  DAC: fffffffd
      [  253.005520] Process kthreadd (pid: 671, stack limit = 0xdf03e248)
      
      This is due to desc->txd.unmap containing garbage (uninitialised memory).
      
      Rather than add yet another dummy initialisation to _init_desc, ensure
      that the descriptors are zero-initialised during allocation and remove
      the dummy per-field initialisation.
      
      Cc: Andriy Shevchenko <andriy.shevchenko@intel.com>
      Acked-by: Jassi Brar <jassisinghbrar@gmail.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Acked-by: Vinod Koul <vinod.koul@intel.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
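
      A sketch of the allocation change, with a simplified descriptor layout
      standing in for the pl330 one: allocating the descriptor pool with a
      zeroing allocator such as kcalloc() leaves every field, including
      txd.unmap, at zero, so nothing downstream dereferences uninitialised
      garbage like the 0xdfdfdfd7 pointer in the splat above.

        #include <linux/dmaengine.h>
        #include <linux/slab.h>

        struct demo_desc {                       /* simplified stand-in descriptor */
                struct dma_async_tx_descriptor txd;
                /* ... engine-specific fields ... */
        };

        static struct demo_desc *alloc_desc_pool(unsigned int count, gfp_t flags)
        {
                /* kcalloc() zero-fills the whole pool, so txd.unmap == NULL for
                 * every descriptor and no dummy per-field initialisation is
                 * needed later. */
                return kcalloc(count, sizeof(struct demo_desc), flags);
        }
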
  5. 18 December 2013, 26 commits
  6. 17 December 2013, 5 commits