1. 15 April 2017, 1 commit
  2. 14 April 2017, 1 commit
    • libnvdimm: fix clear poison locking with spinlock and GFP_NOWAIT allocation · b3b454f6
      Committed by Dave Jiang
      The following warning results from holding a lane spinlock,
      preempt_disable(), or the btt map spinlock and then trying to take the
      reconfig_mutex to walk the poison list and potentially add new entries.
      
      BUG: sleeping function called from invalid context at kernel/locking/mutex.c:747
      in_atomic(): 1, irqs_disabled(): 0, pid: 17159, name: dd
      [..]
      Call Trace:
      dump_stack+0x85/0xc8
      ___might_sleep+0x184/0x250
      __might_sleep+0x4a/0x90
      __mutex_lock+0x58/0x9b0
      ? nvdimm_bus_lock+0x21/0x30 [libnvdimm]
      ? __nvdimm_bus_badblocks_clear+0x2f/0x60 [libnvdimm]
      ? acpi_nfit_forget_poison+0x79/0x80 [nfit]
      ? _raw_spin_unlock+0x27/0x40
      mutex_lock_nested+0x1b/0x20
      nvdimm_bus_lock+0x21/0x30 [libnvdimm]
      nvdimm_forget_poison+0x25/0x50 [libnvdimm]
      nvdimm_clear_poison+0x106/0x140 [libnvdimm]
      nsio_rw_bytes+0x164/0x270 [libnvdimm]
      btt_write_pg+0x1de/0x3e0 [nd_btt]
      ? blk_queue_enter+0x30/0x290
      btt_make_request+0x11a/0x310 [nd_btt]
      ? blk_queue_enter+0xb7/0x290
      ? blk_queue_enter+0x30/0x290
      generic_make_request+0x118/0x3b0
      
      A spinlock is introduced to protect the poison list, so the reconfig_mutex no
      longer needs to be acquired just to touch the list. The add_poison() function
      has been broken out into two helpers: one to allocate the poison entry and one
      to append it to the list. This lets the non-I/O path drop the poison_lock while
      still allocating the entry with GFP_KERNEL, and the I/O path allocates with
      GFP_NOWAIT to satisfy atomic context (see the sketch after this entry).
      Reviewed-by: Vishal Verma <vishal.l.verma@intel.com>
      Signed-off-by: Dave Jiang <dave.jiang@intel.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      b3b454f6
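      A minimal sketch, in kernel C, of the scheme described above. The struct and
      helper names (nd_poison_sketch, poison_tracker, alloc_poison_entry,
      append_poison_entry) are illustrative assumptions, not the symbols in the
      patch; the point is the split between a sleepable allocation step and a
      spinlock-protected append step that never sleeps.

      /* Sketch only: illustrative names, not the literal libnvdimm code. */
      #include <linux/list.h>
      #include <linux/slab.h>
      #include <linux/spinlock.h>
      #include <linux/types.h>

      struct nd_poison_sketch {
              u64 start;
              u64 length;
              struct list_head list;
      };

      struct poison_tracker {
              spinlock_t poison_lock;       /* protects poison_list instead of reconfig_mutex */
              struct list_head poison_list; /* init with spin_lock_init()/INIT_LIST_HEAD() */
      };

      /* Allocation helper: GFP_KERNEL outside I/O, GFP_NOWAIT from the atomic I/O path. */
      static struct nd_poison_sketch *alloc_poison_entry(u64 addr, u64 length, gfp_t flags)
      {
              struct nd_poison_sketch *pl = kzalloc(sizeof(*pl), flags);

              if (!pl)
                      return NULL;
              pl->start = addr;
              pl->length = length;
              return pl;
      }

      /* Append helper: only takes the spinlock, never sleeps. */
      static void append_poison_entry(struct poison_tracker *pt, struct nd_poison_sketch *pl)
      {
              spin_lock(&pt->poison_lock);
              list_add_tail(&pl->list, &pt->poison_list);
              spin_unlock(&pt->poison_lock);
      }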
  3. 13 April 2017, 9 commits
  4. 11 April 2017, 2 commits
    • libnvdimm: band aid btt vs clear poison locking · 4aa5615e
      Committed by Dan Williams
      The following warning results from holding a lane spinlock,
      preempt_disable(), or the btt map spinlock and then trying to take the
      reconfig_mutex to walk the poison list and potentially add new entries.
      
       BUG: sleeping function called from invalid context at kernel/locking/mutex.c:747
       in_atomic(): 1, irqs_disabled(): 0, pid: 17159, name: dd
       [..]
       Call Trace:
        dump_stack+0x85/0xc8
        ___might_sleep+0x184/0x250
        __might_sleep+0x4a/0x90
        __mutex_lock+0x58/0x9b0
        ? nvdimm_bus_lock+0x21/0x30 [libnvdimm]
        ? __nvdimm_bus_badblocks_clear+0x2f/0x60 [libnvdimm]
        ? acpi_nfit_forget_poison+0x79/0x80 [nfit]
        ? _raw_spin_unlock+0x27/0x40
        mutex_lock_nested+0x1b/0x20
        nvdimm_bus_lock+0x21/0x30 [libnvdimm]
        nvdimm_forget_poison+0x25/0x50 [libnvdimm]
        nvdimm_clear_poison+0x106/0x140 [libnvdimm]
        nsio_rw_bytes+0x164/0x270 [libnvdimm]
        btt_write_pg+0x1de/0x3e0 [nd_btt]
        ? blk_queue_enter+0x30/0x290
        btt_make_request+0x11a/0x310 [nd_btt]
        ? blk_queue_enter+0xb7/0x290
        ? blk_queue_enter+0x30/0x290
        generic_make_request+0x118/0x3b0
      
      As a minimal fix, disable error clearing when a BTT is enabled for the
      namespace; a sketch of such a check follows this entry. For the final fix, a
      larger rework of the poison list locking is needed.
      
      Note that this is not a problem in the blk case since that path never
      calls nvdimm_clear_poison().
      
      Cc: <stable@vger.kernel.org>
      Fixes: 82bf1037 ("libnvdimm: check and clear poison before writing to pmem")
      Cc: Dave Jiang <dave.jiang@intel.com>
      [jeff: dynamically disable error clearing in the btt case]
      Suggested-by: Jeff Moyer <jmoyer@redhat.com>
      Reviewed-by: Jeff Moyer <jmoyer@redhat.com>
      Reported-by: Vishal Verma <vishal.l.verma@intel.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      4aa5615e
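      A rough sketch, in kernel C, of the kind of check the band aid implies: skip
      poison clearing when the namespace is claimed by a BTT. is_nd_btt() and the
      ndns->claim pointer are real libnvdimm symbols (drivers/nvdimm/nd.h); the
      helper name and its placement are assumptions, not the literal patch.

      /* Fragment assuming libnvdimm's internal headers; illustrative only. */
      static bool poison_clear_allowed(struct nd_namespace_common *ndns)
      {
              /* A namespace claimed by a BTT does its writes under a lane
               * spinlock, so clearing poison (which takes reconfig_mutex and
               * may sleep) has to be skipped for it. */
              return !ndns->claim || !is_nd_btt(ndns->claim);
      }

      The rw_bytes path would consult such a check before calling
      nvdimm_clear_poison(), leaving error clearing intact for raw pmem namespaces.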
    • libnvdimm: fix reconfig_mutex, mmap_sem, and jbd2_handle lockdep splat · 0beb2012
      Committed by Dan Williams
      Holding the reconfig_mutex over a potential userspace fault sets up a
      lockdep dependency chain between filesystem-DAX and the libnvdimm ioctl
      path. Move the user access outside of the lock (see the sketch after this entry).
      
           [ INFO: possible circular locking dependency detected ]
           4.11.0-rc3+ #13 Tainted: G        W  O
           -------------------------------------------------------
           fallocate/16656 is trying to acquire lock:
            (&nvdimm_bus->reconfig_mutex){+.+.+.}, at: [<ffffffffa00080b1>] nvdimm_bus_lock+0x21/0x30 [libnvdimm]
           but task is already holding lock:
            (jbd2_handle){++++..}, at: [<ffffffff813b4944>] start_this_handle+0x104/0x460
      
          which lock already depends on the new lock.
      
          the existing dependency chain (in reverse order) is:
      
          -> #2 (jbd2_handle){++++..}:
                  lock_acquire+0xbd/0x200
                  start_this_handle+0x16a/0x460
                  jbd2__journal_start+0xe9/0x2d0
                  __ext4_journal_start_sb+0x89/0x1c0
                  ext4_dirty_inode+0x32/0x70
                  __mark_inode_dirty+0x235/0x670
                  generic_update_time+0x87/0xd0
                  touch_atime+0xa9/0xd0
                  ext4_file_mmap+0x90/0xb0
                  mmap_region+0x370/0x5b0
                  do_mmap+0x415/0x4f0
                  vm_mmap_pgoff+0xd7/0x120
                  SyS_mmap_pgoff+0x1c5/0x290
                  SyS_mmap+0x22/0x30
                  entry_SYSCALL_64_fastpath+0x1f/0xc2
      
          -> #1 (&mm->mmap_sem){++++++}:
                  lock_acquire+0xbd/0x200
                  __might_fault+0x70/0xa0
                  __nd_ioctl+0x683/0x720 [libnvdimm]
                  nvdimm_ioctl+0x8b/0xe0 [libnvdimm]
                  do_vfs_ioctl+0xa8/0x740
                  SyS_ioctl+0x79/0x90
                  do_syscall_64+0x6c/0x200
                  return_from_SYSCALL_64+0x0/0x7a
      
          -> #0 (&nvdimm_bus->reconfig_mutex){+.+.+.}:
                  __lock_acquire+0x16b6/0x1730
                  lock_acquire+0xbd/0x200
                  __mutex_lock+0x88/0x9b0
                  mutex_lock_nested+0x1b/0x20
                  nvdimm_bus_lock+0x21/0x30 [libnvdimm]
                  nvdimm_forget_poison+0x25/0x50 [libnvdimm]
                  nvdimm_clear_poison+0x106/0x140 [libnvdimm]
                  pmem_do_bvec+0x1c2/0x2b0 [nd_pmem]
                  pmem_make_request+0xf9/0x270 [nd_pmem]
                  generic_make_request+0x118/0x3b0
                  submit_bio+0x75/0x150
      
      Cc: <stable@vger.kernel.org>
      Fixes: 62232e45 ("libnvdimm: control (ioctl) messages for nvdimm_bus and nvdimm devices")
      Cc: Dave Jiang <dave.jiang@intel.com>
      Reported-by: Vishal Verma <vishal.l.verma@intel.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      0beb2012
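      A minimal sketch of the reordering, in kernel C. The function name and the
      buf/in_len parameters are illustrative assumptions; nvdimm_bus_lock(),
      nvdimm_bus_unlock(), and copy_from_user() are the real interfaces. The point
      is that the user access, which can fault and pull in mmap_sem, happens before
      the bus lock (reconfig_mutex) is taken rather than under it.

      /* Sketch only: not the literal __nd_ioctl() from drivers/nvdimm/bus.c. */
      static int nd_ioctl_sketch(struct nvdimm_bus *nvdimm_bus,
                      void __user *p, void *buf, size_t in_len)
      {
              int rc = 0;

              /* user access first: may fault, may sleep, may take mmap_sem */
              if (copy_from_user(buf, p, in_len))
                      return -EFAULT;

              /* only now take the bus lock for the command / poison-list update */
              nvdimm_bus_lock(&nvdimm_bus->dev);
              /* ... issue the command, touch the poison list, etc. ... */
              nvdimm_bus_unlock(&nvdimm_bus->dev);

              return rc;
      }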
  5. 05 April 2017, 1 commit
    • libnvdimm: fix blk free space accounting · fe514739
      Committed by Dan Williams
      Commit a1f3e4d6 "libnvdimm, region: update nd_region_available_dpa()
      for multi-pmem support" reworked blk dpa (DIMM Physical Address)
      accounting to comprehend multiple pmem namespace allocations aliasing
      with a given blk-dpa range.
      
      The following call trace is a result of failing to account for allocated
      blk capacity.
      
       WARNING: CPU: 1 PID: 2433 at tools/testing/nvdimm/../../../drivers/nvdimm/names 4 size_store+0x6f3/0x930 [libnvdimm]
       nd_region region5: allocation underrun: 0x0 of 0x1000000 bytes
       [..]
       Call Trace:
        dump_stack+0x86/0xc3
        __warn+0xcb/0xf0
        warn_slowpath_fmt+0x5f/0x80
        size_store+0x6f3/0x930 [libnvdimm]
        dev_attr_store+0x18/0x30
      
      If a given blk-dpa allocation does not alias with any pmem ranges then
      the full allocation should be accounted as busy space, not the size of
      the current pmem contribution to the region.
      
      The thinko that led to this confusion was not realizing that struct resource
      management already guarantees no collisions between pmem allocations and blk
      allocations on the same dimm. Also, we do not try to support blk allocations
      in aliased pmem holes.
      
      This patch also fixes a case where the available blk capacity could go negative
      (see the toy model after this entry).
      
      Cc: <stable@vger.kernel.org>
      Fixes: a1f3e4d6 ("libnvdimm, region: update nd_region_available_dpa() for multi-pmem support")
      Reported-by: Dariusz Dokupil <dariusz.dokupil@intel.com>
      Reported-by: Dave Jiang <dave.jiang@intel.com>
      Reported-by: Vishal Verma <vishal.l.verma@intel.com>
      Tested-by: Dave Jiang <dave.jiang@intel.com>
      Tested-by: Vishal Verma <vishal.l.verma@intel.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      fe514739
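      A toy model, in C, of the two points above; it is not the libnvdimm
      implementation (which lives in the region/namespace dpa accounting code), and
      the function and parameter names are assumptions for illustration only.

      #include <linux/types.h>

      /* Toy model: a blk allocation that does not alias any pmem range is
       * charged as busy in full, and the reported available space is clamped
       * so it can never go negative. */
      static resource_size_t blk_avail_sketch(resource_size_t blk_capacity,
                      resource_size_t blk_allocated /* non-aliased allocation */)
      {
              if (blk_allocated > blk_capacity)
                      return 0;       /* never report negative space */
              return blk_capacity - blk_allocated;
      }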
  6. 28 March 2017, 1 commit
  7. 25 March 2017, 15 commits
  8. 24 March 2017, 2 commits
  9. 23 March 2017, 8 commits