1. 22 April 2015: 6 commits
  2. 25 March 2015: 1 commit
  3. 21 March 2015: 3 commits
  4. 04 March 2015: 2 commits
  5. 25 February 2015: 1 commit
  6. 23 February 2015: 23 commits
  7. 18 February 2015: 3 commits
    • dm snapshot: fix a possible invalid memory access on unload · 22aa66a3
      Committed by Mikulas Patocka
      When the snapshot target is unloaded, snapshot_dtr() waits until
      pending_exceptions_count drops to zero.  Then, it destroys the snapshot.
      Therefore, the function that decrements pending_exceptions_count
      should not touch the snapshot structure after the decrement.
      
      pending_complete() calls free_pending_exception(), which decrements
      pending_exceptions_count, and then it performs up_write(&s->lock) and
      calls retry_origin_bios(), which dereferences s->origin.  These two
      accesses to fields of the snapshot may touch the dm_snapshot
      structure after it has been freed.
      
      This patch moves the call to free_pending_exception() to the end of
      pending_complete(), so that the snapshot will not be destroyed while
      pending_complete() is in progress.
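
      A minimal userspace sketch of the ordering issue (illustrative names and
      C11 atomics, not the actual dm-snapshot code): once the counter that the
      destructor waits on reaches zero, the structure may be freed at any
      moment, so the decrement has to be the very last access to it.

      #include <stdatomic.h>

      struct snap {                           /* stand-in for struct dm_snapshot */
              atomic_int pending;             /* stand-in for pending_exceptions_count */
              /* ... lock, origin, ... */
      };

      /* Broken ordering: the destructor may free *s right after the decrement,
       * so the later unlock and s->origin dereference are use-after-free. */
      void complete_broken(struct snap *s)
      {
              atomic_fetch_sub(&s->pending, 1);   /* dtr may now free *s */
              /* up_write(&s->lock);      <- would touch freed memory      */
              /* retry_origin_bios(s);    <- would dereference freed field */
      }

      /* Fixed ordering, mirroring the patch: finish every access to *s first,
       * then drop the pending count as the final step. */
      void complete_fixed(struct snap *s)
      {
              /* up_write(&s->lock);   */
              /* retry_origin_bios(s); */
              atomic_fetch_sub(&s->pending, 1);   /* last touch of *s */
      }
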
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Cc: stable@vger.kernel.org
    • dm: fix a race condition in dm_get_md · 2bec1f4a
      Committed by Mikulas Patocka
      The function dm_get_md finds a device mapper device with a given dev_t,
      increases the reference count and returns the pointer.
      
      dm_get_md calls dm_find_md; dm_find_md takes _minor_lock, finds the
      device, tests that the device doesn't have the DMF_DELETING or
      DMF_FREEING flag set, drops _minor_lock and returns a pointer to the
      device.  dm_get_md then calls dm_get.  dm_get calls BUG if the device
      has the DMF_FREEING flag, otherwise it increments the reference count.
      
      There is a possible race condition: after dm_find_md exits and before
      dm_get is called, no locks are held, so the device may disappear or
      the DMF_FREEING flag may be set, which results in BUG.
      
      To fix this bug, we need to call dm_get while we hold _minor_lock. This
      patch renames dm_find_md to dm_get_md and changes it so that it calls
      dm_get while holding the lock.
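
      A minimal userspace model of the fix (hypothetical names, a pthread mutex
      standing in for _minor_lock, not the dm code itself): the liveness check
      and the reference-count increment both happen while the lookup lock is
      held, so there is no window in which the device can start being freed.

      #include <pthread.h>

      struct md {                             /* stand-in for struct mapped_device */
              int minor;
              int freeing;                    /* stand-in for DMF_FREEING/DMF_DELETING */
              int refcount;                   /* protected by minor_lock in this model */
      };

      static pthread_mutex_t minor_lock = PTHREAD_MUTEX_INITIALIZER;
      static struct md devices[4];            /* toy "minor table" */

      /* Look up a live device by minor and take a reference under one lock. */
      struct md *get_md(int minor)
      {
              struct md *found = NULL;

              pthread_mutex_lock(&minor_lock);
              for (int i = 0; i < 4; i++) {
                      struct md *m = &devices[i];

                      if (m->minor == minor && !m->freeing) {
                              m->refcount++;  /* ref taken before the lock is dropped */
                              found = m;
                              break;
                      }
              }
              pthread_mutex_unlock(&minor_lock);

              return found;                   /* NULL if absent or being freed */
      }
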
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Cc: stable@vger.kernel.org
    • md/raid5: Fix livelock when array is both resyncing and degraded. · 26ac1073
      Committed by NeilBrown
      Commit a7854487 ("md: When RAID5 is dirty, force reconstruct-write
      instead of read-modify-write") causes an RCW cycle to be forced even
      when the array is degraded.  A degraded array cannot support RCW,
      because that requires reading all data blocks, and one may be missing.
      
      Forcing an RCW when it is not possible causes a livelock: the code
      spins, repeatedly deciding to do something that cannot succeed.
      
      So change the condition to only force RCW on non-degraded arrays.
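
      A tiny sketch of the corrected decision (illustrative only, with invented
      names and a simplified cost model, not the actual raid5.c code): RCW is
      forced only when every data block is readable, i.e. the stripe is not
      degraded; otherwise the driver falls back to its normal cost-based choice.

      enum rw_mode { READ_MODIFY_WRITE, RECONSTRUCT_WRITE };

      /* The cost-based choice normally made for a stripe (details elided). */
      enum rw_mode cheapest_mode(int rmw_cost, int rcw_cost)
      {
              return rmw_cost < rcw_cost ? READ_MODIFY_WRITE : RECONSTRUCT_WRITE;
      }

      enum rw_mode choose_mode(int array_dirty, int stripe_degraded,
                               int rmw_cost, int rcw_cost)
      {
              /*
               * Before the fix, a dirty array forced RCW even when degraded,
               * so the request could never make progress (livelock).  Now RCW
               * is forced only when all data blocks can be read.
               */
              if (array_dirty && !stripe_degraded)
                      return RECONSTRUCT_WRITE;
              return cheapest_mode(rmw_cost, rcw_cost);
      }
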
      Reported-by: Manibalan P <pmanibalan@amiindia.co.in>
      Bisected-by: Jes Sorensen <Jes.Sorensen@redhat.com>
      Tested-by: Jes Sorensen <Jes.Sorensen@redhat.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
      Fixes: a7854487
      Cc: stable@vger.kernel.org (v3.7+)
  8. 17 2月, 2015 1 次提交
    • dm crypt: sort writes · b3c5fd30
      Committed by Mikulas Patocka
      Write requests are sorted in a red-black tree and submitted in sorted
      order.
      
      In theory the sorting should be performed by the underlying disk
      scheduler; in practice, however, the disk scheduler only accepts and
      sorts a finite number of requests.  To allow all requests to be
      sorted, dm-crypt needs to implement its own sorting.
      
      The overhead associated with rbtree-based sorting is considered
      negligible, so it is not made conditional.  Even on SSDs, sorting can
      be beneficial, since in-order request dispatch promotes lower-latency
      IO completion to the upper layers.
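
      A small userspace sketch of the idea (POSIX tsearch() stands in for the
      kernel red-black tree; all names are illustrative, not dm-crypt's): writes
      are queued keyed by their starting sector, and an in-order walk then
      dispatches them in ascending sector order.  Note that this toy model
      collapses duplicate sectors, which the real code must not do.

      #include <stdio.h>
      #include <search.h>

      struct write_req {
              unsigned long long sector;      /* sort key: starting sector of the I/O */
      };

      static int cmp_sector(const void *a, const void *b)
      {
              const struct write_req *x = a, *y = b;

              return (x->sector > y->sector) - (x->sector < y->sector);
      }

      /* In-order visit: glibc reports it as "postorder" for inner nodes. */
      static void submit_in_order(const void *nodep, VISIT which, int depth)
      {
              (void)depth;
              if (which == postorder || which == leaf) {
                      const struct write_req *req = *(struct write_req *const *)nodep;

                      printf("submit write at sector %llu\n", req->sector);
              }
      }

      int main(void)
      {
              static struct write_req reqs[] = { {900}, {16}, {512}, {8}, {4096} };
              void *root = NULL;

              for (int i = 0; i < 5; i++)     /* queue phase: arrival order */
                      tsearch(&reqs[i], &root, cmp_sector);

              twalk(root, submit_in_order);   /* dispatch phase: sorted by sector */
              return 0;
      }
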
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>