1. 07 Jan 2014, 12 commits
  2. 14 Dec 2013, 2 commits
    • dm array: fix a reference counting bug in shadow_ablock · ed9571f0
      Joe Thornber committed
      An old array block could have its reference count decremented below
      zero when it was being replaced in the btree by a new array block.
      
      The fix is to increment the old ablock's reference count just before
      inserting a new ablock into the btree.
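      
      The shape of the fix, as a minimal sketch (helper names here are
      hypothetical, not the actual dm-array code):
      
      static int replace_ablock(struct dm_array_info *info, unsigned index,
                                dm_block_t old_b, dm_block_t new_b,
                                dm_block_t *root)
      {
              /*
               * Inserting new_b implicitly decrements the reference count
               * of the block it replaces.  old_b's count has already been
               * dropped during shadowing, so without an extra reference
               * the count would go below zero.
               */
              inc_ablock_ref(info, old_b);                    /* the fix */
      
              return insert_ablock(info, index, new_b, root);
      }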
      Signed-off-by: Joe Thornber <ejt@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Cc: stable@vger.kernel.org # 3.9+
    • dm space map: disallow decrementing a reference count below zero · 5b564d80
      Joe Thornber committed
      The old behaviour, returning -EINVAL if a ref_count of 0 would be
      decremented, was removed in commit f722063e ("dm space map: optimise
      sm_ll_dec and sm_ll_inc").  To fix this regression we return an error
      code from the mutator function pointer passed to sm_ll_mutate() and have
      dec_ref_count() return -EINVAL if the old ref_count is 0.
      
      Add a DMERR to reflect the potential seriousness of this error.
      
      Also, add missing dm_tm_unlock() to sm_ll_mutate()'s error path.
      
      With this fix the following dmts regression test now passes:
       dmtest run --suite cache -n /metadata_use_kernel/
      
      The next patch fixes the higher-level dm-array code that exposed this
      regression.
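      
      A sketch of the restored check (simplified; treat the exact
      signature as illustrative):
      
      static int dec_ref_count(void *context, uint32_t old, uint32_t *new)
      {
              if (!old) {
                      DMERR("unable to decrement a reference count below 0");
                      return -EINVAL;
              }
      
              *new = old - 1;
              return 0;
      }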
      Signed-off-by: Joe Thornber <ejt@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Cc: stable@vger.kernel.org # 3.12+
  3. 11 Dec 2013, 12 commits
    • dm stats: initialize read-only module parameter · 76f5bee5
      Mikulas Patocka committed
      The module parameter stats_current_allocated_bytes in dm-mod is
      read-only.  This parameter informs the user about memory
      consumption.  It is not supposed to be changed by the user.
      
      However, despite being read-only, this parameter can be set on
      modprobe or insmod command line:
      modprobe dm-mod stats_current_allocated_bytes=12345
      
      The kernel doesn't expect this variable to be non-zero at module
      initialization, and if the user sets it, it results in a warning.
      
      This patch initializes the variable in the module init routine, so
      that user-supplied value is ignored.
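      
      The pattern is roughly the following (a sketch mirroring the
      description, not a verbatim copy of dm-stats.c):
      
      static unsigned long stats_current_allocated_bytes;
      module_param(stats_current_allocated_bytes, ulong, S_IRUGO);
      MODULE_PARM_DESC(stats_current_allocated_bytes,
                       "Memory currently used by statistics");
      
      int __init dm_statistics_init(void)
      {
              /* ignore any value supplied on the modprobe command line */
              stats_current_allocated_bytes = 0;
              return 0;
      }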
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Cc: stable@vger.kernel.org # 3.12+
    • dm bufio: initialize read-only module parameters · 4cb57ab4
      Mikulas Patocka committed
      Some module parameters in dm-bufio are read-only. These parameters
      inform the user about memory consumption. They are not supposed to be
      changed by the user.
      
      However, despite being read-only, these parameters can be set on
      modprobe or insmod command line, for example:
      modprobe dm-bufio current_allocated_bytes=12345
      
      The kernel doesn't expect these variables to be non-zero at module
      initialization, and if the user sets them, it results in a BUG.
      
      This patch initializes the variables in the module init routine, so that
      user-supplied values are ignored.
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Cc: stable@vger.kernel.org # 3.2+
    • dm cache: actually resize cache · 08844800
      Vincent Pelletier committed
      Commit f494a9c6 ("dm cache: cache shrinking support") broke cache
      resizing support.
      
      dm_cache_resize() is called with cache->cache_size before it gets
      updated to new_size, so it is a no-op.  But the dm-cache superblock is
      updated with the new_size even though the backing dm-array is not
      resized.  Fix this by passing the new_size to dm_cache_resize().
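      
      In cache_preresume() the change amounts to passing the right size
      (simplified sketch):
      
      int r;
      
      if (from_cblock(new_size) != from_cblock(cache->cache_size)) {
              /* was dm_cache_resize(cache->cmd, cache->cache_size): a no-op */
              r = dm_cache_resize(cache->cmd, new_size);
              if (r) {
                      DMERR("could not resize cache metadata");
                      return r;
              }
      
              cache->cache_size = new_size;
      }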
      Signed-off-by: Vincent Pelletier <plr.vincent@gmail.com>
      Acked-by: Joe Thornber <ejt@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
    • dm cache policy mq: fix promotions to occur as expected · af95e7a6
      Joe Thornber committed
      Micro benchmarks that repeatedly issued IO to a single block were
      failing to cause a promotion from the origin device to the cache.  Fix
      this by not updating the stats during map() if -EWOULDBLOCK will be
      returned.
      
      The mq policy will only update stats, consider migration, etc, once per
      tick period (a unit of time established between dm-cache core and the
      policies).
      
      When the IO thread calls the policy's map method, if it would like to
      migrate the associated block it returns -EWOULDBLOCK, and the IO is
      then handed over to a worker thread which handles the migration.  The
      worker thread calls map again, to check the migration is still needed
      (this avoids a race, among other things).  *BUT*, before this fix, if
      we were still in the same tick period the stats had already been
      updated by the previous map call -- so the migration would no longer
      be requested.
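      
      Illustratively (names and structure are hypothetical, not the mq
      policy's actual internals):
      
      static int map_block(struct policy_state *s, uint64_t oblock,
                           bool can_migrate, struct policy_result *result)
      {
              if (should_promote(s, oblock)) {
                      if (!can_migrate)
                              /* defer to the worker; leave the stats alone so
                               * its map() call still sees a promotable block */
                              return -EWOULDBLOCK;
      
                      update_stats(s, oblock);
                      result->op = POLICY_NEW;        /* promote to the cache */
                      return 0;
              }
      
              update_stats(s, oblock);
              result->op = POLICY_MISS;               /* remap to the origin */
              return 0;
      }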
      Signed-off-by: Joe Thornber <ejt@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
    • dm thin: allow pool in read-only mode to transition to read-write mode · 9b7aaa64
      Joe Thornber committed
      A thin-pool may be in read-only mode because the pool's data or metadata
      space was exhausted.  To allow for recovery, by adding more space to the
      pool, we must allow a pool to transition from PM_READ_ONLY to PM_WRITE
      mode.  Otherwise, running out of space will render the pool permanently
      read-only.
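      
      Simplified, the resume path now permits this transition (a sketch,
      not the exact dm-thin code):
      
      if (get_pool_mode(pool) == PM_READ_ONLY && new_mode == PM_WRITE) {
              /* the user added data/metadata space; re-enable writes */
              dm_pool_metadata_read_write(pool->pmd);
              set_pool_mode(pool, PM_WRITE);
      }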
      Signed-off-by: Joe Thornber <ejt@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Cc: stable@vger.kernel.org
    • dm thin: re-establish read-only state when switching to fail mode · 5383ef3a
      Joe Thornber committed
      If the thin-pool has transitioned to fail mode and the thin-pool's
      table is then reloaded for some reason, the new table's default pool
      mode would be read-write, though it will transition back to fail mode
      during resume.
      
      When the pool mode transitions directly from PM_WRITE to PM_FAIL we
      need to re-establish the intermediate read-only state in both the
      metadata and the persistent-data block manager (as is done in the
      normal pool mode transition sequence: PM_WRITE -> PM_READ_ONLY ->
      PM_FAIL).
      Signed-off-by: Joe Thornber <ejt@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Cc: stable@vger.kernel.org
    • dm thin: always fallback the pool mode if commit fails · 020cc3b5
      Joe Thornber committed
      Rename commit_or_fallback() to commit(), so that all the previous
      callers of commit() now trigger a pool mode fallback if the commit
      fails.
      
      Also, check the error returned from commit() in alloc_data_block().
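      
      A sketch of the renamed helper (close to the described change, but
      illustrative):
      
      static int commit(struct pool *pool)
      {
              int r = dm_pool_commit_metadata(pool->pmd);
              if (r) {
                      DMERR("commit failed: error = %d", r);
                      set_pool_mode(pool, PM_READ_ONLY);      /* fall back */
              }
      
              return r;
      }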
      Signed-off-by: Joe Thornber <ejt@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Cc: stable@vger.kernel.org
    • dm thin: switch to read-only mode if metadata space is exhausted · 4a02b34e
      Mike Snitzer committed
      Switch the thin pool to read-only mode in alloc_data_block() if
      dm_pool_alloc_data_block() fails because the pool's metadata space is
      exhausted.
      
      Differentiate between data and metadata space in messages about no
      free space available.
      
      This issue was noticed with the device-mapper-test-suite using:
      dmtest run --suite thin-provisioning -n /exhausting_metadata_space_causes_fail_mode/
      
      The quantity of errors logged in this case must be reduced.
      
      before patch:
      
      device-mapper: thin: 253:4: reached low water mark for metadata device: sending event.
      device-mapper: space map metadata: unable to allocate new metadata block
      device-mapper: space map common: dm_tm_shadow_block() failed
      device-mapper: space map metadata: unable to allocate new metadata block
      device-mapper: space map common: dm_tm_shadow_block() failed
      device-mapper: space map metadata: unable to allocate new metadata block
      device-mapper: space map common: dm_tm_shadow_block() failed
      device-mapper: space map metadata: unable to allocate new metadata block
      device-mapper: space map common: dm_tm_shadow_block() failed
      device-mapper: space map metadata: unable to allocate new metadata block
      device-mapper: space map common: dm_tm_shadow_block() failed
      <snip ... these repeat for a _very_ long while ... >
      device-mapper: space map metadata: unable to allocate new metadata block
      device-mapper: thin: 253:4: commit failed: error = -28
      device-mapper: thin: 253:4: switching pool to read-only mode
      
      after patch:
      
      device-mapper: thin: 253:4: reached low water mark for metadata device: sending event.
      device-mapper: space map metadata: unable to allocate new metadata block
      device-mapper: thin: 253:4: no free metadata space available.
      device-mapper: thin: 253:4: switching pool to read-only mode
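      
      A sketch of the check that produces the "after" log above
      (illustrative, not verbatim):
      
      r = dm_pool_alloc_data_block(pool->pmd, result);
      if (r) {
              /* data space was checked earlier, so -ENOSPC here means
               * the metadata device is out of space */
              if (r == -ENOSPC) {
                      DMERR("%s: no free metadata space available.",
                            dm_device_name(pool->pool_md));
                      set_pool_mode(pool, PM_READ_ONLY);
              }
              return r;
      }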
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Acked-by: Joe Thornber <ejt@redhat.com>
      Cc: stable@vger.kernel.org
    • dm thin: switch to read-only mode if a mapping insert fails · fafc7a81
      Joe Thornber committed
      Switch the thin pool to read-only mode when dm_thin_insert_block() fails
      since there is little reason to expect the cause of the failure to be
      resolved without further action by user space.
      
      This issue was noticed with the device-mapper-test-suite using:
      dmtest run --suite thin-provisioning -n /exhausting_metadata_space_causes_fail_mode/
      
      The quantity of errors logged in this case must be reduced.
      
      before patch:
      
      device-mapper: thin: dm_thin_insert_block() failed
      device-mapper: space map metadata: unable to allocate new metadata block
      device-mapper: thin: dm_thin_insert_block() failed
      device-mapper: space map metadata: unable to allocate new metadata block
      device-mapper: thin: dm_thin_insert_block() failed
      device-mapper: space map metadata: unable to allocate new metadata block
      device-mapper: thin: dm_thin_insert_block() failed
      device-mapper: space map metadata: unable to allocate new metadata block
      device-mapper: thin: dm_thin_insert_block() failed
      device-mapper: space map metadata: unable to allocate new metadata block
      device-mapper: thin: dm_thin_insert_block() failed
      device-mapper: space map metadata: unable to allocate new metadata block
      device-mapper: thin: dm_thin_insert_block() failed
      device-mapper: space map metadata: unable to allocate new metadata block
      device-mapper: thin: dm_thin_insert_block() failed
      device-mapper: space map metadata: unable to allocate new metadata block
      device-mapper: thin: dm_thin_insert_block() failed
      device-mapper: space map metadata: unable to allocate new metadata block
      device-mapper: thin: dm_thin_insert_block() failed
      device-mapper: space map metadata: unable to allocate new metadata block
      device-mapper: space map metadata: unable to allocate new metadata block
      device-mapper: space map metadata: unable to allocate new metadata block
      device-mapper: space map metadata: unable to allocate new metadata block
      device-mapper: space map metadata: unable to allocate new metadata block
      device-mapper: space map metadata: unable to allocate new metadata block
      <snip ... these repeat for a long while ... >
      device-mapper: space map metadata: unable to allocate new metadata block
      device-mapper: space map common: dm_tm_shadow_block() failed
      device-mapper: thin: 253:4: no free metadata space available.
      device-mapper: thin: 253:4: switching pool to read-only mode
      
      after patch:
      
      device-mapper: space map metadata: unable to allocate new metadata block
      device-mapper: thin: 253:4: dm_thin_insert_block() failed: error = -28
      device-mapper: thin: 253:4: switching pool to read-only mode
      Signed-off-by: Joe Thornber <ejt@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Cc: stable@vger.kernel.org
    • dm space map metadata: return on failure in sm_metadata_new_block · f62b6b8f
      Mike Snitzer committed
      Commit 2fc48021 ("dm persistent metadata: add space map threshold
      callback") introduced a regression to the metadata block allocation
      path that resulted in errors being ignored.  This regression was
      uncovered by running the following device-mapper-test-suite test:
      dmtest run --suite thin-provisioning -n /exhausting_metadata_space_causes_fail_mode/
      
      The ignored error codes in sm_metadata_new_block() could crash the
      kernel through use of either the dm-thin or dm-cache targets, e.g.:
      
      device-mapper: thin: 253:4: reached low water mark for metadata device: sending event.
      device-mapper: space map metadata: unable to allocate new metadata block
      general protection fault: 0000 [#1] SMP
      ...
      Workqueue: dm-thin do_worker [dm_thin_pool]
      task: ffff880035ce2ab0 ti: ffff88021a054000 task.ti: ffff88021a054000
      RIP: 0010:[<ffffffffa0331385>]  [<ffffffffa0331385>] metadata_ll_load_ie+0x15/0x30 [dm_persistent_data]
      RSP: 0018:ffff88021a055a68  EFLAGS: 00010202
      RAX: 003fc8243d212ba0 RBX: ffff88021a780070 RCX: ffff88021a055a78
      RDX: ffff88021a055a78 RSI: 0040402222a92a80 RDI: ffff88021a780070
      RBP: ffff88021a055a68 R08: ffff88021a055ba4 R09: 0000000000000010
      R10: 0000000000000000 R11: 00000002a02e1000 R12: ffff88021a055ad4
      R13: 0000000000000598 R14: ffffffffa0338470 R15: ffff88021a055ba4
      FS:  0000000000000000(0000) GS:ffff88033fca0000(0000) knlGS:0000000000000000
      CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
      CR2: 00007f467c0291b8 CR3: 0000000001a0b000 CR4: 00000000000007e0
      Stack:
       ffff88021a055ab8 ffffffffa0332020 ffff88021a055b30 0000000000000001
       ffff88021a055b30 0000000000000000 ffff88021a055b18 0000000000000000
       ffff88021a055ba4 ffff88021a055b98 ffff88021a055ae8 ffffffffa033304c
      Call Trace:
       [<ffffffffa0332020>] sm_ll_lookup_bitmap+0x40/0xa0 [dm_persistent_data]
       [<ffffffffa033304c>] sm_metadata_count_is_more_than_one+0x8c/0xc0 [dm_persistent_data]
       [<ffffffffa0333825>] dm_tm_shadow_block+0x65/0x110 [dm_persistent_data]
       [<ffffffffa0331b00>] sm_ll_mutate+0x80/0x300 [dm_persistent_data]
       [<ffffffffa0330e60>] ? set_ref_count+0x10/0x10 [dm_persistent_data]
       [<ffffffffa0331dba>] sm_ll_inc+0x1a/0x20 [dm_persistent_data]
       [<ffffffffa0332270>] sm_disk_new_block+0x60/0x80 [dm_persistent_data]
       [<ffffffff81520036>] ? down_write+0x16/0x40
       [<ffffffffa001e5c4>] dm_pool_alloc_data_block+0x54/0x80 [dm_thin_pool]
       [<ffffffffa001b23c>] alloc_data_block+0x9c/0x130 [dm_thin_pool]
       [<ffffffffa001c27e>] provision_block+0x4e/0x180 [dm_thin_pool]
       [<ffffffffa001fe9a>] ? dm_thin_find_block+0x6a/0x110 [dm_thin_pool]
       [<ffffffffa001c57a>] process_bio+0x1ca/0x1f0 [dm_thin_pool]
       [<ffffffff8111e2ed>] ? mempool_free+0x8d/0xa0
       [<ffffffffa001d755>] process_deferred_bios+0xc5/0x230 [dm_thin_pool]
       [<ffffffffa001d911>] do_worker+0x51/0x60 [dm_thin_pool]
       [<ffffffff81067872>] process_one_work+0x182/0x3b0
       [<ffffffff81068c90>] worker_thread+0x120/0x3a0
       [<ffffffff81068b70>] ? manage_workers+0x160/0x160
       [<ffffffff8106eb2e>] kthread+0xce/0xe0
       [<ffffffff8106ea60>] ? kthread_freezable_should_stop+0x70/0x70
       [<ffffffff8152af6c>] ret_from_fork+0x7c/0xb0
       [<ffffffff8106ea60>] ? kthread_freezable_should_stop+0x70/0x70
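      
      The shape of the fix, per the description above (illustrative):
      
      static int sm_metadata_new_block(struct dm_space_map *sm, dm_block_t *b)
      {
              int r = sm_metadata_new_block_(sm, b);
      
              if (r) {
                      DMERR("unable to allocate new metadata block");
                      return r;       /* the fix: don't fall through and hand
                                         the caller an unallocated block */
              }
      
              return check_threshold(sm);     /* hypothetical follow-on step */
      }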
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Acked-by: Joe Thornber <ejt@redhat.com>
      Cc: stable@vger.kernel.org # v3.10+
    • dm table: fail dm_table_create on dm_round_up overflow · 5b2d0657
      Mikulas Patocka committed
      The dm_round_up function may overflow to zero.  In this case,
      dm_table_create() must fail rather than go on to allocate an empty array
      with alloc_targets().
      
      This fixes a possible memory corruption that could be caused by passing
      too large a number in "param->target_count".
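      
      dm_round_up(n, sz) is dm_div_up(n, sz) * sz, so a sufficiently large
      n wraps to zero; the guard is simply (placement and errno choice
      illustrative):
      
      num_targets = dm_round_up(num_targets, KEYS_PER_NODE);
      if (!num_targets) {
              /* dm_round_up() overflowed: fail rather than allocate an
               * empty target array that later code assumes is non-empty */
              kfree(t);
              return -ENOMEM;
      }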
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Cc: stable@vger.kernel.org
    • dm snapshot: avoid snapshot space leak on crash · 230c83af
      Mikulas Patocka committed
      There is a possible leak of snapshot space in case of crash.
      
      The reason for space leaking is that chunks in the snapshot device are
      allocated sequentially, but they are finished (and stored in the metadata)
      out of order, depending on the order in which copying finished.
      
      For example, suppose that the metadata contains the following records:
      SUPERBLOCK
      METADATA (blocks 0 ... 250)
      DATA 0
      DATA 1
      DATA 2
      ...
      DATA 250
      
      Now suppose that you allocate 10 new data blocks 251-260, and that
      copying of these blocks finishes out of order (block 260 finishing
      first and block 251 last). Now the snapshot device looks like this:
      SUPERBLOCK
      METADATA (blocks 0 ... 250, 260, 259, 258, 257, 256)
      DATA 0
      DATA 1
      DATA 2
      ...
      DATA 250
      DATA 251
      DATA 252
      DATA 253
      DATA 254
      DATA 255
      METADATA (blocks 255, 254, 253, 252, 251)
      DATA 256
      DATA 257
      DATA 258
      DATA 259
      DATA 260
      
      Now, if the machine crashes after writing the first metadata block but
      before writing the second metadata block, the space for areas DATA
      251-255 is leaked; it contains no valid data and will never be used in
      the future.
      
      This patch makes dm-snapshot complete exceptions in the same order they
      were allocated, thus fixing this bug.
      
      Note: when backporting this patch to the stable kernel, change the version
      field in the following way:
      * if version in the stable kernel is {1, 11, 1}, change it to {1, 12, 0}
      * if version in the stable kernel is {1, 10, 0} or {1, 10, 1}, change it
        to {1, 10, 2}
      Userspace reads the version to determine if the bug was fixed, so the
      version change is needed.
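      
      Illustratively, completion is serialized on a queue kept in
      allocation order (hypothetical structure, not the actual dm-snap
      code):
      
      static void maybe_complete_in_order(struct snapshot_state *s)
      {
              struct pending_exception *pe, *tmp;
      
              /* s->in_progress is kept in chunk-allocation order */
              list_for_each_entry_safe(pe, tmp, &s->in_progress, list) {
                      if (!pe->copy_done)
                              break;  /* an earlier copy is still running */
      
                      commit_exception_metadata(s, pe);
                      list_del(&pe->list);
              }
      }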
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Cc: stable@vger.kernel.org
  4. 28 Nov 2013, 3 commits
  5. 25 Nov 2013, 1 commit
  6. 19 Nov 2013, 10 commits
    • md/raid5: Use conf->device_lock to protect changing of multi-thread resources. · 60aaf933
      majianpeng committed
      When group_thread_cnt is changed via the sysfs entry, the kernel can oops.
      
      The kernel messages are:
      [  135.299021] BUG: unable to handle kernel NULL pointer dereference at           (null)
      [  135.299073] IP: [<ffffffff815188ab>] handle_active_stripes+0x32b/0x440
      [  135.299107] PGD 0
      [  135.299122] Oops: 0000 [#1] SMP
      [  135.299144] Modules linked in: netconsole e1000e ptp pps_core
      [  135.299188] CPU: 3 PID: 2225 Comm: md0_raid5 Not tainted 3.12.0+ #24
      [  135.299214] Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./To be filled by O.E.M., BIOS 080015  11/09/2011
      [  135.299255] task: ffff8800b9638f80 ti: ffff8800b77a4000 task.ti: ffff8800b77a4000
      [  135.299283] RIP: 0010:[<ffffffff815188ab>]  [<ffffffff815188ab>] handle_active_stripes+0x32b/0x440
      [  135.299323] RSP: 0018:ffff8800b77a5c48  EFLAGS: 00010002
      [  135.299344] RAX: ffff880037bb5c70 RBX: 0000000000000000 RCX: 0000000000000008
      [  135.299371] RDX: ffff880037bb5cb8 RSI: 0000000000000001 RDI: ffff880037bb5c00
      [  135.299398] RBP: ffff8800b77a5d08 R08: 0000000000000001 R09: 0000000000000000
      [  135.299425] R10: ffff8800b77a5c98 R11: 00000000ffffffff R12: ffff880037bb5c00
      [  135.299452] R13: 0000000000000000 R14: 0000000000000000 R15: ffff880037bb5c70
      [  135.299479] FS:  0000000000000000(0000) GS:ffff88013fd80000(0000) knlGS:0000000000000000
      [  135.299510] CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
      [  135.299532] CR2: 0000000000000000 CR3: 0000000001c0b000 CR4: 00000000000407e0
      [  135.299559] Stack:
      [  135.299570]  ffff8800b77a5c88 ffffffff8107383e ffff8800b77a5c88 ffff880037a64300
      [  135.299611]  000000000000ec08 ffff880037bb5cb8 ffff8800b77a5c98 ffffffffffffffd8
      [  135.299654]  000000000000ec08 ffff880037bb5c60 ffff8800b77a5c98 ffff8800b77a5c98
      [  135.299696] Call Trace:
      [  135.299711]  [<ffffffff8107383e>] ? __wake_up+0x4e/0x70
      [  135.299733]  [<ffffffff81518f88>] raid5d+0x4c8/0x680
      [  135.299756]  [<ffffffff817174ed>] ? schedule_timeout+0x15d/0x1f0
      [  135.299781]  [<ffffffff81524c9f>] md_thread+0x11f/0x170
      [  135.299804]  [<ffffffff81069cd0>] ? wake_up_bit+0x40/0x40
      [  135.299826]  [<ffffffff81524b80>] ? md_rdev_init+0x110/0x110
      [  135.299850]  [<ffffffff81069656>] kthread+0xc6/0xd0
      [  135.299871]  [<ffffffff81069590>] ? kthread_freezable_should_stop+0x70/0x70
      [  135.299899]  [<ffffffff81722ffc>] ret_from_fork+0x7c/0xb0
      [  135.299923]  [<ffffffff81069590>] ? kthread_freezable_should_stop+0x70/0x70
      [  135.299951] Code: ff ff ff 0f 84 d7 fe ff ff e9 5c fe ff ff 66 90 41 8b b4 24 d8 01 00 00 45 31 ed 85 f6 0f 8e 7b fd ff ff 49 8b 9c 24 d0 01 00 00 <48> 3b 1b 49 89 dd 0f 85 67 fd ff ff 48 8d 43 28 31 d2 eb 17 90
      [  135.300005] RIP  [<ffffffff815188ab>] handle_active_stripes+0x32b/0x440
      [  135.300005]  RSP <ffff8800b77a5c48>
      [  135.300005] CR2: 0000000000000000
      [  135.300005] ---[ end trace 504854e5bb7562ed ]---
      [  135.300005] Kernel panic - not syncing: Fatal exception
      
      This is because raid5d() can be running while the multi-thread
      resources are changed via sysfs, so we need to provide locking.
      
      mddev->device_lock is suitable, but we cannot simply call
      alloc_thread_groups() under this lock because we cannot allocate
      memory while holding a spinlock.
      So change alloc_thread_groups() to allocate and return the data
      structures, then raid5_store_group_thread_cnt() can take the lock
      while updating the pointers to the data structures.
      
      This fixes a bug introduced in 3.12 and so is suitable for the 3.12.x
      stable series.
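      
      The resulting pattern in raid5_store_group_thread_cnt() (simplified
      sketch):
      
      struct r5worker_group *new_groups, *old_groups;
      int group_cnt, worker_cnt_per_group, err;
      
      err = alloc_thread_groups(conf, new_cnt, &group_cnt,
                                &worker_cnt_per_group, &new_groups);
      if (err)
              return err;
      
      spin_lock_irq(&conf->device_lock);
      old_groups = conf->worker_groups;
      conf->worker_groups = new_groups;  /* raid5d sees old or new, never half */
      conf->group_cnt = group_cnt;
      conf->worker_cnt_per_group = worker_cnt_per_group;
      spin_unlock_irq(&conf->device_lock);
      
      kfree(old_groups);                 /* free outside the spinlock */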
      
      Fixes: b721420e
      Cc: stable@vger.kernel.org (3.12)
      Signed-off-by: Jianpeng Ma <majianpeng@gmail.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
      Reviewed-by: Shaohua Li <shli@kernel.org>
    • md/raid5: Before freeing old multi-thread workers, flush them. · d206dcfa
      majianpeng committed
      When changing group_thread_cnt via the sysfs entry, the kernel can oops.
      
      The kernel messages are:
      [  740.961389] BUG: unable to handle kernel NULL pointer dereference at 0000000000000008
      [  740.961444] IP: [<ffffffff81062570>] process_one_work+0x30/0x500
      [  740.961476] PGD b9013067 PUD b651e067 PMD 0
      [  740.961503] Oops: 0000 [#1] SMP
      [  740.961525] Modules linked in: netconsole e1000e ptp pps_core
      [  740.961577] CPU: 0 PID: 3683 Comm: kworker/u8:5 Not tainted 3.12.0+ #23
      [  740.961602] Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./To be filled by O.E.M., BIOS 080015  11/09/2011
      [  740.961646] task: ffff88013abe0000 ti: ffff88013a246000 task.ti: ffff88013a246000
      [  740.961673] RIP: 0010:[<ffffffff81062570>]  [<ffffffff81062570>] process_one_work+0x30/0x500
      [  740.961708] RSP: 0018:ffff88013a247e08  EFLAGS: 00010086
      [  740.961730] RAX: ffff8800b912b400 RBX: ffff88013a61e680 RCX: ffff8800b912b400
      [  740.961757] RDX: ffff8800b912b600 RSI: ffff8800b912b600 RDI: ffff88013a61e680
      [  740.961782] RBP: ffff88013a247e48 R08: ffff88013a246000 R09: 000000000002c09d
      [  740.961808] R10: 000000000000010f R11: 0000000000000000 R12: ffff88013b00cc00
      [  740.961833] R13: 0000000000000000 R14: ffff88013b00cf80 R15: ffff88013a61e6b0
      [  740.961861] FS:  0000000000000000(0000) GS:ffff88013fc00000(0000) knlGS:0000000000000000
      [  740.961893] CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
      [  740.962001] CR2: 00000000000000b8 CR3: 00000000b24fe000 CR4: 00000000000407f0
      [  740.962001] Stack:
      [  740.962001]  0000000000000008 ffff8800b912b600 ffff88013b00cc00 ffff88013a61e680
      [  740.962001]  ffff88013b00cc00 ffff88013b00cc18 ffff88013b00cf80 ffff88013a61e6b0
      [  740.962001]  ffff88013a247eb8 ffffffff810639c6 0000000000012a80 ffff88013a247fd8
      [  740.962001] Call Trace:
      [  740.962001]  [<ffffffff810639c6>] worker_thread+0x206/0x3f0
      [  740.962001]  [<ffffffff810637c0>] ? manage_workers+0x2c0/0x2c0
      [  740.962001]  [<ffffffff81069656>] kthread+0xc6/0xd0
      [  740.962001]  [<ffffffff81069590>] ? kthread_freezable_should_stop+0x70/0x70
      [  740.962001]  [<ffffffff81722ffc>] ret_from_fork+0x7c/0xb0
      [  740.962001]  [<ffffffff81069590>] ? kthread_freezable_should_stop+0x70/0x70
      [  740.962001] Code: 89 e5 41 57 41 56 41 55 45 31 ed 41 54 53 48 89 fb 48 83 ec 18 48 8b 06 4c 8b 67 48 48 89 c1 30 c9 a8 04 4c 0f 45 e9 80 7f 58 00 <49> 8b 45 08 44 8b b0 00 01 00 00 78 0c 41 f6 44 24 10 04 0f 84
      [  740.962001] RIP  [<ffffffff81062570>] process_one_work+0x30/0x500
      [  740.962001]  RSP <ffff88013a247e08>
      [  740.962001] CR2: 0000000000000008
      [  740.962001] ---[ end trace 39181460000748de ]---
      [  740.962001] Kernel panic - not syncing: Fatal exception
      
      This can happen if there are some stripes left, fewer than
      MAX_STRIPE_BATCH, and a worker is queued to handle them.
      But before the queued raid5_do_work() runs, raid5d handles those
      stripes, making conf->active_stripes zero.
      So mddev_suspend() can return.
      We might then free the old worker resources before the queued
      raid5_do_work() has handled them.  When it runs, it crashes.
      
      	raid5d()		raid5_store_group_thread_cnt()
      	queue_work		mddev_suspend()
      				handle_stripes
      				active_stripe=0
      				free(old worker resources)
      	process_one_work
      	raid5_do_work
      
      To avoid this, we simply flush the workqueue before freeing the old
      worker resources.
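      
      Roughly (simplified; raid5_wq is raid5's existing workqueue):
      
      mddev_suspend(mddev);
      flush_workqueue(raid5_wq);      /* the fix: drain any queued
                                         raid5_do_work() before the old
                                         resources are freed */
      /* ... swap in the new worker groups and free the old ones ... */
      mddev_resume(mddev);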
      
      This fixes a bug introduced in 3.12 so is suitable for the 3.12.x
      stable series.
      
      Cc: stable@vger.kernel.org (3.12)
      Fixes: b721420e
      Signed-off-by: Jianpeng Ma <majianpeng@gmail.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
      Reviewed-by: Shaohua Li <shli@kernel.org>
    • md/raid5: For stripes with R5_ReadNoMerge, replace REQ_FLUSH with REQ_NOMERGE. · e59aa23f
      majianpeng committed
      R5_ReadNoMerge means a bio can't be merged with other bios or
      requests.  REQ_FLUSH was used to achieve this, but REQ_NOMERGE does
      the same job directly.
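      
      The change in ops_run_io() is essentially (simplified):
      
      if (test_bit(R5_ReadNoMerge, &sh->dev[i].flags))
              bi->bi_rw |= REQ_NOMERGE;       /* was: REQ_FLUSH */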
      Signed-off-by: Jianpeng Ma <majianpeng@gmail.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
    • raid1: Rewrite the implementation of iobarrier. · 79ef3a8a
      majianpeng committed
      There is an iobarrier in raid1 because of contention between normal IO and
      resync IO.  It suspends all normal IO when resync/recovery happens.
      
      However if normal IO is outside the resync window, there is no
      contention. So this patch changes the barrier mechanism to only block
      IO that could contend with the resync that is currently happening.
      
      We partition the whole space into five parts.
      |---------|-----------|------------|----------------|-------|
              start   next_resync   start_next_window    end_window
      
      start + RESYNC_WINDOW = next_resync
      next_resync + NEXT_NORMALIO_DISTANCE = start_next_window
      start_next_window + NEXT_NORMALIO_DISTANCE = end_window
      
      Firstly we introduce some concepts:
      
      1 - RESYNC_WINDOW: For resync, there are 32 resync requests at most at the
            same time. A sync request is RESYNC_BLOCK_SIZE(64*1024).
            So the RESYNC_WINDOW is 32 * RESYNC_BLOCK_SIZE, that is 2MB.
      2 - NEXT_NORMALIO_DISTANCE: the distance between next_resync
            and start_next_window.  It also indicates the distance between
            start_next_window and end_window.
            It is currently 3 * RESYNC_WINDOW_SIZE but could be tuned if
            this turned out not to be optimal.
      3 - next_resync: the next sector at which we will do sync IO.
      4 - start: a position which is at most RESYNC_WINDOW before
            next_resync.
      5 - start_next_window:  a position which is NEXT_NORMALIO_DISTANCE
            beyond next_resync.  Normal-io after this position doesn't need to
            wait for resync-io to complete.
      6 - end_window:  a position which is 2 * NEXT_NORMALIO_DISTANCE beyond
            next_resync.  This also doesn't need to wait, but is counted
            differently.
      7 - current_window_requests:  the count of normalIO between
            start_next_window and end_window.
      8 - next_window_requests: the count of normalIO after end_window.
      
      NormalIO will be partitioned into four types:
      
      NormIO1:  the end sector of the bio is smaller than or equal to start
      NormIO2:  the start sector of the bio is larger than or equal to end_window
      NormIO3:  the start sector of the bio is larger than or equal to
                start_next_window.
      NormIO4:  the bio lies between start and start_next_window
      
      |--------|-----------|--------------------|----------------|-------------|
          | start   |   next_resync   |  start_next_window   |  end_window |
       NormIO1   NormIO4            NormIO4                NormIO3      NormIO2
      
      For NormIO1, we don't need any io barrier.
      For NormIO4, we use a similar approach to the original iobarrier
          mechanism.  The normalIO and resyncIO must be kept separate.
      For NormIO2/3, we add two fields to struct r1conf: "current_window_requests"
          and "next_window_requests". They indicate the count of active
          requests in the two windows.
          For these, we don't wait for resync io to complete.
      
      For the resync action, if there are any NormIO4s, we must wait for
      them.  If not, we can proceed.
      But if the resync action reaches start_next_window and
      current_window_requests > 0 (that is, there are NormIO3s), we must
      wait until current_window_requests becomes zero.
      When current_window_requests becomes zero, start_next_window also
      moves forward, and current_window_requests is replaced by
      next_window_requests.
      
      There is a question of when and how a bio changes from NormIO2 to
      NormIO3, since only then can the sync action progress.
      
      We add a field in struct r1conf "start_next_window".
      
      A: if start_next_window == MaxSector, it means there are no NormIO2/3.
         So start_next_window = next_resync + NEXT_NORMALIO_DISTANCE
      B: if current_window_requests == 0 && next_window_requests != 0, it
         means start_next_window moves to end_window
      
      There is another problem: how to differentiate between an old NormIO2
      (which has since become NormIO3) and a new NormIO2.
      For example, suppose there are many NormIO2 bios and one NormIO3 bio;
      the NormIO3 bio completes first, so the NormIO2 bios become NormIO3.
      
      We add a field in struct r1bio "start_next_window".
      This is used to record the position conf->start_next_window when the call
      to wait_barrier() is made in make_request().
      
      In allow_barrier(), we check the conf->start_next_window.
      If r1bio->start_next_window == conf->start_next_window, it means
      there was no transition between NormIO2 and NormIO3.
      If r1bio->start_next_window != conf->start_next_window, it means
      there was a transition between NormIO2 and NormIO3, and there can
      only have been one transition.  So the bio is an old NormIO2.
      
      For one bio, there may be many r1bios, so we make sure all of the
      r1bio->start_next_window values are the same.
      If we meet a blocked_dev in make_request(), it must call allow_barrier
      and then wait_barrier again, so the earlier and later values of
      conf->start_next_window may differ.
      If the r1bios of one bio had different start_next_window values, the
      window accounting for that bio would depend on which value the last
      r1bio carried, and could go wrong.  To avoid this, we must wait for
      the previous r1bios to complete.
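      
      Putting the window logic together, the wait decision looks roughly
      like this (heavily simplified; field names follow the description
      above rather than the actual r1conf layout):
      
      static bool need_to_wait_for_sync(struct r1conf *conf, struct bio *bio)
      {
              if (conf->array_frozen)
                      return true;    /* freeze_array() blocks everything */
              if (!conf->barrier)
                      return false;   /* no resync in progress */
              if (bio_end_sector(bio) <= conf->resync_start)
                      return false;   /* NormIO1: behind the resync window */
              if (bio->bi_sector >= conf->start_next_window)
                      return false;   /* NormIO2/3: counted per window instead */
      
              return true;            /* NormIO4: contends with resync IO */
      }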
      Signed-off-by: Jianpeng Ma <majianpeng@gmail.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
    • raid1: Add some macros to make the code clearer. · 8e005f7c
      majianpeng committed
      In a subsequent patch, we'll use some const parameters.
      Using macros will make the code clearer.
      Signed-off-by: Jianpeng Ma <majianpeng@gmail.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
    • raid1: Replace raise_barrier/lower_barrier with freeze_array/unfreeze_array when reconfiguring the array. · 07169fd4
      majianpeng committed
      
      We used to use raise_barrier to suspend normal IO while we
      reconfigured the array.  However raise_barrier will soon suspend only
      some normal IO, not all, so we need something else: change it to use
      freeze_array.
      freeze_array suspends not only normal io but also resync io.  For the
      places that called raise_barrier for reconfiguration, that isn't a
      problem.
      Signed-off-by: Jianpeng Ma <majianpeng@gmail.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
    • raid1: Add a field array_frozen to indicate whether the array is frozen. · b364e3d0
      majianpeng committed
      The following patch will rewrite the barrier handling between normal
      IO and resync IO, so we add a field to indicate whether the raid1
      array is in the frozen state.
      Signed-off-by: Jianpeng Ma <majianpeng@gmail.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md: Convert use of typedef ctl_table to struct ctl_table · 82592c38
      Joe Perches committed
      This typedef is unnecessary and should just be removed.
      Signed-off-by: Joe Perches <joe@perches.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md/raid5: avoid deadlock when raid5 array has unack badblocks during md_stop_writes. · 30b8feb7
      NeilBrown committed
      When raid5 recovery hits a fresh badblock, the badblock will be
      flagged as an unacknowledged ("unack") badblock until md_update_sb()
      is called.
      But md_stop takes the reconfig lock, which means raid5d can't call
      md_update_sb() in md_check_recovery(); the badblock stays unack, so
      the raid5d thread enters an infinite loop and md_stop_writes() can
      never stop sync_thread.  This causes a deadlock.
      
      To solve this, when the STOP_ARRAY ioctl is issued and sync_thread is
      running, we need to set the MD_RECOVERY_FROZEN and MD_RECOVERY_INTR
      flags in mddev->recovery and wait for sync_thread to stop before we
      (re)take the reconfig lock.
      
      This requires that raid5 reshape_request notices MD_RECOVERY_INTR
      (which it probably should have noticed anyway) and stops waiting for a
      metadata update in that case.
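      
      A sketch of the ioctl-side ordering (simplified from the
      description):
      
      set_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
      set_bit(MD_RECOVERY_INTR, &mddev->recovery);
      if (mddev->sync_thread)
              wake_up_process(mddev->sync_thread->tsk);
      
      /* wait without the reconfig lock, so raid5d can still run
       * md_check_recovery() and get the badblocks acknowledged */
      wait_event(resync_wait,
                 !test_bit(MD_RECOVERY_RUNNING, &mddev->recovery));
      
      mddev_lock(mddev);      /* only now (re)take the reconfig lock */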
      Reported-by: Jianpeng Ma <majianpeng@gmail.com>
      Reported-by: Bian Yu <bianyu@kedacom.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md: use MD_RECOVERY_INTR instead of kthread_should_stop in resync thread. · c91abf5a
      NeilBrown committed
      We currently use kthread_should_stop() in various places in the
      sync/reshape code to abort early.
      However some places set MD_RECOVERY_INTR but don't immediately call
      md_reap_sync_thread() (and we will shortly get another one).
      When this happens we are relying on md_check_recovery() to reap the
      thread, and that only happens when the thread finishes normally.
      So MD_RECOVERY_INTR must lead to a normal finish without the
      kthread_should_stop() test.
      
      So replace all relevant tests, and be more careful when the thread is
      interrupted not to acknowledge the latest step in a reshape as it may
      not be fully committed yet.
      
      Also add a test on MD_RECOVERY_INTR in the 'is_mddev_idle' loop
      so we don't have to wait for the speed to drop before we can abort.
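      
      The replacement pattern in the resync/reshape loops (illustrative):
      
      while (sector_nr < max_sectors) {
              if (test_bit(MD_RECOVERY_INTR, &mddev->recovery))
                      break;  /* was: if (kthread_should_stop()) ... */
      
              /* ... issue the next chunk of resync/reshape IO ... */
      }
      /* fall through to the normal finish so md_check_recovery()
       * can reap the thread */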
      Signed-off-by: NeilBrown <neilb@suse.de>