1. 11 July 2017, 1 commit
    • Raid5 should update rdev->sectors after reshape · b5d27718
      Authored by Xiao Ni
      Consider a raid5 md device built from disks of which only part of the capacity
      is used: for example, each device is 5G but only 3G of each is used to create
      the array. Change the chunk size and wait for the reshape to finish. After the
      reshape, stop the array and assemble it again. Assembly fails.
      mdadm -CR /dev/md0 -l5 -n3 /dev/loop[0-2] --size=3G --chunk=32 --assume-clean
      mdadm /dev/md0 --grow --chunk=64
      (wait for the reshape to finish)
      mdadm -S /dev/md0
      mdadm -As
      The error messages:
      [197519.814302] md: loop1 does not have a valid v1.2 superblock, not importing!
      [197519.821686] md: md_import_device returned -22
      
      The reshape changes the data offset (the backwards direction is selected in
      this condition, so the data offset grows). In super_1_load() the available
      space of the underlying device is compared with sb->data_size; with the
      larger data offset, the available space falls below sb->data_size, so
      super_1_load() returns -EINVAL. rdev->sectors is updated in
      md_finish_reshape(), and sb->data_size is then set from rdev->sectors in
      super_1_sync(). So call md_finish_reshape() from end_reshape().
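      The shape of the fix, as a hedged sketch with the surrounding bookkeeping
      elided:

        /* drivers/md/raid5.c, sketch: finishing the reshape updates
         * rdev->sectors, so the next super_1_sync() writes a data_size
         * consistent with the new data offset. */
        static void end_reshape(struct r5conf *conf)
        {
                /* ... existing reshape bookkeeping ... */
                md_finish_reshape(conf->mddev); /* updates each rdev->sectors */
        }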
      Signed-off-by: Xiao Ni <xni@redhat.com>
      Acked-by: Guoqing Jiang <gqjiang@suse.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Shaohua Li <shli@fb.com>
  2. 19 June 2017, 1 commit
  3. 14 June 2017, 2 commits
    • md: don't use flush_signals in userspace processes · f9c79bc0
      Authored by Mikulas Patocka
      The function flush_signals clears all pending signals for the process. It
      may be used by kernel threads when we need to prepare a kernel thread for
      responding to signals. However, using this function for userspace
      processes is incorrect: clearing signals without the program expecting it
      can cause misbehavior.
      
      The raid1 and raid5 code uses flush_signals in its request routine because
      it wants to prepare for an interruptible wait. This patch drops
      flush_signals and uses sigprocmask instead to block all signals (including
      SIGKILL) around the schedule() call. The signals are not lost, but the
      schedule() call won't respond to them.
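      A hedged sketch of the pattern (raid1 and raid5 each wrap their own wait
      loop; surrounding code elided):

        sigset_t full, old;

        sigfillset(&full);
        sigprocmask(SIG_BLOCK, &full, &old);  /* block, don't clear, all signals */
        schedule();                           /* the wait; signals stay pending */
        sigprocmask(SIG_SETMASK, &old, NULL); /* restore the caller's mask */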
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Cc: stable@vger.kernel.org
      Acked-by: NeilBrown <neilb@suse.com>
      Signed-off-by: Shaohua Li <shli@fb.com>
    • md: fix deadlock between mddev_suspend() and md_write_start() · cc27b0c7
      Authored by NeilBrown
      If mddev_suspend() races with md_write_start() we can deadlock
      with mddev_suspend() waiting for the request that is currently
      in md_write_start() to complete the ->make_request() call,
      and md_write_start() waiting for the metadata to be updated
      to mark the array as 'dirty'.
      As metadata updates done by md_check_recovery() only happen when
      the mddev_lock() can be claimed, and as mddev_suspend() is often
      called with the lock held, these threads wait indefinitely for each
      other.
      
      We fix this by having md_write_start() abort if mddev_suspend()
      is happening, and ->make_request() aborts if md_write_start()
      aborted.
      md_make_request() can detect this abort, decrease the ->active_io
      count, and wait for mddev_suspend().
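      Roughly, as a hedged sketch of the new control flow (not the full patch):

        static bool raid5_make_request(struct mddev *mddev, struct bio *bi)
        {
                /* ... */
                if (!md_write_start(mddev, bi)) /* aborts if a suspend is pending */
                        return false;           /* caller drops ->active_io, waits
                                                 * for the suspend, then retries */
                /* ... */
                return true;
        }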
      Reported-by: Nix <nix@esperi.org.uk>
      Fixes: 68866e42 ("MD: no sync IO while suspended")
      Cc: stable@vger.kernel.org
      Signed-off-by: NeilBrown <neilb@suse.com>
      Signed-off-by: Shaohua Li <shli@fb.com>
  4. 09 June 2017, 1 commit
  5. 06 June 2017, 1 commit
  6. 25 May 2017, 1 commit
    • md: report sector of stripes with check mismatches · e1539036
      Authored by Nix
      This makes it possible, with appropriate filesystem support, for a
      sysadmin to tell what is affected by the mismatch, and whether
      it should be ignored (if it's inside a swap partition, for
      instance).
      
      We ratelimit to prevent log flooding: if there are so many
      mismatches that ratelimiting is necessary, the individual messages
      are relatively unlikely to be important (either the machine is
      swapping like crazy or something is very wrong with the disk).
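      A hedged sketch of what such a report could look like in the parity-check
      path (exact message format and placement assumed):

        pr_warn_ratelimited("%s: mismatch sector in range %llu-%llu\n",
                            mdname(conf->mddev),
                            (unsigned long long)sh->sector,
                            (unsigned long long)(sh->sector + STRIPE_SECTORS));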
      Signed-off-by: Nick Alcock <nick.alcock@oracle.com>
      Signed-off-by: Shaohua Li <shli@fb.com>
  7. 12 May 2017, 2 commits
    • md/r5cache: handle sync with data in write back cache · 5ddf0440
      Authored by Song Liu
      Currently, sync of a raid456 array cannot make progress when hitting
      data in the writeback r5cache.
      
      This patch fixes this issue by flushing cached data of the stripe
      before processing the sync request. This is achieved by:
      
      1. In handle_stripe(), do not set STRIPE_SYNCING if the stripe is
         in the write back cache;
      2. In r5c_try_caching_write(), handle stripes under sync in write
         through mode;
      3. In do_release_stripe(), make stripes under sync write out and send
         them to the state machine.
      
      Shaohua: explicitly set STRIPE_HANDLE after write out completes
      Signed-off-by: Song Liu <songliubraving@fb.com>
      Signed-off-by: Shaohua Li <shli@fb.com>
    • md/r5cache: gracefully handle journal device errors for writeback mode · 70d466f7
      Authored by Song Liu
      For raid456 with a writeback cache, when the journal device fails during
      normal operation, it is still possible to persist all data, as all
      pending data is still in the stripe cache. However, the journal failure
      needs to be handled gracefully.
      
      When the journal fails, the following logic handles its graceful
      shutdown:
      1. raid5_error() marks the device as Faulty and schedules async work
         log->disable_writeback_work;
      2. In disable_writeback_work (r5c_disable_writeback_async), the mddev is
         suspended, set to write through, and then resumed. mddev_suspend()
         flushes all cached stripes;
      3. All cached stripes need to be flushed carefully to the RAID array.
      
      This patch fixes issues within the process above:
      1. In r5c_update_on_rdev_error(), schedule disable_writeback_work for
         journal failures;
      2. In r5c_disable_writeback_async(), wait for MD_SB_CHANGE_PENDING to
         clear, since raid5_error() updates the superblock;
      3. In handle_stripe(), allow stripes with data in the journal (s.injournal > 0)
         to make progress while the log has failed;
      4. In delay_towrite(), if the log has failed, only process data already in
         the cache (skip new writes in dev->towrite);
      5. In __get_priority_stripe(), process loprio_list during journal device
         failures;
      6. In raid5_remove_disk(), wait until all cached stripes are flushed before
         calling log_exit().
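      A hedged sketch of step 2 above (field and mode names as in the
      raid5-cache code of this era; error handling elided):

        static void r5c_disable_writeback_async(struct work_struct *work)
        {
                /* ... obtain log, conf and mddev ... */
                wait_event(mddev->sb_wait,
                           !test_bit(MD_SB_CHANGE_PENDING, &mddev->sb_flags));
                mddev_suspend(mddev);   /* flushes all cached stripes */
                log->r5c_journal_mode = R5C_JOURNAL_MODE_WRITE_THROUGH;
                mddev_resume(mddev);
        }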
      Signed-off-by: Song Liu <songliubraving@fb.com>
      Signed-off-by: Shaohua Li <shli@fb.com>
  8. 09 May 2017, 1 commit
  9. 05 May 2017, 1 commit
    • md/raid5: make use of spin_lock_irq over local_irq_disable + spin_lock · 3d05f3ae
      Authored by Julia Cartwright
      On mainline, there is no functional difference, just less code, and
      symmetric lock/unlock paths.
      
      On PREEMPT_RT builds, this fixes the following warning, seen by
      Alexander GQ Gerasiov, due to the sleeping nature of spinlocks.
      
         BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:993
         in_atomic(): 0, irqs_disabled(): 1, pid: 58, name: kworker/u12:1
         CPU: 5 PID: 58 Comm: kworker/u12:1 Tainted: G        W       4.9.20-rt16-stand6-686 #1
         Hardware name: Supermicro SYS-5027R-WRF/X9SRW-F, BIOS 3.2a 10/28/2015
         Workqueue: writeback wb_workfn (flush-253:0)
         Call Trace:
          dump_stack+0x47/0x68
          ? migrate_enable+0x4a/0xf0
          ___might_sleep+0x101/0x180
          rt_spin_lock+0x17/0x40
          add_stripe_bio+0x4e3/0x6c0 [raid456]
          ? preempt_count_add+0x42/0xb0
          raid5_make_request+0x737/0xdd0 [raid456]
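      The transformation, sketched (lock name taken from add_stripe_bio()):

        /* before: asymmetric, and on PREEMPT_RT the spinlock may sleep
         * while interrupts are disabled */
        local_irq_disable();
        spin_lock(&sh->stripe_lock);
        /* ... */
        spin_unlock(&sh->stripe_lock);
        local_irq_enable();

        /* after: one symmetric primitive */
        spin_lock_irq(&sh->stripe_lock);
        /* ... */
        spin_unlock_irq(&sh->stripe_lock);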
      Reported-by: Alexander GQ Gerasiov <gq@redlab-i.ru>
      Tested-by: Alexander GQ Gerasiov <gq@redlab-i.ru>
      Signed-off-by: Julia Cartwright <julia@ni.com>
      Signed-off-by: Shaohua Li <shli@fb.com>
  10. 26 April 2017, 1 commit
  11. 12 April 2017, 1 commit
    • md/raid5: make chunk_aligned_read() split bios more cleanly. · dd7a8f5d
      Authored by NeilBrown
      chunk_aligned_read() currently uses fs_bio_set - which is meant for
      filesystems to use - and loops if multiple splits are needed, which is
      not best practice.
      As this is only used for READ requests, not writes, it is unlikely
      to cause a problem.  However it is best to be consistent in how
      we split bios, and to follow the pattern used in raid1/raid10.
      
      So create a private bioset, bio_split, and use it to perform a single
      split, submitting the remainder to generic_make_request() for later
      processing.
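      A hedged sketch of the new split in chunk_aligned_read(); the helper
      computing the aligned length is hypothetical:

        sector_t sectors = aligned_sectors(conf, raid_bio); /* hypothetical helper */

        if (sectors < bio_sectors(raid_bio)) {
                struct bio *split = bio_split(raid_bio, sectors, GFP_NOIO,
                                              conf->bio_split);
                bio_chain(split, raid_bio);
                generic_make_request(raid_bio); /* remainder handled later */
                raid_bio = split;               /* process the aligned prefix now */
        }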
      Signed-off-by: NeilBrown <neilb@suse.com>
      Signed-off-by: Shaohua Li <shli@fb.com>
  12. 11 April 2017, 4 commits
    • raid5-ppl: partial parity calculation optimization · ae1713e2
      Authored by Artur Paszkiewicz
      In the read-modify-write case, the partial parity is the same as the result
      of ops_run_prexor5(), so we can just copy sh->dev[pd_idx].page into
      sh->ppl_page instead of calculating it again.
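      Sketched against the partial-parity op (a hedged reading of the
      optimization, not the full diff):

        if (sh->reconstruct_state == reconstruct_state_prexor_drain_run)
                /* rmw: the prexor result in the parity page already equals
                 * the partial parity, so copy instead of recomputing */
                tx = async_memcpy(sh->ppl_page, sh->dev[pd_idx].page, 0, 0,
                                  PAGE_SIZE, &submit);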
      Signed-off-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com>
      Signed-off-by: Shaohua Li <shli@fb.com>
    • raid5-ppl: use resize_stripes() when enabling or disabling ppl · 845b9e22
      Authored by Artur Paszkiewicz
      Use resize_stripes() instead of raid5_reset_stripe_cache() to allocate
      or free sh->ppl_page at runtime for all stripes in the stripe cache.
      raid5_reset_stripe_cache() required suspending the mddev and could
      deadlock because of GFP_KERNEL allocations.
      
      Move the 'newsize' check to check_reshape() to allow reallocating the
      stripes with the same number of disks. Allocate sh->ppl_page in
      alloc_stripe() instead of grow_buffers(). Pass 'struct r5conf *conf' as
      a parameter to alloc_stripe() because it is needed to check whether to
      allocate ppl_page. Add free_stripe() and use it to free stripes rather
      than directly call kmem_cache_free(). Also free sh->ppl_page in
      free_stripe().
      
      Set MD_HAS_PPL at the end of ppl_init_log() instead of explicitly
      setting it in advance and add another parameter to log_init() to allow
      calling ppl_init_log() without the bit set. Don't try to calculate
      partial parity or add a stripe to log if it does not have ppl_page set.
      
      Enabling ppl can now be performed without suspending the mddev, because
      the log won't be used until new stripes are allocated with ppl_page.
      Calling mddev_suspend/resume is still necessary when disabling ppl,
      because we want all stripes to finish before stopping the log, but
      resize_stripes() can be called after mddev_resume() when ppl is no
      longer active.
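      A hedged sketch of the reworked allocation path described above:

        static struct stripe_head *alloc_stripe(struct kmem_cache *sc, gfp_t gfp,
                                                int disks, struct r5conf *conf)
        {
                struct stripe_head *sh = kmem_cache_zalloc(sc, gfp);

                if (!sh)
                        return NULL;
                /* ... usual stripe_head init ... */
                if (raid5_has_ppl(conf)) {
                        sh->ppl_page = alloc_page(gfp);
                        if (!sh->ppl_page) {
                                free_stripe(sc, sh); /* also frees ppl_page */
                                return NULL;
                        }
                }
                return sh;
        }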
      Suggested-by: NeilBrown <neilb@suse.com>
      Signed-off-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com>
      Signed-off-by: Shaohua Li <shli@fb.com>
    • md/raid6: Fix anomaly when recovering a single device in RAID6. · 7471fb77
      Authored by NeilBrown
      When recovering a single missing/failed device in a RAID6,
      those stripes where the Q block is on the missing device are
      handled a bit differently.  In these cases it is easy to
      check that the P block is correct, so we do.  The check
      destroys the in-memory P block, which consequently needs
      to be read a second time in order to compute Q.  This causes
      lots of seeks and hurts performance.
      
      It shouldn't be necessary to re-read P as it can be computed
      from the DATA.  But we only compute blocks on missing
      devices, since c337869d ("md: do not compute parity
      unless it is on a failed drive").
      
      So relax the change made in that commit to allow the P block to be
      computed in a RAID6 when it is the only missing block.
      
      This makes RAID6 recovery run much faster as the disk just
      "before" the recovering device is no longer seeking
      back-and-forth.
      Reported-and-tested-by: Brad Campbell <lists2009@fnarfbargle.com>
      Reviewed-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: NeilBrown <neilb@suse.com>
      Signed-off-by: Shaohua Li <shli@fb.com>
    • md: update slab_cache before releasing new stripes when stripes resizing · 583da48e
      Authored by Dennis Yang
      When growing a raid5 device on a machine with little memory, there is a
      chance that mdadm will be killed and the following bug report can be
      observed. The same bug can also be reproduced on linux-4.10.6.
      
      [57600.075774] BUG: unable to handle kernel NULL pointer dereference at           (null)
      [57600.083796] IP: [<ffffffff81a6aa87>] _raw_spin_lock+0x7/0x20
      [57600.110378] PGD 421cf067 PUD 4442d067 PMD 0
      [57600.114678] Oops: 0002 [#1] SMP
      [57600.180799] CPU: 1 PID: 25990 Comm: mdadm Tainted: P           O    4.2.8 #1
      [57600.187849] Hardware name: To be filled by O.E.M. To be filled by O.E.M./MAHOBAY, BIOS QV05AR66 03/06/2013
      [57600.197490] task: ffff880044e47240 ti: ffff880043070000 task.ti: ffff880043070000
      [57600.204963] RIP: 0010:[<ffffffff81a6aa87>]  [<ffffffff81a6aa87>] _raw_spin_lock+0x7/0x20
      [57600.213057] RSP: 0018:ffff880043073810  EFLAGS: 00010046
      [57600.218359] RAX: 0000000000000000 RBX: 000000000000000c RCX: ffff88011e296dd0
      [57600.225486] RDX: 0000000000000001 RSI: ffffe8ffffcb46c0 RDI: 0000000000000000
      [57600.232613] RBP: ffff880043073878 R08: ffff88011e5f8170 R09: 0000000000000282
      [57600.239739] R10: 0000000000000005 R11: 28f5c28f5c28f5c3 R12: ffff880043073838
      [57600.246872] R13: ffffe8ffffcb46c0 R14: 0000000000000000 R15: ffff8800b9706a00
      [57600.253999] FS:  00007f576106c700(0000) GS:ffff88011e280000(0000) knlGS:0000000000000000
      [57600.262078] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      [57600.267817] CR2: 0000000000000000 CR3: 00000000428fe000 CR4: 00000000001406e0
      [57600.274942] Stack:
      [57600.276949]  ffffffff8114ee35 ffff880043073868 0000000000000282 000000000000eb3f
      [57600.284383]  ffffffff81119043 ffff880043073838 ffff880043073838 ffff88003e197b98
      [57600.291820]  ffffe8ffffcb46c0 ffff88003e197360 0000000000000286 ffff880043073968
      [57600.299254] Call Trace:
      [57600.301698]  [<ffffffff8114ee35>] ? cache_flusharray+0x35/0xe0
      [57600.307523]  [<ffffffff81119043>] ? __page_cache_release+0x23/0x110
      [57600.313779]  [<ffffffff8114eb53>] kmem_cache_free+0x63/0xc0
      [57600.319344]  [<ffffffff81579942>] drop_one_stripe+0x62/0x90
      [57600.324915]  [<ffffffff81579b5b>] raid5_cache_scan+0x8b/0xb0
      [57600.330563]  [<ffffffff8111b98a>] shrink_slab.part.36+0x19a/0x250
      [57600.336650]  [<ffffffff8111e38c>] shrink_zone+0x23c/0x250
      [57600.342039]  [<ffffffff8111e4f3>] do_try_to_free_pages+0x153/0x420
      [57600.348210]  [<ffffffff8111e851>] try_to_free_pages+0x91/0xa0
      [57600.353959]  [<ffffffff811145b1>] __alloc_pages_nodemask+0x4d1/0x8b0
      [57600.360303]  [<ffffffff8157a30b>] check_reshape+0x62b/0x770
      [57600.365866]  [<ffffffff8157a4a5>] raid5_check_reshape+0x55/0xa0
      [57600.371778]  [<ffffffff81583df7>] update_raid_disks+0xc7/0x110
      [57600.377604]  [<ffffffff81592b73>] md_ioctl+0xd83/0x1b10
      [57600.382827]  [<ffffffff81385380>] blkdev_ioctl+0x170/0x690
      [57600.388307]  [<ffffffff81195238>] block_ioctl+0x38/0x40
      [57600.393525]  [<ffffffff811731c5>] do_vfs_ioctl+0x2b5/0x480
      [57600.399010]  [<ffffffff8115e07b>] ? vfs_write+0x14b/0x1f0
      [57600.404400]  [<ffffffff811733cc>] SyS_ioctl+0x3c/0x70
      [57600.409447]  [<ffffffff81a6ad97>] entry_SYSCALL_64_fastpath+0x12/0x6a
      [57600.415875] Code: 00 00 00 00 55 48 89 e5 8b 07 85 c0 74 04 31 c0 5d c3 ba 01 00 00 00 f0 0f b1 17 85 c0 75 ef b0 01 5d c3 90 31 c0 ba 01 00 00 00 <f0> 0f b1 17 85 c0 75 01 c3 55 89 c6 48 89 e5 e8 85 d1 63 ff 5d
      [57600.435460] RIP  [<ffffffff81a6aa87>] _raw_spin_lock+0x7/0x20
      [57600.441208]  RSP <ffff880043073810>
      [57600.444690] CR2: 0000000000000000
      [57600.448000] ---[ end trace cbc6b5cc4bf9831d ]---
      
      The problem is that resize_stripes() releases the new stripe_heads before
      assigning the new slab cache to conf->slab_cache. If the shrinker function
      raid5_cache_scan() gets called after resize_stripes() has started releasing
      the new stripes but before the new slab cache is assigned, these new
      stripe_heads will be freed with the old slab_cache, which has already been
      destroyed, and that triggers this bug.
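      The fix, as a hedged sketch of the reordered tail of resize_stripes():

        /* publish the new slab cache first, so stripes released below (or
         * concurrently by raid5_cache_scan()) are freed against the cache
         * they were allocated from */
        conf->slab_cache = sc;
        conf->active_name = 1 - conf->active_name;

        while (!list_empty(&newstripes)) {
                nsh = list_entry(newstripes.next, struct stripe_head, lru);
                list_del_init(&nsh->lru);
                raid5_release_stripe(nsh);
        }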
      Signed-off-by: Dennis Yang <dennisyang@qnap.com>
      Fixes: edbe83ab ("md/raid5: allow the stripe_cache to grow and shrink.")
      Cc: stable@vger.kernel.org (4.1+)
      Reviewed-by: NeilBrown <neilb@suse.com>
      Signed-off-by: Shaohua Li <shli@fb.com>
  13. 09 April 2017, 2 commits
  14. 07 April 2017, 1 commit
    • block: trace completion of all bios. · fbbaf700
      Authored by NeilBrown
      Currently only dm and md/raid5 bios trigger
      trace_block_bio_complete().  Now that we have bio_chain() and
      bio_inc_remaining(), it is not possible, in general, for a driver to
      know when the bio is really complete.  Only bio_endio() knows that.
      
      So move the trace_block_bio_complete() call to bio_endio().
      
      Now trace_block_bio_complete() pairs with trace_block_bio_queue().
      Any bio for which a 'queue' event is traced, will subsequently
      generate a 'complete' event.
      
      There are a few cases where completion tracing is not wanted.
      1/ If blk_update_request() has already generated a completion
         trace event at the 'request' level, there is no point generating
         one at the bio level too.  In this case the bi_sector and bi_size
         will have changed, so the bio level event would be wrong
      
      2/ If the bio hasn't actually been queued yet, but is being aborted
         early, then a trace event could be confusing.  Some filesystems
         call bio_endio() but do not want tracing.
      
      3/ The bio_integrity code interposes itself by replacing bi_end_io,
         then restoring it and calling bio_endio() again.  This would produce
         two identical trace events if left like that.
      
      To handle these, we introduce a flag BIO_TRACE_COMPLETION and only
      produce the trace event when this is set.
      We address point 1 above by clearing the flag in blk_update_request().
      We address point 2 above by only setting the flag when
      generic_make_request() is called.
      We address point 3 above by clearing the flag after generating a
      completion event.
      
      When bio_split() is used on a bio, particularly in blk_queue_split(),
      there is an extra complication.  A new bio is split off the front, and
      may be handled directly without going through generic_make_request().
      The old bio, which has been advanced, is passed to
      generic_make_request(), so it will trigger a trace event a second
      time.
      Probably the best result when a split happens is to see a single
      'queue' event for the whole bio, then multiple 'complete' events - one
      for each component.  To achieve this we can:
      - copy the BIO_TRACE_COMPLETION flag to the new bio in bio_split()
      - avoid generating a 'queue' event if BIO_TRACE_COMPLETION is already set.
      This way, the split-off bio won't create a queue event, and the original
      won't either, even if it is re-submitted to generic_make_request(),
      but both will produce completion events, each for their own range.
      
      So if generic_make_request() is called (which generates a QUEUED
      event), then bio_endio() will create a single COMPLETE event for each
      range that the bio is split into, unless the driver has explicitly
      requested it not to.
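      A hedged sketch of the new tracing in bio_endio(), using the bio fields
      of this kernel generation (bi_bdev, bi_error):

        if (bio->bi_bdev && bio_flagged(bio, BIO_TRACE_COMPLETION)) {
                trace_block_bio_complete(bdev_get_queue(bio->bi_bdev),
                                         bio, bio->bi_error);
                bio_clear_flag(bio, BIO_TRACE_COMPLETION); /* no duplicates */
        }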
      Signed-off-by: NeilBrown <neilb@suse.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
  15. 28 March 2017, 1 commit
    • md/raid5: use consistency_policy to remove journal feature · 0bb0c105
      Authored by Song Liu
      When the journal device of an array fails, the array is forced into
      read-only mode. To make the array writable again without adding another
      journal device, we need to remove the journal _feature_ from the array.

      This patch allows removing the journal _feature_ from an array; the
      existing journal must be either missing or faulty.

      To remove the journal feature, it is necessary to remove the journal
      device first:
      
        mdadm --fail /dev/md0 /dev/sdb
        mdadm: set /dev/sdb faulty in /dev/md0
        mdadm --remove /dev/md0 /dev/sdb
        mdadm: hot removed /dev/sdb from /dev/md0
      
      Then the journal feature can be removed by echoing into the sysfs file:
      
       cat /sys/block/md0/md/consistency_policy
       journal
      
       echo resync > /sys/block/md0/md/consistency_policy
       cat /sys/block/md0/md/consistency_policy
       resync
      Signed-off-by: Song Liu <songliubraving@fb.com>
      Signed-off-by: Shaohua Li <shli@fb.com>
  16. 24 March 2017, 1 commit
  17. 23 March 2017, 7 commits
    • md/raid5: don't test ->writes_pending in raid5_remove_disk · 84dd97a6
      Authored by NeilBrown
      This test on ->writes_pending cannot be safe as the counter
      can be incremented at any moment and cannot be locked against.
      
      Change it to test conf->active_stripes, which at least
      can be locked against.  More changes are still needed.
      
      A future patch will change ->writes_pending, and testing it here will
      be very inconvenient.
      Signed-off-by: NeilBrown <neilb@suse.com>
      Signed-off-by: Shaohua Li <shli@fb.com>
    • Revert "md/raid5: limit request size according to implementation limits" · 97d53438
      Authored by NeilBrown
      This reverts commit e8d7c332.
      
      Now that raid5 doesn't abuse bi_phys_segments any more, we no longer
      need to impose these limits.
      Signed-off-by: NeilBrown <neilb@suse.com>
      Signed-off-by: Shaohua Li <shli@fb.com>
    • md/raid5: remove over-loading of ->bi_phys_segments. · 0472a42b
      Authored by NeilBrown
      When a read request, which bypassed the cache, fails, we need to retry
      it through the cache.
      This involves attaching it to a sequence of stripe_heads, and it may not
      be possible to get all the stripe_heads we need at once.
      We do what we can, and record how far we got in ->bi_phys_segments so
      we can pick up again later.
      
      There is only ever one bio which may have a non-zero offset stored in
      ->bi_phys_segments, the one that is either active in the single thread
      which calls retry_aligned_read(), or is in conf->retry_read_aligned
      waiting for retry_aligned_read() to be called again.
      
      So we only need to store one offset value.  This can be in a local
      variable passed between remove_bio_from_retry() and
      retry_aligned_read(), or in the r5conf structure next to the
      ->retry_read_aligned pointer.
      
      Storing it there allows the last usage of ->bi_phys_segments to be
      removed from md/raid5.c.
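      Sketched, with the field name assumed:

        struct r5conf {
                /* ... */
                struct bio      *retry_read_aligned; /* bio being retried */
                unsigned int    retry_read_offset;   /* sector offset into it */
                /* ... */
        };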
      Signed-off-by: NeilBrown <neilb@suse.com>
      Signed-off-by: Shaohua Li <shli@fb.com>
    • md/raid5: use bio_inc_remaining() instead of repurposing bi_phys_segments as a counter · 016c76ac
      Authored by NeilBrown
      md/raid5 needs to keep track of how many stripe_heads are processing a
      bio so that it can delay calling bio_endio() until all stripe_heads
      have completed.  It currently uses 16 bits of ->bi_phys_segments for
      this purpose.
      
      16 bits is only enough for 256M requests, and it is possible for a
      single bio to be larger than this, which causes problems.  Also, the
      bio struct contains a larger counter, __bi_remaining, which has a
      purpose very similar to the purpose of our counter.  So stop using
      ->bi_phys_segments, and instead use __bi_remaining.
      
      This means we don't need to initialize the counter, as our caller
      initializes it to '1'.  It also means we can call bio_endio() directly
      as it tests this counter internally.
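      The substitution, sketched:

        /* when attaching the bio to one more stripe_head: */
        bio_inc_remaining(bi);  /* was: raid5_inc_bi_active_stripes(bi) */

        /* on each stripe_head's completion: bio_endio() decrements
         * __bi_remaining internally and completes the bio at zero */
        bio_endio(bi);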
      Signed-off-by: NeilBrown <neilb@suse.com>
      Signed-off-by: Shaohua Li <shli@fb.com>
    • md/raid5: call bio_endio() directly rather than queueing for later. · bd83d0a2
      Authored by NeilBrown
      We currently gather bios that need to be returned into a bio_list
      and call bio_endio() on them all together.
      The original reason for this was to avoid making the calls while
      holding a spinlock.
      Locking has changed a lot since then, and that reason is no longer
      valid.
      
      So discard return_io() and various return_bi lists, and just call
      bio_endio() directly as needed.
      Signed-off-by: NeilBrown <neilb@suse.com>
      Signed-off-by: Shaohua Li <shli@fb.com>
    • md/raid5: simplify delaying of writes while metadata is updated. · 16d997b7
      Authored by NeilBrown
      If a device fails during a write, we must ensure the failure is
      recorded in the metadata before the completion of the write is
      acknowledged.
      
      Commit c3cce6cd ("md/raid5: ensure device failure recorded before
      write request returns.")  added code for this, but it was
      unnecessarily complicated.  We already had similar functionality for
      handling updates to the bad-block-list, thanks to Commit de393cde
      ("md: make it easier to wait for bad blocks to be acknowledged.")
      
      So revert most of the former commit, and instead avoid collecting
      completed writes if MD_CHANGE_PENDING is set.  raid5d() will then flush
      the metadata and retry the stripe_head.
      As this change can leave a stripe_head ready for handling immediately
      after handle_active_stripes() returns, we change raid5_do_work() to
      pause when MD_CHANGE_PENDING is set, so that it doesn't spin.
      
      We check MD_CHANGE_PENDING *after* analyse_stripe() as it could be set
      asynchronously.  After analyse_stripe(), we have collected stable data
      about the state of devices, which will be used to make decisions.
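      A hedged sketch of the check (flag spelled as in this log; later kernels
      use MD_SB_CHANGE_PENDING in mddev->sb_flags):

        /* after analyse_stripe(): */
        if (test_bit(MD_CHANGE_PENDING, &conf->mddev->flags)) {
                /* don't acknowledge completed writes yet; raid5d will flush
                 * the metadata and this stripe_head will be retried */
                set_bit(STRIPE_HANDLE, &sh->state);
                goto finish;
        }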
      Signed-off-by: NeilBrown <neilb@suse.com>
      Signed-off-by: Shaohua Li <shli@fb.com>
    • md/raid5: use md_write_start to count stripes, not bios · 49728050
      Authored by NeilBrown
      We use md_write_start() to increase the count of pending writes, and
      md_write_end() to decrement the count.  We currently count bios
      submitted to md/raid5.  Change it to count stripe_heads to which a
      WRITE bio has been attached.
      
      So now, raid5_make_request() calls md_write_start() and then
      md_write_end() to keep the count elevated during the setup of the
      request.
      
      add_stripe_bio() calls md_write_start() for each stripe_head, and the
      completion routines always call md_write_end(), instead of only
      calling it when raid5_dec_bi_active_stripes() returns 0.
      make_discard_request also calls md_write_start/end().
      
      The parallel between md_write_{start,end} and use of bi_phys_segments
      can be seen in that:
       Whenever we set bi_phys_segments to 1, we now call md_write_start.
       Whenever we increment it on non-read requests with
         raid5_inc_bi_active_stripes(), we now call md_write_start().
       Whenever we decrement bi_phys_segments on non-read requests with
          raid5_dec_bi_active_stripes(), we now call md_write_end().
      
      This reduces our dependence on keeping a per-bio count of active
      stripes in bi_phys_segments.
      
      md_write_inc() is added which parallels md_write_start(), but requires
      that a write has already been started, and is certain never to sleep.
      This can be used inside a spinlocked region when adding to a write
      request.
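      A hedged sketch of md_write_inc() as described (the counter is assumed
      to still be an atomic_t at this point in the series):

        void md_write_inc(struct mddev *mddev, struct bio *bi)
        {
                if (bio_data_dir(bi) != WRITE)
                        return;
                WARN_ON_ONCE(mddev->in_sync || mddev->ro);
                atomic_inc(&mddev->writes_pending);
        }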
      Signed-off-by: NeilBrown <neilb@suse.com>
      Signed-off-by: Shaohua Li <shli@fb.com>
  18. 17 March 2017, 7 commits
    • raid5-ppl: runtime PPL enabling or disabling · ba903a3e
      Authored by Artur Paszkiewicz
      Allow writing to 'consistency_policy' attribute when the array is
      active. Add a new function 'change_consistency_policy' to the
      md_personality operations structure to handle the change in the
      personality code. Values "ppl" and "resync" are accepted and
      turn PPL on and off respectively.
      
      When enabling PPL its location and size should first be set using
      'ppl_sector' and 'ppl_size' attributes and a valid PPL header should be
      written at this location on each member device.
      
      Enabling or disabling PPL is performed under a suspended array.  The
      raid5_reset_stripe_cache function frees the stripe cache and allocates
      it again in order to allocate or free the ppl_pages for the stripes in
      the stripe cache.
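      The hook, sketched (the raid5.c wiring is assumed):

        struct md_personality {
                /* ... */
                int (*change_consistency_policy)(struct mddev *mddev,
                                                 const char *buf);
        };

        /* raid5.c: "ppl" and "resync" turn PPL on and off */
        .change_consistency_policy = raid5_change_consistency_policy,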
      Signed-off-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com>
      Signed-off-by: Shaohua Li <shli@fb.com>
    • raid5-ppl: support disk hot add/remove with PPL · 6358c239
      Authored by Artur Paszkiewicz
      Add a function to modify the log by removing an rdev when a drive fails
      or adding when a spare/replacement is activated as a raid member.
      
      Removing a disk just clears the child log rdev pointer. No new stripes
      will be accepted for this child log in ppl_write_stripe() and running io
      units will be processed without writing PPL to the device.
      
      Adding a disk sets the child log rdev pointer and writes an empty PPL
      header.
      Signed-off-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com>
      Signed-off-by: Shaohua Li <shli@fb.com>
    • raid5-ppl: load and recover the log · 4536bf9b
      Authored by Artur Paszkiewicz
      Load the log from each disk when starting the array and recover if the
      array is dirty.
      
      The initial empty PPL is written by mdadm. When loading the log we
      verify the header checksum and signature. For external metadata arrays
      the signature is verified in userspace, so here we read it from the
      header, verifying only if it matches on all disks, and use it later when
      writing PPL.
      
      In addition to the header checksum, each header entry also contains a
      checksum of its partial parity data. If the header is valid, recovery is
      performed for each entry until an invalid entry is found. If the array
      is not degraded and recovery using PPL fully succeeds, there is no need
      to resync the array because data and parity will be consistent, so in
      this case resync will be disabled.
      
      Due to compatibility with IMSM implementations on other systems, we
      can't assume that the recovery data block size is always 4K. Writes
      generated by MD raid5 don't have this issue, but when recovering PPL
      written in other environments it is possible to have entries with
      512-byte sector granularity. The recovery code takes this into account
      and also the logical sector size of the underlying drives.
      Signed-off-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com>
      Signed-off-by: Shaohua Li <shli@fb.com>
    • raid5-ppl: Partial Parity Log write logging implementation · 3418d036
      Authored by Artur Paszkiewicz
      Implement the calculation of partial parity for a stripe and PPL write
      logging functionality. The description of PPL is added to the
      documentation. More details can be found in the comments in raid5-ppl.c.
      
      Attach a page for holding the partial parity data to stripe_head.
      Allocate it only if mddev has the MD_HAS_PPL flag set.
      
      Partial parity is the xor of not modified data chunks of a stripe and is
      calculated as follows:
      
      - reconstruct-write case:
        xor data from all not updated disks in a stripe
      
      - read-modify-write case:
        xor old data and parity from all updated disks in a stripe
      
      Implement it using the async_tx API and integrate it into raid_run_ops().
      It must be called while we still have access to the old data, so do it
      when STRIPE_OP_BIODRAIN is set, but before ops_run_prexor5().  The result
      is stored in sh->ppl_page.
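      A hedged sketch of the source selection inside that op (flag and state
      names from raid5.h of this era):

        if (sh->reconstruct_state == reconstruct_state_prexor_drain_run) {
                /* rmw: xor old parity and the old data being updated */
                xor_srcs[count++] = sh->dev[pd_idx].page;
                for (i = disks; i--; )
                        if (test_bit(R5_Wantdrain, &sh->dev[i].flags))
                                xor_srcs[count++] = sh->dev[i].page;
        } else {
                /* reconstruct-write: xor the data chunks NOT being updated */
                for (i = disks; i--; )
                        if (i != sh->pd_idx && i != sh->qd_idx &&
                            !test_bit(R5_Wantdrain, &sh->dev[i].flags))
                                xor_srcs[count++] = sh->dev[i].page;
        }
        /* the async_tx xor of xor_srcs[0..count) lands in sh->ppl_page */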
      
      Partial parity is not meaningful for a full stripe write and is not stored
      in the log or used for recovery, so don't attempt to calculate it when
      the stripe has STRIPE_FULL_WRITE set.
      
      Put the PPL metadata structures in md_p.h because userspace tools
      (mdadm) will also need to read/write PPL.
      
      For now, warn about using PPL while the disk's volatile write-back cache
      is enabled.  The warning can be removed once flushing the disk cache
      before writing PPL is implemented.
      Signed-off-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com>
      Signed-off-by: Shaohua Li <shli@fb.com>
    • raid5: separate header for log functions · ff875738
      Authored by Artur Paszkiewicz
      Move raid5-cache declarations from raid5.h to raid5-log.h, add inline
      wrappers for functions which will be shared with ppl and use them in
      raid5 core instead of direct calls to raid5-cache.
      
      Remove unused parameter from r5c_cache_data(), move two duplicated
      pr_debug() calls to r5l_init_log().
      Signed-off-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com>
      Signed-off-by: Shaohua Li <shli@fb.com>
    • md/raid5: sort bios · aaf9f12e
      Authored by Shaohua Li
      The previous patch (raid5: only dispatch IO from raid5d for harddisk raid)
      defers IO dispatching. The goal is to create a better IO pattern. At that
      time, we didn't sort the deferred IO and hoped the block layer could do the
      merging and sorting. Now the raid5-cache writeback can create a large
      amount of bios. And if we enable multi-threaded stripe handling, we can't
      control when to dispatch IO to the raid disks. Much of the time we are
      dispatching IO which the block layer can't merge effectively.

      This patch takes the IO dispatch deferral further. We accumulate bios, but
      once a threshold is met we don't dispatch all of them. This 'dispatch a
      partial portion of bios' strategy allows bios arriving within a large time
      window to be sent to the disks together. At dispatch time, there is a good
      chance the block layer can merge the bios. To make this more effective, we
      dispatch IO in ascending sector order. This increases the request merge
      chance and reduces disk seeks.
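      A hedged sketch of the sort, assuming the r5pending_data structure this
      patch introduces:

        static int cmp_stripe(void *priv, struct list_head *a, struct list_head *b)
        {
                const struct r5pending_data *da =
                        list_entry(a, struct r5pending_data, sibling);
                const struct r5pending_data *db =
                        list_entry(b, struct r5pending_data, sibling);

                if (da->sector > db->sector)
                        return 1;
                if (da->sector < db->sector)
                        return -1;
                return 0;
        }

        /* at dispatch time: */
        list_sort(NULL, &list, cmp_stripe);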
      Signed-off-by: Shaohua Li <shli@fb.com>
    • md/raid5: prioritize stripes for writeback · 535ae4eb
      Authored by Shaohua Li
      In raid5-cache writeback mode, we have two types of stripes to handle.
      - stripes which aren't cached yet
      - stripes which are cached and flushing out to raid disks
      
      The upper layer is generally more sensitive to the latency of the first
      type of stripes. But we have only one handle list for all these stripes,
      where the two types of stripes are mixed together. When reclaim flushes a
      lot of stripes, the first type of stripes can be noticeably delayed. On
      the other hand, if log space is tight, we'd like to handle the second
      type of stripes faster and free log space.

      This patch distinguishes the two types of stripes. They are added to
      different handle lists. When we try to get a stripe to handle, we prefer
      the first type of stripes unless log space is tight.

      This should have no impact on the !writeback case.
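      A simplified, hedged sketch of the list selection in
      __get_priority_stripe() (R5C_LOG_TIGHT and the loprio_list field are
      assumed from this series):

        struct list_head *handle = NULL;

        if (!list_empty(&conf->handle_list) &&
            !test_bit(R5C_LOG_TIGHT, &conf->cache_state))
                handle = &conf->handle_list;  /* latency-sensitive stripes */
        else if (!list_empty(&conf->loprio_list))
                handle = &conf->loprio_list;  /* cached stripes flushing out */
        else if (!list_empty(&conf->handle_list))
                handle = &conf->handle_list;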
      Signed-off-by: Shaohua Li <shli@fb.com>
  19. 15 March 2017, 1 commit
  20. 10 March 2017, 1 commit
  21. 02 March 2017, 1 commit
  22. 17 February 2017, 1 commit