1. May 31, 2018 (1 commit)
  2. Feb 19, 2018 (1 commit)
  3. Nov 02, 2017 (1 commit)
    • License cleanup: add SPDX GPL-2.0 license identifier to files with no license · b2441318
      Greg Kroah-Hartman authored
      Many source files in the tree are missing licensing information, which
      makes it harder for compliance tools to determine the correct license.
      
      By default all files without license information are under the default
      license of the kernel, which is GPL version 2.
      
      Update the files which contain no license information with the 'GPL-2.0'
      SPDX license identifier.  The SPDX identifier is a legally binding
      shorthand, which can be used instead of the full boilerplate text.
      
      This patch is based on work done by Thomas Gleixner, Kate Stewart, and
      Philippe Ombredanne.
      
      How this work was done:
      
      Patches were generated and checked against linux-4.14-rc6 for a subset of
      the use cases:
       - the file had no licensing information in it,
       - the file was a */uapi/* one with no licensing information in it,
       - the file was a */uapi/* one with existing licensing information.
      
      Further patches will be generated in subsequent months to fix up cases
      where non-standard license headers were used, and where references to a
      license had to be inferred by heuristics based on keywords.
      
      The analysis to determine which SPDX License Identifier to be applied to
      a file was done in a spreadsheet of side-by-side results from the output
      of two independent scanners (ScanCode & Windriver) producing SPDX
      tag:value files, created by Philippe Ombredanne.  Philippe prepared the
      base worksheet, and did an initial spot review of a few thousand files.
      
      The 4.13 kernel was the starting point of the analysis with 60,537 files
      assessed.  Kate Stewart did a file by file comparison of the scanner
      results in the spreadsheet to determine which SPDX license identifier(s)
      should be applied to each file. She confirmed any determination that was not
      immediately clear with lawyers working with the Linux Foundation.
      
      Criteria used to select files for SPDX license identifier tagging were:
       - Files considered eligible had to be source code files.
       - Make and config files were included as candidates if they contained >5
         lines of source.
       - The file already had some variant of a license header in it (even if <5
         lines).
      
      All documentation files were explicitly excluded.
      
      The following heuristics were used to determine which SPDX license
      identifiers to apply.
      
       - when neither scanner could find any license traces, the file was
         considered to have no license information in it, and the top-level
         COPYING file license was applied.
      
         For non */uapi/* files that summary was:
      
         SPDX license identifier                            # files
         ---------------------------------------------------|-------
         GPL-2.0                                              11139
      
         and resulted in the first patch in this series.
      
         If the file was a */uapi/* path one, it was "GPL-2.0 WITH
         Linux-syscall-note"; otherwise it was "GPL-2.0".  The results of that were:
      
         SPDX license identifier                            # files
         ---------------------------------------------------|-------
         GPL-2.0 WITH Linux-syscall-note                        930
      
         and resulted in the second patch in this series.
      
       - if a file had some form of licensing information in it, and was one
         of the */uapi/* ones, it was denoted with the Linux-syscall-note if
         any GPL-family license was found in the file, or if it had no licensing
         in it (per the prior point).  Results summary:
      
         SPDX license identifier                            # files
         ---------------------------------------------------|------
         GPL-2.0 WITH Linux-syscall-note                       270
         GPL-2.0+ WITH Linux-syscall-note                      169
         ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)    21
         ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)    17
         LGPL-2.1+ WITH Linux-syscall-note                      15
         GPL-1.0+ WITH Linux-syscall-note                       14
         ((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)    5
         LGPL-2.0+ WITH Linux-syscall-note                       4
         LGPL-2.1 WITH Linux-syscall-note                        3
         ((GPL-2.0 WITH Linux-syscall-note) OR MIT)              3
         ((GPL-2.0 WITH Linux-syscall-note) AND MIT)             1
      
         and that resulted in the third patch in this series.
      
       - when the two scanners agreed on the detected license(s), that became
         the concluded license(s).
      
       - when there was disagreement between the two scanners (one detected a
         license but the other didn't, or they both detected different
         licenses) a manual inspection of the file occurred.
      
       - In most cases a manual inspection of the information in the file
         resulted in a clear resolution of the license that should apply (and
         which scanner probably needed to revisit its heuristics).
      
       - When it was not immediately clear, the license identifier was
         confirmed with lawyers working with the Linux Foundation.
      
       - If there was any question as to the appropriate license identifier,
         the file was flagged for further research and to be revisited later
         in time.
      
      In total, over 70 hours of logged manual review was done on the
      spreadsheet by Kate, Philippe and Thomas to determine the SPDX license
      identifiers to apply to the source files, in some cases with confirmation
      by lawyers working with the Linux Foundation.
      
      Kate also obtained a third independent scan of the 4.13 code base from
      FOSSology, and compared selected files where the other two scanners
      disagreed against that SPDX file, to see if there were new insights.  The
      Windriver scanner is based in part on an older version of FOSSology, so
      they are related.
      
      Thomas did random spot checks in about 500 files from the spreadsheets
      for the uapi headers and agreed with the SPDX license identifiers in the
      files he inspected. For the non-uapi files Thomas did random spot checks
      in about 15000 files.
      
      In the initial set of patches against 4.14-rc6, 3 files were found to have
      copy/paste license identifier errors, and have been fixed to reflect the
      correct identifier.
      
      Additionally, Philippe spent 10 hours doing a detailed manual inspection
      and review of the 12,461 files patched in the initial version earlier this
      week, with:
       - a full scancode scan run, collecting the matched texts, detected
         license ids and scores
       - reviewing anything where there was a license detected (about 500+
         files) to ensure that the applied SPDX license was correct
       - reviewing anything where there was no detection but the patch license
         was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
         SPDX license was correct
      
      This produced a worksheet with 20 files needing minor correction.  This
      worksheet was then exported into 3 different .csv files for the
      different types of files to be modified.
      
      These .csv files were then reviewed by Greg.  Thomas wrote a script to
      parse the csv files and add the proper SPDX tag to each file, in the
      format that the file expected.  This script was further refined by Greg
      based on the output to detect more types of files automatically and to
      distinguish between header and source .c files (which need different
      comment types).  Finally Greg ran the script using the .csv files to
      generate the patches.
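
      For reference, the identifier goes on the first line of each file, with
      the comment style depending on the file type (a sketch of the usual
      kernel convention):

           // SPDX-License-Identifier: GPL-2.0        <- first line of a .c file

           /* SPDX-License-Identifier: GPL-2.0 */     <- first line of a header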
      Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
      Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  4. Apr 28, 2017 (1 commit)
    • md/raid1: Use a new variable to count flighting sync requests · 43ac9b84
      Xiao Ni authored
      In the new barrier code, raise_barrier() waits if conf->nr_pending[idx] is
      not zero. Once all the conditions are met, the resync request can go on to
      be handled, but it then increments conf->nr_pending[idx] again. As a
      result, the next resync request that hits the same bucket idx has to wait
      for the resync request submitted before it, and resync/recovery
      performance is degraded. So we should use a new variable to count sync
      requests which are in flight.
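
      A minimal sketch of the idea (the counter name nr_sync_pending is an
      assumption based on this description): account in-flight resync requests
      in a dedicated counter instead of bumping the per-bucket regular-I/O
      counter a second time,

           -    atomic_inc(&conf->nr_pending[idx]);  /* made later resync on this bucket wait */
           +    atomic_inc(&conf->nr_sync_pending);  /* dedicated in-flight sync counter */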
      
      I did a simple test:
      1. Without the patch, create a raid1 with two disks. The resync speed:
      Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
      sdb               0.00     0.00  166.00    0.00    10.38     0.00   128.00     0.03    0.20    0.20    0.00   0.19   3.20
      sdc               0.00     0.00    0.00  166.00     0.00    10.38   128.00     0.96    5.77    0.00    5.77   5.75  95.50
      2. With the patch, the result is:
      sdb            2214.00     0.00  766.00    0.00   185.69     0.00   496.46     2.80    3.66    3.66    0.00   1.03  79.10
      sdc               0.00  2205.00    0.00  769.00     0.00   186.44   496.52     5.25    6.84    0.00    6.84   1.30 100.10
      Suggested-by: Shaohua Li <shli@kernel.org>
      Signed-off-by: Xiao Ni <xni@redhat.com>
      Acked-by: Coly Li <colyli@suse.de>
      Signed-off-by: Shaohua Li <shli@fb.com>
  5. Apr 12, 2017 (1 commit)
    • md/raid1: simplify the splitting of requests. · c230e7e5
      NeilBrown authored
      raid1 currently splits requests in two different ways for
      two different reasons.
      
      First, bio_split() is used to ensure the bio fits within a
      resync accounting region.
      Second, multiple r1bios are allocated for each bio to handle
      the possibility of known bad blocks on some devices.
      
      This can be simplified to just use bio_split() once, and not
      use multiple r1bios.
      We delay the split until we know a maximum bio size that can
      be handled with a single r1bio, and then split the bio and
      queue the remainder for later handling.
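
      A sketch of that delayed split (conf->bio_split is the private bio_set
      added by this patch; surrounding code simplified):

           if (max_sectors < bio_sectors(bio)) {
                   struct bio *split = bio_split(bio, max_sectors,
                                                 GFP_NOIO, conf->bio_split);
                   bio_chain(split, bio);
                   generic_make_request(bio);      /* remainder queued for later */
                   bio = split;                    /* handle the prefix now */
           }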
      
      This avoids all loops inside raid1.c request handling.  Just
      a single read, or a single set of writes, is submitted to
      lower-level devices for each bio that comes from
      generic_make_request().
      
      When the bio needs to be split, generic_make_request() will
      do the necessary looping and call md_make_request() multiple
      times.
      
      raid1_make_request() no longer queues requests for raid1 to handle,
      so we can remove that branch from the 'if'.
      
      This patch also creates a new private bio_set
      (conf->bio_split) for splitting bios.  Using fs_bio_set
      is wrong, as it is meant to be used by filesystems, not
      block devices.  Using it inside md can lead to deadlocks
      under high memory pressure.
      
      Delete unused variable in raid1_write_request() (Shaohua)
      Signed-off-by: NeilBrown <neilb@suse.com>
      Signed-off-by: Shaohua Li <shli@fb.com>
  6. Mar 25, 2017 (1 commit)
  7. Feb 20, 2017 (2 commits)
    • RAID1: avoid unnecessary spin locks in I/O barrier code · 824e47da
      colyli@suse.de authored
      When I ran a parallel read performance test on a md raid1 device with
      two NVMe SSDs, I observed surprisingly bad throughput: with fio at 64KB
      block size, 40 sequential-read I/O jobs and 128 iodepth, overall
      throughput was only 2.7GB/s, around 50% of the ideal performance number.
      
      The perf reports locking contention happens at allow_barrier() and
      wait_barrier() code,
       - 41.41%  fio [kernel.kallsyms]     [k] _raw_spin_lock_irqsave
         - _raw_spin_lock_irqsave
               + 89.92% allow_barrier
               + 9.34% __wake_up
       - 37.30%  fio [kernel.kallsyms]     [k] _raw_spin_lock_irq
         - _raw_spin_lock_irq
               - 100.00% wait_barrier
      
      The reason is that these I/O barrier related functions,
       - raise_barrier()
       - lower_barrier()
       - wait_barrier()
       - allow_barrier()
      always take conf->resync_lock first, even when there are only regular
      read I/Os and no resync I/O at all. This is a huge performance penalty.
      
      The solution is a lockless-like algorithm in the I/O barrier code, which
      takes conf->resync_lock only when it has to.
      
      The original idea is from Hannes Reinecke, and Neil Brown provides
      comments to improve it. I continue to work on it, and make the patch into
      current form.
      
      In the new, simpler raid1 I/O barrier implementation, there are two
      wait-barrier functions,
       - wait_barrier()
         This calls _wait_barrier() and is used for regular write I/O. If there
         is resync I/O happening on the same I/O barrier bucket, or the whole
         array is frozen, the task will wait until there is no barrier on the
         same bucket, or the whole array is unfrozen.
       - wait_read_barrier()
         Since regular read I/O won't interfere with resync I/O (read_balance()
         will make sure only up-to-date data is read out), it is unnecessary
         to wait for a barrier for regular read I/Os; waiting is only necessary
         when the whole array is frozen.
      
      The operations on conf->nr_pending[idx], conf->nr_waiting[idx] and
      conf->barrier[idx] are very carefully designed in raise_barrier(),
      lower_barrier(), _wait_barrier() and wait_read_barrier(), in order to
      avoid unnecessary spin locks in these functions. Once
      conf->nr_pending[idx] is increased, a resync I/O with the same barrier
      bucket index has to wait in raise_barrier(). Then in _wait_barrier(), if
      no barrier is raised on the same barrier bucket index and the array is
      not frozen, the regular I/O doesn't need to hold conf->resync_lock; it
      can just increase conf->nr_pending[idx] and return to its caller.
      wait_read_barrier() is very similar to _wait_barrier(); the only
      difference is that it only waits when the array is frozen. For heavy
      parallel read I/Os, the lockless I/O barrier code gets rid of almost
      all spin lock cost.
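
      A simplified sketch of that fast path, following the description above
      (the slow path is elided):

           static void _wait_barrier(struct r1conf *conf, int idx)
           {
                   atomic_inc(&conf->nr_pending[idx]); /* publish our I/O first */
                   smp_mb__after_atomic();

                   /* common case: no barrier on this bucket, array not frozen;
                    * return without touching conf->resync_lock */
                   if (!READ_ONCE(conf->array_frozen) &&
                       !atomic_read(&conf->barrier[idx]))
                           return;

                   /* slow path: undo, wake any waiters, wait under the lock */
                   atomic_dec(&conf->nr_pending[idx]);
                   wake_up(&conf->wait_barrier);
                   /* ... take conf->resync_lock and wait for the barrier ... */
           }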
      
      This patch significantly improves raid1 read performance. In my
      testing, a raid1 device built from two NVMe SSDs, running fio with 64KB
      blocksize, 40 sequential-read I/O jobs and 128 iodepth, sees overall
      throughput increase from 2.7GB/s to 4.6GB/s (+70%).
      
      Changelog
      V4:
      - Change conf->nr_queued[] to atomic_t.
      - Define BARRIER_BUCKETS_NR_BITS by (PAGE_SHIFT - ilog2(sizeof(atomic_t)))
      V3:
      - Add smp_mb__after_atomic() as Shaohua and Neil suggested.
      - Change conf->nr_queued[] from atomic_t to int.
      - Change conf->array_frozen from atomic_t back to int, and use
        READ_ONCE(conf->array_frozen) to check value of conf->array_frozen
        in _wait_barrier() and wait_read_barrier().
      - In _wait_barrier() and wait_read_barrier(), add a call to
        wake_up(&conf->wait_barrier) after atomic_dec(&conf->nr_pending[idx]),
        to fix a deadlock between _wait_barrier()/wait_read_barrier() and
        freeze_array().
      V2:
      - Remove a spin_lock/unlock pair in raid1d().
      - Add more code comments to explain why there is no race when checking two
        atomic_t variables at the same time.
      V1:
      - Original RFC patch for comments.
      Signed-off-by: Coly Li <colyli@suse.de>
      Cc: Shaohua Li <shli@fb.com>
      Cc: Hannes Reinecke <hare@suse.com>
      Cc: Johannes Thumshirn <jthumshirn@suse.de>
      Cc: Guoqing Jiang <gqjiang@suse.com>
      Reviewed-by: Neil Brown <neilb@suse.de>
      Signed-off-by: Shaohua Li <shli@fb.com>
    • RAID1: a new I/O barrier implementation to remove resync window · fd76863e
      colyli@suse.de authored
      Commit 79ef3a8a ("raid1: Rewrite the implementation of iobarrier.")
      introduces a sliding resync window for the raid1 I/O barrier. This idea
      limits I/O barriers to happen only inside a sliding resync window;
      regular I/Os outside this resync window don't need to wait for a barrier
      any more. On a large raid1 device, it helps a lot to improve parallel
      write I/O throughput when there are background resync I/Os performing at
      the same time.
      
      The idea of the sliding resync window is awesome, but the code complexity
      is a challenge. The sliding resync window requires several variables to
      work collectively; this is complex and very hard to get right. Just grep
      "Fixes: 79ef3a8a" in the kernel git log: there are 8 more patches fixing
      the original resync window patch. This is not the end; any further
      related modification may easily introduce more regressions.
      
      Therefore I decided to implement a much simpler raid1 I/O barrier; by
      removing the resync window code, I believe life will be much easier.

      The brief idea of the simpler barrier is,
       - Do not maintain a globally unique resync window
       - Use multiple hash buckets to reduce I/O barrier conflicts: regular
         I/O only has to wait for a resync I/O when both of them have the same
         barrier bucket index, and vice versa.
       - Barrier conflicts can be reduced to an acceptable number if there are
         enough barrier buckets
      
      Here I explain how the barrier buckets are designed,
       - BARRIER_UNIT_SECTOR_SIZE
         The whole LBA address space of a raid1 device is divided into multiple
         barrier units, of size BARRIER_UNIT_SECTOR_SIZE.
         Bio requests won't cross the border of a barrier unit, which means the
         maximum bio size is BARRIER_UNIT_SECTOR_SIZE<<9 (64MB) in bytes.
         For random I/O, 64MB is large enough for both read and write requests;
         for sequential I/O, considering that the underlying block layer may
         merge requests into larger ones, 64MB is still good enough.
         Neil also points out that for a resync operation, "we want the resync to
         move from region to region fairly quickly so that the slowness caused
         by having to synchronize with the resync is averaged out over a fairly
         small time frame". For a full-speed resync, 64MB should take less than 1
         second. When resync is competing with other I/O, it could take up to a
         few minutes. Therefore 64MB is a fairly good range for resync.
      
       - BARRIER_BUCKETS_NR
         There are BARRIER_BUCKETS_NR buckets in total, which is defined by,
              #define BARRIER_BUCKETS_NR_BITS   (PAGE_SHIFT - 2)
              #define BARRIER_BUCKETS_NR        (1<<BARRIER_BUCKETS_NR_BITS)
         this patch changes the members of struct r1conf below from integers
         to arrays of integers,
              -       int                     nr_pending;
              -       int                     nr_waiting;
              -       int                     nr_queued;
              -       int                     barrier;
              +       int                     *nr_pending;
              +       int                     *nr_waiting;
              +       int                     *nr_queued;
              +       int                     *barrier;
         The number of array elements is defined as BARRIER_BUCKETS_NR. For a
         4KB kernel page size, (PAGE_SHIFT - 2) indicates there are 1024 I/O
         barrier buckets, and each array of integers occupies a single memory
         page. With 1024 buckets, a request smaller than the I/O barrier unit
         size has only a ~0.1% chance of having to wait for a resync to pause,
         which is a small enough fraction. Also, requesting a single memory page
         is more friendly to the kernel page allocator than a larger size.
      
       - I/O barrier bucket is indexed by bio start sector
         If multiple I/O requests hit different I/O barrier units, they only need
         to compete for the I/O barrier with other I/Os that hit the same I/O
         barrier bucket index. The index of the barrier bucket which a
         bio should look for is calculated by sector_to_idx(), which is defined
         in raid1.h as an inline function,
              static inline int sector_to_idx(sector_t sector)
              {
                      return hash_long(sector >> BARRIER_UNIT_SECTOR_BITS,
                                      BARRIER_BUCKETS_NR_BITS);
              }
         Here sector is the start sector number of a bio.
      
       - A single bio won't cross the boundary of an I/O barrier unit
         If a request goes across the boundary of a barrier unit, it will be
         split. A bio may be split in raid1_make_request() or
         raid1_sync_request(), if the sector count returned by
         align_to_barrier_unit_end() is smaller than the original bio size
         (see the sketch after this list).
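
      A sketch of how such a helper can clamp a request to the end of its
      barrier unit (consistent with the semantics above; the exact
      implementation may differ):

           static inline sector_t align_to_barrier_unit_end(sector_t start_sector,
                                                            sector_t sectors)
           {
                   /* sectors remaining until the end of this barrier unit */
                   sector_t len = round_up(start_sector + 1,
                                           BARRIER_UNIT_SECTOR_SIZE) - start_sector;

                   return min(len, sectors);
           }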
      
      Compared to the single sliding resync window,
       - Currently resync I/O grows linearly, therefore regular and resync I/O
         will conflict within a single barrier unit. So the I/O behavior is
         similar to the single sliding resync window.
       - But a barrier bucket is shared by all barrier units with an identical
         barrier unit index, so the probability of conflict might be higher
         than with a single sliding resync window, in the case where write I/Os
         always hit barrier units which have identical barrier bucket indexes to
         the resync I/Os. This is a very rare condition in real I/O workloads;
         I cannot imagine how it could happen in practice.
       - Therefore we can achieve a good enough low conflict rate with a much
         simpler barrier algorithm and implementation.
      
      There are a few changes that should be noticed,
       - In raid1d(), the code that decreases conf->nr_queued[idx] is changed
         into a single loop; it looks like this,
              spin_lock_irqsave(&conf->device_lock, flags);
              conf->nr_queued[idx]--;
              spin_unlock_irqrestore(&conf->device_lock, flags);
         This change generates more spin lock operations, but in the next patch
         of this patch set it will be replaced by a single line of code,
              atomic_dec(&conf->nr_queued[idx]);
         So we don't need to worry about spin lock cost here.
       - Mainline raid1 code split the original raid1_make_request() into
         raid1_read_request() and raid1_write_request(). If the original bio
         goes across an I/O barrier unit boundary, the bio will be split before
         calling raid1_read_request() or raid1_write_request(); this makes the
         code logic simpler and clearer.
       - In this patch wait_barrier() is moved from raid1_make_request() to
         raid1_write_request(). In raid1_read_request(), the original
         wait_barrier() is replaced by wait_read_barrier().
         The difference is that wait_read_barrier() only waits if the array is
         frozen; using different barrier functions in different code paths makes
         the code cleaner and easier to read.
      Changelog
      V4:
      - Add alloc_r1bio() to remove redundant r1bio memory allocation code.
      - Fix many typos in patch comments.
      - Use (PAGE_SHIFT - ilog2(sizeof(int))) to define BARRIER_BUCKETS_NR_BITS.
      V3:
      - Rebase the patch against latest upstream kernel code.
      - Many fixes from Neil's review comments,
        - Go back to using pointers to replace arrays in struct r1conf
        - Remove total_barriers from struct r1conf
        - Add more patch comments to explain how/why the values of
          BARRIER_UNIT_SECTOR_SIZE and BARRIER_BUCKETS_NR are decided.
        - Use get_unqueued_pending() to replace get_all_pendings() and
          get_all_queued()
        - Increase bucket number from 512 to 1024
      - Change code comments format by review from Shaohua.
      V2:
      - Use bio_split() to split the original bio if it goes across a barrier
        unit boundary, to make the code simpler, as suggested by Shaohua and
        Neil.
      - Use hash_long() to replace the original linear hash, to avoid a possible
        conflict between resync I/O and sequential write I/O, as suggested by
        Shaohua.
      - Add conf->total_barriers to record barrier depth, which is used to
        control the number of parallel sync I/O barriers, as suggested by
        Shaohua.
      - In the V1 patch, the barrier-bucket related members of r1conf below were
        allocated in a memory page. To make the code simpler, the V2 patch moves
        the memory space into struct r1conf, like this,
              -       int                     nr_pending;
              -       int                     nr_waiting;
              -       int                     nr_queued;
              -       int                     barrier;
              +       int                     nr_pending[BARRIER_BUCKETS_NR];
              +       int                     nr_waiting[BARRIER_BUCKETS_NR];
              +       int                     nr_queued[BARRIER_BUCKETS_NR];
              +       int                     barrier[BARRIER_BUCKETS_NR];
        This change is by the suggestion from Shaohua.
      - Remove some irrelevant code comments, as suggested by Guoqing.
      - Add a missing wait_barrier() before jumping to retry_write, in
        raid1_make_write_request().
      V1:
      - Original RFC patch for comments
      Signed-off-by: Coly Li <colyli@suse.de>
      Cc: Johannes Thumshirn <jthumshirn@suse.de>
      Cc: Guoqing Jiang <gqjiang@suse.com>
      Reviewed-by: Neil Brown <neilb@suse.de>
      Signed-off-by: Shaohua Li <shli@fb.com>
  8. Nov 23, 2016 (1 commit)
    • md/raid1: add failfast handling for reads. · 2e52d449
      NeilBrown authored
      If a device is marked FailFast and it is not the only device
      we can read from, we mark the bio with REQ_FAILFAST_* flags.
      
      If this does fail, we don't try read repair but just allow
      failure.  If it was the last device it doesn't fail of
      course, so the retry happens on the same device - this time
      without FAILFAST.  A subsequent failure will not retry but
      will just pass up the error.
      
      During resync we may use FAILFAST requests and on a failure
      we will simply use the other device(s).
      
      During recovery we will only use FAILFAST in the unusual
      case where there are multiple places to read from - i.e. if
      there are > 2 devices.  If we get a failure we will fail the
      device and complete the resync/recovery with remaining
      devices.
      
      The new R1BIO_FailFast flag is set on a read request to suggest
      that a FAILFAST request might be acceptable.  The rdev needs
      to have FailFast set as well for the read to actually use
      REQ_FAILFAST_*.
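
      A sketch of how the two flags might combine when the read bio is
      submitted (MD_FAILFAST standing for the REQ_FAILFAST_* bits; the exact
      names are assumptions):

           /* only use failfast when both the rdev and the r1bio agree */
           if (test_bit(FailFast, &mirror->rdev->flags) &&
               test_bit(R1BIO_FailFast, &r1_bio->state))
                   read_bio->bi_opf |= MD_FAILFAST;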
      
      We need to know there are at least two working devices
      before we can set R1BIO_FailFast, so we mustn't stop looking
      at the first device we find.  So the "min_pending == 0"
      handling is changed to not exit early, but to always choose
      the best_pending_disk if min_pending == 0.
      
      The spinlocked region in raid1_error() is enlarged to ensure
      that if two bios, reading from two different devices, fail
      at the same time, then there is no risk that both devices
      will be marked faulty, leaving zero "In_sync" devices.
      Signed-off-by: NeilBrown <neilb@suse.com>
      Signed-off-by: Shaohua Li <shli@fb.com>
  9. Nov 10, 2016 (1 commit)
  10. Oct 12, 2015 (1 commit)
    • md-cluster: Use a small window for resync · c40f341f
      Goldwyn Rodrigues authored
      Suspending the entire device for resync could take too long, so resync
      in small chunks.

      The cluster's resync window (32M) is maintained in r1conf as
      cluster_sync_low and cluster_sync_high and processed in
      raid1's sync_request(). If the current resync is outside the cluster
      resync window (see the sketch after the steps below):
      
      1. Set the cluster_sync_low to curr_resync_completed.
      2. Check if the sync will fit in the new window, if not issue a
         wait_barrier() and set cluster_sync_low to sector_nr.
      3. Set cluster_sync_high to cluster_sync_low + resync_window.
      4. Send a message to all nodes so they may add it to their suspension
         list.
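
      A sketch of those steps in sync_request() (the window-size variable and
      the message-sending call are assumptions based on this description):

           if (sector_nr + nr_sectors > conf->cluster_sync_high) {
                   conf->cluster_sync_low = mddev->curr_resync_completed;  /* 1 */
                   if (sector_nr + nr_sectors >
                       conf->cluster_sync_low + resync_window_sectors) {
                           wait_barrier(conf);                             /* 2 */
                           conf->cluster_sync_low = sector_nr;
                   }
                   conf->cluster_sync_high = conf->cluster_sync_low +
                                             resync_window_sectors;        /* 3 */
                   md_cluster_ops->resync_info_update(mddev,               /* 4 */
                                                      conf->cluster_sync_low,
                                                      conf->cluster_sync_high);
           }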
      
      bitmap_cond_end_sync() is modified to allow forcing a sync, in order
      to bring curr_resync_completed up to date with the sector passed.
      Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
  11. Sep 01, 2015 (1 commit)
    • md/raid1: ensure device failure recorded before write request returns. · 55ce74d4
      NeilBrown authored
      When a write to one of the legs of a RAID1 fails, the failure is
      recorded in the metadata of the other leg(s) so that after a restart
      the data on the failed drive won't be trusted even if that drive seems
      to be working again (maybe a cable was unplugged).
      
      Similarly when we record a bad-block in response to a write failure,
      we must not let the write complete until the bad-block update is safe.
      
      Currently there is no interlock between the write request completing
      and the metadata update.  So it is possible that the write will
      complete, the app will confirm success in some way, and then the
      machine will crash before the metadata update completes.
      
      This is an extremely small hole for a race to fit in, but it is
      theoretically possible and so should be closed.
      
      So (see the sketch after this list):
       - set MD_CHANGE_PENDING when requesting a metadata update for a
         failed device, so we can know with certainty when it completes
       - queue requests that experienced an error on a new queue which
         is only processed after the metadata update completes
       - call raid_end_bio_io() on bios in that queue when the time comes.
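
      A sketch of that interlock (the list and helper names are assumptions
      based on this description; locking elided):

           /* on write error: request a metadata update, park the r1_bio */
           set_bit(MD_CHANGE_PENDING, &mddev->flags);
           md_wakeup_thread(mddev->thread);
           list_add(&r1_bio->retry_list, &conf->bio_end_io_list);

           /* in raid1d(): complete parked bios only once the update is safe */
           if (!test_bit(MD_CHANGE_PENDING, &mddev->flags))
                   while (!list_empty(&conf->bio_end_io_list)) {
                           r1_bio = list_first_entry(&conf->bio_end_io_list,
                                                     struct r1bio, retry_list);
                           list_del(&r1_bio->retry_list);
                           raid_end_bio_io(r1_bio);
                   }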
      Signed-off-by: NeilBrown <neilb@suse.com>
  12. Feb 04, 2015 (1 commit)
    • md: make ->congested robust against personality changes. · 5c675f83
      NeilBrown authored
      There is currently no locking around calls to the 'congested'
      bdi function.  If called at an awkward time while an array is
      being converted from one level (or personality) to another, there
      is a tiny chance of running code in an unreferenced module etc.
      
      So add a 'congested' function to the md_personality operations
      structure, and call it with appropriate locking from a central
      'mddev_congested'.
      
      When the array personality is changing the array will be 'suspended'
      so no IO is processed.
      If mddev_congested detects this, it simply reports that the
      array is congested, which is a safe guess.
      As mddev_suspend() calls synchronize_rcu(), mddev_congested can
      avoid races by including the whole call inside an rcu_read_lock()
      region.
      This requires that the congested functions for all subordinate devices
      can be run under rcu_read_lock().  Fortunately this is the case.
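
      A simplified sketch of the central helper described above:

           static int mddev_congested(struct mddev *mddev, int bits)
           {
                   struct md_personality *pers = mddev->pers;
                   int ret = 0;

                   rcu_read_lock();
                   if (mddev->suspended)
                           ret = 1;        /* safe guess while suspended */
                   else if (pers && pers->congested)
                           ret = pers->congested(mddev, bits);
                   rcu_read_unlock();
                   return ret;
           }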
      Signed-off-by: NeilBrown <neilb@suse.de>
  13. Oct 14, 2014 (1 commit)
  14. Nov 19, 2013 (2 commits)
    • raid1: Rewrite the implementation of iobarrier. · 79ef3a8a
      majianpeng authored
      There is an iobarrier in raid1 because of contention between normal IO and
      resync IO.  It suspends all normal IO when resync/recovery happens.
      
      However if normal IO is outside the resync window, there is no contention.
      So this patch changes the barrier mechanism to only block IO that
      could contend with the resync that is currently happening.
      
      We partition the whole space into five parts.
      |---------|-----------|------------|----------------|-------|
              start   next_resync   start_next_window    end_window
      
      start + RESYNC_WINDOW = next_resync
      next_resync + NEXT_NORMALIO_DISTANCE = start_next_window
      start_next_window + NEXT_NORMALIO_DISTANCE = end_window
      
      Firstly we introduce some concepts:
      
      1 - RESYNC_WINDOW: For resync, there are 32 resync requests at most at the
            same time. A sync request is RESYNC_BLOCK_SIZE(64*1024).
            So the RESYNC_WINDOW is 32 * RESYNC_BLOCK_SIZE, that is 2MB.
      2 - NEXT_NORMALIO_DISTANCE: the distance between next_resync
            and start_next_window.  It also indicates the distance between
            start_next_window and end_window.
            It is currently 3 * RESYNC_WINDOW_SIZE but could be tuned if
            this turned out not to be optimal.
      3 - next_resync: the next sector at which we will do sync IO.
      4 - start: a position which is at most RESYNC_WINDOW before
            next_resync.
      5 - start_next_window:  a position which is NEXT_NORMALIO_DISTANCE
            beyond next_resync.  Normal-io after this position doesn't need to
            wait for resync-io to complete.
      6 - end_window:  a position which is 2 * NEXT_NORMALIO_DISTANCE beyond
            next_resync.  This also doesn't need to wait, but is counted
            differently.
      7 - current_window_requests:  the count of normalIO between
            start_next_window and end_window.
      8 - next_window_requests: the count of normalIO after end_window.
      
      NormalIO will be partitioned into four types:
      
      NormIO1:  the end sector of the bio is smaller than or equal to start
      NormIO2:  the start sector of the bio is larger than or equal to end_window
      NormIO3:  the start sector of the bio is larger than or equal to
                start_next_window.
      NormIO4:  the location is between start and start_next_window
      
      |---------|-------------|--------------------|---------------|----------|
             start       next_resync      start_next_window    end_window
        NormIO1     NormIO4         NormIO4            NormIO3       NormIO2
      
      For NormIO1, we don't need any io barrier.
      For NormIO4, we used a similar approach to the original iobarrier
          mechanism.  The normalIO and resyncIO must be kept separate.
      For NormIO2/3, we add two fields to struct r1conf: "current_window_requests"
          and "next_window_requests". They indicate the count of active
          requests in the two windows.
          For these, we don't wait for resync io to complete.
      
      For the resync action, if there are NormIO4s, we must wait for them.
      If not, we can proceed.
      But if the resync action reaches start_next_window and
      current_window_requests > 0 (that is, there are NormIO3s), we must
      wait until current_window_requests becomes zero.
      When current_window_requests becomes zero, start_next_window also
      moves forward. Then current_window_requests will be replaced by
      next_window_requests.
      
      There is a problem of when and how to change from NormIO2 to
      NormIO3.  Only then can the sync action progress.
      
      We add a field in struct r1conf "start_next_window".
      
      A: if start_next_window == MaxSector, it means there are no NormIO2/3.
         So start_next_window = next_resync + NEXT_NORMALIO_DISTANCE
      B: if current_window_requests == 0 && next_window_requests != 0, it
         means start_next_window moves to end_window
      
      There is another problem of how to differentiate between
      old NormIO2 (now NormIO3) and new NormIO2.
      For example, there are many bios which are NormIO2 and one bio which is
      NormIO3. The NormIO3 completes first, so the NormIO2 bios become NormIO3.
      
      We add a field in struct r1bio "start_next_window".
      This is used to record the position conf->start_next_window when the call
      to wait_barrier() is made in make_request().
      
      In allow_barrier(), we check conf->start_next_window.
      If r1bio->start_next_window == conf->start_next_window, it means
      there was no transition between NormIO2 and NormIO3.
      If r1bio->start_next_window != conf->start_next_window, it means
      there was a transition between NormIO2 and NormIO3.  There can only
      have been one transition.  So it means the bio is an old NormIO2.
      
      For one bio, there may be many r1bios. So we make sure
      all the r1bio->start_next_window values are the same.
      If we hit a blocked_dev in make_request(), we must call allow_barrier()
      and then wait_barrier(), so the value of conf->start_next_window may
      change in between.
      If the r1bios of one bio had different start_next_window values, the
      accounting for that bio would depend on the last r1bio's value and
      would go wrong. To avoid this, we must wait for the previous r1bios
      to complete.
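
      A condensed sketch of the allow_barrier() bookkeeping that follows from
      the rules above (locking elided; details may differ from the patch):

           if (r1_bio->start_next_window == conf->start_next_window) {
                   /* no transition: NormIO2 vs NormIO3 by position */
                   if (bi_sector >= conf->start_next_window +
                                    NEXT_NORMALIO_DISTANCE)
                           conf->next_window_requests--;
                   else
                           conf->current_window_requests--;
           } else {
                   /* one transition happened: this bio was an old NormIO2 */
                   conf->current_window_requests--;
           }
           if (!conf->current_window_requests && conf->next_window_requests) {
                   /* the window slides forward */
                   conf->current_window_requests = conf->next_window_requests;
                   conf->next_window_requests = 0;
                   conf->start_next_window += NEXT_NORMALIO_DISTANCE;
           }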
      Signed-off-by: Jianpeng Ma <majianpeng@gmail.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
    • raid1: Add a field array_frozen to indicate whether raid in freeze state. · b364e3d0
      majianpeng authored
      The following patch will rewrite the interaction between normal IO and
      resync IO, so we add a field to indicate whether the raid array is in
      the frozen state.
      Signed-off-by: Jianpeng Ma <majianpeng@gmail.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
  15. Jul 31, 2012 (4 commits)
    • md/raid1: prevent merging too large request · 12cee5a8
      Shaohua Li authored
      For SSDs, if the request size exceeds a specific value (the optimal io
      size), the request size isn't important for bandwidth. In that condition,
      if making the request size bigger will leave some disks idle, the total
      throughput will actually drop. A good example is doing a readahead in a
      two-disk raid1 setup.
      
      So when should we split big requests? We absolutely don't want to split a
      big request into very small requests. Even on SSDs, big request transfer
      is more efficient. This patch only considers requests with size above the
      optimal io size.
      
      If all disks are busy, is it worth doing a split? Say the optimal io size
      is 16k, with two 32k requests and two disks. We can let each disk run one
      32k request, or split the requests into four 16k requests and have each
      disk run two. It's hard to say which case is better; it depends on the
      hardware.
      
      So only consider the case where there are idle disks. For readahead, a
      split is always better in this case. And in my test, the patch below can
      improve throughput by > 30%. Hmm, not 100%, because the disk isn't 100%
      busy.
      
      Such a case can happen not just in readahead but also, for example, in
      direct IO. But I suppose direct IO usually will have a bigger IO depth
      and make all disks busy, so I ignored it.
      
      Note: if the raid uses any hard disk, we don't prevent merging. That would
      make performance worse.
      Signed-off-by: Shaohua Li <shli@fusionio.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md/raid1: make sequential read detection per disk based · be4d3280
      Shaohua Li authored
      Currently the sequential read detection is global. It's natural to make it
      per-disk based, which can improve the detection for multiple concurrent
      sequential reads. And the next patch will make SSD read balance not use
      the distance-based algorithm, where this change helps detect truly
      sequential reads for SSDs.
      Signed-off-by: Shaohua Li <shli@fusionio.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
    • MD: Move macros from raid1*.h to raid1*.c · 473e87ce
      Jonathan Brassow authored
      MD RAID1/RAID10: Move some macros from .h file to .c file
      
      There are three macros (IO_BLOCKED, IO_MADE_GOOD, BIO_SPECIAL) which are
      defined in both raid1.h and raid10.h.  They are only used in their
      respective .c files. However, if we wish to make RAID10 accessible to the
      device-mapper RAID target (dm-raid.c), then we need to move these macros
      into the .c files where they are used so that they do not conflict with
      each other.
      
      The macros from the two files are identical and could be moved into md.h, but
      I chose to leave the duplication and have them remain in the personality
      files.
      Signed-off-by: Jonathan Brassow <jbrassow@redhat.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
    • MD RAID1: rename mirror_info structure · 0eaf822c
      Jonathan Brassow authored
      MD RAID1: Rename the structure 'mirror_info' to 'raid1_info'
      
      The same structure name ('mirror_info') is used by raid10.  Each of these
      structures is defined in its respective header file.  If dm-raid is
      to support both RAID1 and RAID10, the header files will be included and
      the structure names must not collide.  While only one of these structure
      names needs to change, this patch adds consistency to the naming of the
      structures.
      Signed-off-by: Jonathan Brassow <jbrassow@redhat.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
  16. Dec 23, 2011 (1 commit)
    • md/raid1: Allocate spare to store replacement devices and their bios. · 8f19ccb2
      NeilBrown authored
      In RAID1, a replacement is much like a normal device, so we just
      double the size of the relevant arrays and look at all possible
      devices for reads and writes.
      
      This means that the array looks like it is now double the size in some
      way - we need to be careful about that.
      In particular, when checking if the array is still degraded while
      creating a recovery request, we need to only consider the first 'half',
      i.e. the real (non-replacement) devices.
      Signed-off-by: NeilBrown <neilb@suse.de>
  17. Oct 11, 2011 (7 commits)
  18. Oct 07, 2011 (1 commit)
  19. Jul 28, 2011 (4 commits)
    • md/raid1: Handle write errors by updating badblock log. · cd5ff9a1
      NeilBrown authored
      When we get a write error (in the data area, not in metadata),
      update the badblock log rather than failing the whole device.
      
      As the write may well cover many blocks, we try writing each
      block individually and only log the ones which fail.
      Signed-off-by: NeilBrown <neilb@suse.de>
      Reviewed-by: Namhyung Kim <namhyung@gmail.com>
    • md/raid1: store behind-write pages in bi_vecs. · 2ca68f5e
      NeilBrown authored
      When performing write-behind we allocate pages to store the data
      during the write.
      Previously we just kept a list of pages.  Now we keep a list of
      bio_vecs, which include offset and size.
      This means that the r1bio has complete information to create a new
      bio, which will be needed for retrying after write errors.
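
      In diff form, the idea amounts to roughly this change in the r1bio
      (field names are assumptions based on this description):

           -    struct page     **behind_pages;  /* just the pages */
           +    struct bio_vec  *behind_bvecs;   /* page + offset + len */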
      Signed-off-by: NeilBrown <neilb@suse.de>
      Reviewed-by: Namhyung Kim <namhyung@gmail.com>
    • md/raid1: clear bad-block record when write succeeds. · 4367af55
      NeilBrown authored
      If we succeed in writing to a block that was recorded as
      being bad, we clear the bad-block record.
      
      This requires some delayed handling as the bad-block-list update has
      to happen in process-context.
      Signed-off-by: NeilBrown <neilb@suse.de>
      Reviewed-by: Namhyung Kim <namhyung@gmail.com>
    • md/raid1: avoid reading from known bad blocks. · d2eb35ac
      NeilBrown authored
      Now that we have a bad block list, we should not read from those
      blocks.
      There are several main parts to this:
        1/ read_balance needs to check for bad blocks, and return not only
           the chosen device, but also how many good blocks are available
           there.
        2/ fix_read_error needs to avoid trying to read from bad blocks.
        3/ read submission must be ready to issue multiple reads to
           different devices as different bad blocks on different devices
           could mean that a single large read cannot be served by any one
           device, but can still be served by the array.
           This requires keeping count of the number of outstanding requests
           per bio.  This count is stored in 'bi_phys_segments'
        4/ retrying a read needs to also be ready to submit a smaller read
           and queue another request for the rest.
      
      This does not yet handle bad blocks when reading to perform resync,
      recovery, or check.
      
      'md_trim_bio' will also be used for RAID10, so put it in md.c and
      export it.
      Signed-off-by: NeilBrown <neilb@suse.de>
  20. Jul 27, 2011 (1 commit)
    • md: change managed of recovery_disabled. · 5389042f
      NeilBrown authored
      If we hit a read error while recovering a mirror, we want to abort the
      recovery without necessarily failing the disk - as having a disk with
      a read error is better than not having an array at all.
      Currently this is managed with a per-array flag "recovery_disabled"
      and is only implemented for RAID1.  For RAID10 we will need finer
      grained control as we might want to disable recovery for individual
      devices separately.
      
      So push more of the decision making into the personality.
      'recovery_disabled' is now a 'cookie' which is copied when the
      personality wants to disable recovery, and is changed when a device is
      added to the array, as this is used as a trigger to 'try recovery
      again'.
      
      This will allow RAID10 to get the control that it needs.
      Signed-off-by: NeilBrown <neilb@suse.de>
  21. Jun 08, 2011 (1 commit)
  22. May 11, 2011 (1 commit)
    • md/raid1: improve handling of pages allocated for write-behind. · af6d7b76
      NeilBrown authored
      The current handling and freeing of these pages is a bit fragile.
      We only keep the list of allocated pages in each bio, so we need to
      still have a valid bio when freeing the pages, which is a bit clumsy.
      
      So simply store the allocated page list in the r1_bio so it can easily
      be found and freed when we are finished with the r1_bio.
      Signed-off-by: NeilBrown <neilb@suse.de>
  23. Oct 29, 2010 (1 commit)
  24. Sep 10, 2010 (1 commit)
    • md: implment REQ_FLUSH/FUA support · e9c7469b
      Tejun Heo authored
      This patch converts md to support REQ_FLUSH/FUA instead of the now
      deprecated REQ_HARDBARRIER.  In the core part (md.c), the following
      changes are notable.
      
      * Unlike REQ_HARDBARRIER, REQ_FLUSH/FUA don't interfere with
        processing of other requests and thus there is no reason to mark the
        queue congested while FLUSH/FUA is in progress.
      
      * REQ_FLUSH/FUA failures are final and its users don't need retry
        logic.  Retry logic is removed.
      
      * Preflush needs to be issued to all member devices but FUA writes can
        be handled the same way as other writes - their processing can be
        deferred to request_queue of member devices.  md_barrier_request()
        is renamed to md_flush_request() and simplified accordingly.
      
      For linear, raid0 and multipath, the core changes are enough.  raid1,
      5 and 10 need the following conversions.
      
      * raid1: Handling of FLUSH/FUA bios can simply be deferred to the
        request_queues of member devices (see the sketch below).  Barrier
        related logic removed.
      
      * raid5: Queue draining logic dropped.  FUA bit is propagated through
        biodrain and stripe reconstruction such that all the updated parts
        of the stripe are written out with FUA writes if any of the dirtying
        writes was FUA.  preread_active_stripes handling in make_request()
        is updated as suggested by Neil Brown.
      
      * raid10: FUA bit needs to be propagated to write clones.
      
      linear, raid0, 1, 5 and 10 tested.
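
      For raid1 the deferral is essentially this (a sketch using the flag and
      helper names of that era; surrounding code omitted):

           if (unlikely(bio->bi_rw & REQ_FLUSH)) {
                   /* the md core issues the preflush to all member devices */
                   md_flush_request(mddev, bio);
                   return 0;
           }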
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Neil Brown <neilb@suse.de>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
  25. Dec 14, 2009 (1 commit)
  26. Jun 16, 2009 (1 commit)
    • md: remove mddev_to_conf "helper" macro · 070ec55d
      NeilBrown authored
      Having a macro just to cast a void* isn't really helpful.
      I would much rather see that we are simply dereferencing ->private
      than have to know what the macro does.

      So open code the macro everywhere and remove the pointless cast.
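
      Concretely, the open-coding looks like this (conf_t being the raid1
      config typedef of that era):

           /* before */
           conf_t *conf = mddev_to_conf(mddev);
           /* after */
           conf_t *conf = mddev->private;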
      Signed-off-by: NeilBrown <neilb@suse.de>