1. 28 May 2015 (4 commits)
    • md/raid5: remove condition test from check_break_stripe_batch_list. · 4e3d62ff
      Committed by NeilBrown
      handle_stripe_clean_event() contains a chunk of code very
      similar to check_break_stripe_batch_list().
      If we make the latter more like the former, we can end up
      with just one copy of this code.
      
      This first step removes the condition (and the 'check_' part of the
      name).  This has the added advantage of making it clear, at the point
      where the function is called, exactly what check is being performed.
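      As an illustration, a minimal before/after sketch (simplified from
      drivers/md/raid5.c; the STRIPE_BATCH_ERR test is the condition in
      question):

        /* before: the test is hidden inside the helper */
        static void check_break_stripe_batch_list(struct stripe_head *head_sh)
        {
                if (!test_and_clear_bit(STRIPE_BATCH_ERR, &head_sh->state))
                        return;
                /* ... dissolve the batch ... */
        }

        /* after: the caller states the condition explicitly */
        if (test_and_clear_bit(STRIPE_BATCH_ERR, &head_sh->state))
                break_stripe_batch_list(head_sh);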
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md/raid5: Ensure a batch member is not handled prematurely. · b15a9dbd
      Committed by NeilBrown
      If a stripe is a member of a batch, but not the head, it must
      not be handled separately from the rest of the batch.
      
      clear_batch_ready() handles this requirement to some extent, but not
      completely.  If a member is passed to handle_stripe() a second time,
      it returns '0', indicating that the stripe can be handled, which is
      wrong.
      So add an extra test.
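      A minimal sketch of the extra test (assuming the batch_head and
      STRIPE_BATCH_READY layout used by the neighbouring commits; a
      non-zero return tells handle_stripe() to leave the stripe alone):

        static int clear_batch_ready(struct stripe_head *sh)
        {
                if (!test_and_clear_bit(STRIPE_BATCH_READY, &sh->state))
                        /* previously "return 0": a batch member seen a
                         * second time must still not be handled alone */
                        return sh->batch_head && sh->batch_head != sh;
                /* ... move the batch members onto the head's list ... */
                return 0;
        }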
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md/raid5: close race between STRIPE_BIT_DELAY and batching. · d0852df5
      Committed by NeilBrown
      When we add a write to a stripe we need to make sure the bitmap bit
      is set.  While doing that, the stripe is not locked, so it could be
      added to a batch, after which further changes to STRIPE_BIT_DELAY
      and ->bm_seq are ineffective.

      So we need to hold off adding the stripe to a batch until
      bitmap_startwrite has completed at least once, and we need to avoid
      further changes to STRIPE_BIT_DELAY once the stripe has been added
      to a batch.
      
      If a bitmap_startwrite() completes after the stripe was added to a
      batch, it will not have set the bit, only incremented a counter, so no
      extra delay of the stripe is needed.
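      A sketch of the resulting ordering in add_stripe_bio(), assuming a
      flag such as STRIPE_BITMAP_PENDING is used to keep the stripe out of
      any batch while the bitmap write is in flight:

        set_bit(STRIPE_BITMAP_PENDING, &sh->state);
        spin_unlock_irq(&sh->stripe_lock);
        bitmap_startwrite(conf->mddev->bitmap, sh->sector,
                          STRIPE_SECTORS, 0);
        spin_lock_irq(&sh->stripe_lock);
        clear_bit(STRIPE_BITMAP_PENDING, &sh->state);
        if (!sh->batch_head) {
                /* once batched, STRIPE_BIT_DELAY and ->bm_seq are frozen */
                sh->bm_seq = conf->seq_flush + 1;
                set_bit(STRIPE_BIT_DELAY, &sh->state);
        }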
      Reported-by: Shaohua Li <shli@kernel.org>
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md/raid5: ensure whole batch is delayed for all required bitmap updates. · 2b6b2457
      Committed by NeilBrown
      When we add a stripe to a batch, we need to be sure that the head
      stripe will wait for the bitmap update required by the new stripe.
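      A hedged sketch of the propagation in stripe_add_to_batch_list():
      the new member's bm_seq is folded into the head so that the whole
      batch is delayed for long enough:

        if (test_and_clear_bit(STRIPE_BIT_DELAY, &sh->state)) {
                int seq = sh->bm_seq;
                if (test_bit(STRIPE_BIT_DELAY, &head->state) &&
                    head->bm_seq > seq)
                        seq = head->bm_seq;
                set_bit(STRIPE_BIT_DELAY, &head->state);
                head->bm_seq = seq;
        }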
      Signed-off-by: NeilBrown <neilb@suse.de>
  2. 21 May 2015 (3 commits)
  3. 08 May 2015 (7 commits)
    • md/raid5: fix handling of degraded stripes in batches. · bb27051f
      Committed by NeilBrown
      There is no need for special handling of stripe-batches when the array
      is degraded.
      
      There may be a need if there is a failure in the batch, but
      STRIPE_DEGRADED does not imply an error.
      
      So don't set STRIPE_BATCH_ERR in ops_run_io just because the array is
      degraded.
      This actually causes a bug: the STRIPE_DEGRADED flag gets cleared in
      check_break_stripe_batch_list() and so the bitmap bit gets cleared
      when it shouldn't.
      
      So in check_break_stripe_batch_list(), split the batch up completely -
      again STRIPE_DEGRADED isn't meaningful.
      
      Also don't set STRIPE_BATCH_ERR when there is a write error to a
      replacement device.  This simply removes the replacement device and
      requires no extra handling.
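      In outline (a sketch of the intent, not the literal diff), the two
      call sites change as follows:

        /* ops_run_io(): removed - a degraded array is not an error
         *      if (test_bit(STRIPE_DEGRADED, &sh->state))
         *              set_bit(STRIPE_BATCH_ERR, &head_sh->state);
         */

        /* raid5_end_write_request(): only a failed write to a regular
         * data device breaks up the batch; a replacement-device failure
         * merely drops the replacement */
        if (!uptodate && !replacement && sh->batch_head)
                set_bit(STRIPE_BATCH_ERR, &sh->batch_head->state);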
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md/raid5: fix allocation of 'scribble' array. · 738a2738
      Committed by NeilBrown
      As the new 'scribble' array is sized based on chunk size, we need to
      make sure its size matches the larger of the 'old' and 'new' chunk
      sizes when the array is undergoing reshape.

      We also potentially need to resize it even when not resizing the
      stripe cache, as the chunk size can change without changing the
      number of devices.

      So move the 'resize' code into a separate function, and consider
      both old and new sizes when allocating.
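      A sketch of the separated helper and its call site (names follow the
      commit text; prev_chunk_sectors/previous_raid_disks describe the
      pre-reshape geometry):

        /* reallocate each CPU's scribble array at the new size, keeping
         * the old one if allocation fails */
        static int resize_chunks(struct r5conf *conf, int new_disks,
                                 int new_sectors);

        err = resize_chunks(conf,
                            max(conf->raid_disks, conf->previous_raid_disks),
                            max(conf->chunk_sectors,
                                conf->prev_chunk_sectors));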
      Signed-off-by: NeilBrown <neilb@suse.de>
      Fixes: 46d5b785 ("raid5: use flex_array for scribble data")
    • md/raid5: don't record new size if resize_stripes fails. · 6e9eac2d
      Committed by NeilBrown
      If any memory allocation in resize_stripes fails we will return
      -ENOMEM, but in some cases we update conf->pool_size anyway.
      
      This means that if we try again, the allocations will be assumed
      to be larger than they are, and badness results.
      
      So only update pool_size if there is no error.
      
      This bug was introduced in 2.6.17 and the patch is suitable for
      -stable.
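      The shape of the fix, sketched (simplified; 'err' accumulates across
      the allocation loop, and pool_size only moves forward on success):

        for (i = conf->pool_size; i < newsize; i++) {
                nsh->dev[i].page = alloc_page(GFP_NOIO);
                if (!nsh->dev[i].page)
                        err = -ENOMEM;
        }
        /* ... */
        if (!err)
                conf->pool_size = newsize; /* was updated even on error */
        return err;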
      
      Fixes: ad01c9e3 ("[PATCH] md: Allow stripes to be expanded in preparation for expanding an array")
      Cc: stable@vger.kernel.org (v2.6.17+)
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md/raid5: avoid reading parity blocks for full-stripe write to degraded array · 10d82c5f
      Committed by NeilBrown
      When performing a reconstruct write, we need to read all blocks that
      are not being over-written ... except the parity (P and Q) blocks.

      The code currently reads them as well (precisely because they are
      not being over-written!), which is unnecessary.
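      A sketch of the intended condition (the actual test in
      need_this_block() is more involved):

        /* reconstruct-write: fetch the data blocks we are not
         * over-writing, but never P or Q, which will be recomputed */
        if (s->to_write && !test_bit(R5_OVERWRITE, &dev->flags) &&
            disk_idx != sh->pd_idx && disk_idx != sh->qd_idx)
                return 1;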
      Signed-off-by: NeilBrown <neilb@suse.de>
      Fixes: ea664c82 ("md/raid5: need_this_block: tidy/fix last condition.")
    • md/raid5: more incorrect BUG_ON in handle_stripe_fill. · b0c783b3
      Committed by NeilBrown
      It is not incorrect to call handle_stripe_fill() when
      a batch of full-stripe writes is active.
      It is, however, a BUG if fetch_block() then decides
      it needs to actually fetch anything.
      
      So move the 'BUG_ON' to where it belongs.
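      Sketched, the move looks like this: the assertion now fires only if
      a fetch is actually attempted while a full-stripe-write batch is
      active:

        static int fetch_block(struct stripe_head *sh,
                               struct stripe_head_state *s,
                               int disk_idx, int disks)
        {
                if (need_this_block(sh, s, disk_idx, disks)) {
                        /* calling handle_stripe_fill() during a batch is
                         * fine; actually needing a block here is not */
                        BUG_ON(sh->batch_head);
                        /* ... compute or read the block ... */
                }
                return 0;
        }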
      Signed-off-by: NeilBrown <neilb@suse.de>
      Fixes: 59fc630b ("RAID5: batch adjacent full stripe write")
    • md/raid5: new alloc_stripe() to allocate and initialize a stripe. · f18c1a35
      Committed by NeilBrown
      The new batch_lock and batch_list fields are being initialized in
      grow_one_stripe() but not in resize_stripes().  This causes a crash
      on resize.
      
      So separate the core initialization into a new function and call it
      from both allocation sites.
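      A sketch of the shared initializer (essentially the union of what
      grow_one_stripe() used to set up and what resize_stripes() was
      missing):

        static struct stripe_head *alloc_stripe(struct kmem_cache *sc,
                                                gfp_t gfp)
        {
                struct stripe_head *sh = kmem_cache_zalloc(sc, gfp);

                if (sh) {
                        spin_lock_init(&sh->stripe_lock);
                        spin_lock_init(&sh->batch_lock); /* missed on resize */
                        INIT_LIST_HEAD(&sh->batch_list); /* likewise */
                        INIT_LIST_HEAD(&sh->lru);
                        atomic_set(&sh->count, 1);
                }
                return sh;
        }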
      Signed-off-by: NeilBrown <neilb@suse.de>
      Fixes: 59fc630b ("RAID5: batch adjacent full stripe write")
    • md-raid0: conditional mddev->queue access to suit dm-raid · b6538fe3
      Committed by Heinz Mauelshagen
      This patch is a prerequisite for dm-raid "raid0" support.  dm-raid
      needs to access the MD RAID0 personality, which makes unconditional
      accesses to mddev->queue; that pointer is NULL when dm-raid is
      stacked on top of MD.

      Most of the conditional mddev->queue accesses made it upstream, but
      this one was missed, which prevents md raid0 from setting disk stack
      limits (those are set by dm core when md runs underneath dm).
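      The missing guard, sketched against raid0's queue-limit setup (under
      dm-raid, mddev->queue is NULL and dm core sets the stack limits
      instead):

        if (mddev->queue) {
                blk_queue_io_min(mddev->queue, mddev->chunk_sectors << 9);
                blk_queue_io_opt(mddev->queue,
                                 (mddev->chunk_sectors << 9) *
                                 mddev->raid_disks);
        }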
      Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com>
      Tested-by: Heinz Mauelshagen <heinzm@redhat.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
  4. 04 May 2015 (8 commits)
  5. 03 May 2015 (3 commits)
    • ext4: fix growing of tiny filesystems · 2c869b26
      Committed by Jan Kara
      The estimate of necessary transaction credits in
      ext4_flex_group_add() is too pessimistic.  It reserves credits for
      the sb, the resize inode, and the resize inode's dindirect block for
      each group added in a flex group, although these are always the same
      blocks and it is therefore enough to account for them only once.
      The number of modified GDT blocks is also overestimated, since
      EXT4_DESC_PER_BLOCK(sb) descriptors fit in one block.

      Make the estimate more precise.  That reduces the number of
      requested credits enough that we can grow a 20 MB filesystem (which
      has a 1 MB journal, 79 reserved GDT blocks, and a flex group size of
      16 by default).
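      A sketch of the tighter estimate (assuming flex_gd->count groups are
      added in one go; the exact expression in ext4_flex_group_add() may
      differ):

        /* sb, resize inode and its dindirect block are each modified
         * once per flex-group add, not once per group */
        credits = 3 +
                  /* one GDT block covers EXT4_DESC_PER_BLOCK(sb)
                   * descriptors */
                  DIV_ROUND_UP(flex_gd->count, EXT4_DESC_PER_BLOCK(sb));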
      Signed-off-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      Reviewed-by: Eric Sandeen <sandeen@redhat.com>
    • ext4: move check under lock scope to close a race. · 280227a7
      Committed by Davide Italiano
      fallocate() checks that the file is extent-based and returns
      EOPNOTSUPP if it is not.  Other tasks can convert the file between
      indirect and extent mapping, so it is only safe to perform this
      check after grabbing the inode mutex.
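      The ordering after the fix, sketched from the fallocate path:

        mutex_lock(&inode->i_mutex);
        /* only now is the mapping type stable: no other task can
         * convert the inode between indirect and extent mapping */
        if (!(ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS))) {
                ret = -EOPNOTSUPP;
                goto out;
        }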
      Signed-off-by: Davide Italiano <dccitaliano@gmail.com>
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      Cc: stable@vger.kernel.org
    • ext4: fix data corruption caused by unwritten and delayed extents · d2dc317d
      Committed by Lukas Czerner
      Currently it is possible to lose a whole filesystem block's worth of
      data when we hit a specific interaction between unwritten and
      delayed extents in the extent status tree.

      The problem is that when we insert a delayed extent into the extent
      status tree, the only way to get rid of it is to write out the
      delayed buffer.  However, there is a limitation in the extent status
      tree implementation: when inserting an unwritten extent, if there is
      even a single delayed block, the whole unwritten extent is marked as
      delayed.

      At this point there is no way to get rid of the delayed extent,
      because there are no delayed buffers to write out.  So when we write
      into said unwritten extent, it will be converted to written, but it
      still remains delayed.

      When we later try to write into that block, ext4_da_map_blocks()
      will mark the buffer new and delayed and map it to an invalid block,
      which causes the rest of the block to be zeroed, losing data that
      was already written.

      For now, fix this by simply not allowing delayed status to be set on
      a written extent in the extent status tree.  Also add a WARN_ON() to
      make sure that we notice if this happens in the future.
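      The guard, sketched in the extent status tree insert path (an extent
      must never be both written and delayed):

        if ((status & EXTENT_STATUS_DELAYED) &&
            (status & EXTENT_STATUS_WRITTEN)) {
                WARN_ON(1);                       /* catch any recurrence */
                status &= ~EXTENT_STATUS_DELAYED; /* written wins */
        }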
      
      This problem can be easily reproduced by running the following
      xfs_io commands:
      
      xfs_io -f -c "pwrite -S 0xaa 4096 2048" \
                -c "falloc 0 131072" \
                -c "pwrite -S 0xbb 65536 2048" \
                -c "fsync" /mnt/test/fff
      
      echo 3 > /proc/sys/vm/drop_caches
      xfs_io -c "pwrite -S 0xdd 67584 2048" /mnt/test/fff
      
      This can in theory also be reproduced at random by running fsx, but
      that is not very reliable, though on machines with a bigger page
      size (like ppc) it can be seen more often (especially with xfstest
      generic/127).
      Signed-off-by: Lukas Czerner <lczerner@redhat.com>
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      Cc: stable@vger.kernel.org
  6. 02 May 2015 (10 commits)
  7. 01 May 2015 (5 commits)
    • Merge branch 'for-linus-4.1' of git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs · 64887b68
      Committed by Linus Torvalds
      Pull btrfs fixes from Chris Mason:
       "A few more btrfs fixes.
      
        These range from corners Filipe found in the new free space cache
        writeback to a grab bag of fixes from the list"
      
      * 'for-linus-4.1' of git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs:
        Btrfs: btrfs_release_extent_buffer_page didn't free pages of dummy extent
        Btrfs: fill ->last_trans for delayed inode in btrfs_fill_inode.
        btrfs: unlock i_mutex after attempting to delete subvolume during send
        btrfs: check io_ctl_prepare_pages return in __btrfs_write_out_cache
        btrfs: fix race on ENOMEM in alloc_extent_buffer
        btrfs: handle ENOMEM in btrfs_alloc_tree_block
        Btrfs: fix find_free_dev_extent() malfunction in case device tree has hole
        Btrfs: don't check for delalloc_bytes in cache_save_setup
        Btrfs: fix deadlock when starting writeback of bg caches
        Btrfs: fix race between start dirty bg cache writeout and bg deletion
    • Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux · 036f351e
      Committed by Linus Torvalds
      Pull arm64 fixes from Will Deacon:
       "Not too much here, but we've addressed a couple of nasty issues in the
        dma-mapping code as well as adding the halfword and byte variants of
        load_acquire/store_release following on from the CSD locking bug that
        you fixed in the core.
      
         - fix perf devicetree warnings at probe time
      
         - fix memory leak in __dma_free()
      
         - ensure DMA buffers are always zeroed
      
         - show IRQ trigger in /proc/interrupts (for parity with ARM)
      
         - implement byte and halfword access for smp_{load_acquire,store_release}"
      
      * tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
        arm64: perf: Fix the pmu node name in warning message
        arm64: perf: don't warn about missing interrupt-affinity property for PPIs
        arm64: add missing PAGE_ALIGN() to __dma_free()
        arm64: dma-mapping: always clear allocated buffers
        ARM64: Enable CONFIG_GENERIC_IRQ_SHOW_LEVEL
        arm64: add missing data types in smp_load_acquire/smp_store_release
    • powerpc/powernv: Restore non-volatile CRs after nap · 0aab3747
      Committed by Sam Bobroff
      Patches 7cba160a ("powernv/cpuidle: Redesign idle states management")
      and 77b54e9f ("powernv/powerpc: Add winkle support for offline cpus")
      use non-volatile condition registers (cr2, cr3 and cr4) early in the
      system reset interrupt handler (system_reset_pSeries()), before it
      has been determined whether state loss has occurred.  If state loss
      has not occurred, control returns via the power7_wakeup_noloss()
      path, which does not restore those condition registers, leaving them
      corrupted.

      Fix this by restoring the condition registers in the
      power7_wakeup_noloss() case.
      
      This is apparent when running a KVM guest on hardware that does not
      support winkle or sleep and the guest makes use of secondary threads. In
      practice this means Power7 machines, though some early unreleased Power8
      machines may also be susceptible.
      
      The secondary CPUs are taken offline before the guest is started,
      and they call pnv_smp_cpu_kill_self().  This checks support for
      sleep states (in this case there is none) and power7_nap() is
      called.
      
      When the CPU is woken, power7_nap() returns and, because the CPU is
      still offline, the main while loop executes again.  The sleep-states
      support test is executed again, but because the tested values cannot
      have changed, the compiler has optimized the test away; instead we
      rely on the result of the first test, which was left in cr3 and/or
      cr4.  With that result overwritten, the wrong branch is taken and
      power7_winkle() is called on a CPU that does not support it, leading
      to a stall.
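      In C terms the trap looks roughly like this (simplified from
      pnv_smp_cpu_kill_self(); the OPAL_PM_* flags are loop-invariant, so
      the compiler keeps the comparison results in non-volatile CR fields
      across iterations):

        while (!generic_check_cpu_restart(cpu)) {
                if (idle_states & OPAL_PM_WINKLE_ENABLED)
                        power7_winkle();
                else if (idle_states & OPAL_PM_SLEEP_ENABLED)
                        power7_sleep();
                else
                        power7_nap(1);
                /* if the no-state-loss wakeup path returns without
                 * restoring cr2-cr4, the next iteration branches on
                 * stale flags and may winkle a CPU that cannot winkle */
        }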
      
      Fixes: 7cba160a ("powernv/cpuidle: Redesign idle states management")
      Fixes: 77b54e9f ("powernv/powerpc: Add winkle support for offline cpus")
      [mpe: Massage change log a bit more]
      Signed-off-by: Sam Bobroff <sam.bobroff@au1.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/eeh: Delay probing EEH device during hotplug · d91dafc0
      Committed by Gavin Shan
      Commit 1c509148b ("powerpc/eeh: Do probe on pci_dn") probes EEH
      devices at an early stage, which is reasonable for the pSeries
      platform.  However, it is wrong for the PowerNV platform because, in
      the hotplug case, the PE# is not determined until the resources (IO
      and MMIO) are assigned to the PE.  So we have to delay probing EEH
      devices on PowerNV until the PE# is assigned.
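      A hedged sketch of the PowerNV probe-time bail-out (IODA_INVALID_PE
      marks an unassigned PE#; the exact hook differs):

        /* hotplug: resources, and therefore the PE#, not assigned yet */
        if (pdn->pe_number == IODA_INVALID_PE)
                return NULL;    /* probe again once the PE# is known */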
      
      Fixes: ff57b454 ("powerpc/eeh: Do probe on pci_dn")
      Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/eeh: Fix race condition in pcibios_set_pcie_reset_state() · 1ae79b78
      Committed by Gavin Shan
      When asserting reset in pcibios_set_pcie_reset_state(), the PE is
      forced into the (hardware) frozen state so that unexpected PCI
      transactions (except PCI config reads/writes) are dropped
      automatically by hardware during the reset, which would otherwise
      cause recursive EEH errors.  However, the (software) frozen state
      EEH_PE_ISOLATED is missed.  When users get 0xFF from a PCI config or
      MMIO read, EEH_PE_ISOLATED is set in the PE state retrieval backend.
      Unfortunately, nobody (neither the reset handler nor the EEH
      recovery functionality in the host) will clear EEH_PE_ISOLATED once
      the PE has been passed through to a guest.

      The patch sets and clears EEH_PE_ISOLATED properly during the reset
      in pcibios_set_pcie_reset_state() to fix the issue.
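      Sketched against pcibios_set_pcie_reset_state(): the software flag
      now tracks the hardware freeze across assert and deassert:

        switch (state) {
        case pcie_hot_reset:
        case pcie_warm_reset:
                /* mirror the hardware freeze in software */
                eeh_pe_state_mark(pe, EEH_PE_ISOLATED);
                /* ... freeze the PE and assert reset ... */
                break;
        case pcie_deassert_reset:
                /* ... deassert reset ... */
                eeh_pe_state_clear(pe, EEH_PE_ISOLATED);
                break;
        }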
      
      Fixes: 28158cd1 ("Enhance pcibios_set_pcie_reset_state()")
      Reported-by: Carol L. Soto <clsoto@us.ibm.com>
      Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
      Tested-by: Carol L. Soto <clsoto@us.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>