1. 12 Apr, 2018: 1 commit
  2. 26 Mar, 2018: 5 commits
  3. 22 Jan, 2018: 13 commits
  4. 30 Oct, 2017: 1 commit
  5. 24 Aug, 2017: 1 commit
  6. 18 Aug, 2017: 1 commit
  7. 16 Aug, 2017: 3 commits
    • btrfs: Remove unused parameters from volume.c functions · e4ff5fb5
      Nikolay Borisov authored
      This also adjusts the respective callers in other files. The unused
      parameters were found with -Wunused-parameter.
      
      btrfs_full_stripe_len's mapping_tree - introduced by 53b381b3
      ("Btrfs: RAID5 and RAID6") but it was never really used even in that
      commit.

      btrfs_is_parity_mirror's mirror_num - same as above.

      chunk_drange_filter's chunk_offset - introduced by 94e60d5a ("Btrfs:
      devid subset filter") and never used.
      Signed-off-by: Nikolay Borisov <nborisov@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
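      For context, a minimal userspace sketch of the kind of cleanup that
      -Wunused-parameter drives; the functions below are invented for
      illustration and are not the btrfs code. The compiler flags the
      parameter that is never read, and the fix is to drop it and adjust the
      callers.

       /* unused_param.c - illustrative only; names are made up, not btrfs code.
        * Build with: gcc -Wall -Wextra -Wunused-parameter unused_param.c
        */
       #include <stdio.h>

       /* Before: 'mirror_num' is accepted but never read, so the compiler
        * warns about it under -Wunused-parameter. */
       static int stripe_len_before(int stripe_count, int mirror_num)
       {
           return stripe_count * 64;   /* mirror_num is never used */
       }

       /* After: the unused parameter is removed and callers are adjusted. */
       static int stripe_len_after(int stripe_count)
       {
           return stripe_count * 64;
       }

       int main(void)
       {
           printf("%d\n", stripe_len_before(3, 0)); /* old caller passed a dummy arg */
           printf("%d\n", stripe_len_after(3));     /* caller adjusted after cleanup */
           return 0;
       }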
    • btrfs: Enhance message when a device is missing during mount · c5502451
      Qu Wenruo authored
      For a missing device, btrfs will just refuse to mount with an almost
      meaningless kernel message like:
      
       BTRFS info (device vdb6): disk space caching is enabled
       BTRFS info (device vdb6): has skinny extents
       BTRFS error (device vdb6): failed to read the system array: -5
       BTRFS error (device vdb6): open_ctree failed
      
      This patch will print a new message about the missing device:
      
       BTRFS info (device vdb6): disk space caching is enabled
       BTRFS info (device vdb6): has skinny extents
       BTRFS warning (device vdb6): devid 2 uuid 80470722-cad2-4b90-b7c3-fee294552f1b is missing
       BTRFS error (device vdb6): failed to read the system array: -5
       BTRFS error (device vdb6): open_ctree failed
      Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com>
      Reviewed-by: Anand Jain <anand.jain@oracle.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
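      A minimal userspace analog of the idea behind the new warning; struct
      dev_info and its fields are assumptions for illustration, not the
      kernel's btrfs_device. While walking the recorded devices, one line is
      printed for any device that could not be opened, so the later read
      failure has some context.

       /* missing_dev.c - illustrative analog only, not the btrfs kernel code. */
       #include <stdbool.h>
       #include <stdint.h>
       #include <stdio.h>

       struct dev_info {
           uint64_t devid;
           const char *uuid;   /* textual UUID, for simplicity */
           bool present;       /* false if the device could not be opened */
       };

       /* Print a warning for every recorded device that is missing. */
       static void warn_missing_devices(const struct dev_info *devs, size_t count)
       {
           for (size_t i = 0; i < count; i++) {
               if (!devs[i].present)
                   fprintf(stderr, "BTRFS warning: devid %llu uuid %s is missing\n",
                           (unsigned long long)devs[i].devid, devs[i].uuid);
           }
       }

       int main(void)
       {
           /* Sample data; the second entry mirrors the log excerpt above. */
           struct dev_info devs[] = {
               { 1, "0d3c41e5-6f2a-4b8e-9a6b-111111111111", true  },
               { 2, "80470722-cad2-4b90-b7c3-fee294552f1b", false },
           };
           warn_missing_devices(devs, sizeof(devs) / sizeof(devs[0]));
           return 0;
       }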
    • btrfs: Introduce a function to check if all chunks are OK for degraded rw mount · 21634a19
      Qu Wenruo authored
      Introduce a new function, btrfs_check_rw_degradable(), to check if all
      chunks in btrfs are OK for a degraded rw mount.

      It provides a new basis for accurate btrfs mount/remount and even
      runtime degraded-mount checks, replacing the old one-size-fits-all
      method.

      Btrfs currently uses num_tolerated_disk_barrier_failures to do a global
      check for the tolerated number of missing devices.

      Although the one-size-fits-all solution is quite safe, it is too strict
      when data and metadata have different duplication levels.

      For example, if one uses single data and RAID1 metadata on 2 disks, any
      missing device makes the fs impossible to mount degraded.

      But in fact, sometimes all single chunks may reside on the remaining
      device, and in that case we should allow a rw degraded mount.

      Such a case can be easily reproduced using the following script:
       # mkfs.btrfs -f -m raid1 -d single /dev/sdb /dev/sdc
       # wipefs -a /dev/sdc
       # mount -o degraded,rw /dev/sdb /mnt

      Checking /dev/sdb with btrfs-debug-tree shows that the data chunk is
      only on sdb, so in fact a degraded mount should be allowed.

      This patchset introduces a new per-chunk degradable check for btrfs,
      allowing the above case to succeed, and it is quite small anyway.
      Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com>
      Signed-off-by: Anand Jain <anand.jain@oracle.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      [ copied text from cover letter with more details about the problem being
        solved ]
      Signed-off-by: David Sterba <dsterba@suse.com>
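      A minimal userspace sketch of the per-chunk principle under assumed data
      structures; struct chunk and check_rw_degradable() below are
      illustrative, not the kernel's btrfs_check_rw_degradable(). Each chunk
      knows how many device failures its profile tolerates, and a rw degraded
      mount is allowed only if every chunk's number of missing devices stays
      within that limit.

       /* degradable_check.c - illustrative sketch; the real check works on
        * kernel structures (chunk map, btrfs_device, RAID attributes). */
       #include <stdbool.h>
       #include <stdio.h>

       struct chunk {
           const char *profile;     /* e.g. "single", "RAID1" */
           int missing;             /* stripes that live on missing devices */
           int tolerated_failures;  /* 0 for single, 1 for RAID1, ... */
       };

       /* Return true if every chunk can survive its missing devices,
        * i.e. a rw degraded mount is safe. */
       static bool check_rw_degradable(const struct chunk *chunks, size_t count)
       {
           for (size_t i = 0; i < count; i++) {
               if (chunks[i].missing > chunks[i].tolerated_failures) {
                   fprintf(stderr, "chunk %zu (%s): %d missing, tolerates %d\n",
                           i, chunks[i].profile, chunks[i].missing,
                           chunks[i].tolerated_failures);
                   return false;
               }
           }
           return true;
       }

       int main(void)
       {
           /* Mirrors the script above: RAID1 metadata tolerates one missing
            * device, and the single data chunk happens to live entirely on
            * the healthy disk, so a rw degraded mount can be allowed. */
           struct chunk chunks[] = {
               { "RAID1",  1, 1 },  /* metadata: one mirror missing, still ok */
               { "single", 0, 0 },  /* data chunk entirely on the remaining disk */
           };
           size_t n = sizeof(chunks) / sizeof(chunks[0]);
           printf("rw degraded mount %s\n",
                  check_rw_degradable(chunks, n) ? "allowed" : "refused");
           return 0;
       }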
  8. 22 Jun, 2017: 1 commit
    • btrfs: preallocate device flush bio · e0ae9994
      David Sterba authored
      For devices that support flushing, we allocate a bio, submit it, wait
      for it, and then free it. The bio allocation does not fail, so ENOMEM is
      not a problem, but we still may unnecessarily stress the allocation
      subsystem.
      
      Instead, we can allocate the bio at the same time we allocate the device
      and reuse it each time we need to flush the barriers. The bio is reset
      before each use. Reference counting is simplified to just device
      allocation (get) and freeing (put).
      
      The bio used to be submitted through the integrity checker, which would
      find that the bio has no data attached and call submit_bio.
      
      Status of the bio in flight needs to be tracked separately in case the
      device caches get switched off between write and wait.
      Signed-off-by: David Sterba <dsterba@suse.com>
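      A minimal userspace sketch of the pattern described above, with invented
      names (struct device, struct flush_req) rather than the kernel's
      btrfs_device and bio API: the flush request is allocated once together
      with the device, reset before every reuse, and its in-flight status is
      tracked in a separate flag so the wait side knows whether anything was
      actually submitted.

       /* prealloc_flush.c - illustrative analog of preallocating a per-device
        * flush request; names and types are invented, not kernel code. */
       #include <stdbool.h>
       #include <stdio.h>
       #include <stdlib.h>
       #include <string.h>

       struct flush_req {
           int status;               /* result of the last flush */
       };

       struct device {
           const char *name;
           struct flush_req *flush;  /* preallocated once, reused for every flush */
           bool flush_in_flight;     /* tracked separately from the request itself */
       };

       /* Allocate the device and its flush request together ("get"). */
       static struct device *device_alloc(const char *name)
       {
           struct device *dev = calloc(1, sizeof(*dev));
           if (!dev)
               return NULL;
           dev->flush = calloc(1, sizeof(*dev->flush));
           if (!dev->flush) {
               free(dev);
               return NULL;
           }
           dev->name = name;
           return dev;
       }

       /* Free the request together with the device ("put"). */
       static void device_free(struct device *dev)
       {
           if (!dev)
               return;
           free(dev->flush);
           free(dev);
       }

       static void submit_flush(struct device *dev)
       {
           memset(dev->flush, 0, sizeof(*dev->flush)); /* reset before each reuse */
           dev->flush->status = 0;                     /* pretend the flush succeeded */
           dev->flush_in_flight = true;
       }

       static int wait_flush(struct device *dev)
       {
           if (!dev->flush_in_flight)  /* e.g. caches were switched off meanwhile */
               return 0;
           dev->flush_in_flight = false;
           return dev->flush->status;
       }

       int main(void)
       {
           struct device *dev = device_alloc("vdb");
           if (!dev)
               return 1;
           for (int i = 0; i < 3; i++) {   /* the same request is reused each time */
               submit_flush(dev);
               printf("%s flush %d -> %d\n", dev->name, i, wait_flush(dev));
           }
           device_free(dev);
           return 0;
       }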
  9. 20 Jun, 2017: 3 commits
  10. 18 Apr, 2017: 4 commits
  11. 28 Feb, 2017: 1 commit
  12. 06 Dec, 2016: 6 commits