1. 29 January 2014 (10 commits)
    • Btrfs: attach delayed ref updates to delayed ref heads · d7df2c79
      Authored by Josef Bacik
      Currently we have two rb-trees, one for delayed ref heads and one for all of the
      delayed refs, including the delayed ref heads.  When we process the delayed refs
      we have to hold the delayed ref lock for all of the selecting and merging
      and such, which results in quite a bit of lock contention.  This was solved
      by having a waitqueue and only one flusher at a time; however, this hurts
      when we get a lot of delayed refs queued up.
      
      So instead just have an rb tree for the delayed ref heads, and then attach
      the delayed ref updates to an rb tree that is per delayed ref head.  Then we
      only need to take the delayed ref lock when adding new delayed refs and when
      selecting a delayed ref head to process; the rest of the time we deal with a
      per-head lock, which is much less contended.
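      
      In outline, the new layout looks something like the sketch below (a sketch
      only; field names approximate fs/btrfs/delayed-ref.h, not the verbatim
      patch):
      
              /* Each head now carries its own tree of updates and its own
               * lock, so processing one head doesn't contend with others. */
              struct btrfs_delayed_ref_head {
                      struct rb_node href_node; /* links head into href_root */
                      struct mutex mutex;       /* held while processing head */
                      spinlock_t lock;          /* protects ref_root below */
                      struct rb_root ref_root;  /* this head's delayed ref updates */
              };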
      
      The locking rules for this get a little more complicated since we have to lock
      up to 3 things to properly process delayed refs, but I will address that problem
      later.  For now this passes all of xfstests and my overnight stress tests.
      Thanks,
      Signed-off-by: Josef Bacik <jbacik@fb.com>
      Signed-off-by: Chris Mason <clm@fb.com>
    • Btrfs: handle EAGAIN case properly in btrfs_drop_snapshot() · 90515e7f
      Authored by Wang Shilong
      btrfs_drop_snapshot() may return early with -EAGAIN; we shouldn't call
      btrfs_std_error() in that case.  Fix it.
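      
      Presumably the fix is a guard in the error path along these lines (a
      simplified sketch, not the verbatim hunk):
      
              /* -EAGAIN means "come back later", not a filesystem error */
              if (err && err != -EAGAIN)
                      btrfs_std_error(root->fs_info, err);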
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Wang Shilong <wangsl.fnst@cn.fujitsu.com>
      Signed-off-by: Josef Bacik <jbacik@fb.com>
      Signed-off-by: Chris Mason <clm@fb.com>
    • Btrfs: return free space to global_rsv as much as possible · 17504584
      Authored by Liu Bo
      @full is not read under global_rsv->lock, so we may think global_rsv is
      already full when in fact it is not, and we miss the opportunity to return
      free space to global_rsv directly when we release other block_rsvs.
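      
      The fix, then, is to test and update @full only while holding the lock.
      A simplified sketch of the release path (names follow
      fs/btrfs/extent-tree.c; a sketch, not the verbatim patch):
      
              /* Decide under global_rsv->lock whether it can absorb the
               * bytes being released, instead of trusting a stale @full. */
              spin_lock(&global_rsv->lock);
              if (!global_rsv->full) {
                      u64 to_add = min(num_bytes, global_rsv->size -
                                                  global_rsv->reserved);
      
                      global_rsv->reserved += to_add;
                      if (global_rsv->reserved >= global_rsv->size)
                              global_rsv->full = 1;
                      num_bytes -= to_add;
              }
              spin_unlock(&global_rsv->lock);
      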
      Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
      Signed-off-by: Josef Bacik <jbacik@fb.com>
      Signed-off-by: Chris Mason <clm@fb.com>
    • Btrfs: stop caching thread if extent_commit_sem is contended · c9ea7b24
      Authored by Josef Bacik
      We can starve out the transaction commit with a bunch of caching threads
      all running at the same time.  This is because we only drop the
      extent_commit_sem if we need_resched(), which isn't likely to happen since
      we are reading a lot from disk and have already schedule()'ed plenty.  Alex
      observed that he could starve out a transaction commit for up to a minute
      with 32 caching threads all running at once.  This change allows us to drop
      the extent_commit_sem so the transaction commit can swap the commit_root
      out, and then all the cachers start back up.  Here is an explanation
      provided by Ingo:
      
      So, just to fill in what happens in this loop:
      
                                      mutex_unlock(&caching_ctl->mutex);
                                      cond_resched();
                                      goto again;
      
      where 'again:' takes caching_ctl->mutex and fs_info->extent_commit_sem
      again:
      
              again:
                      mutex_lock(&caching_ctl->mutex);
                      /* need to make sure the commit_root doesn't disappear */
                      down_read(&fs_info->extent_commit_sem);
      
      So, if I'm reading the code correctly, there can be a fair amount of
      concurrency here: there may be multiple 'caching kthreads' per filesystem
      active, while there's one fs_info->extent_commit_sem per filesystem
      AFAICS.
      
      So, what happens if there are a lot of CPUs all busy holding the
      ->extent_commit_sem rwsem read-locked and a writer arrives? They'd all
      rush to try to release the fs_info->extent_commit_sem, and they'd block in
      the down_read() because there's a writer waiting.
      
      So there's a guarantee of forward progress. This should answer akpm's
      concern I think.
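      
      Concretely, the check in the caching loop then becomes something like the
      sketch below, dropping both locks when the rwsem is contended and not only
      on need_resched() (a simplified rendering, not the verbatim hunk):
      
              if (need_resched() ||
                  rwsem_is_contended(&fs_info->extent_commit_sem)) {
                      caching_ctl->progress = last;
                      btrfs_release_path(path);
                      up_read(&fs_info->extent_commit_sem);
                      mutex_unlock(&caching_ctl->mutex);
                      cond_resched();
                      goto again;
              }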
      
      Thanks,
      Acked-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Josef Bacik <jbacik@fusionio.com>
      Signed-off-by: Chris Mason <clm@fb.com>
    • Btrfs: convert printk to btrfs_ and fix BTRFS prefix · efe120a0
      Authored by Frank Holton
      Convert all applicable cases of printk and pr_* to the btrfs_* macros.
      
      Fix all uses of the BTRFS prefix.
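      
      A typical before/after for this kind of conversion looks like the following
      (an illustrative message, not a verbatim hunk from the patch):
      
              /* before: raw printk, prefix and newline written by hand */
              printk(KERN_INFO "btrfs: disk space caching is enabled\n");
      
              /* after: the helper adds the prefix and device information */
              btrfs_info(fs_info, "disk space caching is enabled");
      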
      Signed-off-by: Frank Holton <fholton@gmail.com>
      Signed-off-by: Josef Bacik <jbacik@fb.com>
      Signed-off-by: Chris Mason <clm@fb.com>
    • Btrfs: fix double initialization of the raid kobject · 536cd964
      Authored by Miao Xie
      We hit the following oops when doing a space balance:
       kobject (ffff88081b590278): tried to init an initialized object, something is seriously wrong.
       ...
       Call Trace:
        [<ffffffff81937262>] dump_stack+0x49/0x5f
        [<ffffffff8137d259>] kobject_init+0x89/0xa0
        [<ffffffff8137d36a>] kobject_init_and_add+0x2a/0x70
        [<ffffffffa009bd79>] ? clear_extent_bit+0x199/0x470 [btrfs]
        [<ffffffffa005e82c>] __link_block_group+0xfc/0x120 [btrfs]
        [<ffffffffa006b9db>] btrfs_make_block_group+0x24b/0x370 [btrfs]
        [<ffffffffa00a899b>] __btrfs_alloc_chunk+0x54b/0x7e0 [btrfs]
        [<ffffffffa00a8c6f>] btrfs_alloc_chunk+0x3f/0x50 [btrfs]
        [<ffffffffa0060123>] do_chunk_alloc+0x363/0x440 [btrfs]
        [<ffffffffa00633d4>] btrfs_check_data_free_space+0x104/0x310 [btrfs]
        [<ffffffffa0069f4d>] btrfs_write_dirty_block_groups+0x48d/0x600 [btrfs]
        [<ffffffffa007aad4>] commit_cowonly_roots+0x184/0x250 [btrfs]
        ...
      
      Steps to reproduce:
       # mkfs.btrfs -f <dev>
       # mount -o nospace_cache <dev> <mnt>
       # btrfs balance start <mnt>
       # dd if=/dev/zero of=<mnt>/tmpfile bs=1M count=1
      
      The reason for this problem is that we initialized the raid kobject when we
      added a block group to an empty raid list.  When a btrfs filesystem is
      mounted, the raid list is empty, and we initialize the raid kobject when the
      first block group is added to it.  But if no data is stored in that block
      group, the block group is freed when doing balance, and the raid list
      becomes empty again.  Then, if we allocate a new block group and add it to
      the raid list, we initialize the raid kobject a second time, and the oops
      happens.
      
      Fix this problem by initializing the raid kobject only once, when mounting
      the fs.
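      
      In sketch form (names approximate fs/btrfs/extent-tree.c; this is the idea,
      not the verbatim patch): split kobject_init(), done once when the
      space_info is set up, from kobject_add(), done when the raid list first
      becomes non-empty:
      
              /* once, when the space_info is created at mount */
              kobject_init(&sinfo->block_group_kobjs[index], &btrfs_raid_ktype);
      
              /* later, when the first block group joins this raid list */
              if (list_empty(&sinfo->block_groups[index]))
                      kobject_add(&sinfo->block_group_kobjs[index],
                                  &sinfo->kobj, "%s", get_raid_name(index));
              list_add_tail(&cache->list, &sinfo->block_groups[index]);
      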
      Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
      Reported-by: Wang Shilong <wangsl.fnst@cn.fujitsu.com>
      Signed-off-by: Josef Bacik <jbacik@fb.com>
      Signed-off-by: Chris Mason <clm@fb.com>
    • btrfs: fix static checker warnings · 1b8e5df6
      Authored by Jeff Mahoney
      This patch fixes the following warnings:
      
       fs/btrfs/extent-tree.c:6201:12: sparse: symbol 'get_raid_name' was not
         declared. Should it be static?
       fs/btrfs/extent-tree.c:8430:9: error: format not a string literal and no
         format arguments [-Werror=format-security]
           get_raid_name(index));
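      
      Both fixes are the standard ones, roughly as follows (assuming the
      surrounding code in fs/btrfs/extent-tree.c):
      
              /* sparse: the helper is file-local, so give it static linkage */
              static const char *get_raid_name(enum btrfs_raid_types type);
      
              /* format-security: pass the name through an explicit format */
              kobject_add(kobj, &space_info->kobj, "%s", get_raid_name(index));
      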
      Signed-off-by: Jeff Mahoney <jeffm@suse.com>
      Reviewed-by: Kees Cook <keescook@chromium.org>
      Signed-off-by: Josef Bacik <jbacik@fb.com>
      Signed-off-by: Chris Mason <clm@fb.com>
    • btrfs: remove unused variable from find_free_extent · 4b447bfa
      Authored by Valentina Giusti
      The variable found_uncached_bg in find_free_extent() has been unused since
      commit 285ff5af ("Btrfs: remove the ideal caching code").
      Signed-off-by: Valentina Giusti <valentina.giusti@microon.de>
      Signed-off-by: Josef Bacik <jbacik@fusionio.com>
      Signed-off-by: Chris Mason <clm@fb.com>
    • btrfs: publish allocation data in sysfs · 6ab0a202
      Authored by Jeff Mahoney
      While trying to debug ENOSPC issues, it's helpful to understand what the
      kernel's view of the available space is. We export this information
      via ioctl, but sysfs files are more easily used.
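      
      A minimal sketch of one such read-only attribute (generic kobject
      boilerplate; the exact files the patch creates live under
      /sys/fs/btrfs/<fsid>/allocation/ and their names are defined by the patch
      itself):
      
              static ssize_t total_bytes_show(struct kobject *kobj,
                                              struct kobj_attribute *attr,
                                              char *buf)
              {
                      struct btrfs_space_info *sinfo;
      
                      /* the space_info embeds its kobject; map back to it */
                      sinfo = container_of(kobj, struct btrfs_space_info, kobj);
                      return snprintf(buf, PAGE_SIZE, "%llu\n",
                                      sinfo->total_bytes);
              }
              static struct kobj_attribute total_bytes_attr =
                      __ATTR_RO(total_bytes);
      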
      Signed-off-by: Jeff Mahoney <jeffm@suse.com>
      Signed-off-by: Josef Bacik <jbacik@fusionio.com>
      Signed-off-by: Chris Mason <clm@fb.com>
    • Btrfs: introduce a head ref rbtree · c46effa6
      Authored by Liu Bo
      The way we process delayed refs is:
      1) get a bunch of head refs,
      2) pick up one head ref,
      3) go one node back for any delayed ref updates.
      
      The head ref is linked in the same rbtree as the delayed refs are, so in
      stage 1) we have to walk the nodes one by one, covering not only head refs
      but also delayed refs.
      
      When a great number of delayed refs are pending, this costs a lot of time.
      
      Here we introduce a head-ref-specific rbtree: it holds only head refs, so
      the trouble goes away.
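      
      In outline (a sketch only; this commit adds href_root next to the existing
      combined tree, which the "attach delayed ref updates to delayed ref heads"
      commit above later removes):
      
              struct btrfs_delayed_ref_root {
                      struct rb_root root;      /* heads + updates, as before */
                      struct rb_root href_root; /* new: head refs only, for 1) */
              };
      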
      Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
      Signed-off-by: Josef Bacik <jbacik@fusionio.com>
      Signed-off-by: Chris Mason <clm@fb.com>
  2. 12 December 2013 (1 commit)
    • Btrfs: don't miss skinny extent items on delayed ref head contention · 639eefc8
      Authored by Filipe David Borba Manana
      Currently extent-tree.c:btrfs_lookup_extent_info() can miss the lookup
      of skinny extent items. This can happen when the execution flow is the
      following:
      
      * We do an extent tree lookup and fail to find a skinny extent item;
      
      * As a result, we attempt to see if a non-skinny extent item exists,
        either by looking at previous item in the leaf or by doing another
        full extent tree search;
      
      * We have a transaction and then we check for a matching delayed ref
        head in the transaction's delayed refs rbtree;
      
      * We find such delayed ref head and then we try to lock it with a
        call to mutex_trylock();
      
      * The lock was contended, so we jump to the label "again", which repeats
        the extent tree search but for a non-skinny extent item, because we
        previously set the metadata variable to 0 and set the search key to
        look for a non-skinny extent item;
      
      * After the jump (and after releasing the transaction's delayed refs
        lock), a skinny extent item might have been added to the extent tree,
        but we will miss it because metadata is set to 0 and the search key is
        set for a non-skinny extent item.
      
      The fix here is to not reset metadata to 0 and to jump to the initial search
      key setup if the delayed ref head is contended, instead of jumping directly
      to the extent tree search label ("again").
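      
      In control-flow terms the fix presumably looks like the sketch below: a
      second label above the key setup, so that a contended head retries the
      whole lookup with metadata preserved (simplified, not the verbatim patch):
      
              search_again:
                      key.objectid = bytenr;
                      key.offset = offset;
                      key.type = metadata ? BTRFS_METADATA_ITEM_KEY
                                          : BTRFS_EXTENT_ITEM_KEY;
              again:
                      ret = btrfs_search_slot(trans, root->fs_info->extent_root,
                                              &key, path, 0, 0);
                      /* skinny/non-skinny fallback, delayed ref head lookup */
                      if (head && !mutex_trylock(&head->mutex)) {
                              /* contended: redo the key setup too */
                              goto search_again;   /* was: goto again */
                      }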
      
      This issue was found while investigating the issue reported at Bugzilla 64961.
      
      David Sterba suspected this function was missing extent items, and that
      this could be caused by the last change to this function, which was made
      in the following patch:
      
          [PATCH] Btrfs: optimize btrfs_lookup_extent_info()
          (commit 74be9510)
      
      But in fact this issue already existed before, because after failing to
      find a skinny extent item the code set the search key for a non-skinny
      extent item, and when a matching delayed ref head was contended it would
      never search the extent tree for a skinny extent item again.
      Signed-off-by: Filipe David Borba Manana <fdmanana@gmail.com>
      Reviewed-by: Liu Bo <bo.li.liu@oracle.com>
      Signed-off-by: Chris Mason <clm@fb.com>
  3. 12 November 2013 (18 commits)
  4. 21 September 2013 (4 commits)
  5. 01 September 2013 (7 commits)