- 10 June 2014, 2 commits
-
-
Committed by Josef Bacik
This exercises the various parts of the new qgroup accounting code. We do some basic stuff and do some things with the shared refs to make sure all that code works. I had to add a bunch of infrastructure because I needed to be able to insert items into a fake tree without having to do all the hard work myself; hopefully this will be useful in the future. Thanks,
Signed-off-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Chris Mason <clm@fb.com>
-
Committed by Josef Bacik
Currently qgroups account for space by intercepting delayed ref updates to fs trees. They do this by adding sequence numbers to delayed ref updates so that they can figure out how the tree looked before the update and adjust the counters properly. The problem with this is that it does not allow delayed refs to be merged, so if, say, you are defragging an extent with 5k snapshots pointing to it, we will thrash the delayed ref lock because we need to go back and manually merge these things together. Instead we want to process quota changes when we know they are going to happen, like when we first allocate an extent, when we free a reference for an extent, when we add new references, etc. This patch accomplishes this by only adding qgroup operations for real ref changes. We only modify the sequence number when we need to look up roots for bytenrs; this reduces the amount of churn on the sequence number and allows us to merge delayed refs as we add them most of the time. This patch encompasses a bunch of architectural changes:

1) qgroup ref operations: instead of tracking qgroup operations through the delayed refs, we simply add new ref operations whenever we notice that we need to, when we've modified the refs themselves.

2) tree mod seq: we no longer have the separation of major/minor counters. This makes the sequence number handling much more sane, and we can remove some locking that was needed to protect the counter.

3) delayed ref seq: we now read the tree mod seq number and use that as our sequence. This means each new delayed ref doesn't have its own unique sequence number; rather, whenever we go to look up backrefs we increment the sequence number so we can make sure to keep any new operations from screwing up our world view at that given point. This allows us to merge delayed refs during runtime.

With all of these changes the delayed ref code is a little saner and the qgroup accounting no longer goes negative in some cases like it did before. Thanks,
Signed-off-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Chris Mason <clm@fb.com>
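To illustrate point 1), here is a minimal sketch of what recording a qgroup ref operation at the point of the ref change could look like. The enum values, struct fields, and names below are simplified assumptions for this log, not the exact types added by the patch.

```c
/*
 * Illustrative sketch only: record a quota-relevant reference change at
 * the point where it happens, instead of intercepting delayed ref updates.
 * Names and fields are simplified assumptions, not the exact kernel API.
 */
#include <linux/types.h>
#include <linux/list.h>

enum qgroup_oper_type {
	QGROUP_OPER_ADD_EXCL,	/* new extent, referenced only by this root  */
	QGROUP_OPER_ADD_SHARED,	/* another root starts sharing the extent    */
	QGROUP_OPER_SUB_EXCL,	/* the last reference from this root is gone */
	QGROUP_OPER_SUB_SHARED,	/* one shared reference is dropped           */
};

struct qgroup_oper {
	u64 ref_root;		/* root whose reference changed */
	u64 bytenr;		/* start of the extent          */
	u64 num_bytes;		/* length of the extent         */
	u64 seq;		/* tree mod seq when recorded   */
	enum qgroup_oper_type type;
	struct list_head list;	/* queued until commit time     */
};
```

Each operation is queued when the ref change is made and processed later in one pass, which is what lets the delayed refs themselves be merged freely.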
-
- 11 March 2014, 2 commits
-
-
Committed by Qu Wenruo
Since the "_struct" suffix was mainly used to distinguish the different btrfs_work between the original implementation and the newly created one, there is no need to keep the suffix now that all btrfs_workers have been converted to btrfs_workqueue. This patch also fixes up some code whose style had to change because of the overly long "_struct" suffix.
Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com>
Tested-by: David Sterba <dsterba@suse.cz>
Signed-off-by: Josef Bacik <jbacik@fb.com>
-
Committed by Qu Wenruo
Replace the fs_info->qgroup_rescan_worker with the newly created btrfs_workqueue.
Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com>
Tested-by: David Sterba <dsterba@suse.cz>
Signed-off-by: Josef Bacik <jbacik@fb.com>
-
- 29 January 2014, 3 commits
-
-
Committed by Josef Bacik
Could have sworn I fixed this before but apparently not. This makes us pass btrfs/022 with skinny metadata enabled. Thanks,
Signed-off-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Chris Mason <clm@fb.com>
-
Committed by Frank Holton
Convert all applicable cases of printk and pr_* to the btrfs_* macros. Fix all uses of the BTRFS prefix.
Signed-off-by: Frank Holton <fholton@gmail.com>
Signed-off-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Chris Mason <clm@fb.com>
-
Committed by Valentina Giusti
Use the otherwise unused local variable 'slot' in update_qgroup_limit_item and in update_qgroup_info_item, and remove the unused variable 'ins' from btrfs_qgroup_account_ref.
Signed-off-by: Valentina Giusti <valentina.giusti@microon.de>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
Signed-off-by: Chris Mason <clm@fb.com>
-
- 01 September 2013, 5 commits
-
-
Committed by Geert Uytterhoeven
u64 is "unsigned long long" on all architectures now, so there's no need to cast it when formatting it using the "ll" length modifier.
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
Signed-off-by: Chris Mason <chris.mason@fusionio.com>
-
Committed by Wang Shilong
We have already checked 'quota_root' with qgroup_ioctl_lock held, so the check here is redundant; remove it.
Signed-off-by: Wang Shilong <wangsl.fnst@cn.fujitsu.com>
Reviewed-by: Miao Xie <miaox@cn.fujitsu.com>
Reviewed-by: Arne Jansen <sensille@gmx.net>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
Signed-off-by: Chris Mason <chris.mason@fusionio.com>
-
Committed by Wang Shilong
btrfs_free_qgroup_config() is not only called by open/close_ctree() but also by btrfs_disable_quota(). For btrfs_disable_quota(), we have already set 'quota_root' to NULL before calling btrfs_free_qgroup_config(), so it is safe to clean up the in-memory structures without the lock held.
Signed-off-by: Wang Shilong <wangsl.fnst@cn.fujitsu.com>
Reviewed-by: Miao Xie <miaox@cn.fujitsu.com>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
Signed-off-by: Chris Mason <chris.mason@fusionio.com>
-
Committed by Wang Shilong
When disabling quota, we should clear out the 'dirty_qgroups' list; otherwise we will get an oops if quota is enabled again. Fix this by abstracting out the similar code from del_qgroup_rb().
Signed-off-by: Wang Shilong <wangsl.fnst@cn.fujitsu.com>
Reviewed-by: Miao Xie <miaox@cn.fujitsu.com>
Reviewed-by: Arne Jansen <sensille@gmx.net>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
Signed-off-by: Chris Mason <chris.mason@fusionio.com>
-
Committed by Wang Shilong
We call ulist_free(qgroup_ulist) in btrfs_free_qgroup_config(), and btrfs_free_qgroup_config() may be called in two cases:

(1) unmounting the filesystem
(2) disabling quota

However, if we first disable quota and then unmount the filesystem, a double free happens. Fix it.
Signed-off-by: Wang Shilong <wangsl.fnst@cn.fujitsu.com>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
Signed-off-by: Chris Mason <chris.mason@fusionio.com>
-
- 14 June 2013, 5 commits
-
-
Committed by Jan Schmidt
When called during mount, we cannot start the rescan worker thread until open_ctree is done. This commit restructures the qgroup rescan internals to enable a clean deferral of the rescan resume operation.

First of all, the struct qgroup_rescan is removed, saving us a malloc and some initialization and synchronization problems. Its only element (the worker struct) now lives within fs_info, just as the rest of the rescan code does. Then setting up a rescan worker is split into several reusable stages. Currently we have three different rescan startup scenarios:

(A) rescan ioctl
(B) rescan resume by mount
(C) rescan by quota enable

Each case needs its own combination of the four following steps:

(1) set the progress [A, C: zero; B: state at umount]
(2) commit the transaction [A]
(3) set the counters [A, C: zero; B: state at umount]
(4) start the worker [A, B, C]

qgroup_rescan_init does step (1). There's no extra function added to commit a transaction; we've got that already. qgroup_rescan_zero_tracking does step (3). Step (4) is nothing more than a call to the generic btrfs_queue_worker.

We also get rid of a double check for the rescan progress during btrfs_qgroup_account_ref, which is no longer required due to having step (2) from the list above.

As a side effect, this commit prepares to move the rescan start code from btrfs_run_qgroups (which is run during commit) to a less time critical section.
Signed-off-by: Jan Schmidt <list.btrfs@jan-o-sch.net>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
-
Committed by Jan Schmidt
When btrfs_read_qgroup_config or btrfs_quota_enable returns non-zero, we've already freed fs_info->qgroup_ulist. The final btrfs_free_qgroup_config called from quota_disable makes another ulist_free(fs_info->qgroup_ulist) call. We set fs_info->qgroup_ulist to NULL on the mentioned error paths, turning the ulist_free in btrfs_free_qgroup_config into a no-op.
Cc: Wang Shilong <wangsl-fnst@cn.fujitsu.com>
Signed-off-by: Jan Schmidt <list.btrfs@jan-o-sch.net>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
-
Committed by Jan Schmidt
Commit 5b7c665e introduced fs_info->qgroup_ulist, which is allocated during btrfs_read_qgroup_config and meant to be used later by the qgroup accounting code. However, it is always freed before btrfs_read_qgroup_config returns, because the commit mentioned above adds a check for (ret), where a check for (ret < 0) would have been the right choice. This commit fixes the check.
Cc: Wang Shilong <wangsl-fnst@cn.fujitsu.com>
Signed-off-by: Jan Schmidt <list.btrfs@jan-o-sch.net>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
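In other words, the error path had roughly the following shape (an illustrative fragment; the surrounding code is omitted): positive, non-error return values also freed the ulist, and the fix narrows the condition to real errors.

```c
/* Illustrative fragment only; surrounding code omitted. */

/* before: any non-zero 'ret', including positive non-error values,
 * freed the ulist that the accounting code still relies on */
if (ret)
	ulist_free(fs_info->qgroup_ulist);

/* after: free only on real errors */
if (ret < 0)
	ulist_free(fs_info->qgroup_ulist);
```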
-
Committed by Jan Schmidt
btrfs_qgroup_wait_for_completion waits until the currently running qgroup operation completes. It returns immediately when no rescan process is in progress. This is useful to automate things around the rescan process (e.g. testing).
Signed-off-by: Jan Schmidt <list.btrfs@jan-o-sch.net>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
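A rough sketch of how such a wait helper can be built on a completion object follows; the completion field name and the exact locking are assumptions for illustration, not necessarily what the patch implements.

```c
/*
 * Sketch only (fs/btrfs/qgroup.c context): return immediately when no
 * rescan is running, otherwise block until the rescan worker signals the
 * completion.  The completion field name and locking are assumptions.
 */
int btrfs_qgroup_wait_for_completion(struct btrfs_fs_info *fs_info)
{
	bool running;

	mutex_lock(&fs_info->qgroup_rescan_lock);
	running = fs_info->qgroup_flags & BTRFS_QGROUP_STATUS_FLAG_RESCAN;
	mutex_unlock(&fs_info->qgroup_rescan_lock);

	if (running)
		wait_for_completion(&fs_info->qgroup_rescan_completion);

	return 0;
}
```

A test script can then issue the rescan ioctl, call this wait ioctl, and only afterwards compare qgroup numbers.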
-
Committed by Wang Shilong
When doing qgroup accounting, we call ulist_alloc()/ulist_free() every time we want to walk the qgroup tree. By introducing 'qgroup_ulist', we only need to call ulist_alloc()/ulist_free() once. This reduces some sys time spent allocating memory; see the measurements below.

fsstress -p 4 -n 10000 -d $dir

With this patch:
real 0m50.153s  user 0m0.081s  sys 0m6.294s
real 0m51.113s  user 0m0.092s  sys 0m6.220s
real 0m52.610s  user 0m0.096s  sys 0m6.125s
avg sys: 6.213

Without the patch:
real 0m54.825s  user 0m0.061s  sys 0m10.665s
real 1m6.401s   user 0m0.089s  sys 0m11.218s
real 1m13.768s  user 0m0.087s  sys 0m10.665s
avg sys: 10.849

We can see the sys time reduced by ~43%.
Signed-off-by: Wang Shilong <wangsl-fnst@cn.fujitsu.com>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
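A sketch of the idea: allocate the ulist once when the qgroup configuration is read, reuse it for every walk of the qgroup hierarchy, and free it when the configuration is freed. The call sites and error handling below are simplified assumptions; only the function and field names come from this log.

```c
/* Sketch only: one preallocated ulist instead of alloc/free per walk. */
int btrfs_read_qgroup_config(struct btrfs_fs_info *fs_info)
{
	fs_info->qgroup_ulist = ulist_alloc(GFP_NOFS);
	if (!fs_info->qgroup_ulist)
		return -ENOMEM;
	/* ... read qgroup items from the quota tree into memory ... */
	return 0;
}

static void qgroup_walk(struct btrfs_fs_info *fs_info)
{
	/* reuse the preallocated ulist instead of ulist_alloc()/ulist_free() */
	ulist_reinit(fs_info->qgroup_ulist);
	/* ... walk the qgroup hierarchy using fs_info->qgroup_ulist ... */
}

void btrfs_free_qgroup_config(struct btrfs_fs_info *fs_info)
{
	/* ... free the in-memory qgroup tree ... */
	ulist_free(fs_info->qgroup_ulist);
}
```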
-
- 07 May 2013, 14 commits
-
-
Committed by Jan Schmidt
When qgroup tracking is enabled, we do an automatic cycle of the new rescan mechanism.
Signed-off-by: Jan Schmidt <list.btrfs@jan-o-sch.net>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
-
Committed by Jan Schmidt
If qgroup tracking is out of sync, a rescan operation can be started. It iterates the complete extent tree and recalculates all qgroup tracking data. This is an expensive operation and should not be used unless required.

A filesystem under rescan can still be umounted. The rescan continues on the next mount. Status information is provided with a separate ioctl while a rescan operation is in progress.
Signed-off-by: Jan Schmidt <list.btrfs@jan-o-sch.net>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
-
Committed by Jan Schmidt
The function is separated into a preparation part and the three accounting steps mentioned in the qgroups documentation. The goal is to make steps two and three usable by the rescan functionality. A side effect is that the function is restructured into readable subunits.
Signed-off-by: Jan Schmidt <list.btrfs@jan-o-sch.net>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
-
Committed by Jan Schmidt
Sequence numbers for delayed refs were introduced in the first version of the qgroup patch set. To solve the problem of find_all_roots on a busy file system, the tree mod log was introduced. The sequence numbers for that were simply shared between those two users.

However, at one point in qgroup's quota accounting, there's a statement accessing the previous sequence number, which is still just doing (seq - 1), just as it would have had to in the very first version.

To satisfy that requirement, this patch makes the sequence number counter 64 bit and splits it into a major part (used for qgroup sequence number counting) and a minor part (incremented for each tree modification in the log). This enables us to go exactly one major step backwards, as required for qgroups, while still incrementing the sequence counter for tree mod log insertions to keep track of their order. Keeping them in a single variable means there's no need to change all the code dealing with comparisons of two sequence numbers.

The sequence number is reset to 0 on commit (not new in this patch), which ensures we won't overflow the two 32 bit counters.

Without this fix, the qgroup tracking can occasionally go wrong and WARN_ONs from the tree mod log code may happen.
Signed-off-by: Jan Schmidt <list.btrfs@jan-o-sch.net>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
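The split can be pictured as one 64-bit value carrying both counters. The helpers below sketch the idea only; their names are illustrative assumptions, not the patch's actual API.

```c
/*
 * Sketch of the idea: a single u64 sequence number whose upper 32 bits are
 * the "major" part (stepped for qgroup accounting) and whose lower 32 bits
 * are the "minor" part (stepped per tree mod log insertion).
 */
#include <linux/types.h>

static inline u64 seq_major(u64 seq) { return seq >> 32; }
static inline u64 seq_minor(u64 seq) { return seq & 0xffffffffULL; }

/*
 * Comparisons of two full sequence numbers keep working unchanged, while
 * the qgroup code can step back exactly one major unit to reach the
 * previous accounting point.
 */
static inline u64 seq_prev_major(u64 seq) { return seq - (1ULL << 32); }
```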
-
Committed by Wang Shilong
Since all the quota configurations are loaded in memory, we can perform the ioctl checks there before operating on the disk. It is safe to do so because qgroup_ioctl_lock is held outside.

Even without these extra checks, user changes to the quota configuration would still behave correctly. For example, if we want to add a qgroup that already exists, we will do:

->add_qgroup_item()
->add_qgroup_rb()

and add_qgroup_item() will return -EEXIST to us. However, since the qgroups are all in memory anyway, why not check them in memory first?
Signed-off-by: Wang Shilong <wangsl-fnst@cn.fujitsu.com>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
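A sketch of the pattern: find_qgroup_rb and add_qgroup_item are named in this log, but the wrapper function and its error handling below are simplified assumptions.

```c
/*
 * Sketch only: with qgroup_ioctl_lock held by the caller, check the
 * in-memory rb-tree first and fail early, instead of letting
 * add_qgroup_item() discover the -EEXIST on disk.
 */
static int create_qgroup_checked(struct btrfs_trans_handle *trans,
				 struct btrfs_fs_info *fs_info, u64 qgroupid)
{
	if (find_qgroup_rb(fs_info, qgroupid))	/* cheap in-memory lookup */
		return -EEXIST;

	/* only now touch the quota tree on disk */
	return add_qgroup_item(trans, fs_info->quota_root, qgroupid);
}
```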
-
Committed by Wang Shilong
ulist_add() may return -ENOMEM; fix the missing check of its return value.
Signed-off-by: Wang Shilong <wangsl-fnst@cn.fujitsu.com>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
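The fix amounts to checking the return value, roughly as below; this is a fragment, and the surrounding qgroup walk and the 'out' label are assumed context.

```c
/* Fragment sketch: ulist_add() can fail with -ENOMEM, so propagate the
 * error instead of ignoring the return value. */
ret = ulist_add(fs_info->qgroup_ulist, qgroup->qgroupid,
		(uintptr_t)qgroup, GFP_ATOMIC);
if (ret < 0)		/* typically -ENOMEM */
	goto out;
```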
-
Committed by Wang Shilong
Steps to reproduce:

mkfs.btrfs <disk>
mount <disk> <mnt>
dd if=/dev/zero of=/<mnt>/data bs=1M count=10
sync
btrfs quota enable <mnt>
btrfs qgroup create 0/5 <mnt>
btrfs qgroup limit 5M 0/5 <mnt>
rm -f /<mnt>/data
sync
btrfs qgroup show <mnt>
dd if=/dev/zero of=data bs=1M count=1

From the perspective of users, the qgroup's referenced or exclusive value is negative, but the user cannot continue to write data! A workaround is to cast u64 to s64 when doing the qgroup reservation.
Signed-off-by: Wang Shilong <wangsl-fnst@cn.fujitsu.com>
Reviewed-by: Arne Jansen <sensille@gmx.net>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
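The workaround amounts to doing the limit comparison with signed arithmetic. The fragment below is only a sketch: the field names follow the btrfs_qgroup counters, but the exact condition in the patch may differ.

```c
/*
 * Sketch of the workaround: after the steps above, 'rfer' can logically be
 * negative, so compare with signed arithmetic when reserving space rather
 * than letting a huge unsigned value block every write.
 */
if (qg->max_rfer &&
    (s64)qg->rfer + (s64)qg->reserved + (s64)num_bytes > (s64)qg->max_rfer) {
	ret = -EDQUOT;	/* over the referenced-bytes limit */
	goto out;
}
```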
-
Committed by Wang Shilong
Steps to reproduce:

mkfs.btrfs <disk>
mount <disk> <mnt>
btrfs quota enable <mnt>
btrfs qgroup limit 0/1 <mnt>
dmesg

If the relevant qgroup doesn't exist, the flag 'BTRFS_QGROUP_STATUS_FLAG_INCONSISTENT' is set and a noisy message is printed. This is wrong; we can just move find_qgroup_rb() before update_qgroup_limit_item(). This doesn't change the logic of the function, but it avoids the unnecessary noise and the wrong setting of the flag.
Signed-off-by: Wang Shilong <wangsl-fnst@cn.fujitsu.com>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
-
Committed by Wang Shilong
The original code forgot to check 'inherit'; we should guarantee that all the qgroups in the struct 'inherit' exist.
Signed-off-by: Wang Shilong <wangsl-fnst@cn.fujitsu.com>
Reviewed-by: Miao Xie <miaox@cn.fujitsu.com>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
-
Committed by Wang Shilong
Steps to reproduce:

mkfs.btrfs <disk>
mount <disk> <mnt>
btrfs quota enable <mnt>
btrfs qgroup assign 0/1 1/1 <mnt>
umount <mnt>
btrfs-debug-tree <disk> | grep QGROUP

If we want to add a qgroup relation, we should guarantee that 'src' and 'dst' exist; otherwise such a qgroup relation should not be allowed to be created.
Signed-off-by: Wang Shilong <wangsl-fnst@cn.fujitsu.com>
Reviewed-by: Miao Xie <miaox@cn.fujitsu.com>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
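A sketch of the added guard; find_qgroup_rb and add_qgroup_relation_item are named elsewhere in this log, while the wrapper function and the chosen error code are illustrative assumptions.

```c
/* Sketch only: refuse to create a relation unless both ends already exist
 * in the in-memory qgroup tree. */
static int add_relation_checked(struct btrfs_trans_handle *trans,
				struct btrfs_fs_info *fs_info, u64 src, u64 dst)
{
	if (!find_qgroup_rb(fs_info, src) || !find_qgroup_rb(fs_info, dst))
		return -EINVAL;	/* one end of the relation is missing */

	return add_qgroup_relation_item(trans, fs_info->quota_root, src, dst);
}
```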
-
Committed by Wang Shilong
We use a mutex lock to protect all the user change operations, so when calling find_qgroup_rb() to check whether a qgroup exists, we don't have to hold the spin_lock.

Besides, when enabling/disabling quota, operations reaching this point are single threaded. The spin lock only needs to be used to clear quota_root when disabling quota, and to complete the last assign work when enabling quota.
Signed-off-by: Wang Shilong <wangsl-fnst@cn.fujitsu.com>
Reviewed-by: Miao Xie <miaox@cn.fujitsu.com>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
-
Committed by Wang Shilong
The original code has one spin_lock, 'qgroup_lock', to protect the quota configuration in memory. If we want to add a BTRFS_QGROUP_INFO_KEY, it is added to the btree first and then the configuration in memory is updated; however, a race condition may happen between these operations. For example:

->add_qgroup_info_item()
->add_qgroup_rb()

For the above case, del_qgroup_info_item() may happen just before add_qgroup_rb().

What's worse, when we want to add a qgroup relation:

->add_qgroup_relation_item()
->add_qgroup_relations()

we don't have any checks on whether 'src' and 'dst' exist before add_qgroup_relation_item(), so a race condition can also happen in the above case.

To avoid race conditions and have all the necessary checks, we introduce a mutex lock, 'qgroup_ioctl_lock', and make all the user change operations protected by the mutex lock.
Signed-off-by: Wang Shilong <wangsl-fnst@cn.fujitsu.com>
Reviewed-by: Miao Xie <miaox@cn.fujitsu.com>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
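The resulting pattern for every user-triggered change looks roughly like this; the helper name and body are illustrative, only the mutex name comes from the commit text.

```c
/*
 * Sketch of the locking pattern: qgroup_ioctl_lock serializes all
 * user-triggered configuration changes, so the on-disk update and the
 * in-memory update can no longer interleave with a concurrent ioctl.
 */
static int qgroup_user_change(struct btrfs_fs_info *fs_info)
{
	int ret = 0;

	mutex_lock(&fs_info->qgroup_ioctl_lock);
	/* 1) validate against the in-memory configuration  */
	/* 2) update the quota tree items on disk            */
	/* 3) update the in-memory rb-tree / relation lists  */
	mutex_unlock(&fs_info->qgroup_ioctl_lock);
	return ret;
}
```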
-
Committed by Wang Shilong
Automatically create qgroups for subvolumes/snapshots (including the root subvolume) when enabling quota.
Signed-off-by: Wang Shilong <wangsl-fnst@cn.fujitsu.com>
Reviewed-by: Miao Xie <miaox@cn.fujitsu.com>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
-
Committed by Wang Shilong
Steps to reproduce:

mkfs.btrfs <disk>
mount <disk> <mnt>
btrfs quota enable <mnt>
btrfs sub create <mnt>/subv
i=1
while [ $i -le 10000 ]
do
	dd if=/dev/zero of=<mnt>/subv/data_$i bs=1K count=1
	i=$(($i+1))
	if [ $i -eq 500 ]
	then
		btrfs quota disable $mnt
	fi
done
dmesg

Obviously, this WARN_ON() is unnecessary and easily triggered. Just remove it.
Signed-off-by: Wang Shilong <wangsl-fnst@cn.fujitsu.com>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
-
- 28 March 2013, 1 commit
-
-
Committed by Wang Shilong
The function btrfs_find_all_roots is responsible for allocating memory for 'roots' and freeing it if errors happen, so the caller should not free it again since that work has been done. Besides, 'tmp' is allocated after the call to btrfs_find_all_roots, so we can return directly if btrfs_find_all_roots() fails.
Signed-off-by: Wang Shilong <wangsl-fnst@cn.fujitsu.com>
Reviewed-by: Miao Xie <miaox@cn.fujitsu.com>
Reviewed-by: Jan Schmidt <list.btrfs@jan-o-sch.net>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
-
- 15 March 2013, 1 commit
-
-
Committed by Wang Shilong
If one of the qgroups fails to reserve, we should return immediately; it is unnecessary to continue checking the rest.
Signed-off-by: Wang Shilong <wangsl-fnst@cn.fujitsu.com>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
Signed-off-by: Chris Mason <chris.mason@fusionio.com>
-
- 01 March 2013, 3 commits
-
-
Committed by Wang Shilong
The check has already been done just before btrfs_clean_quota_tree is called; it is not necessary to check again, so remove it.
Signed-off-by: Wang Shilong <wangsl-fnst@cn.fujitsu.com>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
-
Committed by Wang Shilong
Return -ENOMEM rather than triggering a BUG_ON().
Signed-off-by: Wang Shilong <wangsl-fnst@cn.fujitsu.com>
Reviewed-by: Miao Xie <miaox@cn.fujitsu.com>
Reviewed-by: Zach Brown <zab@redhat.com>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
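The shape of the fix, sketched on a typical allocation site; the exact allocation that was changed is an assumption here, only the ENOMEM-instead-of-BUG_ON pattern comes from the commit text.

```c
/* Sketch only: propagate the allocation failure instead of crashing. */
path = btrfs_alloc_path();
if (!path)
	return -ENOMEM;		/* previously: BUG_ON(!path) */
```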
-
Committed by Wang Shilong
Steps to reproduce:

i=0
ncases=100
mkfs.btrfs <disk>
mount <disk> <mnt>
btrfs quota enable <mnt>
btrfs qgroup create 2/1 <mnt>
while [ $i -le $ncases ]
do
	btrfs qgroup create 1/$i <mnt>
	btrfs qgroup assign 1/$i 2/1 <mnt>
	i=$(($i+1))
done
btrfs quota disable <mnt>
umount <mnt>
btrfsck <mnt>

You can also use the commands:

btrfs-debug-tree <disk> | grep QGROUP

You will find there are still qgroup items left over. The reason this happens is that the original code just checks slots[0] == 0 and returns. We fix it by deleting the leaves one by one.
Signed-off-by: Wang Shilong <wangsl-fnst@cn.fujitsu.com>
Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
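A sketch of the corrected cleanup loop inside btrfs_clean_quota_tree; this is a fragment with declarations, transaction setup, and some error paths omitted, and the search flags are assumptions.

```c
/*
 * Sketch only: instead of stopping as soon as slots[0] == 0, keep
 * searching from the smallest key and delete whole leaves until the
 * quota tree is empty.
 */
key.objectid = 0;
key.type = 0;
key.offset = 0;

while (1) {
	ret = btrfs_search_slot(trans, root, &key, path, -1, 1);
	if (ret < 0)
		goto out;
	leaf = path->nodes[0];
	nr = btrfs_header_nritems(leaf);
	if (!nr)
		break;			/* the quota tree is empty now */

	path->slots[0] = 0;
	ret = btrfs_del_items(trans, root, path, 0, nr);	/* drop the leaf */
	if (ret)
		goto out;
	btrfs_release_path(path);
}
```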
-
- 21 February 2013, 1 commit
-
-
Committed by Wang Shilong
The original code forgot to check first whether quota has been disabled, and it returns 'EINVAL' to users in that case, which is unfriendly and confusing. So just return directly if quota has been disabled.
Signed-off-by: Wang Shilong <wangsl-fnst@cn.fujitsu.com>
Cc: Arne Jansen <sensille@gmx.net>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
-
- 20 February 2013, 1 commit
-
-
Committed by Filipe Brandenburger
The header file will then be installed under /usr/include/linux so that userspace applications can refer to Btrfs ioctls by name and use the same structs used internally in the kernel.
Signed-off-by: Filipe Brandenburger <filbranden@google.com>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
-
- 22 January 2013, 2 commits
-
-
Committed by Arne Jansen
Currently you can just destroy a qgroup even though it is in use by other qgroups or has qgroups assigned to it. This patch prevents destruction of qgroups unless they are completely unused. Otherwise destroy will return EBUSY.
Reported-by: Eric Hopper <hopper@omnifarious.org>
Signed-off-by: Arne Jansen <sensille@gmx.net>
Signed-off-by: Chris Mason <chris.mason@fusionio.com>
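A sketch of the new guard in the destroy path; this is a fragment, and the list member names and error-handling labels are assumptions for illustration.

```c
/*
 * Sketch only: refuse to remove a qgroup that still has relations in
 * either direction.
 */
qgroup = find_qgroup_rb(fs_info, qgroupid);
if (!qgroup) {
	ret = -ENOENT;
	goto out;
}
/* still a member of some group, or still has members of its own? */
if (!list_empty(&qgroup->groups) || !list_empty(&qgroup->members)) {
	ret = -EBUSY;
	goto out;
}
```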
-
Committed by Arne Jansen
If a qgroup that still has assignments is deleted by the user, the corresponding relations are left in the tree. This leads to an unmountable filesystem. With this patch, those relations are simply ignored.
Reported-by: Eric Hopper <hopper@omnifarious.org>
Signed-off-by: Arne Jansen <sensille@gmx.net>
Signed-off-by: Chris Mason <chris.mason@fusionio.com>
-