- 16 May 2010, 1 commit
-
-
By Dan Rosenberg
The existing code would have allowed you to clone a file that was only open for writing. Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
- 06 April 2010, 6 commits
-
-
By Chris Mason
A recent commit allowed for smaller chunks to be created, but didn't make sure they were always bigger than a stripe. After some divides, this led to zero-length stripes. Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
By Josef Bacik
Because we account for reserved space we get from the allocator before we actually account for allocating delalloc space, we can have a small window where the amount of "used" space in a space_info is more than the total amount of space in the space_info. This will cause an overflow in our check, so it will seem like we have _tons_ of free space, and we'll allow reservations to occur that will end up larger than the amount of space we have. I've seen users report ENOSPC panics in cow_file_range a few times recently, so I tried to reproduce this problem and found I could reproduce it if I ran one of my tests in a loop for about 20 minutes. With this patch my test ran all night without issues. Thanks, Signed-off-by: Josef Bacik <josef@redhat.com> Signed-off-by: Chris Mason <chris.mason@oracle.com>
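A minimal sketch of the failure mode described above, with illustrative field names rather than the exact btrfs structures: if "used" can transiently exceed "total", an unchecked u64 subtraction wraps around and the free-space check passes when it should not.
```c
#include <linux/types.h>

struct space_counters {		/* illustrative, not the real space_info */
	u64 total_bytes;	/* total bytes managed by this space_info */
	u64 bytes_used;		/* bytes already allocated or reserved */
};

static int may_reserve(struct space_counters *s, u64 bytes)
{
	/* If bytes_used briefly exceeds total_bytes, the naive
	 * "total - used" underflows to a huge number and the check
	 * believes there is plenty of room.  Guard it explicitly. */
	if (s->bytes_used > s->total_bytes)
		return 0;
	return bytes <= s->total_bytes - s->bytes_used;
}
```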
-
By Chris Mason
setup_leaf_for_split needs to drop the path and search again, and has checks to see if the item we want to split changed size. But it misses the case where the leaf changed and now has enough room for the item we want to insert. This adds an extra check to make sure the leaf really needs splitting before we call btrfs_split_leaf(), which keeps us from trying to split a leaf with a single item. btrfs_split_leaf() would blindly split the single-item leaf, leaving us with one good leaf, one empty leaf, and then a crash. Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
By Sage Weil
This creates the reference to a new snapshot in the same commit as the snapshot itself. This avoids the need for a second commit in order for a snapshot to be persistent, and also avoids the problem of "leaking" a new snapshot tree root if the host crashes before the second commit takes place. It is not at all clear to me why it wasn't always done this way. If there is still a reason for the two-stage {create,finish}_pending_snapshots() approach, I'm missing something! :) I've been running this for a couple of weeks under pretty heavy usage (a few snapshots per minute) without obvious problems. Signed-off-by: Sage Weil <sage@newdream.net> Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
By Josef Bacik
Every time we start a new flushing thread, we init the waitqueue if there isn't a flushing thread running. The problem with this is that we check space_info->flushing, which we clear right before doing a wake_up on the flushing waitqueue; this causes problems if we init the waitqueue in the middle of clearing the flushing flag and calling wake_up. This is hard to hit, but the code is wrong anyway, so init the flushing/allocating waitqueue when creating the space_info and let it be. I haven't seen the panic since I've been using this patch. Thanks, Signed-off-by: Josef Bacik <josef@redhat.com> Signed-off-by: Chris Mason <chris.mason@oracle.com>
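The race is between re-running init_waitqueue_head() and a concurrent waker that has just cleared the flushing flag; initializing once at creation removes it. A sketch of the pattern with illustrative names (not the exact btrfs fields):
```c
#include <linux/wait.h>

struct flushing_ctl {			/* illustrative stand-in for space_info */
	int flushing;			/* cleared just before wake_up() */
	wait_queue_head_t flush_wait;
};

static void flushing_ctl_init(struct flushing_ctl *ctl)
{
	ctl->flushing = 0;
	/* Initialize exactly once, when the object is created.  Doing this
	 * lazily at the start of each flush races with a waker that is
	 * between clearing ->flushing and calling wake_up(). */
	init_waitqueue_head(&ctl->flush_wait);
}
```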
-
By Nick Piggin
Pagecache pages should be allocated with __page_cache_alloc, so they obey pagecache memory policies. add_to_page_cache_lru is exported, so it should be used. Benefits over using a private pagevec: neater code, 128 fewer bytes of stack used, per-cpu LRU ordering is preserved, and there is no need to flush the pagevec before returning, so batching may be shared with other LRU insertions. Signed-off-by: Nick Piggin <npiggin@suse.de> Signed-off-by: Chris Mason <chris.mason@oracle.com>
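A hedged sketch of the allocation pattern the commit describes, using the generic pagecache helpers; the surrounding mapping/index handling is simplified and is not the actual btrfs code.
```c
#include <linux/pagemap.h>

/* Illustrative: allocate and insert one readahead page at 'index'. */
static struct page *grab_ra_page(struct address_space *mapping, pgoff_t index)
{
	struct page *page;

	/* __page_cache_alloc() honors the mapping's pagecache memory policy. */
	page = __page_cache_alloc(mapping_gfp_mask(mapping) & ~__GFP_FS);
	if (!page)
		return NULL;

	/* The exported helper inserts into the radix tree and the per-cpu
	 * LRU pagevec, so LRU ordering is preserved and no private pagevec
	 * flush is needed before returning. */
	if (add_to_page_cache_lru(page, mapping, index, GFP_NOFS)) {
		page_cache_release(page);
		return NULL;
	}
	return page;
}
```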
-
- 31 March 2010, 11 commits
-
-
By Josef Bacik
If the amount of free space left on a device is less than what we think should be the minimum size, just ignore the minimum size and use the amount we have. I ran into this running tests on a 600MB volume; the chunk allocator wouldn't let me allocate the last 52MB of the disk for data because we want to have at least 64MB chunks for data. This patch fixes that problem. Thanks, Signed-off-by: Josef Bacik <josef@redhat.com> Signed-off-by: Chris Mason <chris.mason@oracle.com>
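The logic amounts to clamping the preferred minimum to what the device actually has left; a tiny sketch with illustrative names:
```c
#include <linux/types.h>

/* Illustrative: if a device has less free space than the usual minimum
 * chunk size, use what is actually available instead of refusing. */
static u64 pick_chunk_size(u64 device_free_bytes, u64 min_chunk_bytes)
{
	if (device_free_bytes < min_chunk_bytes)
		return device_free_bytes;
	return min_chunk_bytes;
}
```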
-
By Josef Bacik
As Yan pointed out, there's not much reason for all this complicated math to account for file extents being split up into max_extent chunks, since they are likely to all end up in the same leaf anyway. Since there isn't much reason to use max_extent, just remove the option altogether so we have one less thing we need to test. Signed-off-by: Josef Bacik <josef@redhat.com> Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
By Josef Bacik
We don't actually check the return value of btrfs_read_block_groups, so we can possibly succeed in mounting but then fail to, say, read the superblock xattr for selinux, which will cause the vfs code to deactivate the super. This is a problem because in find_free_extent we just assume that we will find the right space_info for the allocation we want. But if we failed to read the block groups, we won't have set up any space_infos, and we'll hit a NULL pointer deref in find_free_extent. This patch fixes that problem by checking the return value of btrfs_read_block_groups and failing out properly. I've also added a check in find_free_extent so that if for some reason we don't find an appropriate space_info, we just return -ENOSPC. Signed-off-by: Josef Bacik <josef@redhat.com> Signed-off-by: Chris Mason <chris.mason@oracle.com>
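A hedged sketch of the two defensive checks described: propagate the mount-time error instead of ignoring it, and have the allocator fail with -ENOSPC if no matching space_info was ever set up. The label and the __find_space_info() helper name here are illustrative simplifications, not the exact call sites.
```c
	/* Mount path: stop ignoring the return value. */
	ret = btrfs_read_block_groups(extent_root);
	if (ret) {
		printk(KERN_ERR "btrfs: failed to read block groups: %d\n", ret);
		goto fail_block_groups;		/* unwind the mount */
	}

	/* Allocator path: don't assume the space_info exists. */
	space_info = __find_space_info(fs_info, flags);
	if (!space_info) {
		printk(KERN_ERR "btrfs: no space_info for flags %llu\n",
		       (unsigned long long)flags);
		return -ENOSPC;
	}
```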
-
By Dan Carpenter
btrfs_get_extent() never returns NULL, only a valid pointer or an ERR_PTR(). Signed-off-by: Dan Carpenter <error27@gmail.com> Signed-off-by: Chris Mason <chris.mason@oracle.com>
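For callers, the rule is to test the result with IS_ERR()/PTR_ERR() rather than a NULL check; a minimal fragment, with the argument list abbreviated from the 2010-era signature:
```c
#include <linux/err.h>

	struct extent_map *em;

	em = btrfs_get_extent(inode, page, pg_offset, start, len, 0);
	if (IS_ERR(em))		/* never NULL, so "if (!em)" would miss errors */
		return PTR_ERR(em);
```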
-
By Dan Carpenter
Return -ENOMEM if kmalloc() fails. Signed-off-by: Dan Carpenter <error27@gmail.com> Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
By Dan Carpenter
The original code dereferenced range on the next line. Signed-off-by: Dan Carpenter <error27@gmail.com> Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
By Zhao Lei
We can use this simple method to make the source more readable. Signed-off-by: Zhao Lei <zhaolei@cn.fujitsu.com> Signed-off-by: Miao Xie <miaox@cn.fujitsu.com> Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
By Zhao Lei
We need to check the return value of btrfs_search_slot() in btrfs_read_chunk_tree() and do the corresponding error handling. Signed-off-by: Zhao Lei <zhaolei@cn.fujitsu.com> Signed-off-by: Miao Xie <miaox@cn.fujitsu.com> Signed-off-by: Chris Mason <chris.mason@oracle.com>
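btrfs_search_slot() returns a negative errno on failure, 0 when the key is found, and a positive value when the search lands at the slot where the key would be inserted, so the error-handling sketch looks roughly like this (simplified fragment, not the full chunk-tree read loop):
```c
	ret = btrfs_search_slot(NULL, root, &key, path, 0, 0);
	if (ret < 0)
		goto error;	/* I/O or allocation failure: unwind and return ret */
	/* ret > 0 only means "not found, positioned at the insert slot";
	 * iteration can continue from here, e.g. via btrfs_next_leaf(). */
```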
-
By Zhao Lei
We only need to call finish_wait() after the wait loop. As a bonus, this patch makes the wait-loop code match the example in wait.h (no functional change). Signed-off-by: Zhao Lei <zhaolei@cn.fujitsu.com> Signed-off-by: Miao Xie <miaox@cn.fujitsu.com> Signed-off-by: Chris Mason <chris.mason@oracle.com>
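For reference, the canonical wait-loop shape from include/linux/wait.h that the patch aligns with: prepare_to_wait() inside the loop, a single finish_wait() after it ('condition' and 'wq' are placeholders):
```c
#include <linux/wait.h>
#include <linux/sched.h>

	DEFINE_WAIT(wait);

	for (;;) {
		prepare_to_wait(&wq, &wait, TASK_UNINTERRUPTIBLE);
		if (condition)
			break;
		schedule();
	}
	/* One finish_wait() after the loop is enough; calling it on every
	 * iteration is redundant. */
	finish_wait(&wq, &wait);
```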
-
By Miao Xie
btrfs_find_create_tree_block() may return NULL, so we must check the returned value or we will access a NULL pointer. Signed-off-by: Miao Xie <miaox@cn.fujitsu.com> Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
By Andrea Gelmini
fs/btrfs/ioctl.c: ctree.h is included more than once. Signed-off-by: Andrea Gelmini <andrea.gelmini@gelma.net> Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
- 19 March 2010, 4 commits
-
-
By Chris Mason
This is used by the inode lookup ioctl to follow all the backrefs up to the subvol root. But the search being done would sometimes land one past the last item in the leaf instead of finding the backref. This changes the search to look for the highest possible backref and then hop back one item. It also fixes a leaked path on failure to find the root. Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
By Chris Mason
When a root id of 0 is sent to the inode lookup ioctl, it will use the root of the file we're ioctling and pass the root id back to userland along with the results. This allows userland to do searches based on that root later on. Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
By Chris Mason
The search ioctl was skipping large items entirely (ones that are too big for the results buffer). This changes things to at least copy the item header, so that we can send information about the item back to userland. Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
By Chris Mason
The search ioctl was working well for finding tree roots, but using it for generic searches requires a few changes to how the keys are advanced. This treats the search control min fields for objectid, type and offset more like a key, where we drop the offset to zero once we bump the type, etc. The downside of this is that we are changing the min_type and min_offset fields during the search, so the ioctl caller needs extra checks to make sure the keys in the result are the ones it wanted. This also changes key_in_sk to use btrfs_comp_cpu_keys, just to make things more readable. Signed-off-by: Chris Mason <chris.mason@oracle.com>
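The "treat the min fields like a key" behavior means advancing the offset first and carrying into type and then objectid, resetting the lower fields to zero, much like incrementing a multi-digit number. A hedged sketch (field names follow struct btrfs_ioctl_search_key, but this is an illustration, not the exact kernel code):
```c
/* Advance the search minimums to the key just past 'key'. */
static void advance_min_key(struct btrfs_ioctl_search_key *sk,
			    struct btrfs_key *key)
{
	if (key->offset < (u64)-1) {
		sk->min_offset = key->offset + 1;
	} else if (key->type < (u8)-1) {
		sk->min_offset = 0;		/* carry: reset lower field */
		sk->min_type = key->type + 1;
	} else if (key->objectid < (u64)-1) {
		sk->min_offset = 0;
		sk->min_type = 0;
		sk->min_objectid = key->objectid + 1;
	}
}
```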
-
- 17 March 2010, 3 commits
-
-
By Chris Mason
The space_info ioctl was using copy_to_user inside rcu_read_lock. This commit changes things to copy into a buffer first and then dump the result down to userland. Signed-off-by: Chris Mason <chris.mason@oracle.com>
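copy_to_user() can fault and sleep, which is not allowed inside an RCU read-side critical section, so the shape of the fix is: snapshot into a kernel buffer under rcu_read_lock(), copy to userland after dropping it. A hedged sketch with simplified declarations; user_dest and alloc_size stand in for the ioctl argument handling.
```c
	struct btrfs_space_info *info;
	struct btrfs_ioctl_space_info *dest, *d;
	int slot = 0;
	int ret = 0;

	dest = kmalloc(alloc_size, GFP_NOFS);
	if (!dest)
		return -ENOMEM;
	d = dest;

	rcu_read_lock();
	/* No faults possible here: just copy numbers into the kernel buffer. */
	list_for_each_entry_rcu(info, &fs_info->space_info, list) {
		d->flags = info->flags;
		d->total_bytes = info->total_bytes;
		d->used_bytes = info->bytes_used;
		d++;
		slot++;
	}
	rcu_read_unlock();

	/* Safe to fault now that the RCU read section is over. */
	if (copy_to_user(user_dest, dest, slot * sizeof(*dest)))
		ret = -EFAULT;
	kfree(dest);
```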
-
By Sage Weil
Signed-off-by: Sage Weil <sage@newdream.net> Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
By Sage Weil
key->type is u8, not u64. fs/btrfs/ioctl.c: In function 'copy_to_sk': fs/btrfs/ioctl.c:1024: warning: comparison is always true due to limited range of data type Signed-off-by: Sage Weil <sage@newdream.net> Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
- 15 March 2010, 15 commits
-
-
By Nick Piggin
GFP_FS must be masked out; NOFS can't be OR'd in. Signed-off-by: Chris Mason <chris.mason@oracle.com>
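The distinction: __GFP_FS is the permission bit that allows recursing into filesystem code, and GFP_NOFS is just a base mask that lacks it, so OR-ing GFP_NOFS in adds nothing and the bit must be cleared instead. A minimal sketch:
```c
	gfp_t mask = mapping_gfp_mask(mapping);

	/* Wrong: GFP_NOFS only *lacks* __GFP_FS; OR-ing it in cannot
	 * remove a bit that is already set.
	 *     mask |= GFP_NOFS;
	 */

	/* Right: clear the bit that permits recursion into the filesystem. */
	mask &= ~__GFP_FS;
```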
-
By Chris Mason
After calling submit_bio, the bio can be freed at any time. The btrfs submission thread helper was checking the bio flags too late, which might not give the correct answer. When CONFIG_DEBUG_PAGEALLOC is turned on, it can lead to oopses. Signed-off-by: Chris Mason <chris.mason@oracle.com>
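The rule is that nothing may touch the bio after submit_bio(): completion can run and free it on another CPU immediately. A hedged sketch of reading the needed flags into a local first (the exact flag test in the btrfs helper differs):
```c
	/* Capture what we need from the bio *before* handing it to the
	 * block layer; afterwards the bio may already be completed and
	 * freed by the endio path on another CPU. */
	unsigned long rw_flags = bio->bi_rw;

	submit_bio(rw, bio);

	/* From here on, decide sync/batching behaviour from rw_flags,
	 * never by dereferencing 'bio' again. */
```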
-
By Xiao Guangrong
We can use btrfs_stack_device_id() to get dev_item->devid. Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com> Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
By Akinobu Mita
Use memparse() instead of its own private implementation. Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com> Cc: Chris Mason <chris.mason@oracle.com> Cc: linux-btrfs@vger.kernel.org Signed-off-by: Chris Mason <chris.mason@oracle.com>
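memparse() parses a number with an optional size suffix (K, M, G, ...) and returns the value in bytes, advancing a retptr past what it consumed; a minimal usage sketch of the kind of option parsing it replaces (value_str is a placeholder for the mount-option string):
```c
#include <linux/kernel.h>	/* memparse() */

	char *retptr;
	u64 bytes;

	/* "4096", "256M", "2G" ... all handled by the shared helper. */
	bytes = memparse(value_str, &retptr);
	if (retptr == value_str)
		return -EINVAL;		/* no digits were consumed */
```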
-
By Josef Bacik
df is a very loaded question in btrfs. This gives us a way to get the per-space usage information, so we can tell exactly what is in use where. This will help us figure out ENOSPC problems and help users better understand where their disk space is going. Signed-off-by: Josef Bacik <josef@redhat.com> Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
By Josef Bacik
This patch just goes through and fixes everybody that does lock_extent()/unlock_extent() to use lock_extent_bits()/unlock_extent_cached() and pass around an extent_state, so we only have to do the searches once per function. This gives me about a 3 MB/s boost on my random write test. I have not converted some things, like the relocation and ioctl code, since they aren't heavily used and the relocation stuff is in the middle of being rewritten. I also changed clear_extent_bit() to only unset the cached state if we are clearing EXTENT_LOCKED and related bits, so we can do things like lock_extent_bits(), clear the delalloc bits, unlock_extent_cached() without losing our cached state. I tested this thoroughly and turned on LEAK_DEBUG to make sure we weren't leaking extent states; everything worked out fine. Signed-off-by: Josef Bacik <josef@redhat.com> Signed-off-by: Chris Mason <chris.mason@oracle.com>
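The shape of the conversion: take the range lock once with lock_extent_bits(), keep the extent_state it hands back, and feed it to the later clear/unlock helpers so they skip the tree search. A hedged sketch, with argument lists simplified from the 2010-era helpers:
```c
	struct extent_state *cached_state = NULL;

	/* Lock the range and remember the extent_state that was found. */
	lock_extent_bits(&BTRFS_I(inode)->io_tree, start, end,
			 0, &cached_state, GFP_NOFS);

	/* ... clear delalloc bits, set up the ordered extent, etc.,
	 *     passing cached_state so each call avoids a fresh search ... */

	/* Unlock via the cached state instead of searching again. */
	unlock_extent_cached(&BTRFS_I(inode)->io_tree, start, end,
			     &cached_state, GFP_NOFS);
```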
-
By Josef Bacik
When finishing IO we run btrfs_dec_test_ordered_pending and then immediately run btrfs_lookup_ordered_extent, but btrfs_dec_test_ordered_pending does that already, so we're searching twice when we don't have to. This patch lets us pass a btrfs_ordered_extent in to btrfs_dec_test_ordered_pending, so if we do complete IO on that ordered extent we can just use the one we found then, instead of having to do another btrfs_lookup_ordered_extent. This made my fio job with the other patch go from 24 MB/s to 29 MB/s. Signed-off-by: Josef Bacik <josef@redhat.com> Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
By Josef Bacik
This patch makes us cache the extent state we find in find_delalloc_range, since we'll have to lock the extent later on in the function. This keeps us from re-searching for the range when we try to lock the extent. Signed-off-by: Josef Bacik <josef@redhat.com> Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
By Josef Bacik
The ordered tree used to need a mutex, but currently all we use it for is to protect the rb_tree, and a spin_lock is just fine for that. Using a spin_lock instead makes dbench run a little faster, 58 MB/s instead of 51 MB/s, with less latency, 3445.138 ms instead of 3820.633 ms. Signed-off-by: Josef Bacik <josef@redhat.com> Signed-off-by: Chris Mason <chris.mason@oracle.com>
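When the only thing a lock protects is short, non-sleeping rb-tree manipulation, a spinlock avoids the sleep/wake overhead of a mutex. A sketch of the pattern with illustrative names, loosely modeled on the ordered-extent tree:
```c
#include <linux/spinlock.h>
#include <linux/rbtree.h>

struct ordered_tree {			/* illustrative */
	spinlock_t lock;		/* was a struct mutex */
	struct rb_root tree;
};

static void ordered_tree_insert(struct ordered_tree *t, struct rb_node *node)
{
	spin_lock(&t->lock);		/* short, non-sleeping critical section */
	/* ... rb_link_node() / rb_insert_color() of 'node' into t->tree ... */
	spin_unlock(&t->lock);
}
```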
-
By Chris Mason
The endio is done in reverse order of the bio vectors. That means that for a sequential read, the page submitted first finishes last in a bio. Considering we will do checksumming (making the cache hot) for every page, this does introduce delay (and a chance to squeeze out cache that will be used soon) for pages submitted at the beginning. I don't observe an obvious performance difference with the patch below in my simple test, but it seems more natural to finish reads in the order they are submitted. Signed-off-by: Shaohua Li <shaohua.li@intel.com> Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
By Miao Xie
btrfs_mkdir() must jump to the code that ends the transaction after btrfs_find_free_objectid() fails; otherwise the transaction can't end. Signed-off-by: Miao Xie <miaox@cn.fujitsu.com> Signed-off-by: Chris Mason <chris.mason@oracle.com>
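The general shape of the fix: once a transaction handle is held, every error path must fall through to btrfs_end_transaction() instead of returning directly. A hedged, heavily simplified sketch of the control flow (not the full btrfs_mkdir()):
```c
	trans = btrfs_start_transaction(root, 1);

	err = btrfs_find_free_objectid(trans, root, dir->i_ino, &objectid);
	if (err)
		goto out_fail;	/* still ends the transaction below */

	/* ... create the inode, add the directory entry, update sizes ... */

out_fail:
	btrfs_end_transaction(trans, root);
	return err;
```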
-
By Sage Weil
Flush any delalloc extents when we create a snapshot, so that recently written file data is always included in the snapshot. A later commit will add the ability to snapshot without the flush, but most people expect flushing. Signed-off-by: Sage Weil <sage@newdream.net> Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
By Josef Bacik
The way we report df usage is confusing for everybody, including some other utilities (bacula for one). So this patch makes df a little more understandable. First, we make used actually count the total amount of used space in all space_infos. This gives us a real view of how much disk space is in use. Second, for blocks available, only count data space. This makes things like bacula work, because it says 0 when you can no longer write any more data to the disk. I think this is a nice compromise, since you will end up with something like the following:
[root@alpha ~]# df -h
Filesystem                    Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root  148G   30G  111G  21% /
/dev/sda1                     194M  116M   68M  64% /boot
tmpfs                         985M   12K  985M   1% /dev/shm
/dev/mapper/VolGroup-LogVol02 145G  140G     0 100% /mnt/btrfs-test
Compare this with the btrfsctl -i output:
[root@alpha btrfs-progs-unstable]# ./btrfsctl -i /mnt/btrfs-test/
Metadata, DUP: total=4.62GB, used=2.46GB
System, DUP: total=8.00MB, used=24.00KB
Data: total=134.80GB, used=134.80GB
Metadata: total=8.00MB, used=0.00
System: total=4.00MB, used=0.00
operation complete
This way we show that there is no more data space to be used, but we have another 5GB of space left for metadata. Thanks, Signed-off-by: Josef Bacik <josef@redhat.com> Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
By TARUISI Hiroaki
When we scan devices in a multi-device filesystem, we memorize the original name. If a device gets a new name, later scans don't update the in-kernel structures related to it, and we're not able to mount the filesystem. This patch updates the device name during scanning. Signed-off-by: TARUISI Hiroaki <taruishi.hiroak@jp.fujitsu.com> Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
By Chris Mason
The btrfs defrag ioctl was limited to doing the entire file. This commit adds a new interface that can defrag a specific range inside the file. It can also force compression on the file, allowing you to selectively compress individual files after they were created, even when mount -o compress isn't turned on. Signed-off-by: Chris Mason <chris.mason@oracle.com>
-