- 18 March 2013, 3 commits
-
-
Committed by Jaegeuk Kim

Previously, f2fs reads several node pages ahead when get_dnode_of_data is
called with the RDONLY_NODE flag, and this flag is set by the following
functions.
 - get_data_block_ro
 - get_lock_data_page
 - do_write_data_page
 - truncate_blocks
 - truncate_hole

However, this readahead mechanism was initially introduced for the use of
get_data_block_ro to enhance sequential read performance. So, let's clarify
all the cases with the additional modes as follows.

    enum {
        ALLOC_NODE,     /* allocate a new node page if needed */
        LOOKUP_NODE,    /* look up a node without readahead */
        LOOKUP_NODE_RA, /*
                         * look up a node with readahead called
                         * by get_data_block_ro.
                         */
    };

Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
Reviewed-by: Namjae Jeon <namjae.jeon@samsung.com>
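As context, the mode argument flows straight into get_dnode_of_data(); a
minimal sketch of the two lookup flavors (the 2013-era f2fs signatures are
an assumption here, not a quote of the patch):

    /*
     * Hedged sketch: how a caller selects the lookup mode after this
     * change. set_new_dnode() initializes the dnode descriptor.
     */
    struct dnode_of_data dn;

    set_new_dnode(&dn, inode, NULL, NULL, 0);

    /* plain lookup, no readahead (e.g. truncate paths) */
    err = get_dnode_of_data(&dn, index, LOOKUP_NODE);

    /* sequential read path: readahead of adjacent node pages wanted */
    err = get_dnode_of_data(&dn, index, LOOKUP_NODE_RA);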
-
Committed by Jaegeuk Kim

The get_node_page_ra tries to:
 1. grab or read a target node page for the given nid,
 2. then call ra_node_page to read other adjacent node pages in advance.

So, when we try to read a target node page in #1, we should submit the bio
with READ_SYNC instead of READA. And in #2, READA should be used.

Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
Reviewed-by: Namjae Jeon <namjae.jeon@samsung.com>
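A hedged sketch of the intended split (read_node_page() and ra_node_page()
follow the 2013-era f2fs helpers and are assumptions, not the exact diff):

    /* 1. the page the caller actually waits on: synchronous read */
    err = read_node_page(page, READ_SYNC);

    /* 2. purely speculative neighbours: best-effort readahead */
    for (nid = start; nid < end; nid++)
            ra_node_page(sbi, nid);     /* issues READA internally */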
-
Committed by Jaegeuk Kim

If the node page was truncated, its block address became zero. This means
that we don't need to write the node page, but we still have to unlock
NODE_WRITE, decrease the number of dirty node pages, and then unlock_page
before returning from f2fs_write_node_page with zero.

Reviewed-by: Namjae Jeon <namjae.jeon@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
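A hedged sketch of the early-return path the message describes; the names
NULL_ADDR, mutex_unlock_op(), dec_page_count() and F2FS_DIRTY_NODES follow
2013-era f2fs and are assumptions, not a quote of the patch:

    get_node_info(sbi, nid, &ni);

    if (ni.blk_addr == NULL_ADDR) {         /* page was truncated */
            mutex_unlock_op(sbi, NODE_WRITE);
            dec_page_count(sbi, F2FS_DIRTY_NODES);
            unlock_page(page);
            return 0;                       /* nothing to write back */
    }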
-
- 08 March 2013, 1 commit
-
-
Committed by Changman Lee

Use div_u64 to fix an overflow when calculating utilization. A long int is
4 bytes on 32-bit, so (user blocks * 100) might overflow if the disk size
is over e.g. 512GB.

Signed-off-by: Changman Lee <cm224.lee@samsung.com>
Reviewed-by: Namjae Jeon <namjae.jeon@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
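To make the overflow concrete: with 4KB blocks, a 512GB disk has 2^27
blocks, and 2^27 * 100 is about 1.34e10, well past the 2^31-1 range of a
32-bit signed long. A hedged sketch of the fixed calculation (the helper
name utilization() is modeled on the f2fs code of the time, not quoted
from it):

    #include <linux/math64.h>   /* div_u64() */

    /*
     * Widen to u64 *before* multiplying so the product cannot wrap;
     * div_u64 takes a u32 divisor, which 32-bit kernels handle cheaply.
     */
    static inline int utilization(u64 valid_user_blocks, u32 user_block_count)
    {
            return div_u64(valid_user_blocks * 100, user_block_count);
    }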
-
- 04 March 2013, 1 commit
-
-
Committed by Kees Cook

When the userspace messaging (for the less common case of userspace key
wrap/unwrap via ecryptfsd) is not needed, allow eCryptfs to build with it
removed. This saves on kernel code size and reduces the potential attack
surface by removing the /dev/ecryptfs node.

Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Tyler Hicks <tyhicks@canonical.com>
-
- 03 March 2013, 8 commits
-
-
Committed by Geert Uytterhoeven

tilegx_defconfig:

    fs/btrfs/raid56.c: In function 'btrfs_alloc_stripe_hash_table':
    fs/btrfs/raid56.c:206:3: error: implicit declaration of function 'vzalloc' [-Werror=implicit-function-declaration]
    fs/btrfs/raid56.c:206:9: warning: assignment makes pointer from integer without a cast [enabled by default]
    fs/btrfs/raid56.c:226:4: error: implicit declaration of function 'vfree' [-Werror=implicit-function-declaration]

Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Chris Mason <chris.mason@fusionio.com>
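The remedy for implicit vzalloc/vfree declarations is the usual one:
include the header that declares them (a minimal sketch; the exact hunk is
assumed rather than quoted):

    /* fs/btrfs/raid56.c: declare vzalloc()/vfree() explicitly */
    #include <linux/vmalloc.h>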
-
Committed by Jan Kara

When using the quota feature we need to enable quotas before orphan cleanup
so that changes happening during it are properly reflected in quota
accounting.

Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
-
Committed by Jan Kara

So far we silently ignored quota mount options when the quota feature was
enabled. But this can create confusion in userspace when mount options are
set but silently ignored, and it also creates opportunities for bugs when
we don't properly test all quota types. Actually ext4_mark_dquot_dirty()
forgets to test for the quota feature, so it was dependent on journaled
quota options being set. OTOH ext4_orphan_cleanup() tries to enable
journaled quota when quota options are specified, which is wrong when the
quota feature is enabled.

Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
-
Committed by Zheng Liu

ext4_dir_llseek is only used as a callback function, and no one calls it
directly. So make it a static function in order to remove a warning message
from a sparse check.

Signed-off-by: Zheng Liu <wenqing.lz@taobao.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
-
Committed by Lukas Czerner

We're using the macro EXT4_B2C() to convert a number of blocks to a number
of clusters for bigalloc file systems. However, we should be using
EXT4_NUM_B2C().

Signed-off-by: Lukas Czerner <lczerner@redhat.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Cc: stable@vger.kernel.org
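The two macros answer different questions, which is why mixing them up
miscounts. A hedged paraphrase of their definitions (modeled on the ext4.h
of that era, not quoted from the patch):

    /* position: which cluster does block 'blk' live in? (truncates) */
    #define EXT4_B2C(sbi, blk)      ((blk) >> (sbi)->s_cluster_bits)

    /* quantity: how many clusters do 'blks' blocks occupy? (rounds up) */
    #define EXT4_NUM_B2C(sbi, blks) (((blks) + (sbi)->s_cluster_ratio - 1) \
                                     >> (sbi)->s_cluster_bits)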
-
Committed by Wei Yongjun

'orig_data' is malloced in ext4_remount() and should be freed before
leaving from the error handling cases, otherwise it will cause a memory
leak.

Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Reviewed-by: Lukas Czerner <lczerner@redhat.com>
Cc: stable@vger.kernel.org
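A hedged sketch of the lifetime being fixed (ext4_remount() radically
simplified; parse_options() arguments are placeholders, only the
orig_data handling is the point):

    static int ext4_remount(struct super_block *sb, int *flags, char *data)
    {
            char *orig_data = kstrdup(data, GFP_KERNEL);
            int err = 0;

            if (!parse_options(data, sb, NULL, NULL, 1)) {
                    err = -EINVAL;
                    goto restore_opts;
            }

            /* remount work elided */

            kfree(orig_data);
            return 0;

    restore_opts:
            /* option restore elided */
            kfree(orig_data);   /* the fix: free on the error path too */
            return err;
    }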
-
Committed by Dmitry Monakhov

If start_this_handle() fails, the handle will be set to an ERR_PTR() value
and must not be dereferenced.

    paging request at fffffffffffffff6
    IP: [<ffffffff813c073f>] jbd2__journal_start+0x18f/0x290
    PGD 200e067 PUD 200f067 PMD 0
    Oops: 0000 [#1] SMP
    Modules linked in: cpufreq_ondemand acpi_cpufreq freq_table mperf coretemp
     kvm_intel kvm crc32c_intel ghash_clmulni_intel microcode sg xhci_hcd button
     sd_mod crc_t10dif aesni_intel ablk_helper cryptd lrw aes_x86_64 xts gf128mul
     ahci libahci pata_acpi ata_generic dm_mirror dm_region_hash dm_log dm_mod
    CPU 0
    journal commit I/O error
    Pid: 2694, comm: fio Not tainted 3.8.0-rc3+ #79 /DQ67SW
    RIP: 0010:[<ffffffff813c073f>] [<ffffffff813c073f>] jbd2__journal_start+0x18f/0x290
    RSP: 0018:ffff880233b8ba58 EFLAGS: 00010292
    RAX: 00000000ffffffe2 RBX: ffffffffffffffe2 RCX: 0000000000000006
    RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffffffff82128f48
    RBP: ffff880233b8ba98 R08: 0000000000000000 R09: ffff88021440a6e0

Signed-off-by: Dmitry Monakhov <dmonakhov@openvz.org>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
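A hedged sketch of the defensive pattern involved (the 3.9-era
jbd2__journal_start() signature is assumed): an ERR_PTR() return must be
screened with IS_ERR() before any field of the handle is touched.

    handle = jbd2__journal_start(journal, nblocks, gfp_mask);
    if (IS_ERR(handle))
            return PTR_ERR(handle);     /* never dereference an ERR_PTR */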
-
Committed by James Hogan

The commit "binfmt_elf: cleanups" (f670d0ec) removed an ifndef elf_map, but
this breaks compilation for metag, which does define elf_map. This adds the
ifndef back in as it was before, but does not affect the other cleanups
made by that patch.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: linux-fsdevel@vger.kernel.org
Acked-by: Mikael Pettersson <mikpe@it.uu.se>
-
- 02 March 2013, 8 commits
-
-
Committed by Theodore Ts'o

Use a percpu counter rather than atomic types for shrinker accounting.
There's no need for ultimate accuracy in the shrinker, so this should come
a little more cheaply. The percpu struct is somewhat large, but there was a
big gap before the cache-aligned s_es_lru_lock anyway, and it fits nicely
in there.

Signed-off-by: Eric Sandeen <sandeen@redhat.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
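A hedged sketch of the percpu_counter pattern (field name and call sites
simplified; note the 3.8-era percpu_counter_init() took no gfp argument):

    #include <linux/percpu_counter.h>

    struct percpu_counter s_extent_cache_cnt;

    percpu_counter_init(&s_extent_cache_cnt, 0);    /* setup */
    percpu_counter_inc(&s_extent_cache_cnt);        /* cheap, per-cpu */
    percpu_counter_dec(&s_extent_cache_cnt);

    /* the shrinker only needs an approximate total */
    nr = percpu_counter_read_positive(&s_extent_cache_cnt);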
-
Committed by Heiko Carstens

Both compat syscalls got lost with 9d94b9e2 "switch timerfd compat syscalls
to COMPAT_SYSCALL_DEFINE" because of a typo: COMPAT instead of
CONFIG_COMPAT.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
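A hedged sketch of the typo (syscall bodies elided): the bare token COMPAT
is never defined by Kconfig, so everything inside the guard was silently
compiled out.

    #ifdef COMPAT           /* bug: always false */
    /* COMPAT_SYSCALL_DEFINE-wrapped timerfd syscalls lived here */
    #endif

    #ifdef CONFIG_COMPAT    /* fix: the real Kconfig-provided symbol */
    /* ... */
    #endif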
-
Committed by Al Viro

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Committed by Al Viro

Note that this thing does *not* contribute to the inode refcount; it's
pinned down by the dentry.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Committed by Tim Gardner

smatch analysis:

    fs/autofs4/waitq.c:46 autofs4_catatonic_mode()
    info: redundant null check on wq->name.name calling kfree()

Signed-off-by: Tim Gardner <tim.gardner@canonical.com>
Signed-off-by: Ian Kent <raven@themaw.net>
Cc: autofs@vger.kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
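The cleanup itself is the classic one: kfree(NULL) is a defined no-op, so
the guard adds nothing. A sketch of the before/after:

    /* before */
    if (wq->name.name)
            kfree(wq->name.name);

    /* after */
    kfree(wq->name.name);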
-
Committed by Peter Huewe

autofs - fix sparse warning: context imbalance in autofs4_d_automount(),
different lock contexts for basic block.

Sparse complains:

    fs/autofs4/root.c:409:9: sparse: context imbalance in
    'autofs4_d_automount' - different lock contexts for basic block

This was introduced by commit f55fb0c2 ("autofs4 - dont clear
DCACHE_NEED_AUTOMOUNT on rootless mount"). The function autofs4_d_automount
can be left with (&sbi->fs_lock) held if sbi->version <= 4 and
simple_empty(dentry) == false, so the warning seems valid. Add a
spin_unlock in this case before we jump to done.

Unfortunately compile tested only.

Reported-by: Fengguang Wu <fengguang.wu@intel.com>
Signed-off-by: Peter Huewe <peterhuewe@gmx.de>
Acked-by: Ian Kent <raven@themaw.net>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
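A hedged sketch paraphrasing the message rather than quoting the diff: the
path that jumps to 'done' with fs_lock still held gains an unlock.

    spin_lock(&sbi->fs_lock);
    if (sbi->version <= 4 && !simple_empty(dentry)) {
            spin_unlock(&sbi->fs_lock);     /* the added unlock */
            goto done;
    }
    spin_unlock(&sbi->fs_lock);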
-
Committed by Paul Gortmaker

We want to avoid module.h where possible, since it in turn includes nearly
all of header space. This means removing it where it is not required, and
using export.h where we are only exporting symbols via EXPORT_SYMBOL and
friends.

Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Signed-off-by: Chris Mason <chris.mason@fusionio.com>
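A sketch of the typical swap in a file that only exports symbols (the
symbol name below is hypothetical, for illustration only):

    #include <linux/export.h>   /* was: #include <linux/module.h> */

    EXPORT_SYMBOL(example_helper);      /* hypothetical symbol */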
-
Committed by Josef Bacik

Apparently when we do inline extents we allow the data to overlap the last
chunk of the btrfs_file_extent_item, which means that we can possibly have
an on-disk item that isn't actually as large as a full
btrfs_file_extent_item. This messes with us when we try to overwrite the
extent while logging new extents, since we expect it to be the right size.
To fix this, just delete the item and try the insert again, which will give
us a properly sized btrfs_file_extent_item. This fixes a panic where
map_private_extent_buffer would blow up because we're trying to write past
the end of the leaf. Thanks,

Cc: stable@vger.kernel.org
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
-
- 01 March 2013, 18 commits
-
-
Committed by David Sterba

The stripe hash table is large, starting with allocation order 4, and can
go as high as order 7 in case lock debugging is turned on and structure
padding happens.

Observed mount failure:

    mount: page allocation failure: order:7, mode:0x200050
    Pid: 8234, comm: mount Tainted: G W 3.8.0-default+ #267
    Call Trace:
     [<ffffffff81114353>] warn_alloc_failed+0xf3/0x140
     [<ffffffff811171d2>] ? __alloc_pages_direct_compact+0x92/0x250
     [<ffffffff81117ac3>] __alloc_pages_nodemask+0x733/0x9d0
     [<ffffffff81152878>] ? cache_alloc_refill+0x3f8/0x840
     [<ffffffff811528bc>] cache_alloc_refill+0x43c/0x840
     [<ffffffff811302eb>] ? is_kernel_percpu_address+0x4b/0x90
     [<ffffffffa00a00ac>] ? btrfs_alloc_stripe_hash_table+0x5c/0x130 [btrfs]
     [<ffffffff811531d7>] kmem_cache_alloc_trace+0x247/0x270
     [<ffffffffa00a00ac>] btrfs_alloc_stripe_hash_table+0x5c/0x130 [btrfs]
     [<ffffffffa003133f>] open_ctree+0xb2f/0x1f90 [btrfs]
     [<ffffffff81397289>] ? string+0x49/0xe0
     [<ffffffff813987b3>] ? vsnprintf+0x443/0x5d0
     [<ffffffffa0007cb6>] btrfs_mount+0x526/0x600 [btrfs]
     [<ffffffff8115127c>] ? cache_alloc_debugcheck_after+0x4c/0x200
     [<ffffffff81162b90>] mount_fs+0x20/0xe0
     [<ffffffff8117db26>] vfs_kern_mount+0x76/0x120
     [<ffffffff811801b6>] do_mount+0x386/0x980
     [<ffffffff8112a5cb>] ? strndup_user+0x5b/0x80
     [<ffffffff81180840>] sys_mount+0x90/0xe0
     [<ffffffff81962e99>] system_call_fastpath+0x16/0x1b

Signed-off-by: David Sterba <dsterba@suse.cz>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
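The usual remedy for a large, physically contiguous allocation that fails
under fragmentation is to try kmalloc quietly and fall back to vmalloc. A
hedged sketch of that pattern (the table_size expression is illustrative,
not the actual hunk):

    #include <linux/slab.h>
    #include <linux/vmalloc.h>

    /* try the contiguous allocation without the noisy warning first */
    table = kzalloc(table_size, GFP_KERNEL | __GFP_NOWARN | __GFP_REPEAT);
    if (!table) {
            /* fragmented memory: fall back to vmalloc space */
            table = vzalloc(table_size);
            if (!table)
                    return -ENOMEM;
    }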
-
Committed by Wang Shilong

The original code is a little confusing and not clear. The right way to
deal with kernel code like this is:

    [...]
    if (ret)
        goto out;
    [...]

So I move the common cleanup code to the place labeled out_fail; this will
be easier to maintain.

Signed-off-by: Wang Shilong <wangsl-fnst@cn.fujitsu.com>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
-
Committed by Wang Shilong

Commit eb6b88d9 leads into another bug. If it is just because
qgroup_reserve fails, the function btrfs_qgroup_free should not be called;
otherwise, it will cause wrong quota accounting.

Signed-off-by: Wang Shilong <wangsl-fnst@cn.fujitsu.com>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
-
Committed by Wang Shilong

The check work has been done just before the function
btrfs_clean_quota_tree is called; it is not necessary to check it again, so
remove it.

Signed-off-by: Wang Shilong <wangsl-fnst@cn.fujitsu.com>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
-
Committed by Wang Shilong

Return -ENOMEM rather than triggering BUG_ON; fix it.

Signed-off-by: Wang Shilong <wangsl-fnst@cn.fujitsu.com>
Reviewed-by: Miao Xie <miaox@cn.fujitsu.com>
Reviewed-by: Zach Brown <zab@redhat.com>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
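A sketch of the general pattern this class of fix applies (variable name
illustrative):

    /* before: crash the machine on allocation failure */
    BUG_ON(!ptr);

    /* after: propagate the error to the caller */
    if (!ptr)
            return -ENOMEM;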
-
Committed by Wang Shilong

Steps to reproduce:

    i=0
    ncases=100

    mkfs.btrfs <disk>
    mount <disk> <mnt>
    btrfs quota enable <mnt>
    btrfs qgroup create 2/1 <mnt>
    while [ $i -le $ncases ]
    do
        btrfs qgroup create 1/$i <mnt>
        btrfs qgroup assign 1/$i 2/1 <mnt>
        i=$(($i+1))
    done
    btrfs quota disable <mnt>
    umount <mnt>
    btrfsck <mnt>

You can also use the command:

    btrfs-debug-tree <disk> | grep QGROUP

You will find there are still items left. The reason this happens is that
the original code just checks slots[0] == 0 and returns. We try to fix it
by deleting the leaves one by one.

Signed-off-by: Wang Shilong <wangsl-fnst@cn.fujitsu.com>
Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
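A hedged sketch of "delete the leaves one by one" (key setup and error
handling elided; the control flow is simplified from the message, not
copied from the diff, though btrfs_del_items() follows the in-tree API):

    while (1) {
            ret = btrfs_search_slot(trans, root, &key, path, -1, 1);
            if (ret < 0)
                    break;
            if (ret > 0 && path->slots[0] == 0)
                    break;                  /* tree is empty now */

            leaf = path->nodes[0];
            nr = btrfs_header_nritems(leaf);
            /* drop every item in this leaf in one call */
            ret = btrfs_del_items(trans, root, path, 0, nr);
            btrfs_release_path(path);
            if (ret)
                    break;
    }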
-
Committed by Theodore Ts'o

When the system is under memory pressure, ext4_es_shrink() will get called
very often. So optimize returning the number of items in the file system's
extent status cache by keeping a per-filesystem count, instead of
calculating it each time by scanning all of the inodes in the extent status
cache. Also rename the slab used for the extent status cache to
"ext4_extent_status" so it's obvious the slab in question is created by
ext4.

Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Cc: Zheng Liu <gnehzuil.liu@gmail.com>
-
Committed by Weston Andros Adamson

The client will currently try LAYOUTGETs forever if a server is returning
NFS4ERR_LAYOUTTRYLATER or NFS4ERR_RECALLCONFLICT - even if the client no
longer needs the layout (i.e. the process was killed or the filesystem
unmounted). This patch uses the DS timeout value (module parameter
'dataserver_timeo' via the rpc layer) to set an upper limit on how long the
client retries LAYOUTGETs in this situation. Once the timeout is reached,
IO is redirected to the MDS. This also changes how the client checks
whether a layout is on the clp list, to avoid a double list_add.

Signed-off-by: Weston Andros Adamson <dros@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by Weston Andros Adamson

The client should have 60 second default timeouts for DS operations, not 6
seconds. NFS4_DEF_DS_TIMEO is used as "timeout in tenths of a second" in
nfs_init_timeout_values (and is not used anywhere else). This matches up
with the description of the module param dataserver_timeo.

Signed-off-by: Weston Andros Adamson <dros@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
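The arithmetic: since the constant is interpreted in tenths of a second, a
value of 60 yields 60 / 10 = 6 seconds, while the intended 60-second
default needs 600. A hedged sketch of the resulting define (assumed, not
quoted from the patch):

    /* timeout is in tenths of a second; 600 tenths = 60 seconds */
    #define NFS4_DEF_DS_TIMEO   600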
-
Committed by Trond Myklebust

If we don't release the open seqid before we wait for state recovery, then
we may end up deadlocking the state recovery thread. This patch addresses a
new deadlock that was introduced by commit c21443c2 (NFSv4: Fix a reboot
recovery race when opening a file).

Reported-by: Andy Adamson <andros@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
-
Committed by David Sterba

The nodesize is capped at 64k and there are enough pages preallocated in
extent_buffer::inline_pages. The fallback to kmalloc never happened,
because even on the smallest page size considered (4k), inline_pages
covered the needs.

Signed-off-by: David Sterba <dsterba@suse.cz>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
-
Committed by Miao Xie

When deleting a snapshot/subvolume, we need to remove the root ref/backref,
the dir entries, and update the dir inode, so we must reserve free space
for those operations.

Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
-
Committed by Miao Xie

There are two problems in the space reservation of the snapshot/subvolume
creation.
 - We don't reserve the space for the root item insertion.
 - The space which is reserved in the qgroup differs from the free space
   reservation: we need to reserve free space for 7 items, but in the
   qgroup reservation, we need to reserve space only for 3 items.

So we implement new metadata reservation functions for the
snapshot/subvolume creation.

Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
-
Committed by Miao Xie

Since we have grabbed the parent inode at the beginning of the snapshot
creation, and both sync and async snapshot creation release it after the
pending snapshots are actually created, it is safe to access the parent
inode directly during the snapshot creation; we needn't use
dget_parent/dput to pin the parent dentry and get the dir inode.

Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
-
Committed by David Sterba

Dave pointed out that he saw messages from btrfs although there was no such
filesystem on his computers. The automatic device scan is called on every
new block device if the usual distro udev rule set is used. The printk
introduced in 6f60cbd3 was a leftover from copying portions of code from
btrfs_get_bdev_and_sb, which is used under different conditions, where the
warning makes sense.

Reported-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: David Sterba <dsterba@suse.cz>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
-
Committed by Liu Bo

While doing cleanup work on an aborted transaction, we set the global
running transaction pointer to NULL _before_ waiting for all other
transaction handles to finish, so others would hit a NULL pointer crash
when referencing the global running transaction pointer. This first sets a
hint to prevent new transaction handles from joining, then waits for other
existing handles to abort or finish, so that we can safely set the above
global pointer to NULL.

Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
-
Committed by Liu Bo

When we abort a transaction while fsyncing, we skip the freeing of log
roots that is part of committing a transaction, which leads to a memory
leak. This adds a 'free log roots' step in putting the super when no more
users hold references on the log roots, so it's safe and clean.

Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
-
Committed by Josef Bacik

I noticed while looking into a tree logging bug that we aren't logging
inline extents properly. Since this requires copying, and it shouldn't
happen too often, just force us to copy everything for the inode into the
tree log when we have an inline extent. With this patch we have valid data
after a crash when we write an inline extent. Thanks,

Cc: stable@vger.kernel.org
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
-
- 28 February 2013, 1 commit
-
-
Committed by Ouyang Maochun

Pages get the PG_writeback flag set before cifs sends its request to the
SMB server in cifs_writepages(). If the SMB service goes down, cifs may try
to recommit the write requests in cifs_writev_requeue(). However, it does
not clear the PG_writeback flag or reclaim the pages even if it fails again
in cifs_writev_requeue(), which may lead to hangs in processes accessing
the cifs directory. This patch clears the PG_writeback flags and reclaims
the pages under those circumstances.

Steps to reproduce the bug (trying several times may trigger the issue):
 1. Write from the cifs client continuously (e.g. dd if=/dev/zero of=<cifs file>).
 2. Stop the SMB service on the server (e.g. service smb stop).
 3. Wait for two minutes, and then start the SMB service on the server
    (e.g. service smb start).
 4. The processes which are accessing the cifs directory may hang up.

Signed-off-by: Ouyang Maochun <ouyang.maochun@zte.com.cn>
Signed-off-by: Jiang Yong <jian.yong5@zte.com.cn>
Tested-by: Zhang Xianwei <zhang.xianwei8@zte.com.cn>
Reviewed-by: Wang Liang <wang.liang82@zte.com.cn>
Reviewed-by: Cai Qu <cai.qu@zte.com.cn>
Reviewed-by: Jiang Biao <jiang.biao2@zte.com.cn>
Reviewed-by: Jeff Layton <jlayton@redhat.com>
Reviewed-by: Pavel Shilovsky <piastry@etersoft.ru>
Signed-off-by: Steve French <sfrench@us.ibm.com>
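A hedged sketch of the failure-path cleanup described above (loop and
naming simplified from 3.8-era cifs; not a quote of the patch):

    for (i = 0; i < nr_pages; i++) {
            struct page *page = wdata->pages[i];

            SetPageError(page);        /* record the failed writeback */
            end_page_writeback(page);  /* clear PG_writeback, wake waiters */
            page_cache_release(page);  /* drop the reference held for IO */
    }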
-