- 01 Oct 2013, 2 commits
-
-
By Yan, Zheng

If the client has outdated directory fragment information, it may request readdir of a non-existent directory fragment. In this case, the MDS finds an approximate directory fragment and sends its contents back to the client. When receiving a reply with a fragment different from the requested one, the client needs to reset the 'readdir offset'.

Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
Reviewed-by: Sage Weil <sage@inktank.com>
-
By Yan, Zheng

If directory fragments change, fill_inode() inserts the new frags into the fragtree, but it does not remove outdated frags from the fragtree. This patch fixes it.

Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
Reviewed-by: Sage Weil <sage@inktank.com>
-
- 26 Sep 2013, 1 commit
-
-
By Milosz Tanski

In some cases on my ceph client cluster I'm seeing hung kernel tasks in the invalidate page code path. This is due to the fact that we don't check if the page is marked as cached before calling fscache_wait_on_page_write().

This is the log from the hang:

INFO: task XXXXXX:12034 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
...
Call Trace:
 [<ffffffff81568d09>] schedule+0x29/0x70
 [<ffffffffa01d4cbd>] __fscache_wait_on_page_write+0x6d/0xb0 [fscache]
 [<ffffffff81083520>] ? add_wait_queue+0x60/0x60
 [<ffffffffa029a3e9>] ceph_invalidate_fscache_page+0x29/0x50 [ceph]
 [<ffffffffa027df00>] ceph_invalidatepage+0x70/0x190 [ceph]
 [<ffffffff8112656f>] ? delete_from_page_cache+0x5f/0x70
 [<ffffffff81133cab>] truncate_inode_page+0x8b/0x90
 [<ffffffff81133ded>] truncate_inode_pages_range.part.12+0x13d/0x620
 [<ffffffff8113431d>] truncate_inode_pages_range+0x4d/0x60
 [<ffffffff811343b5>] truncate_inode_pages+0x15/0x20
 [<ffffffff8119bbf6>] evict+0x1a6/0x1b0
 [<ffffffff8119c3f3>] iput+0x103/0x190
...

Signed-off-by: Milosz Tanski <milosz@adfin.com>
Reviewed-by: Sage Weil <sage@inktank.com>
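The fix amounts to a guard before waiting on fscache page writes; a minimal sketch against the 3.12-era fscache page API (the surrounding details are assumptions, not the exact diff):

	void ceph_invalidate_fscache_page(struct inode *inode, struct page *page)
	{
		struct ceph_inode_info *ci = ceph_inode(inode);

		/* bail early if fscache never owned this page; otherwise
		 * fscache_wait_on_page_write() can block forever waiting
		 * for a write that was never started */
		if (!PageFsCache(page))
			return;

		fscache_wait_on_page_write(ci->fscache, page);
		fscache_uncache_page(ci->fscache, page);
	}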
-
- 07 Sep 2013, 8 commits
-
-
By Yan, Zheng

d_invalidate() is the standard VFS method to invalidate a dentry. Compared to d_delete(), it also tries shrinking children dentries.

Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
Reviewed-by: Sage Weil <sage@inktank.com>
-
By Yan, Zheng

commit 6f60f889 (ceph: fix freeing inode vs removing session caps race) introduced ceph_lookup_inode(). But there is already a ceph_find_inode() which provides a similar function. So remove ceph_lookup_inode() and use ceph_find_inode() instead.

Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
Reviewed-by: Alex Elder <alex.elder@linaro.org>
Reviewed-by: Sage Weil <sage@inktank.com>
-
By Milosz Tanski

The linux-next build bot found three warnings; this addresses all of them:
 * non-ANSI function declaration of functions 'ceph_fscache_register' and 'ceph_fscache_unregister'
 * symbol 'ceph_cache_netfs' was not declared; it is now extern in the header
 * warning: "pr_fmt" redefined

Signed-off-by: Milosz Tanski <milosz@adfin.com>
-
By Milosz Tanski

Previously we would always try to enqueue work even if the filesystem was not mounted with fscache enabled (or the file has no cookie). In the case of a filesystem mounted with nofsc (but with fscache compiled in) this would lead to a crash.

Signed-off-by: Milosz Tanski <milosz@adfin.com>
-
By Milosz Tanski

A previous patch allowed us to clean up most of the issues with pages marked as private_2 when calling ceph_readpages. However, there seems to be a case in the error-path cleanup in start_read that still triggers this from time to time. I've only seen it a couple of times.

BUG: Bad page state in process petabucket pfn:335b82
page:ffffea000cd6e080 count:0 mapcount:0 mapping: (null) index:0x0
page flags: 0x200000000001000(private_2)
Call Trace:
 [<ffffffff81563442>] dump_stack+0x46/0x58
 [<ffffffff8112c7f7>] bad_page+0xc7/0x120
 [<ffffffff8112cd9e>] free_pages_prepare+0x10e/0x120
 [<ffffffff8112e580>] free_hot_cold_page+0x40/0x160
 [<ffffffff81132427>] __put_single_page+0x27/0x30
 [<ffffffff81132d95>] put_page+0x25/0x40
 [<ffffffffa02cb409>] ceph_readpages+0x2e9/0x6f0 [ceph]
 [<ffffffff811313cf>] __do_page_cache_readahead+0x1af/0x260

Signed-off-by: Milosz Tanski <milosz@adfin.com>
Signed-off-by: Sage Weil <sage@inktank.com>
-
By Milosz Tanski

Previously ceph_readpage_to_fscache did not check whether the page was marked as cached before calling fscache_write_page, resulting in a BUG inside of fscache.

FS-Cache: Assertion failed
------------[ cut here ]------------
kernel BUG at fs/fscache/page.c:874!
invalid opcode: 0000 [#1] SMP
Call Trace:
 [<ffffffffa02e6566>] __ceph_readpage_to_fscache+0x66/0x80 [ceph]
 [<ffffffffa02caf84>] readpage_nounlock+0x124/0x210 [ceph]
 [<ffffffffa02cb08d>] ceph_readpage+0x1d/0x40 [ceph]
 [<ffffffff81126db6>] generic_file_aio_read+0x1f6/0x700
 [<ffffffffa02c6fcc>] ceph_aio_read+0x5fc/0xab0 [ceph]

Signed-off-by: Milosz Tanski <milosz@adfin.com>
Signed-off-by: Sage Weil <sage@inktank.com>
-
By Milosz Tanski

In some cases the ceph readpages code bails without filling all the pages already marked by fscache. When we return to the readahead code this causes a BUG.

Signed-off-by: Milosz Tanski <milosz@adfin.com>
-
By Milosz Tanski

Adding support for fscache to the Ceph filesystem. This brings it on par with some of the other network filesystems in Linux (like NFS, AFS, etc.). In order to mount the filesystem with fscache, the 'fsc' mount option must be passed.

Signed-off-by: Milosz Tanski <milosz@adfin.com>
Signed-off-by: Sage Weil <sage@inktank.com>
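For example, a mount invocation might look like the following (the monitor address, secret file path, and a running cachefilesd to back fscache are assumptions, not part of the patch):

>mount -t ceph 192.168.0.1:6789:/ /mnt/ceph -o name=admin,secretfile=/etc/ceph/secret,fsc

Without the 'fsc' option the client behaves as before and no data is cached on local disk.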
-
- 28 Aug 2013, 5 commits
-
-
By Sha Zhengju

In following patches we will begin to add memcg dirty page accounting around __set_page_dirty_{buffers,nobuffers} in the VFS layer, so we'd better use the VFS interface to avoid exporting those details to filesystems. Since the VFS set_page_dirty() should be called under the page lock, we no longer need elaborate code here to handle races, and two WARN_ON()s are added to detect such exceptions.

Thanks very much for Sage and Yan Zheng's coaching!

I tested it in a two-server ceph environment where one server is the client and the other is mds/osd/mon, and ran the following fsx tests from xfstests:
 ./fsx 1MB -N 50000 -p 10000 -l 1048576
 ./fsx 10MB -N 50000 -p 10000 -l 10485760
 ./fsx 100MB -N 50000 -p 10000 -l 104857600
fsx does lots of mmap-read/mmap-write/truncate operations and the tests completed successfully without triggering either WARN_ON.

Signed-off-by: Sha Zhengju <handai.szj@taobao.com>
Reviewed-by: Sage Weil <sage@inktank.com>
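The shape of the change is to let the generic helper do the dirty accounting and assert the locking rules it relies on; a minimal sketch, not the exact ceph diff (the ceph-specific bookkeeping is elided):

	static int ceph_set_page_dirty(struct page *page)
	{
		/* the VFS calls set_page_dirty() with the page locked and
		 * attached to a mapping; warn if a caller breaks that rule */
		WARN_ON(!PageLocked(page));
		WARN_ON(!page->mapping);

		/* ... ceph snap-context accounting elided ... */

		return __set_page_dirty_nobuffers(page);
	}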
-
By majianpeng

For sync_read/write, we may perform multiple stripe operations. If one of them hits an error, we return the size that previously succeeded rather than an error value. There is one exception: if a write operation hits -EOLDSNAPC, we retry the whole write.

Signed-off-by: Jianpeng Ma <majianpeng@gmail.com>
-
By majianpeng

cephfs . show_layout
>layout.data_pool:     0
>layout.object_size:   4194304
>layout.stripe_unit:   4194304
>layout.stripe_count:  1

TestA:
>dd if=/dev/urandom of=test bs=1M count=2 oflag=direct
>dd if=/dev/urandom of=test bs=1M count=2 seek=4 oflag=direct
>dd if=test of=/dev/null bs=6M count=1 iflag=direct

The messages from func striped_read are:
ceph: file.c:350 : striped_read 0~6291456 (read 0) got 2097152 HITSTRIPE SHORT
ceph: file.c:350 : striped_read 2097152~4194304 (read 2097152) got 0 HITSTRIPE SHORT
ceph: file.c:381 : zero tail 4194304
ceph: file.c:390 : striped_read returns 6291456

The hole of the file is from 2M to 4M, but it actually zeroes the last 4M, including the final 2M area which isn't a hole. Using this patch, the messages are:
ceph: file.c:350 : striped_read 0~6291456 (read 0) got 2097152 HITSTRIPE SHORT
ceph: file.c:358 : zero gap 2097152 to 4194304
ceph: file.c:350 : striped_read 4194304~2097152 (read 4194304) got 2097152
ceph: file.c:384 : striped_read returns 6291456

TestB:
>echo majianpeng > test
>dd if=test of=/dev/null bs=2M count=1 iflag=direct

The messages are:
ceph: file.c:350 : striped_read 0~6291456 (read 0) got 11 HITSTRIPE SHORT
ceph: file.c:350 : striped_read 11~6291445 (read 11) got 0 HITSTRIPE SHORT
ceph: file.c:390 : striped_read returns 11

For this case it did one more striped_read, which is meaningless. Using this patch, the messages are:
ceph: file.c:350 : striped_read 0~6291456 (read 0) got 11 HITSTRIPE SHORT
ceph: file.c:384 : striped_read returns 11

Big thanks to Yan Zheng for the patch.

Reviewed-by: Yan, Zheng <zheng.z.yan@intel.com>
Signed-off-by: Jianpeng Ma <majianpeng@gmail.com>
-
By Li Wang

Cleanup in handle_cap_grant().

Signed-off-by: Li Wang <liwang@ubuntukylin.com>
Reviewed-by: Sage Weil <sage@inktank.com>
-
By Sage Weil

We need to use do_div for 64-bit division.

Signed-off-by: Sage Weil <sage@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
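On 32-bit kernels a plain '/' on a u64 operand emits a call to a libgcc helper (__udivdi3) that the kernel does not link against, so 64-bit division must go through do_div(), which divides a u64 dividend in place by a 32-bit divisor and evaluates to the remainder. A minimal sketch with illustrative variable names, not the patch itself:

	#include <asm/div64.h>

	u64 objnum;
	u32 objoff;

	objnum = off;                  /* off: u64 byte offset in the file */
	objoff = do_div(objnum, su);   /* su: u32 stripe unit */
	/* now objnum == off / su and objoff == off % su */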
-
- 16 Aug 2013, 4 commits
-
-
By Li Wang

This patch implements fallocate and punch hole support for the Ceph kernel client.

Signed-off-by: Li Wang <liwang@ubuntukylin.com>
Signed-off-by: Yunchuan Wen <yunchuanwen@ubuntukylin.com>
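From userspace this is exercised through the standard fallocate(2) interface; a minimal example (the file path is hypothetical and assumes a cephfs mount at /mnt/ceph):

	#define _GNU_SOURCE
	#include <fcntl.h>
	#include <unistd.h>
	#include <linux/falloc.h>

	int main(void)
	{
		int fd = open("/mnt/ceph/bigfile", O_RDWR | O_CREAT, 0644);

		/* preallocate 8 MiB of space without writing data */
		fallocate(fd, 0, 0, 8 * 1024 * 1024);

		/* punch a 1 MiB hole at offset 4 MiB; PUNCH_HOLE must be
		 * combined with KEEP_SIZE so i_size is left unchanged */
		fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
		          4 * 1024 * 1024, 1 * 1024 * 1024);

		close(fd);
		return 0;
	}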
-
By Yan, Zheng

ceph_check_caps() requests a new max size only when there is an Fw cap. If we call check_max_size() while there is no Fw cap, it updates i_wanted_max_size and calls ceph_check_caps(), but ceph_check_caps() does nothing. Later when the Fw cap is issued, we call check_max_size() again. But i_wanted_max_size is equal to 'endoff' at this time, so check_max_size() doesn't call ceph_check_caps() and we end up waiting for the new max size forever.

The fix is to duplicate ceph_check_caps()'s "request max size" code in check_max_size(), and to make try_get_cap_refs() wait for the Fw cap before retrying the new max size request.

This patch also removes the "endoff > (inode->i_size << 1)" check in check_max_size(). It's useless because there is no corresponding logic in ceph_check_caps().

Reviewed-by: Sage Weil <sage@inktank.com>
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
-
By Yan, Zheng

I encountered the deadlock below when running fsstress:

 wmtruncate work          truncate                 MDS
 ---------------          ------------------       --------------------------
                          lock i_mutex
                                                <- truncate file
 lock i_mutex (blocked)
                                                <- revoking Fcb (filelock to MIX)
                          send request ->          handle request (xlock filelock)

At the initial time, there are some dirty pages in the page cache. When the kclient receives the truncate message, it reduces the inode size and creates some 'out of i_size' dirty pages. wmtruncate work can't truncate these dirty pages because it's blocked by the i_mutex. Later when the kclient receives the cap message that revokes Fcb caps, it can't flush all dirty pages because writepages() only flushes dirty pages within the inode size.

When the MDS handles the 'truncate' request from the kclient, it waits for the filelock to become stable. But the filelock is stuck in an unstable state because it can't finish revoking the kclient's Fcb caps.

The truncate pagecache locking has already caused lots of trouble for us. I think it's time to simplify it by introducing a new mutex. We use the new mutex to prevent concurrent truncate_inode_pages(). There is no need to worry about a race between buffered write and truncate_inode_pages(), because our "get caps" mechanism prevents them from executing concurrently.

Reviewed-by: Sage Weil <sage@inktank.com>
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
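A minimal sketch of the idea; i_truncate_mutex is the field this series introduces, and the surrounding code is simplified, not the exact diff:

	/* serialize page-cache truncation on a dedicated mutex instead of
	 * i_mutex, so cap-revoke work never waits on a writer that holds
	 * i_mutex while sleeping for caps */
	mutex_lock(&ci->i_truncate_mutex);
	truncate_inode_pages(inode->i_mapping, ci->i_truncate_size);
	mutex_unlock(&ci->i_truncate_mutex);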
-
By Milosz Tanski

The invalidatepage code bails if it encounters a non-zero page offset. The current logic that does this is non-obvious, with multiple if statements. This should be logically and functionally equivalent.

Signed-off-by: Milosz Tanski <milosz@adfin.com>
Reviewed-by: Sage Weil <sage@inktank.com>
-
- 10 Aug 2013, 12 commits
-
-
By Milosz Tanski

The early bug checks are moot because the VMA layer ensures those things:
 1. It will not call invalidatepage unless PagePrivate (or PagePrivate2) is set.
 2. It will not call invalidatepage without taking a PageLock first.
 3. It guarantees that the inode page is mapped.

Signed-off-by: Milosz Tanski <milosz@adfin.com>
Reviewed-by: Sage Weil <sage@inktank.com>
-
By Sage Weil

All of the early exit paths need to drop the mutex; it is only the normal path through the function that does not. Skip the unlock in that case with a goto out_unlocked.

Signed-off-by: Sage Weil <sage@inktank.com>
Reviewed-by: Jianpeng Ma <majianpeng@gmail.com>
-
By majianpeng

Only for ceph_sync_write can the OSD return -EOLDSNAPC, so move the related code to after the call to ceph_sync_write.

Signed-off-by: Jianpeng Ma <majianpeng@gmail.com>
Reviewed-by: Sage Weil <sage@inktank.com>
-
By Yan, Zheng

remove_session_caps() uses iterate_session_caps() to remove caps, but iterate_session_caps() skips inodes that are being deleted. So session->s_nr_caps can be non-zero after iterate_session_caps() returns. We can fix the issue by waiting until the deletions are complete. __wait_on_freeing_inode() is designed for the job, but it is not exported, so we use the inode lookup function to access it.

Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
-
By majianpeng

ceph_calc_ceph_pg() may fail, so add a check for its return value.

Signed-off-by: Jianpeng Ma <majianpeng@gmail.com>
Reviewed-by: Sage Weil <sage@inktank.com>
Signed-off-by: Sage Weil <sage@inktank.com>
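The shape of such a fix, as a sketch only; the argument names here are illustrative and not taken from the actual call site:

	r = ceph_calc_ceph_pg(&pgid, object_name, osdc->osdmap, pool_id);
	if (r < 0)
		return r;   /* don't go on to use a bogus pgid */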
-
By majianpeng

Sending reads and writes through the sync read/write paths bypasses the page cache, which is not expected and generally not a good idea. Removing the write check is safe, as there is a conditional vfs_fsync_range() later in ceph_aio_write that already checks for the same flag (via IS_SYNC(inode)).

Signed-off-by: Jianpeng Ma <majianpeng@gmail.com>
Reviewed-by: Sage Weil <sage@inktank.com>
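A sketch of the conditional flush the message refers to; this follows the generic 3.11-era write path, and the pos/written/err names are assumptions rather than the exact ceph code:

	/* after a buffered write, honor O_SYNC / IS_SYNC(inode) by
	 * flushing just the written range, instead of routing the whole
	 * write through the sync path and bypassing the page cache */
	if ((file->f_flags & O_SYNC) || IS_SYNC(file->f_mapping->host)) {
		err = vfs_fsync_range(file, pos, pos + written - 1, 1);
		if (err < 0)
			written = err;
	}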
-
By Dan Carpenter

We pass in a u64 value for "len" and then immediately truncate away the upper 32 bits.

Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Reviewed-by: Sage Weil <sage@inktank.com>
Reviewed-by: Alex Elder <alex.elder@linaro.org>
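The bug pattern in isolation, with illustrative names (not the patched code):

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		uint64_t len = (5ULL << 32) | 100;  /* wider than 32 bits */
		uint32_t bad = len;                 /* silently becomes 100 */
		uint64_t good = len;                /* keeps all 64 bits */

		printf("%u vs %llu\n", bad, (unsigned long long)good);
		return 0;
	}

Compilers typically do not warn about this implicit narrowing, so such truncations are easy to miss in review.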
-
By Yan, Zheng

The MDS uses a caps message to notify clients about a deleted inode. When receiving such a message, invalidate any alias of the inode. This makes the kernel release the inode ASAP.

Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
Reviewed-by: Sage Weil <sage@inktank.com>
-
By Yan, Zheng

To write data, the writer first acquires the i_mutex, then tries getting the caps. The writer may sleep while holding the i_mutex. If the MDS revokes the Fb cap in this case, vmtruncate work can't do its job because the i_mutex is locked. We should wake up the writer and let it truncate the pages.

Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
Reviewed-by: Sage Weil <sage@inktank.com>
-
By Yan, Zheng

To handle a "link" request, the MDS needs to xlock the inode's linklock, which requires revoking any CAP_LINK_SHARED.

Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
Reviewed-by: Sage Weil <sage@inktank.com>
-
By Nathaniel Yazdani

When register_session() is given an out-of-range argument for mds, ceph_mdsmap_get_addr() will return a null pointer, which would be given to ceph_con_open() and be dereferenced, causing a kernel oops. This fixes bug #4685 in the Ceph bug tracker <http://tracker.ceph.com/issues/4685>.

Signed-off-by: Nathaniel Yazdani <n1ght.4nd.d4y@gmail.com>
Reviewed-by: Sage Weil <sage@inktank.com>
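The natural shape of the fix is a bounds check before the map lookup; a minimal sketch, where the field names follow mdsmap conventions but are an assumption here:

	/* reject an mds rank outside the map before it ever reaches
	 * ceph_mdsmap_get_addr() / ceph_con_open() */
	if (mds >= mdsc->mdsmap->m_max_mds)
		return ERR_PTR(-EINVAL);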
-
By majianpeng

CC: stable@vger.kernel.org
Signed-off-by: Jianpeng Ma <majianpeng@gmail.com>
Reviewed-by: Sage Weil <sage@inktank.com>
-
- 05 Jul 2013, 1 commit
-
-
By Al Viro

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
- 04 Jul 2013, 7 commits
-
-
By Yan, Zheng

If we receive new caps from the auth MDS and the non-auth MDS is revoking the newly issued caps, we should release the caps from the non-auth MDS. The scenario is the filelock's state changing from SYNC to LOCK: the non-auth MDS revokes the Fc cap while the client gets the Fc cap from the auth MDS at the same time.

Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
Reviewed-by: Sage Weil <sage@inktank.com>
-
By Yan, Zheng

If caps are being revoked by the auth MDS, don't consider them as issued even if they are still issued by a non-auth MDS. The non-auth MDS should also be revoking/exporting these caps; the client just hasn't received the cap revoke/export message yet.

The race I encountered is: when caps are being exported to a new MDS, the client receives the cap import message and the cap revoke message from the new MDS, then receives the cap export message from the old MDS. When the client receives the cap revoke message from the new MDS, the revoking caps are still issued by the old MDS, so the client does nothing. Later when the cap export message is received, the client removes the caps issued by the old MDS. (Another way to fix the race is calling ceph_check_caps() in handle_cap_export())

Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
Reviewed-by: Sage Weil <sage@inktank.com>
-
By Yan, Zheng

The locking order for pending vmtruncate is wrong; it can lead to the following race:

 write                      wmtruncate work
 ------------------------   ----------------------
 lock i_mutex
 check i_truncate_pending   check i_truncate_pending
 truncate_inode_pages()     lock i_mutex (blocked)
 copy data to page cache
 unlock i_mutex
                            truncate_inode_pages()

The fix is to take the i_mutex before calling __ceph_do_pending_vmtruncate().

Fixes: http://tracker.ceph.com/issues/5453
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
Reviewed-by: Sage Weil <sage@inktank.com>
-
By Sasha Levin

When mounting ceph with a dev name that starts with a slash, ceph would attempt to access the character before that slash. Since we don't actually own that byte of memory, we would trigger an invalid access:

[   43.499934] BUG: unable to handle kernel paging request at ffff880fa3a97fff
[   43.500984] IP: [<ffffffff818f3884>] parse_mount_options+0x1a4/0x300
[   43.501491] PGD 743b067 PUD 10283c4067 PMD 10282a6067 PTE 8000000fa3a97060
[   43.502301] Oops: 0000 [#1] PREEMPT SMP DEBUG_PAGEALLOC
[   43.503006] Dumping ftrace buffer:
[   43.503596]    (ftrace buffer empty)
[   43.504046] CPU: 0 PID: 10879 Comm: mount Tainted: G W 3.10.0-sasha #1129
[   43.504851] task: ffff880fa625b000 ti: ffff880fa3412000 task.ti: ffff880fa3412000
[   43.505608] RIP: 0010:[<ffffffff818f3884>] [<ffffffff818f3884>] parse_mount_options$
[   43.506552] RSP: 0018:ffff880fa3413d08 EFLAGS: 00010286
[   43.507133] RAX: ffff880fa3a98000 RBX: ffff880fa3a98000 RCX: 0000000000000000
[   43.507893] RDX: ffff880fa3a98001 RSI: 000000000000002f RDI: ffff880fa3a98000
[   43.508610] RBP: ffff880fa3413d58 R08: 0000000000001f99 R09: ffff880fa3fe64c0
[   43.509426] R10: ffff880fa3413d98 R11: ffff880fa38710d8 R12: ffff880fa3413da0
[   43.509792] R13: ffff880fa3a97fff R14: 0000000000000000 R15: ffff880fa3413d90
[   43.509792] FS: 00007fa9c48757e0(0000) GS:ffff880fd2600000(0000) knlGS:000000000000$
[   43.509792] CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
[   43.509792] CR2: ffff880fa3a97fff CR3: 0000000fa3bb9000 CR4: 00000000000006b0
[   43.509792] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[   43.509792] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[   43.509792] Stack:
[   43.509792]  0000e5180000000e ffffffff85ca1900 ffff880fa38710d8 ffff880fa3413d98
[   43.509792]  0000000000000120 0000000000000000 ffff880fa3a98000 0000000000000000
[   43.509792]  ffffffff85cf32a0 0000000000000000 ffff880fa3413dc8 ffffffff818f3c72
[   43.509792] Call Trace:
[   43.509792]  [<ffffffff818f3c72>] ceph_mount+0xa2/0x390
[   43.509792]  [<ffffffff81226314>] ? pcpu_alloc+0x334/0x3c0
[   43.509792]  [<ffffffff81282f8d>] mount_fs+0x8d/0x1a0
[   43.509792]  [<ffffffff812263d0>] ? __alloc_percpu+0x10/0x20
[   43.509792]  [<ffffffff8129f799>] vfs_kern_mount+0x79/0x100
[   43.509792]  [<ffffffff812a224d>] do_new_mount+0xcd/0x1c0
[   43.509792]  [<ffffffff812a2e8d>] do_mount+0x15d/0x210
[   43.509792]  [<ffffffff81220e55>] ? strndup_user+0x45/0x60
[   43.509792]  [<ffffffff812a2fdd>] SyS_mount+0x9d/0xe0
[   43.509792]  [<ffffffff83fd816c>] tracesys+0xdd/0xe2
[   43.509792] Code: 4c 8b 5d c0 74 0a 48 8d 50 01 49 89 14 24 eb 17 31 c0 48 83 c9 ff $
[   43.509792] RIP [<ffffffff818f3884>] parse_mount_options+0x1a4/0x300
[   43.509792] RSP <ffff880fa3413d08>
[   43.509792] CR2: ffff880fa3a97fff
[   43.509792] ---[ end trace 22469cd81e93af51 ]---

Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
Reviewed-by: Sage Weil <sage@inktank.com>
-
By majianpeng

Drop the ignored return value. Fix the allocation failure case to not leak.

Signed-off-by: Jianpeng Ma <majianpeng@gmail.com>
Reviewed-by: Sage Weil <sage@inktank.com>
-
By majianpeng

Signed-off-by: Jianpeng Ma <majianpeng@gmail.com>
Reviewed-by: Sage Weil <sage@inktank.com>
-
By Jianpeng Ma

Both vfs_write and io_submit call file_start/end_write. The difference between file_start/end_write and sb_start/end_write is that file_* only handles regular files; but ceph_aio_write is only used for regular files anyway, so this is fine. See the sketch below.

Signed-off-by: Jianpeng Ma <majianpeng@gmail.com>
Acked-by: Yan, Zheng <zheng.z.yan@intel.com>
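For reference, a sketch of what the file_* variant does, simplified from include/linux/fs.h of that era (the real helper goes through __sb_start_write with SB_FREEZE_WRITE):

	static inline void file_start_write(struct file *file)
	{
		/* freeze protection is only needed for regular files;
		 * for anything else this is a no-op */
		if (!S_ISREG(file_inode(file)->i_mode))
			return;
		sb_start_write(file_inode(file)->i_sb);
	}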
-