1. 18 Jan 2010, 7 commits
    • Btrfs: fix possible panic on unmount · 11dfe35a
      Committed by Josef Bacik
      We can race with the unmount of an fs and the stopping of a kthread, where we
      free the block group before we're done using it.  This happens because we do
      not hold a reference on the block group while it is caching, since the
      allocator drops its reference once it exits or moves on to the next block
      group.  This patch fixes the problem by taking a reference to the block group
      before we start caching and dropping it when we're done, to make sure all
      accesses to the block group are safe (a user-space sketch of this pattern
      follows this entry).  Thanks,
      Signed-off-by: Josef Bacik <josef@redhat.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      11dfe35a
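
      The pattern described above - take a reference before handing an object to an
      asynchronous worker and drop it when the worker is done - can be sketched in
      user space.  This is not the btrfs code: the struct and helper names below are
      made up, C11 atomics stand in for the kernel refcount, and a pthread stands in
      for the caching kthread.

      #include <pthread.h>
      #include <stdatomic.h>
      #include <stdlib.h>

      /* Hypothetical stand-in for a block group with a reference count. */
      struct block_group {
      	atomic_int refs;
      	int cached;
      };

      static struct block_group *bg_get(struct block_group *bg)
      {
      	atomic_fetch_add(&bg->refs, 1);
      	return bg;
      }

      static void bg_put(struct block_group *bg)
      {
      	/* Free the object only when the last reference is dropped. */
      	if (atomic_fetch_sub(&bg->refs, 1) == 1)
      		free(bg);
      }

      /* Worker standing in for the caching kthread. */
      static void *cache_worker(void *arg)
      {
      	struct block_group *bg = arg;

      	bg->cached = 1;		/* ... populate the free-space cache ... */
      	bg_put(bg);		/* drop the reference taken before starting */
      	return NULL;
      }

      int main(void)
      {
      	struct block_group *bg = calloc(1, sizeof(*bg));
      	pthread_t thr;

      	atomic_init(&bg->refs, 1);	/* the caller's reference */

      	/* Take a reference *before* the worker starts, so a later bg_put()
      	 * by the caller (think unmount) cannot free the object under it. */
      	pthread_create(&thr, NULL, cache_worker, bg_get(bg));

      	bg_put(bg);			/* the caller is done with it */
      	pthread_join(thr, NULL);
      	return 0;
      }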
    • Btrfs: deal with NULL acl sent to btrfs_set_acl · a9cc71a6
      Committed by Chris Mason
      It is legal for btrfs_set_acl to be sent a NULL acl.  This makes sure we don't
      dereference it (a minimal sketch of the guard follows this entry).  A similar
      patch was sent by Johannes Hirte <johannes.hirte@fem.tu-ilmenau.de>.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      a9cc71a6
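
      As a generic illustration of the guard (not the btrfs code; the names are
      hypothetical), a setter that may legitimately receive NULL should treat NULL
      as "clear the value" instead of dereferencing it:

      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>

      /* Hypothetical attribute store: a NULL acl means "remove the attribute". */
      static char *stored_acl;

      static int set_acl(const char *acl)
      {
      	free(stored_acl);
      	stored_acl = NULL;

      	if (acl == NULL)		/* legal: clear rather than dereference */
      		return 0;

      	stored_acl = strdup(acl);
      	return stored_acl ? 0 : -1;
      }

      int main(void)
      {
      	set_acl("user::rw-");
      	printf("acl = %s\n", stored_acl ? stored_acl : "(none)");
      	set_acl(NULL);			/* must not crash */
      	printf("acl = %s\n", stored_acl ? stored_acl : "(none)");
      	return 0;
      }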
    • Btrfs: fix regression in orphan cleanup · 6c090a11
      Committed by Josef Bacik
      Currently orphan cleanup only ever gets triggered if we cross subvolumes during
      a lookup, which means that if we just mount a plain-jane fs that has orphans in
      it, they will never get cleaned up.  This results in panics like these
      
      http://www.kerneloops.org/oops.php?number=1109085
      
      where adding an orphan entry results in -EEXIST being returned and we panic.  In
      order to fix this, we check on lookup whether our root has had the orphan
      cleanup done, and if not go ahead and do it (a tiny sketch of this run-once
      check follows this entry).  This is easily reproducible by running this testcase
      
      #include <sys/types.h>
      #include <sys/stat.h>
      #include <fcntl.h>
      #include <string.h>
      #include <unistd.h>
      #include <stdio.h>
      
      int main(int argc, char **argv)
      {
      	char data[4096];
      	char newdata[4096];
      	int fd1, fd2;
      
      	memset(data, 'a', 4096);
      	memset(newdata, 'b', 4096);
      
      	while (1) {
      		int i;
      
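      		/* Write 2MB of 'a' bytes to file1 and fsync it to disk. */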
      		fd1 = creat("file1", 0666);
      		if (fd1 < 0)
      			break;
      
      		for (i = 0; i < 512; i++)
      			write(fd1, data, 4096);
      
      		fsync(fd1);
      		close(fd1);
      
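      		/* Write 2MB of 'b' bytes to file2, but do not fsync it. */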
      		fd2 = creat("file2", 0666);
      		if (fd2 < 0)
      			break;
      
      		ftruncate(fd2, 4096 * 512);
      
      		for (i = 0; i < 512; i++)
      			write(fd2, newdata, 4096);
      		close(fd2);
      
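      		/* Rename file2 over file1, then delete the result. */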
      		i = rename("file2", "file1");
      		unlink("file1");
      	}
      
      	return 0;
      }
      
      and then pulling the power on the box, and then trying to run that test again
      when the box comes back up.  I've tested this locally and it fixes the problem.
      Thanks to Tomas Carnecky for helping me track this down initially.
      Signed-off-by: Josef Bacik <josef@redhat.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      6c090a11
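
      The "run the cleanup the first time we look something up" logic can be
      sketched as a lazily triggered, run-once job guarded by a per-root flag.
      The names are hypothetical and this is not the btrfs implementation:

      #include <pthread.h>
      #include <stdio.h>

      /* Hypothetical per-root state: has orphan cleanup run yet? */
      struct root {
      	pthread_mutex_t lock;
      	int orphan_cleanup_done;
      };

      static void orphan_cleanup(struct root *r)
      {
      	(void)r;
      	printf("cleaning up orphan items\n");	/* ... delete orphan items ... */
      }

      /* Called on every lookup; runs the cleanup exactly once per root. */
      static void lookup(struct root *r, const char *name)
      {
      	pthread_mutex_lock(&r->lock);
      	if (!r->orphan_cleanup_done) {
      		orphan_cleanup(r);
      		r->orphan_cleanup_done = 1;
      	}
      	pthread_mutex_unlock(&r->lock);

      	printf("looking up %s\n", name);
      }

      int main(void)
      {
      	struct root r = { PTHREAD_MUTEX_INITIALIZER, 0 };

      	lookup(&r, "file1");	/* triggers the cleanup */
      	lookup(&r, "file2");	/* cleanup already done, skipped */
      	return 0;
      }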
    • Btrfs: Fix race in btrfs_mark_extent_written · 6c7d54ac
      Committed by Yan, Zheng
      Fix a bug reported by Johannes Hirte.  The cause of the bug is that
      btrfs_del_items is called after btrfs_duplicate_item, and btrfs_del_items can
      trigger a tree balance.  The fix is to check for that case and call
      btrfs_search_slot when needed (a small sketch of the re-search idea follows
      this entry).
      Signed-off-by: Yan Zheng <zheng.yan@oracle.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      6c7d54ac
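
      The fix amounts to "if a structural change may have invalidated the cached
      search position, search again".  A loose user-space analogue (hypothetical
      names, not the btrfs tree code) keeps a generation counter that is bumped by
      any reshuffle and redoes the search when the cursor's generation no longer
      matches:

      #include <stdio.h>

      /* Hypothetical index whose generation is bumped by any reshuffle. */
      struct index {
      	int items[16];
      	int nr;
      	unsigned long gen;
      };

      struct cursor {
      	int slot;
      	unsigned long gen;	/* generation the slot was found under */
      };

      static void search(const struct index *idx, int key, struct cursor *cur)
      {
      	int i;

      	cur->slot = -1;
      	for (i = 0; i < idx->nr; i++)
      		if (idx->items[i] == key)
      			cur->slot = i;
      	cur->gen = idx->gen;
      }

      /* Deleting an item shifts later slots down: a "balance". */
      static void delete_slot(struct index *idx, int slot)
      {
      	int i;

      	for (i = slot; i < idx->nr - 1; i++)
      		idx->items[i] = idx->items[i + 1];
      	idx->nr--;
      	idx->gen++;		/* invalidates previously found slots */
      }

      static int lookup_item(struct index *idx, int key, struct cursor *cur)
      {
      	/* Re-search if a reshuffle happened since the cursor was filled in. */
      	if (cur->gen != idx->gen)
      		search(idx, key, cur);
      	return cur->slot >= 0 ? idx->items[cur->slot] : -1;
      }

      int main(void)
      {
      	struct index idx = { { 10, 20, 30, 40 }, 4, 0 };
      	struct cursor cur;

      	search(&idx, 30, &cur);
      	delete_slot(&idx, 0);	/* shifts everything down, bumps gen */
      	printf("lookup of 30 finds %d\n", lookup_item(&idx, 30, &cur));
      	return 0;
      }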
    • Btrfs, fix memory leaks in error paths · 2423fdfb
      Committed by Jiri Slaby
      Stanse found 2 memory leaks in relocate_block_group and __btrfs_map_block:
      cluster and multi are not freed or assigned on all paths.  Fix that (a sketch
      of the single-exit cleanup pattern follows this entry).
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      Cc: linux-btrfs@vger.kernel.org
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      2423fdfb
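
      The usual C fix for leaks like these is to route every error path through a
      single cleanup label that frees whatever has been allocated so far.  A
      self-contained sketch of that pattern (hypothetical names, not the btrfs code):

      #include <errno.h>
      #include <stdio.h>
      #include <stdlib.h>

      static int do_work(int fail_late)
      {
      	int ret = 0;
      	char *cluster = NULL;
      	char *multi = NULL;

      	cluster = malloc(64);
      	if (!cluster)
      		return -ENOMEM;		/* nothing allocated yet */

      	multi = malloc(64);
      	if (!multi) {
      		ret = -ENOMEM;
      		goto out;		/* cluster is still freed below */
      	}

      	if (fail_late) {
      		ret = -EIO;
      		goto out;		/* both buffers are freed below */
      	}

      	/* ... use cluster and multi ... */
      out:
      	free(multi);			/* free(NULL) is a no-op */
      	free(cluster);
      	return ret;
      }

      int main(void)
      {
      	printf("do_work(0) = %d\n", do_work(0));
      	printf("do_work(1) = %d\n", do_work(1));
      	return 0;
      }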
    • Btrfs: align offsets for btrfs_ordered_update_i_size · a038fab0
      Committed by Yan, Zheng
      Some callers of btrfs_ordered_update_i_size can now pass in a NULL for the
      ordered extent to update against.  This makes sure we properly align the
      offset they pass in when deciding how much to bump the on-disk i_size (a
      sketch of the round-up arithmetic follows this entry).
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      a038fab0
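
      "Aligning the offset" here means rounding it up to a block boundary before
      using it to bump the on-disk i_size.  A minimal sketch of the round-up
      arithmetic, assuming a power-of-two block size (not the btrfs code):

      #include <stdio.h>

      /* Round x up to the next multiple of a, where a is a power of two. */
      #define ALIGN_UP(x, a)	(((x) + (a) - 1) & ~((unsigned long long)(a) - 1))

      int main(void)
      {
      	unsigned long long blocksize = 4096;
      	unsigned long long offsets[] = { 0, 1, 4095, 4096, 12345 };
      	int i;

      	for (i = 0; i < 5; i++)
      		printf("%llu -> %llu\n", offsets[i],
      		       ALIGN_UP(offsets[i], blocksize));
      	return 0;
      }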
    • btrfs: fix missing last-entry in readdir(3) · 406266ab
      Committed by Jan Engelhardt
      When one does a 32-bit readdir(3), the last entry of a directory is
      missing. This is however not due to passing a large value to filldir,
      but it seems to have to do with glibc doing telldir or something
      quirky. In any case, this patch fixes it in practice.
      Signed-off-by: Jan Engelhardt <jengelh@medozas.de>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      406266ab
  2. 18 Dec 2009, 15 commits
  3. 16 Dec 2009, 3 commits
  4. 01 Dec 2009, 3 commits
    • CacheFiles: Update IMA counters when using dentry_open · 3350b2ac
      Committed by Marc Dionne
      When IMA is active, using dentry_open without updating the
      IMA counters will result in free/open imbalance errors when
      fput is eventually called.
      Signed-off-by: Marc Dionne <marc.c.dionne@gmail.com>
      Signed-off-by: David Howells <dhowells@redhat.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      3350b2ac
    • 9p: fix build breakage introduced by FS-Cache · 6f054164
      Committed by David Howells
      While building 2.6.32-rc8-git2 for Fedora I noticed the following thinko
      in commit 201a1542 ("FS-Cache: Handle
      pages pending storage that get evicted under OOM conditions"):
      
        fs/9p/cache.c: In function '__v9fs_fscache_release_page':
        fs/9p/cache.c:346: error: 'vnode' undeclared (first use in this function)
        fs/9p/cache.c:346: error: (Each undeclared identifier is reported only once
        fs/9p/cache.c:346: error: for each function it appears in.)
        make[2]: *** [fs/9p/cache.o] Error 1
      
      Fix the 9P filesystem to correctly construct the argument to
      fscache_maybe_release_page().
      Signed-off-by: Kyle McMartin <kyle@redhat.com>
      Signed-off-by: Xiaotian Feng <dfeng@redhat.com> [from identical patch]
      Signed-off-by: Stefan Lippers-Hollmann <s.l-h@gmx.de> [from identical patch]
      Signed-off-by: David Howells <dhowells@redhat.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      6f054164
    • jffs2: Fix memory corruption in jffs2_read_inode_range() · 199bc9ff
      Committed by David Woodhouse
      In the 2.6.23 kernel, commit a32ea1e1
      ("Fix read/truncate race") fixed a race in the generic code, and as a
      side effect, do_generic_file_read() can now ask us to readpage() past
      the i_size.  This seems to be correctly handled by the block routines
      (e.g. block_read_full_page() fills the page with zeroes in case
      somebody is trying to read past the inode's last block).
      
      JFFS2 doesn't handle this; it assumes that it won't be asked to read
      pages which don't exist -- and thus that there will be at least _one_
      valid 'frag' on the page it's being asked to read. It will fill any
      holes with the following memset:
      
        memset(buf, 0, min(end, frag->ofs + frag->size) - offset);
      
      When the 'closest smaller match' returned by jffs2_lookup_node_frag() is
      actually on a previous page and ends before 'offset', that results in:
      
        memset(buf, 0, <huge unsigned negative>);
      
      Hopefully, in most cases the corruption is fatal and quickly causes
      random oopses, like this:
      
        root@10.0.0.4:~/ltp-fs-20090531# ./testcases/kernel/fs/ftest/ftest01
        Unable to handle kernel paging request for data at address 0x00000008
        Faulting instruction address: 0xc01cd980
        Oops: Kernel access of bad area, sig: 11 [#1]
        [...]
        NIP [c01cd980] rb_insert_color+0x38/0x184
        LR [c0043978] enqueue_hrtimer+0x88/0xc4
        Call Trace:
        [c6c63b60] [c004f9a8] tick_sched_timer+0xa0/0xe4 (unreliable)
        [c6c63b80] [c0043978] enqueue_hrtimer+0x88/0xc4
        [c6c63b90] [c0043a48] __run_hrtimer+0x94/0xbc
        [c6c63bb0] [c0044628] hrtimer_interrupt+0x140/0x2b8
        [c6c63c10] [c000f8e8] timer_interrupt+0x13c/0x254
        [c6c63c30] [c001352c] ret_from_except+0x0/0x14
        --- Exception: 901 at memset+0x38/0x5c
            LR = jffs2_read_inode_range+0x144/0x17c
        [c6c63cf0] [00000000] (null) (unreliable)
      
      This patch fixes the issue, and also fixes all the LTP tests on NAND/UBI with
      the JFFS2 filesystem that had been failing since 2.6.23 (it seems the bug
      above also broke truncation).  A small user-space demonstration of the
      unsigned underflow follows this entry.
      Reported-By: Anton Vorontsov <avorontsov@ru.mvista.com>
      Tested-By: Anton Vorontsov <avorontsov@ru.mvista.com>
      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
      Cc: stable@kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      199bc9ff
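
      The heart of the bug is a length computed in an unsigned type: when
      frag->ofs + frag->size is smaller than offset, the subtraction wraps around
      to a huge value and memset() runs far past the buffer.  A small, deliberately
      safe demonstration of the arithmetic (it only prints the length instead of
      passing it to memset):

      #include <stdio.h>
      #include <stdint.h>

      static uint32_t min_u32(uint32_t a, uint32_t b)
      {
      	return a < b ? a : b;
      }

      int main(void)
      {
      	/* The frag found by the lookup ends before the page being read. */
      	uint32_t frag_ofs = 0, frag_size = 4096;	/* frag covers [0, 4096) */
      	uint32_t offset = 8192;				/* page starts at 8192 */
      	uint32_t end = offset + 4096;

      	/* The jffs2 expression: min(end, frag->ofs + frag->size) - offset */
      	size_t len = min_u32(end, frag_ofs + frag_size) - offset;

      	/* 4096 - 8192 wraps to roughly 4 billion instead of a small number. */
      	printf("memset length would be %zu\n", len);
      	return 0;
      }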
  5. 27 Nov 2009, 1 commit
    • fuse: reject O_DIRECT flag also in fuse_create · 1b732396
      Committed by Csaba Henk
      The comment in fuse_open about O_DIRECT:
      
        "VFS checks this, but only _after_ ->open()"
      
      also holds for fuse_create; however, the same kind of check was missing there.
      
      As a result of this bug, open(newfile, O_RDWR|O_CREAT|O_DIRECT) fails, but a
      stub newfile will remain if the fuse server handled the implied FUSE_CREATE
      request appropriately (a short user-space reproducer follows this entry).
      
      Other impact: in the above situation ima_file_free() will complain about an
      open/free imbalance if CONFIG_IMA is set.
      Signed-off-by: Csaba Henk <csaba@gluster.com>
      Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
      Cc: Harshavardhana <harsha@gluster.com>
      Cc: stable@kernel.org
      1b732396
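
      The user-visible symptom can be checked with a few lines of user-space code:
      on an affected fuse filesystem the open() below fails, yet a zero-length stub
      file is left behind.  The path is a placeholder; run it inside a fuse mount
      that rejects O_DIRECT.

      #define _GNU_SOURCE		/* for O_DIRECT */
      #include <errno.h>
      #include <fcntl.h>
      #include <stdio.h>
      #include <string.h>
      #include <sys/stat.h>
      #include <unistd.h>

      int main(int argc, char **argv)
      {
      	const char *path = argc > 1 ? argv[1] : "newfile";	/* placeholder */
      	struct stat st;
      	int fd;

      	fd = open(path, O_RDWR | O_CREAT | O_DIRECT, 0666);
      	if (fd < 0)
      		printf("open failed: %s\n", strerror(errno));
      	else
      		close(fd);

      	/* With the bug, the open fails but a stub file still exists. */
      	if (stat(path, &st) == 0)
      		printf("'%s' exists, size %lld\n", path, (long long)st.st_size);
      	return 0;
      }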
  6. 25 Nov 2009, 3 commits
    • [CIFS] Fix sparse warning · 2f81e752
      Committed by Steve French
      Also update CHANGES file
      Signed-off-by: Steve French <sfrench@us.ibm.com>
      2f81e752
    • [CIFS] Duplicate data on appending to some Samba servers · cea62343
      Committed by Steve French
      SMB writes are sent with a starting offset and length.  When the server
      supports the newer SMB trans2 posix open (rather than using the SMB
      NTCreateX), a file can be opened with the SMB_O_APPEND flag, and in that
      case the Samba server assumes that the offset sent in SMBWriteX is unneeded,
      since the write should go to the end of the file - which can cause
      problems if the write was cached (since the beginning part of a
      page could be written twice by the client mm).  Jeff suggested that
      masking the flag on posix open on the client is easiest for the time
      being.  Note that recent Samba servers also had an unrelated problem with
      SMB NTCreateX and append (see Samba bugzilla bug number 6898), which
      should not affect current Linux clients (unless cifs Unix Extensions
      are disabled).
      
      The cifs client did not send the O_APPEND flag on posix open
      before 2.6.29 so the fix is unneeded on early kernels.
      
      In the future, for the non-cached case (O_DIRECT and forcedirectio mounts),
      it would be possible and useful to send O_APPEND on posix open (for the
      Windows case: FILE_APPEND_DATA but not FILE_WRITE_DATA on SMB NTCreateX).
      But for cached writes, although the vfs sets the offset to end of file, it
      may fragment a write across pages - so we can't send O_APPEND on open
      (it could result in sending part of a page twice).  A short demonstration
      of O_APPEND ignoring the file offset follows this entry.
      
      CC: Stable <stable@kernel.org>
      Reviewed-by: Shirish Pargaonkar <shirishp@us.ibm.com>
      Signed-off-by: Jeff Layton <jlayton@redhat.com>
      Signed-off-by: Steve French <sfrench@us.ibm.com>
      cea62343
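
      The server-side assumption at the heart of this - that with an append-mode
      open the supplied offset is ignored and data always lands at end of file -
      matches ordinary POSIX O_APPEND semantics, which the snippet below
      demonstrates locally on any filesystem (the scratch file name is arbitrary):

      #include <fcntl.h>
      #include <stdio.h>
      #include <unistd.h>

      int main(void)
      {
      	const char *path = "append-demo.txt";	/* scratch file */
      	char buf[16] = { 0 };
      	int fd, n;

      	fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0666);
      	write(fd, "AAAA", 4);
      	close(fd);

      	fd = open(path, O_WRONLY | O_APPEND);
      	lseek(fd, 0, SEEK_SET);		/* explicitly seek to the start... */
      	write(fd, "BBBB", 4);		/* ...but O_APPEND writes at the end */
      	close(fd);

      	/* The file is now "AAAABBBB": the offset was ignored, which is why
      	 * resending the cached start of a page would duplicate data. */
      	fd = open(path, O_RDONLY);
      	n = read(fd, buf, sizeof(buf) - 1);
      	close(fd);
      	printf("file contents (%d bytes): %s\n", n, buf);

      	unlink(path);
      	return 0;
      }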
    • [CIFS] fix oops in cifs_lookup during net boot · 8e6c0332
      Committed by Steve French
      Fixes bugzilla.kernel.org bug number 14641
      
      Lookup called during network boot (network root filesystem
      for a diskless workstation) has a case where nd is NULL in
      lookup.  This patch fixes that in cifs_lookup.
      
      (Shirish noted that 2.6.30 and 2.6.31 stable need the same check)
      Signed-off-by: Shirish Pargaonkar <shirishp@us.ibm.com>
      Acked-by: Jeff Layton <jlayton@redhat.com>
      Tested-by: Vladimir Stavrinov <vs@inist.ru>
      CC: Stable <stable@kernel.org>
      Signed-off-by: Steve French <sfrench@us.ibm.com>
      8e6c0332
  7. 21 Nov 2009, 3 commits
    • FS-Cache: Provide nop fscache_stat_d() if CONFIG_FSCACHE_STATS=n · 4fa9f4ed
      Committed by David Howells
      Provide a no-op fscache_stat_d() macro if CONFIG_FSCACHE_STATS=n, lest errors
      like the following occur (a user-space sketch of this conditional-stub pattern
      follows this entry):
      
      	fs/fscache/cache.c: In function 'fscache_withdraw_cache':
      	fs/fscache/cache.c:386: error: implicit declaration of function 'fscache_stat_d'
      	fs/fscache/cache.c:386: error: 'fscache_n_cop_sync_cache' undeclared (first use in this function)
      	fs/fscache/cache.c:386: error: (Each undeclared identifier is reported only once
      	fs/fscache/cache.c:386: error: for each function it appears in.)
      	fs/fscache/cache.c:392: error: 'fscache_n_cop_dissociate_pages' undeclared (first use in this function)
      Signed-off-by: David Howells <dhowells@redhat.com>
      4fa9f4ed
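
      This is the usual idiom for optional statistics: with the option enabled the
      helper is real, and with it disabled the helper must still exist as a
      do-nothing macro whose argument is discarded, so callers compile either way.
      A user-space sketch of the same idiom (the names are illustrative, not the
      FS-Cache code; define WITH_STATS to try the other build):

      #include <stdio.h>

      /* #define WITH_STATS 1 */	/* stands in for CONFIG_FSCACHE_STATS=y */

      #ifdef WITH_STATS
      static unsigned int n_cop_sync_cache;
      #define stat_inc(c)	((c)++)
      #define stat_dec(c)	((c)--)
      #else
      /* No-op stubs: the argument is thrown away by the preprocessor, so the
       * callers build even though the counters themselves do not exist. */
      #define stat_inc(c)	do {} while (0)
      #define stat_dec(c)	do {} while (0)
      #endif

      static void withdraw_cache(void)
      {
      	stat_inc(n_cop_sync_cache);	/* counted only when stats are built in */
      	/* ... synchronise the cache, dissociate pages ... */
      }

      int main(void)
      {
      	withdraw_cache();
      #ifdef WITH_STATS
      	printf("sync calls counted: %u\n", n_cop_sync_cache);
      #else
      	printf("stats compiled out; callers still build\n");
      #endif
      	return 0;
      }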
    • SLOW_WORK: Fix GFS2 to #include <linux/module.h> before using THIS_MODULE · 1c2ea8a2
      Committed by David Howells
      GFS2 has been altered to pass THIS_MODULE to slow_work_register_user(), but
      hasn't been altered to #include <linux/module.h> to provide it, resulting in
      the following error:
      
      	fs/gfs2/recovery.c:596: error: 'THIS_MODULE' undeclared here (not in a function)
      
      Add the missing #include.
      Signed-off-by: David Howells <dhowells@redhat.com>
      1c2ea8a2
    • SLOW_WORK: Fix CIFS to pass THIS_MODULE to slow_work_register_user() · 0109d7e6
      Committed by David Howells
      As of the patch:
      
      	SLOW_WORK: Wait for outstanding work items belonging to a module to clear
      
      	Wait for outstanding slow work items belonging to a module to clear
      	when unregistering that module as a user of the facility.  This
      	prevents the put_ref code of a work item from being taken away before
      	it returns.
      
      slow_work_register_user() takes a module pointer as an argument.  CIFS must now
      pass THIS_MODULE as that argument, lest the following error be observed:
      
      	fs/cifs/cifsfs.c: In function 'init_cifs':
      	fs/cifs/cifsfs.c:1040: error: too few arguments to function 'slow_work_register_user'
      Signed-off-by: David Howells <dhowells@redhat.com>
      0109d7e6
  8. 20 Nov 2009, 5 commits
    • CacheFiles: Don't log lookup/create failing with ENOBUFS · 14e69647
      Committed by David Howells
      Don't log the CacheFiles lookup/create object routines failing with ENOBUFS,
      as under high memory load or high cache load they can do this quite a lot.
      This error simply means that the requested object cannot be created on disk
      due to lack of space, or due to failure of the backing filesystem to find
      sufficient resources.
      Signed-off-by: David Howells <dhowells@redhat.com>
      14e69647
    • CacheFiles: Catch an overly long wait for an old active object · fee096de
      Committed by David Howells
      Catch an overly long wait for an old, dying active object when we want to
      replace it with a new one.  The probability is that all the slow-work threads
      are hogged, and the delete can't get a look in.
      
      What we do instead is:
      
       (1) if there's nothing in the slow work queue, we sleep until either the dying
           object has finished dying or there is something in the slow work queue
           behind which we can queue our object.
      
       (2) if there is something in the slow work queue, we return ETIMEDOUT to
           fscache_lookup_object(), which then puts us back on the slow work queue,
           presumably behind the deletion that we're blocked by.  We are then
           deferred for a while until we work our way back through the queue -
           without blocking a slow-work thread unnecessarily.
      
      A backtrace similar to the following may appear in the log without this patch
      (a user-space sketch of the timed-wait-then-requeue pattern follows this entry):
      
      	INFO: task kslowd004:5711 blocked for more than 120 seconds.
      	"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
      	kslowd004     D 0000000000000000     0  5711      2 0x00000080
      	 ffff88000340bb80 0000000000000046 ffff88002550d000 0000000000000000
      	 ffff88002550d000 0000000000000007 ffff88000340bfd8 ffff88002550d2a8
      	 000000000000ddf0 00000000000118c0 00000000000118c0 ffff88002550d2a8
      	Call Trace:
      	 [<ffffffff81058e21>] ? trace_hardirqs_on+0xd/0xf
      	 [<ffffffffa011c4d8>] ? cachefiles_wait_bit+0x0/0xd [cachefiles]
      	 [<ffffffffa011c4e1>] cachefiles_wait_bit+0x9/0xd [cachefiles]
      	 [<ffffffff81353153>] __wait_on_bit+0x43/0x76
      	 [<ffffffff8111ae39>] ? ext3_xattr_get+0x1ec/0x270
      	 [<ffffffff813531ef>] out_of_line_wait_on_bit+0x69/0x74
      	 [<ffffffffa011c4d8>] ? cachefiles_wait_bit+0x0/0xd [cachefiles]
      	 [<ffffffff8104c125>] ? wake_bit_function+0x0/0x2e
      	 [<ffffffffa011bc79>] cachefiles_mark_object_active+0x203/0x23b [cachefiles]
      	 [<ffffffffa011c209>] cachefiles_walk_to_object+0x558/0x827 [cachefiles]
      	 [<ffffffffa011a429>] cachefiles_lookup_object+0xac/0x12a [cachefiles]
      	 [<ffffffffa00aa1e9>] fscache_lookup_object+0x1c7/0x214 [fscache]
      	 [<ffffffffa00aafc5>] fscache_object_state_machine+0xa5/0x52d [fscache]
      	 [<ffffffffa00ab4ac>] fscache_object_slow_work_execute+0x5f/0xa0 [fscache]
      	 [<ffffffff81082093>] slow_work_execute+0x18f/0x2d1
      	 [<ffffffff8108239a>] slow_work_thread+0x1c5/0x308
      	 [<ffffffff8104c0f1>] ? autoremove_wake_function+0x0/0x34
      	 [<ffffffff810821d5>] ? slow_work_thread+0x0/0x308
      	 [<ffffffff8104be91>] kthread+0x7a/0x82
      	 [<ffffffff8100beda>] child_rip+0xa/0x20
      	 [<ffffffff8100b87c>] ? restore_args+0x0/0x30
      	 [<ffffffff8104be17>] ? kthread+0x0/0x82
      	 [<ffffffff8100bed0>] ? child_rip+0x0/0x20
      	1 lock held by kslowd004/5711:
      	 #0:  (&sb->s_type->i_mutex_key#7/1){+.+.+.}, at: [<ffffffffa011be64>] cachefiles_walk_to_object+0x1b3/0x827 [cachefiles]
      Signed-off-by: David Howells <dhowells@redhat.com>
      fee096de
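
      A user-space analogue of the strategy above: wait for the dying object with a
      timeout, and if the timeout fires, return ETIMEDOUT so the caller can requeue
      the work item instead of keeping a worker thread blocked.  Names and timings
      are illustrative only, not the CacheFiles code:

      #include <errno.h>
      #include <pthread.h>
      #include <stdio.h>
      #include <time.h>

      static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
      static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
      static int old_object_dead;	/* set by whoever retires the old object */

      /* Wait up to 'secs' seconds for the old object to finish dying. */
      static int wait_for_old_object(int secs)
      {
      	struct timespec deadline;
      	int ret = 0;

      	clock_gettime(CLOCK_REALTIME, &deadline);
      	deadline.tv_sec += secs;

      	pthread_mutex_lock(&lock);
      	while (!old_object_dead && ret == 0)
      		ret = pthread_cond_timedwait(&cond, &lock, &deadline);
      	pthread_mutex_unlock(&lock);

      	/* ETIMEDOUT tells the caller to requeue this work item rather than
      	 * keep a worker thread blocked behind the stuck deletion. */
      	return old_object_dead ? 0 : -ETIMEDOUT;
      }

      int main(void)
      {
      	int ret = wait_for_old_object(1);	/* nothing ever signals us */

      	if (ret == -ETIMEDOUT)
      		printf("timed out: requeue the lookup and retry later\n");
      	else
      		printf("old object gone: proceed with the new one\n");
      	return 0;
      }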
    • CacheFiles: Better showing of debugging information in active object problems · d0e27b78
      Committed by David Howells
      Show more debugging information if cachefiles_mark_object_active() is asked to
      activate an active object.
      
      This may happen, for instance, if the netfs tries to register an object with
      the same key multiple times.
      
      The code is changed to (a) get the appropriate object lock to protect the
      cookie pointer whilst we dereference it, and (b) get and display the cookie key
      if available.
      Signed-off-by: David Howells <dhowells@redhat.com>
      d0e27b78
    • CacheFiles: Mark parent directory locks as I_MUTEX_PARENT to keep lockdep happy · 6511de33
      Committed by David Howells
      Mark parent directory locks as I_MUTEX_PARENT in the callers of
      cachefiles_bury_object() so that lockdep doesn't complain when that invokes
      vfs_unlink():
      
      =============================================
      [ INFO: possible recursive locking detected ]
      2.6.32-rc6-cachefs #47
      ---------------------------------------------
      kslowd002/3089 is trying to acquire lock:
       (&sb->s_type->i_mutex_key#7){+.+.+.}, at: [<ffffffff810bbf72>] vfs_unlink+0x8b/0x128
      
      but task is already holding lock:
       (&sb->s_type->i_mutex_key#7){+.+.+.}, at: [<ffffffffa00e4e61>] cachefiles_walk_to_object+0x1b0/0x831 [cachefiles]
      
      other info that might help us debug this:
      1 lock held by kslowd002/3089:
       #0:  (&sb->s_type->i_mutex_key#7){+.+.+.}, at: [<ffffffffa00e4e61>] cachefiles_walk_to_object+0x1b0/0x831 [cachefiles]
      
      stack backtrace:
      Pid: 3089, comm: kslowd002 Not tainted 2.6.32-rc6-cachefs #47
      Call Trace:
       [<ffffffff8105ad7b>] __lock_acquire+0x1649/0x16e3
       [<ffffffff8118170e>] ? inode_has_perm+0x5f/0x61
       [<ffffffff8105ae6c>] lock_acquire+0x57/0x6d
       [<ffffffff810bbf72>] ? vfs_unlink+0x8b/0x128
       [<ffffffff81353ac3>] mutex_lock_nested+0x54/0x292
       [<ffffffff810bbf72>] ? vfs_unlink+0x8b/0x128
       [<ffffffff8118179e>] ? selinux_inode_permission+0x8e/0x90
       [<ffffffff8117e271>] ? security_inode_permission+0x1c/0x1e
       [<ffffffff810bb4fb>] ? inode_permission+0x99/0xa5
       [<ffffffff810bbf72>] vfs_unlink+0x8b/0x128
       [<ffffffff810adb19>] ? kfree+0xed/0xf9
       [<ffffffffa00e3f00>] cachefiles_bury_object+0xb6/0x420 [cachefiles]
       [<ffffffff81058e21>] ? trace_hardirqs_on+0xd/0xf
       [<ffffffffa00e7e24>] ? cachefiles_check_object_xattr+0x233/0x293 [cachefiles]
       [<ffffffffa00e51b0>] cachefiles_walk_to_object+0x4ff/0x831 [cachefiles]
       [<ffffffff81032238>] ? finish_task_switch+0x0/0xb2
       [<ffffffffa00e3429>] cachefiles_lookup_object+0xac/0x12a [cachefiles]
       [<ffffffffa00741e9>] fscache_lookup_object+0x1c7/0x214 [fscache]
       [<ffffffffa0074fc5>] fscache_object_state_machine+0xa5/0x52d [fscache]
       [<ffffffffa00754ac>] fscache_object_slow_work_execute+0x5f/0xa0 [fscache]
       [<ffffffff81082093>] slow_work_execute+0x18f/0x2d1
       [<ffffffff8108239a>] slow_work_thread+0x1c5/0x308
       [<ffffffff8104c0f1>] ? autoremove_wake_function+0x0/0x34
       [<ffffffff810821d5>] ? slow_work_thread+0x0/0x308
       [<ffffffff8104be91>] kthread+0x7a/0x82
       [<ffffffff8100beda>] child_rip+0xa/0x20
       [<ffffffff8100b87c>] ? restore_args+0x0/0x30
       [<ffffffff8104be17>] ? kthread+0x0/0x82
       [<ffffffff8100bed0>] ? child_rip+0x0/0x20
      Signed-off-by: David Howells <dhowells@redhat.com>
      6511de33
    • CacheFiles: Handle truncate unlocking the page we're reading · 5e929b33
      Committed by David Howells
      Handle truncate unlocking the page we're attempting to read from the backing
      device before the read has completed.
      
      This was causing reports like the following to occur:
      
      	Pid: 4765, comm: kslowd Not tainted 2.6.30.1 #1
      	Call Trace:
      	 [<ffffffffa0331d7a>] ? cachefiles_read_waiter+0xd9/0x147 [cachefiles]
      	 [<ffffffff804b74bd>] ? __wait_on_bit+0x60/0x6f
      	 [<ffffffff8022bbbb>] ? __wake_up_common+0x3f/0x71
      	 [<ffffffff8022cc32>] ? __wake_up+0x30/0x44
      	 [<ffffffff8024a41f>] ? __wake_up_bit+0x28/0x2d
      	 [<ffffffffa003a793>] ? ext3_truncate+0x4d7/0x8ed [ext3]
      	 [<ffffffff80281f90>] ? pagevec_lookup+0x17/0x1f
      	 [<ffffffff8028c2ff>] ? unmap_mapping_range+0x59/0x1ff
      	 [<ffffffff8022cc32>] ? __wake_up+0x30/0x44
      	 [<ffffffff8028e286>] ? vmtruncate+0xc2/0xe2
      	 [<ffffffff802b82cf>] ? inode_setattr+0x22/0x10a
      	 [<ffffffffa003baa5>] ? ext3_setattr+0x17b/0x1e6 [ext3]
      	 [<ffffffff802b853d>] ? notify_change+0x186/0x2c9
      	 [<ffffffffa032d9de>] ? cachefiles_attr_changed+0x133/0x1cd [cachefiles]
      	 [<ffffffffa032df7f>] ? cachefiles_lookup_object+0xcf/0x12a [cachefiles]
      	 [<ffffffffa0318165>] ? fscache_lookup_object+0x110/0x122 [fscache]
      	 [<ffffffffa03188c3>] ? fscache_object_slow_work_execute+0x590/0x6bc
      	[fscache]
      	 [<ffffffff80278f82>] ? slow_work_thread+0x285/0x43a
      	 [<ffffffff8024a446>] ? autoremove_wake_function+0x0/0x2e
      	 [<ffffffff80278cfd>] ? slow_work_thread+0x0/0x43a
      	 [<ffffffff8024a317>] ? kthread+0x54/0x81
      	 [<ffffffff8020c93a>] ? child_rip+0xa/0x20
      	 [<ffffffff8024a2c3>] ? kthread+0x0/0x81
      	 [<ffffffff8020c930>] ? child_rip+0x0/0x20
      	CacheFiles: I/O Error: Readpage failed on backing file 200000000000810
      	FS-Cache: Cache cachefiles stopped due to I/O error
      Reported-by: Christian Kujau <lists@nerdbynature.de>
      Reported-by: Takashi Iwai <tiwai@suse.de>
      Reported-by: Duc Le Minh <duclm.vn@gmail.com>
      Signed-off-by: David Howells <dhowells@redhat.com>
      5e929b33