1. May 15, 2010 (14 commits)
  2. May 14, 2010 (3 commits)
  3. May 13, 2010 (2 commits)
  4. May 12, 2010 (11 commits)
    • Inotify: undefined reference to `anon_inode_getfd' · e7b702b1
      Authored by Russell King
      Fix:
      
      fs/built-in.o: In function `sys_inotify_init1':
      summary.c:(.text+0x347a4): undefined reference to `anon_inode_getfd'
      
      found by kautobuild with ARM's bcmring_defconfig, which ends up with
      INOTIFY_USER enabled (through the 'default y') but leaves ANON_INODES
      unset.  However, inotify_user.c uses anon_inode_getfd().
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      Signed-off-by: Eric Paris <eparis@redhat.com>
      e7b702b1
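
      The usual fix for this class of link error is to express the dependency in
      Kconfig.  A minimal sketch of the likely shape of the change (the exact file
      and surrounding entries are assumptions, not quoted from the patch):

      	config INOTIFY_USER
      		bool "Inotify support for userspace"
      		default y
      		select ANON_INODES	# inotify_user.c calls anon_inode_getfd()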
    • ceph: preserve seq # on requeued messages after transient transport errors · e84346b7
      Authored by Sage Weil
      If the tcp connection drops and we reconnect to reestablish a stateful
      session (with the mds), we need to resend previously sent (and possibly
      received) messages with the _same_ seq # so that they can be dropped on
      the other end if needed.  Assign a new seq only once, when the message is
      first queued.
      Signed-off-by: Sage Weil <sage@newdream.net>
      e84346b7
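
      A minimal sketch of the idea, with invented names rather than the actual ceph
      messenger code: a message gets its sequence number only the first time it is
      queued, so a requeue after a reconnect reuses the same seq and the peer can
      drop anything it has already seen.

      	#include <stdint.h>

      	struct msg  { uint64_t seq; };
      	struct conn { uint64_t out_seq; };

      	static void queue_msg(struct conn *c, struct msg *m)
      	{
      		/* Assign a seq only on the first queueing; a message requeued
      		 * after a transport error keeps its old seq. */
      		if (m->seq == 0)
      			m->seq = ++c->out_seq;
      		/* ...append m to c's output queue here... */
      	}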
    • ceph: fix cap removal races · f818a736
      Authored by Sage Weil
      The iterate_session_caps helper traverses the session caps list and tries
      to grab an inode reference.  However, __ceph_remove_cap was clearing
      the inode backpointer _before_ removing the cap from the session list,
      causing a NULL pointer dereference.
      
      Clear cap->ci under protection of s_cap_lock to avoid the race, and to
      tightly couple the list and backpointer state.  Use a local flag to
      indicate whether we are releasing the cap, as cap->session may be modified
      by a racing thread in iterate_session_caps.
      Signed-off-by: Sage Weil <sage@newdream.net>
      f818a736
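
      A rough sketch of the locking rule described above (lock and field names are
      placeholders, and pthreads stands in for the kernel spinlock): the list unlink
      and the backpointer clear happen in the same critical section, so a concurrent
      list walker never sees a cap whose inode pointer has already been torn down.

      	#include <pthread.h>
      	#include <stddef.h>

      	struct inode_info;

      	struct cap {
      		struct inode_info *ci;		/* backpointer, cleared under the lock */
      		struct cap *prev, *next;	/* session caps list linkage */
      	};

      	static pthread_mutex_t s_cap_lock = PTHREAD_MUTEX_INITIALIZER;

      	static void remove_cap(struct cap *cap)
      	{
      		pthread_mutex_lock(&s_cap_lock);
      		cap->prev->next = cap->next;	/* unlink from the session list... */
      		cap->next->prev = cap->prev;
      		cap->ci = NULL;			/* ...and clear ci in the same critical section */
      		pthread_mutex_unlock(&s_cap_lock);
      	}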
    • revert "procfs: provide stack information for threads" and its fixup commits · 34441427
      Authored by Robin Holt
      Originally, commit d899bf7b ("procfs: provide stack information for
      threads") attempted to introduce a new feature for showing where the
      thread stack was located and how many pages were being utilized by the
      stack.
      
      Commit c44972f1 ("procfs: disable per-task stack usage on NOMMU") was
      applied to fix the NO_MMU case.
      
      Commit 89240ba0 ("x86, fs: Fix x86 procfs stack information for threads on
      64-bit") was applied to fix a bug in ia32 executables being loaded.
      
      Commit 9ebd4eba ("procfs: fix /proc/<pid>/stat stack pointer for kernel
      threads") was applied to fix a bug which had kernel threads printing a
      userland stack address.
      
      Commit 1306d603 ('proc: partially revert "procfs: provide stack
      information for threads"') was then applied to revert the stack pages
      being used to solve a significant performance regression.
      
      This patch nearly undoes the effect of all these patches.
      
      The reason for reverting these is that the feature provides an unusable
      value in field 28.  For x86_64, a fork will result in the task->stack_start
      value being updated to the current user top of stack and not the stack
      start address.  This unpredictability of the stack_start value makes it
      worthless, including for its intended use of showing how much stack space
      a thread has.
      
      Other architectures will get different values.  As an example, ia64
      gets 0.  The do_fork() and copy_process() functions appear to treat the
      stack_start and stack_size parameters as architecture specific.
      
      I only partially reverted c44972f1 ("procfs: disable per-task stack usage
      on NOMMU").  If I had completely reverted it, I would have had to change
      mm/Makefile to only build pagewalk.o when CONFIG_PROC_PAGE_MONITOR is
      configured.  Since I could not test the builds without significant effort,
      I decided not to change mm/Makefile.
      
      I only partially reverted 89240ba0 ("x86, fs: Fix x86 procfs stack
      information for threads on 64-bit") .  I left the KSTK_ESP() change in
      place as that seemed worthwhile.
      Signed-off-by: Robin Holt <holt@sgi.com>
      Cc: Stefani Seibold <stefani@seibold.net>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: <stable@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      34441427
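
      For reference, "field 28" here is the 28th space-separated field of
      /proc/<pid>/stat (listed as startstack in proc(5)).  A small illustration of
      how to look at it (not part of the patch; note the naive split below breaks
      if the comm field contains spaces):

      	#include <stdio.h>

      	int main(void)
      	{
      		FILE *f = fopen("/proc/self/stat", "r");
      		char field[64];

      		if (!f)
      			return 1;
      		for (int i = 1; i <= 28; i++)		/* field 28: startstack */
      			if (fscanf(f, "%63s", field) != 1)
      				return 1;
      		printf("field 28: %s\n", field);
      		fclose(f);
      		return 0;
      	}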
    • ceph: zero unused message header, footer fields · 45c6ceb5
      Authored by Sage Weil
      We shouldn't leak any prior memory contents to other parties.  And random
      data, particularly in the 'version' field, can cause problems down the
      line.
      Signed-off-by: Sage Weil <sage@newdream.net>
      45c6ceb5
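
      The pattern being applied is the usual one for structures that go out on the
      wire; a generic sketch (not the ceph code itself): zero the whole struct
      before filling in the fields that are actually used, so padding and unused
      fields never carry stale memory.

      	#include <string.h>
      	#include <stdint.h>

      	struct wire_hdr {
      		uint64_t seq;
      		uint16_t version;	/* unused by some senders but must still be zero */
      		uint8_t  reserved[6];
      	};

      	static void init_hdr(struct wire_hdr *hdr, uint64_t seq)
      	{
      		memset(hdr, 0, sizeof(*hdr));	/* nothing from prior memory leaks out */
      		hdr->seq = seq;
      	}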
    • cifs: guard against hardlinking directories · 3d694380
      Authored by Jeff Layton
      When we made serverino the default, we trusted that the value sent by the
      server in the "uniqueid" field was actually unique. It turns out that it
      isn't reliably so.
      
      Samba, in particular, will just put the st_ino in the uniqueid field when
      unix extensions are enabled. When a share spans multiple filesystems, it's
      quite possible that there will be collisions. This is a server bug, but
      when the inodes in question are a directory (as is often the case) and
      there is a collision with the root inode of the mount, the result is a
      kernel panic on umount.
      
      Fix this by checking explicitly for directory inodes with the same
      uniqueid. If that is the case, then we can assume that using server inode
      numbers will be a problem and that they should be disabled.
      
      Fixes Samba bugzilla 7407
      Signed-off-by: Jeff Layton <jlayton@redhat.com>
      CC: Stable <stable@kernel.org>
      Reviewed-and-Tested-by: Suresh Jayaraman <sjayaraman@suse.de>
      Signed-off-by: Steve French <sfrench@us.ibm.com>
      3d694380
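
      The underlying ambiguity is that inode numbers are only unique within one
      filesystem, so a share spanning several filesystems can legitimately hand out
      the same number twice.  A small local demonstration of that (illustrative
      only; nothing to do with the cifs code itself):

      	#include <stdio.h>
      	#include <sys/stat.h>

      	int main(int argc, char **argv)
      	{
      		struct stat a, b;

      		if (argc != 3 || stat(argv[1], &a) || stat(argv[2], &b))
      			return 1;
      		/* st_ino alone can collide across filesystems; only the
      		 * (st_dev, st_ino) pair identifies an inode uniquely. */
      		printf("%s: dev=%llu ino=%llu\n", argv[1],
      		       (unsigned long long)a.st_dev, (unsigned long long)a.st_ino);
      		printf("%s: dev=%llu ino=%llu\n", argv[2],
      		       (unsigned long long)b.st_dev, (unsigned long long)b.st_ino);
      		if (a.st_ino == b.st_ino && a.st_dev != b.st_dev)
      			printf("same inode number on different filesystems\n");
      		return 0;
      	}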
    • CacheFiles: Fix occasional EIO on call to vfs_unlink() · c61ea31d
      Authored by David Howells
      Fix an occasional EIO returned by a call to vfs_unlink():
      
      	[ 4868.465413] CacheFiles: I/O Error: Unlink failed
      	[ 4868.465444] FS-Cache: Cache cachefiles stopped due to I/O error
      	[ 4947.320011] CacheFiles: File cache on md3 unregistering
      	[ 4947.320041] FS-Cache: Withdrawing cache "mycache"
      	[ 5127.348683] FS-Cache: Cache "mycache" added (type cachefiles)
      	[ 5127.348716] CacheFiles: File cache on md3 registered
      	[ 7076.871081] CacheFiles: I/O Error: Unlink failed
      	[ 7076.871130] FS-Cache: Cache cachefiles stopped due to I/O error
      	[ 7116.780891] CacheFiles: File cache on md3 unregistering
      	[ 7116.780937] FS-Cache: Withdrawing cache "mycache"
      	[ 7296.813394] FS-Cache: Cache "mycache" added (type cachefiles)
      	[ 7296.813432] CacheFiles: File cache on md3 registered
      
      What happens is this:
      
       (1) A cached NFS file is seen to have become out of date, so NFS retires the
           object and immediately acquires a new object with the same key.
      
       (2) Retirement of the old object is done asynchronously - so the lookup/create
           to generate the new object may be done first.
      
           This can be a problem as the old object and the new object must exist at
           the same point in the backing filesystem (i.e. they must have the same
           pathname).
      
       (3) The lookup for the new object sees that a backing file already exists,
           checks to see whether it is valid and sees that it isn't.  It then deletes
           that file and creates a new one on disk.
      
       (4) The retirement phase for the old file is then performed.  It tries to
           delete the dentry it has, but ext4_unlink() returns -EIO because the inode
           attached to that dentry no longer matches the inode number associated with
           the filename in the parent directory.
      
      The trace below shows this quite well.
      
      	[md5sum] ==> __fscache_relinquish_cookie(ffff88002d12fb58{NFS.fh,ffff88002ce62100},1)
      	[md5sum] ==> __fscache_acquire_cookie({NFS.server},{NFS.fh},ffff88002ce62100)
      
      NFS has retired the old cookie and asked for a new one.
      
      	[kslowd] ==> fscache_object_state_machine({OBJ52,OBJECT_ACTIVE,24})
      	[kslowd] <== fscache_object_state_machine() [->OBJECT_DYING]
      	[kslowd] ==> fscache_object_state_machine({OBJ53,OBJECT_INIT,0})
      	[kslowd] <== fscache_object_state_machine() [->OBJECT_LOOKING_UP]
      	[kslowd] ==> fscache_object_state_machine({OBJ52,OBJECT_DYING,24})
      	[kslowd] <== fscache_object_state_machine() [->OBJECT_RECYCLING]
      
      The old object (OBJ52) is going through the terminal states to get rid of it,
      whilst the new object - (OBJ53) - is coming into being.
      
      	[kslowd] ==> fscache_object_state_machine({OBJ53,OBJECT_LOOKING_UP,0})
      	[kslowd] ==> cachefiles_walk_to_object({ffff88003029d8b8},OBJ53,@68,)
      	[kslowd] lookup '@68'
      	[kslowd] next -> ffff88002ce41bd0 positive
      	[kslowd] advance
      	[kslowd] lookup 'Es0g00og0_Nd_XCYe3BOzvXrsBLMlN6aw16M1htaA'
      	[kslowd] next -> ffff8800369faac8 positive
      
      The new object has looked up the subdir in which the file would be (getting
      dentry ffff88002ce41bd0) and then looked up the file itself (getting dentry
      ffff8800369faac8).
      
      	[kslowd] validate 'Es0g00og0_Nd_XCYe3BOzvXrsBLMlN6aw16M1htaA'
      	[kslowd] ==> cachefiles_bury_object(,'@68','Es0g00og0_Nd_XCYe3BOzvXrsBLMlN6aw16M1htaA')
      	[kslowd] remove ffff8800369faac8 from ffff88002ce41bd0
      	[kslowd] unlink stale object
      	[kslowd] <== cachefiles_bury_object() = 0
      
      It then checks the file's xattrs to see if it's valid.  NFS says that the
      auxiliary data indicate the file is out of date (obvious to us - that's why NFS
      ditched the old version and got a new one).  CacheFiles then deletes the old
      file (dentry ffff8800369faac8).
      
      	[kslowd] redo lookup
      	[kslowd] lookup 'Es0g00og0_Nd_XCYe3BOzvXrsBLMlN6aw16M1htaA'
      	[kslowd] next -> ffff88002cd94288 negative
      	[kslowd] create -> ffff88002cd94288{ffff88002cdaf238{ino=148247}}
      
      CacheFiles then redoes the lookup and gets a negative result in a new dentry
      (ffff88002cd94288) which it then creates a file for.
      
      	[kslowd] ==> cachefiles_mark_object_active(,OBJ53)
      	[kslowd] <== cachefiles_mark_object_active() = 0
      	[kslowd] === OBTAINED_OBJECT ===
      	[kslowd] <== cachefiles_walk_to_object() = 0 [148247]
      	[kslowd] <== fscache_object_state_machine() [->OBJECT_AVAILABLE]
      
      The new object is then marked active and the state machine moves to the
      available state - at which point NFS can start filling the object.
      
      	[kslowd] ==> fscache_object_state_machine({OBJ52,OBJECT_RECYCLING,20})
      	[kslowd] ==> fscache_release_object()
      	[kslowd] ==> cachefiles_drop_object({OBJ52,2})
      	[kslowd] ==> cachefiles_delete_object(,OBJ52{ffff8800369faac8})
      
      The old object, meanwhile, goes on with being retired.  If allocation occurs
      first, cachefiles_delete_object() has to wait for dir->d_inode->i_mutex to
      become available before it can continue.
      
      	[kslowd] ==> cachefiles_bury_object(,'@68','Es0g00og0_Nd_XCYe3BOzvXrsBLMlN6aw16M1htaA')
      	[kslowd] remove ffff8800369faac8 from ffff88002ce41bd0
      	[kslowd] unlink stale object
      	EXT4-fs warning (device sda6): ext4_unlink: Inode number mismatch in unlink (148247!=148193)
      	CacheFiles: I/O Error: Unlink failed
      	FS-Cache: Cache cachefiles stopped due to I/O error
      
      CacheFiles then tries to delete the file for the old object, but the dentry it
      has (ffff8800369faac8) no longer points to a valid inode for that directory
      entry, and so ext4_unlink() returns -EIO when de->inode does not match i_ino.
      
      	[kslowd] <== cachefiles_bury_object() = -5
      	[kslowd] <== cachefiles_delete_object() = -5
      	[kslowd] <== fscache_object_state_machine() [->OBJECT_DEAD]
      	[kslowd] ==> fscache_object_state_machine({OBJ53,OBJECT_AVAILABLE,0})
      	[kslowd] <== fscache_object_state_machine() [->OBJECT_ACTIVE]
      
      (Note that the above trace includes extra information beyond that produced by
      the upstream code).
      
      The fix is to note when an object that is being retired has had its object
      deleted preemptively by a replacement object that is being created, and to
      skip the second removal attempt in such a case.
      Reported-by: Greg M <gregm@servu.net.au>
      Reported-by: Mark Moseley <moseleymark@gmail.com>
      Reported-by: Romain DEGEZ <romain.degez@smartjog.com>
      Signed-off-by: David Howells <dhowells@redhat.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c61ea31d
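
      A minimal sketch of that idea with invented names (not the actual CacheFiles
      structures): whichever path unlinks the stale backing file first marks the old
      object, and the old object's retirement then skips its own unlink attempt.

      	#include <stdbool.h>

      	struct cache_object {
      		bool file_buried;	/* backing file already unlinked by someone else */
      		/* dentry pointers etc. elided */
      	};

      	/* called when a replacement object's lookup deletes a stale file in its way */
      	static void bury_stale_file(struct cache_object *old)
      	{
      		/* the real unlink of the old backing file happens here */
      		old->file_buried = true;
      	}

      	/* called later when the old object itself is retired */
      	static int delete_object(struct cache_object *obj)
      	{
      		if (obj->file_buried)
      			return 0;	/* already gone; don't unlink a reused name */
      		/* otherwise unlink as before */
      		return 0;
      	}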
    • ceph: fix locking for waking session requests after reconnect · 9abf82b8
      Authored by Sage Weil
      The session->s_waiting list is protected by mdsc->mutex, not s_mutex.  This
      was causing (rare) s_waiting list corruption.
      
      Fix error paths too, while we're here.  A more thorough cleanup of this
      function is coming soon.
      Signed-off-by: Sage Weil <sage@newdream.net>
      9abf82b8
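
      The rule being enforced is simply that a list must only be walked or modified
      under the lock that protects it; a toy sketch of the distinction (placeholder
      names, with pthreads standing in for the kernel mutexes):

      	#include <pthread.h>

      	struct request { struct request *next; };

      	struct mdsc {
      		pthread_mutex_t mutex;		/* this is what guards s_waiting */
      	};

      	struct session {
      		pthread_mutex_t s_mutex;	/* guards other per-session state */
      		struct request *s_waiting;	/* protected by mdsc->mutex, not s_mutex */
      	};

      	static void wake_waiting(struct mdsc *mdsc, struct session *s)
      	{
      		pthread_mutex_lock(&mdsc->mutex);	/* not s->s_mutex */
      		for (struct request *r = s->s_waiting; r; r = r->next)
      			;	/* requeue/wake each waiting request here */
      		pthread_mutex_unlock(&mdsc->mutex);
      	}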
    • ceph: resubmit requests on pg mapping change (not just primary change) · d85b7056
      Authored by Sage Weil
      OSD requests need to be resubmitted on any pg mapping change, not just when
      the pg primary changes.  Resending only when the primary changes results in
      occasional 'hung' requests during osd cluster recovery or rebalancing.
      Signed-off-by: Sage Weil <sage@newdream.net>
      d85b7056
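
      A sketch of the difference (hypothetical types, not the osd client code): a
      request has to be resent whenever the acting set of its placement group
      changes at all, not only when its first entry (the primary) changes.

      	#include <string.h>
      	#include <stdbool.h>

      	#define PG_MAX_OSDS 16

      	struct pg_mapping {
      		int num_osds;
      		int osds[PG_MAX_OSDS];	/* osds[0] is the primary */
      	};

      	static bool need_resend(const struct pg_mapping *a, const struct pg_mapping *b)
      	{
      		/* Comparing only osds[0] misses replica changes; compare the whole
      		 * mapping so recovery or rebalancing cannot strand a request. */
      		return a->num_osds != b->num_osds ||
      		       memcmp(a->osds, b->osds, sizeof(int) * (size_t)b->num_osds) != 0;
      	}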
    • ceph: fix open file counting on snapped inodes when mds returns no caps · 04d000eb
      Authored by Sage Weil
      It's possible the MDS will not issue caps on a snapped inode, in which case
      an open request may not call __ceph_get_fmode(), botching the open file
      counting.  (This is actually a server bug, but the client shouldn't BUG out
      in this case.)
      Signed-off-by: Sage Weil <sage@newdream.net>
      04d000eb
    • ceph: unregister osd request on failure · 0ceed5db
      Authored by Sage Weil
      The osd request wasn't being unregistered when the osd returned a failure
      code, even though the result was returned to the caller.  This would cause
      it to eventually time out, and then crash the kernel when it tried to
      resend the request using a stale page vector.
      Signed-off-by: Sage Weil <sage@newdream.net>
      0ceed5db
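
      The shape of the fix is the usual cleanup-on-error rule; a generic sketch with
      invented names (not the actual osd client code): every completion path,
      failure included, must drop the request from the pending table so the timeout
      handler can never resend it later with stale state.

      	struct request { int registered; int result; };

      	static void unregister_request(struct request *req)
      	{
      		req->registered = 0;	/* remove from pending/timeout tracking */
      	}

      	static void handle_osd_reply(struct request *req, int result)
      	{
      		/* unregister on failure as well as success; otherwise a later
      		 * timeout resends the request using pages the caller already freed */
      		unregister_request(req);
      		req->result = result;	/* then hand the result back to the caller */
      	}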
  5. May 11, 2010 (1 commit)
  6. May 6, 2010 (1 commit)
    • ceph: don't use writeback_control in writepages completion · 54ad023b
      Authored by Sage Weil
      The ->writepages writeback_control is no longer valid by the time of the writepages
      completion.  We were touching it solely to adjust pages_skipped when there
      was a writeback error (EIO, ENOSPC, EPERM due to bad osd credentials),
      causing an oops in the writeback code shortly thereafter.  Updating
      pages_skipped on error isn't correct anyway, so just rip out the (clearly
      broken) code that passed the wbc to the completion.
      Signed-off-by: Sage Weil <sage@newdream.net>
      54ad023b
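
      The general point (sketched here with invented names): the writeback_control
      handed to ->writepages lives on the caller's stack, so an asynchronous
      completion must not keep a pointer to it; anything the completion needs has to
      be copied out at submission time.

      	struct writeback_control;	/* caller-owned, stack-allocated */

      	struct writepages_req {
      		/* no 'struct writeback_control *wbc' member: by completion time
      		 * that stack frame is gone.  Copy out what is needed instead. */
      		long nr_to_write;
      		int  sync_mode;
      	};

      	static void writepages_done(struct writepages_req *req, int error)
      	{
      		(void)req;
      		(void)error;
      		/* handle the error here without touching the long-gone wbc */
      	}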
  7. May 5, 2010 (1 commit)
  8. May 4, 2010 (7 commits)