1. 30 Mar 2010 (2 commits)
    • ext3: fix broken handling of EXT3_STATE_NEW · de329820
      Committed by Linus Torvalds
      In commit 9df93939 ("ext3: Use bitops to read/modify
      EXT3_I(inode)->i_state") ext3 changed its internal 'i_state' variable to
      use bitops for its state handling.  However, unlike the same ext4
      change, it didn't actually change the name of the field when it changed
      the semantics of it.
      
      As a result, an old use of 'i_state' remained in fs/ext3/ialloc.c that
      initialized the field to EXT3_STATE_NEW.  And that does not work
      _at_all_ when we're now working with individually named bits rather than
      values that get masked.  So the code tried to mark the state to be new,
      but in actual fact set the field to EXT3_STATE_JDATA.  Which makes no
      sense at all, and screws up all the code that checks whether the inode
      was newly allocated.
      
      In particular, it made the xattr code unhappy, and caused various random
      behavior, like apparently
      
      	https://bugzilla.redhat.com/show_bug.cgi?id=577911
      
      So fix the initialization, and rename the field to match ext4 so that we
      don't have this happen again.
      
      Cc: James Morris <jmorris@namei.org>
      Cc: Stephen Smalley <sds@tycho.nsa.gov>
      Cc: Daniel J Walsh <dwalsh@redhat.com>
      Cc: Eric Paris <eparis@redhat.com>
      Cc: Jan Kara <jack@suse.cz>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
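
      A minimal user-space sketch of the mix-up described above (the flag
      names and bit numbers mirror the ext3 bitops conversion, but treat
      them as illustrative assumptions, not the kernel's definitions):

        /* demo.c: why assigning a bit *number* where a mask is expected
         * sets the wrong flag.  Build with: cc -o demo demo.c */
        #include <stdio.h>

        #define STATE_JDATA  0   /* bit number 0 (a mask of 0x1 pre-conversion) */
        #define STATE_NEW    1   /* bit number 1 */

        int main(void)
        {
                unsigned long state = 0;

                state = STATE_NEW;   /* leftover pre-conversion assignment: value 1 */
                printf("JDATA set? %d\n", !!(state & (1UL << STATE_JDATA))); /* 1: wrong */
                printf("NEW set?   %d\n", !!(state & (1UL << STATE_NEW)));   /* 0: wrong */

                state = 0;
                state |= 1UL << STATE_NEW;   /* what set_bit(STATE_NEW, &state) does */
                printf("NEW set?   %d\n", !!(state & (1UL << STATE_NEW)));   /* 1: right */
                return 0;
        }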
    • SLOW_WORK: CONFIG_SLOW_WORK_PROC should be CONFIG_SLOW_WORK_DEBUG · a53f4f9e
      Committed by David Howells
      CONFIG_SLOW_WORK_PROC was changed to CONFIG_SLOW_WORK_DEBUG, but not in all
      instances.  Change the remaining instances.  This makes the debugfs file
      display the time mark and the owner's description again.
      Signed-off-by: David Howells <dhowells@redhat.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  2. 29 Mar 2010 (1 commit)
  3. 27 Mar 2010 (1 commit)
  4. 25 Mar 2010 (9 commits)
  5. 24 Mar 2010 (1 commit)
  6. 25 Mar 2010 (2 commits)
  7. 24 Mar 2010 (5 commits)
    • ocfs2: Fix a race in o2dlm lockres mastery · 14741472
      Committed by Srinivas Eeda
      In o2dlm, the master of a lock resource keeps a map of all interested
      nodes.  This prevents the master from purging the resource before an
      interested node can create a lock.
      
      A race between the mastery thread and the mastery handler allowed an
      interested node to discover who the master is without informing the
      master directly.  This is easily fixed by holding the dlm spinlock a
      little longer in the mastery handler.
      Signed-off-by: Srinivas Eeda <srinivas.eeda@oracle.com>
      Signed-off-by: Joel Becker <joel.becker@oracle.com>
    • Ocfs2: Handle deletion of reflinked orphan inodes correctly. · b54c2ca4
      Committed by Tristan Ye
      The rule is that all inodes in the orphan dir have ORPHANED_FL;
      otherwise we treat it as an ERROR.  This rule works well except
      for some rare cases of the reflink operation:
      
      http://oss.oracle.com/bugzilla/show_bug.cgi?id=1215
      
      The problem is caused by how reflink and our orphan_scan thread
      interact.
      
       * The orphan scan pulls the orphans into a queue first, then runs the
         queue at a later time.  We only hold the orphan_dir's lock
         during scanning.
      
       * Reflink creates an orphaned target in orphan_dir as its first step.
         It removes the target and clears the flag as the final step.
         These two steps take the orphan_dir's lock, but it is not held for
         the duration.
      
      Based on the above semantics, a reflink inode can be moved out of the
      orphan dir and have its ORPHANED_FL cleared before the queue of orphans
      is run.  This leads to an ERROR in ocfs2_query_wipe_inode().
      
      This patch teaches ocfs2_query_wipe_inode() to detect previously
      orphaned reflink targets.  If a reflink fails or a crash occurs during
      the reflink operation, the inode will retain ORPHANED_FL and will be
      properly wiped.
      Signed-off-by: Tristan Ye <tristan.ye@oracle.com>
      Signed-off-by: Joel Becker <joel.becker@oracle.com>
    • Ocfs2: Journaling i_flags and i_orphaned_slot when adding inode to orphan dir. · 3939fda4
      Committed by Tristan Ye
      Currently, some callers neglect to journal the dirty inode after
      adding it to the orphan dir.
      
      Now we journal such modifications within ocfs2_orphan_add() itself.
      It's safe to do so, though some existing callers may duplicate this,
      and it makes the logic more straightforward anyway.
      Signed-off-by: Tristan Ye <tristan.ye@oracle.com>
      Signed-off-by: Joel Becker <joel.becker@oracle.com>
    • ocfs2: Clear undo bits when local alloc is freed · b4414eea
      Committed by Mark Fasheh
      When the local alloc file changes windows, unused bits are freed back to the
      global bitmap. By definition, those bits cannot be in use by any file. Also,
      the local alloc will never have been able to allocate those bits if they
      were part of a previous truncate. Therefore it makes sense that we should
      clear unused local alloc bits in the undo buffer so that they can be used
      immediately.
      
      [ Modified to call it ocfs2_release_clusters() -- Joel ]
      Signed-off-by: Mark Fasheh <mfasheh@suse.com>
      Signed-off-by: Joel Becker <joel.becker@oracle.com>
    • nilfs2: fix imperfect completion wait in nilfs_wait_on_logs · d067633b
      Committed by Ryusuke Konishi
      nilfs_wait_on_logs can slip out before all bio requests have completed
      when it meets an error.  This synchronization fault may cause
      unexpected results, for instance, an end-bio callback routine
      accessing already-freed segment buffers.
      
      This fixes the issue by ensuring that nilfs_wait_on_logs waits for all
      given logs.
      Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
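
      A sketch of the wait-them-all pattern the fix adopts (the struct,
      list, and helper names follow fs/nilfs2/segbuf.c, but treat them as
      assumptions rather than a verbatim copy of the patch):

        int nilfs_wait_on_logs(struct list_head *logs)
        {
                struct nilfs_segment_buffer *segbuf;
                int err, ret = 0;

                list_for_each_entry(segbuf, logs, sb_list) {
                        err = nilfs_segbuf_wait(segbuf); /* wait for this log's bios */
                        if (err && !ret)
                                ret = err;       /* remember the first error ...      */
                }                                /* ... but keep waiting on the rest   */
                return ret;
        }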
  8. 23 Mar 2010 (19 commits)
    • nilfs2: fix hang-up of cleaner after log writer returned with error · 110d735a
      Committed by Ryusuke Konishi
      According to the report from Andreas Beckmann (Message-ID:
      <4BA54677.3090902@abeckmann.de>), nilfs in 2.6.33 kernel got stuck
      after a disk full error.
      
      This turned out to be a regression introduced by the log writer updates
      merged in kernel 2.6.33.  nilfs_segctor_abort_construction, which is a cleanup
      function for erroneous cases, was skipping writeback completion for
      some logs.
      
      This fixes the bug and would resolve the hang issue.
      Reported-by: Andreas Beckmann <debian@abeckmann.de>
      Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
      Tested-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
      Cc: stable <stable@kernel.org>                     [2.6.33.x]
    • ceph: fix possible double-free of mds request reference · 393f6620
      Committed by Sage Weil
      Clear pointer to mds request after dropping the reference to
      ensure we don't drop it again, as there is at least one error
      path through this function that does not reset fi->last_readdir
      to a new value.
      Signed-off-by: Sage Weil <sage@newdream.net>
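
      A sketch of the put-then-clear idiom being applied (fi->last_readdir
      and ceph_mdsc_put_request() are taken from fs/ceph, but the exact
      placement here is an assumption):

        if (fi->last_readdir) {
                ceph_mdsc_put_request(fi->last_readdir);
                fi->last_readdir = NULL;   /* a later error path cannot put it twice */
        }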
    • ceph: fix session check on mds reply · d96d6049
      Committed by Sage Weil
      Fix a broken check that a reply came back from the same MDS we sent the
      request to.  I don't think a case that actually triggers this would ever
      come up in practice, but it's clearly wrong and easy to fix.
      Reported-by: Dan Carpenter <error27@gmail.com>
      Signed-off-by: Sage Weil <sage@newdream.net>
    • ceph: handle kmalloc() failure · 4736b009
      Committed by Dan Carpenter
      Return ERR_PTR(-ENOMEM) if kmalloc() fails.  We handle allocation
      failures the same way later in the function.
      Signed-off-by: Dan Carpenter <error27@gmail.com>
      Signed-off-by: Sage Weil <sage@newdream.net>
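
      A minimal sketch of the pointer-or-error convention being applied
      (the variable names are illustrative, not from the patch):

        buf = kmalloc(len, GFP_NOFS);
        if (!buf)
                return ERR_PTR(-ENOMEM);   /* callers unwrap with IS_ERR()/PTR_ERR() */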
    • ceph: propagate mds session allocation failures to caller · 9c423956
      Committed by Sage Weil
      Return error to original caller if register_session() fails.
      Signed-off-by: Sage Weil <sage@newdream.net>
    • ceph: make write_begin wait propagate ERESTARTSYS · 8f883c24
      Committed by Sage Weil
      Currently, if the wait_event_interruptible is interrupted, we
      return EAGAIN unconditionally and loop, such that we aren't, in
      fact, interruptible.  So, propagate ERESTARTSYS if we get it.
      Signed-off-by: Sage Weil <sage@newdream.net>
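
      A minimal sketch of the changed behaviour (the waitqueue and the
      condition are hypothetical placeholders):

        ret = wait_event_interruptible(wq, condition);
        if (ret < 0)            /* -ERESTARTSYS: a signal interrupted the sleep      */
                return ret;     /* propagate it instead of mapping to EAGAIN and looping */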
    • ceph: fix snap rebuild condition · ec4318bc
      Committed by Sage Weil
      We were rebuilding the snap context when it was not necessary
      (i.e. when the realm seq hadn't changed _and_ the parent seq
      was still older), which caused page snapc pointers to not match
      the realm's snapc pointer (even though the snap context itself
      was identical).  This confused begin_write and put it into an
      endless loop.
      
      The correct logic is: rebuild snapc if _my_ realm seq changed, or
      if my parent realm's seq is newer than mine (and thus mine needs
      to be rebuilt too).
      Signed-off-by: Sage Weil <sage@newdream.net>
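
      A sketch of the corrected reuse test (field names loosely follow
      fs/ceph/snap.c and should be read as assumptions): keep the cached
      snap context only if our own seq is unchanged and the parent's is
      not newer than ours.

        if (realm->cached_context &&
            realm->cached_context->seq == realm->seq &&        /* my seq unchanged   */
            (!parent ||
             realm->cached_context->seq >= parent->cached_context->seq))
                return 0;                  /* still valid: reuse, do not rebuild     */
        /* otherwise rebuild the snap context */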
    • ceph: avoid reopening osd connections when address hasn't changed · 87b315a5
      Committed by Sage Weil
      We get a fault callback on _every_ tcp connection fault.  Normally, we
      want to reopen the connection when that happens.  If the address we have
      is bad, however, and connection attempts always result in a connection
      refused or similar error, explicitly closing and reopening the msgr
      connection just prevents the messenger's backoff logic from kicking in.
      The result can be a console full of
      
      [ 3974.417106] ceph: osd11 10.3.14.138:6800 connection failed
      [ 3974.423295] ceph: osd11 10.3.14.138:6800 connection failed
      [ 3974.429709] ceph: osd11 10.3.14.138:6800 connection failed
      
      Instead, if we get a fault, and have outstanding requests, but the osd
      address hasn't changed and the connection never successfully connected in
      the first place, do nothing to the osd connection.  The messenger layer
      will back off and retry periodically, because we never connected and thus
      the lossy bit is not set.
      
      Instead, touch each request's r_stamp so that handle_timeout can tell the
      request is still alive and kicking.
      Signed-off-by: Sage Weil <sage@newdream.net>
    • ceph: rename r_sent_stamp r_stamp · 3dd72fc0
      Committed by Sage Weil
      Make variable name slightly more generic, since it will (soon)
      reflect either the time the request was sent OR the time it was
      last determined to be still retrying.
      Signed-off-by: Sage Weil <sage@newdream.net>
    • ceph: fix connection fault con_work reentrancy problem · 3c3f2e32
      Committed by Sage Weil
      The messenger fault was clearing the BUSY bit, for reasons unclear.  This
      made it possible for the con->ops->fault function to reopen the connection,
      and requeue work in the workqueue--even though the current thread was
      already in con_work.
      
      This avoids a problem where the client busy loops with connection failures
      on an unreachable OSD, but doesn't address the root cause of that problem.
      Signed-off-by: Sage Weil <sage@newdream.net>
    • ceph: prevent dup stale messages to console for restarting mds · e4cb4cb8
      Committed by Sage Weil
      Prevent duplicate 'mds0 caps stale' messages from spamming the console every
      few seconds while the MDS restarts.  Set s_renew_requested earlier, so that
      we only print the message once, even if we don't send an actual request.
      Signed-off-by: Sage Weil <sage@newdream.net>
    • ceph: fix pg pool decoding from incremental osdmap update · efd7576b
      Committed by Sage Weil
      The incremental map decoding of pg pool updates wasn't skipping
      the snaps and removed_snaps vectors.  This caused osd requests
      to stall when pool snapshots were created or fs snapshots were
      deleted.  Use a common helper for full and incremental map
      decoders that decodes pools properly.
      Signed-off-by: Sage Weil <sage@newdream.net>
    • ceph: fix mds sync() race with completing requests · 80fc7314
      Committed by Sage Weil
      The wait_unsafe_requests() helper dropped the mdsc mutex to wait
      for each request to complete, and then examined r_node to get the
      next request after retaking the lock.  But the request completion
      removes the request from the tree, so r_node was always undefined
      at this point.  Since it's a small race, it usually led to a
      valid request, but not always.  The result was an occasional
      crash in rb_next() while dereferencing node->rb_left.
      
      Fix this by clearing the rb_node when removing the request from
      the request tree, and not walking off into the weeds when we
      are done waiting for a request.  Since the request we waited on
      will _always_ be out of the request tree, take a ref on the next
      request, in the hopes that it won't be.  But if it is, it's ok:
      we can start over from the beginning (and traverse over older read
      requests again).
      Signed-off-by: Sage Weil <sage@newdream.net>
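
      A sketch of the two halves of the fix (the rbtree helpers are the
      standard linux/rbtree.h API; ceph_mdsc_get_request() and the field
      names are assumptions based on fs/ceph/mds_client.c):

        /* 1) On completion, detach the request cleanly from the tree: */
        rb_erase(&req->r_node, &mdsc->request_tree);
        RB_CLEAR_NODE(&req->r_node);             /* r_node no longer points into the tree */

        /* 2) In wait_unsafe_requests(), pin the next request before sleeping: */
        n = rb_next(&req->r_node);
        nextreq = n ? rb_entry(n, struct ceph_mds_request, r_node) : NULL;
        if (nextreq)
                ceph_mdsc_get_request(nextreq);  /* keep it alive while the mutex is dropped */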
    • ceph: only release unused caps with mds requests · 916623da
      Committed by Sage Weil
      We were releasing used caps (e.g. FILE_CACHE) from encode_inode_release
      with MDS requests (e.g. setattr).  We don't carry refs on most caps, so
      this code worked most of the time, but for setattr (utimes) we try to
      drop Fscr.
      
      This causes cap state to get slightly out of sync with reality, and may
      result in subsequent mds revoke messages getting ignored.
      
      Fix by only releasing unused caps.
      Signed-off-by: Sage Weil <sage@newdream.net>
    • ceph: clean up handle_cap_grant, handle_caps wrt session mutex · 15637c8b
      Committed by Sage Weil
      Drop session mutex unconditionally in handle_cap_grant, and do the
      check_caps from the handle_cap_grant helper.  This avoids using a magic
      return value.
      
      Also avoid using a flag variable in the IMPORT case and call
      check_caps at the appropriate point.
      Signed-off-by: Sage Weil <sage@newdream.net>
    • ceph: fix session locking in handle_caps, ceph_check_caps · cdc2ce05
      Committed by Sage Weil
      Passing a session pointer to ceph_check_caps() used to mean it would leave
      the session mutex locked.  That wasn't always possible if it wasn't passed
      CHECK_CAPS_AUTHONLY.  It could unlock the passed session and lock a
      different session mutex, which was clearly wrong, and also emitted a
      warning when a racing CPU retook it and we did an unlock from the wrong
      context.
      
      This was only a problem when there was more than one MDS.
      
      First, make ceph_check_caps unconditionally drop the session mutex, so that
      it is free to lock other sessions as needed.  Then adjust the one caller
      that passes in a session (handle_cap_grant) accordingly.
      Signed-off-by: Sage Weil <sage@newdream.net>
    • ceph: drop unnecessary WARN_ON in caps migration · 4ea0043a
      Committed by Sage Weil
      If we don't have the exported cap it's because we already released it. No
      need to WARN.
      Signed-off-by: Sage Weil <sage@newdream.net>
    • ceph: fix null pointer deref of r_osd in debug output · 12eadc19
      Committed by Sage Weil
      This causes an oops when debug output is enabled and we kick
      an osd request with no current r_osd (sometime after an osd
      failure).  Check the pointer before dereferencing.
      Signed-off-by: Sage Weil <sage@newdream.net>
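
      A sketch of the guarded debug print (dout() is the ceph debug macro;
      the message text here is illustrative only):

        dout("kicking req %p osd%d\n", req,
             req->r_osd ? req->r_osd->o_osd : -1);   /* r_osd may be NULL after a failure */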
    • ceph: clean up service ticket decoding · 0a990e70
      Committed by Sage Weil
      Previously we would decode state directly into our current ticket_handler.
      This is problematic if for some reason we fail to decode, because we end
      up with half new state and half old state.
      
      We are probably already in bad shape if we get an update we can't decode,
      but we may as well be tidy anyway.  Decode into new_* temporaries and
      update the ticket_handler only on success.
      Signed-off-by: Sage Weil <sage@newdream.net>
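
      A sketch of the decode-into-temporaries pattern (the field names
      loosely follow the ticket handler in fs/ceph/auth_x.c and are
      assumptions, not a copy of the patch):

        struct ceph_crypto_key new_session_key;
        struct ceph_buffer *new_ticket_blob;
        unsigned long new_expires, new_renew_after;
        u64 new_secret_id;

        /* ... decode the incoming update into the new_* locals only;
         *     on any error, bail out without touching *th ... */

        th->session_key = new_session_key;       /* commit everything only on success */
        th->expires     = new_expires;
        th->renew_after = new_renew_after;
        th->ticket_blob = new_ticket_blob;
        th->secret_id   = new_secret_id;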