1. 10 April 2010, 4 commits
  2. 08 April 2010, 1 commit
  3. 07 April 2010, 5 commits
  4. 06 April 2010, 6 commits
    • proc: copy_to_user() returns unsigned · 309361e0
      Dan Carpenter committed
      copy_to_user() returns the number of bytes left to be copied.
      
      This was a typo from: d82ef020 "proc: pagemap: Hold mmap_sem during
      page walk".
      Signed-off-by: Dan Carpenter <error27@gmail.com>
      Acked-by: Matt Mackall <mpm@selenic.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      309361e0
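      A minimal sketch of the pattern the fix restores (the function and buffer names are
      illustrative, not the actual pagemap code): copy_to_user() returns the number of bytes
      it could NOT copy, so a non-zero result means -EFAULT rather than a byte count to hand
      back to the caller.

      #include <linux/types.h>
      #include <linux/uaccess.h>

      static ssize_t copy_out_example(char __user *buf, const void *kbuf, size_t len)
      {
              /* copy_to_user() returns the number of uncopied bytes, never a
               * negative error code, so the caller must translate it. */
              if (copy_to_user(buf, kbuf, len))
                      return -EFAULT;         /* some bytes were left uncopied */
              return len;                     /* everything was copied */
      }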
    • 9p: return on mutex_lock_interruptible() · 85a770a8
      Dan Carpenter committed
      If "err" is -EINTR here the original code calls mutex_unlock() and then
      returns, but it should just return directly.
      Signed-off-by: Dan Carpenter <error27@gmail.com>
      Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
      
      85a770a8
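      A minimal sketch of the error path being fixed (the helper name is illustrative):
      mutex_lock_interruptible() returns -EINTR without having taken the lock, so the caller
      must return immediately instead of unlocking a mutex it never acquired.

      #include <linux/mutex.h>

      static int locked_work_example(struct mutex *lock)
      {
              int err = mutex_lock_interruptible(lock);

              if (err)
                      return err;     /* interrupted: the mutex was never taken */

              /* ... critical section ... */

              mutex_unlock(lock);
              return 0;
      }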
    • Btrfs: add check for changed leaves in setup_leaf_for_split · 109f6aef
      Chris Mason committed
      setup_leaf_for_split needs to drop the path and search again, and has
      checks to see if the item we want to split changed size.  But it misses
      the case where the leaf changed and now has enough room for the item
      we want to insert.
      
      This adds an extra check to make sure the leaf really needs splitting
      before we call btrfs_split_leaf(), which keeps us from trying to split
      a leaf with a single item.
      
      btrfs_split_leaf() will blindly split the single-item leaf, leaving us
      with one good leaf and one empty leaf, and then a crash.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      109f6aef
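      A hedged sketch of the kind of check described above (the helper is illustrative and
      assumes btrfs's internal headers; the real patch adds the test inside
      setup_leaf_for_split): after the path is dropped and re-searched, the split is skipped
      when the leaf already has room for the new item, so a single-item leaf is never handed
      to btrfs_split_leaf().

      /* Assumes fs/btrfs/ctree.h; only the btrfs calls are real, the helper is made up. */
      static int leaf_still_needs_split(struct btrfs_root *root,
                                        struct btrfs_path *path, int ins_len)
      {
              /* If the re-searched leaf now has enough free space for the
               * item, there is nothing to split. */
              return btrfs_leaf_free_space(root, path->nodes[0]) < ins_len;
      }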
    • Btrfs: create snapshot references in same commit as snapshot · 6bdb72de
      Sage Weil committed
      This creates the reference to a new snapshot in the same commit as the
      snapshot itself.  This avoids the need for a second commit in order for a
      snapshot to be persistent, and also avoids the problem of "leaking" a
      new snapshot tree root if the host crashes before the second commit takes
      place.
      
      It is not at all clear to me why it wasn't always done this way.  If there
      is still a reason for the two-stage {create,finish}_pending_snapshots()
      approach I'm missing something!  :)
      
      I've been running this for a couple weeks under pretty heavy usage (a few
      snapshots per minute) without obvious problems.
      Signed-off-by: Sage Weil <sage@newdream.net>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      6bdb72de
    • Btrfs: fix small race with delalloc flushing waitqueue's · b5cb1600
      Josef Bacik committed
      Every time we start a new flushing thread, we initialize the waitqueue if there isn't a
      flushing thread already running.  The problem is that we check
      space_info->flushing, which is cleared right before doing a wake_up on the
      flushing waitqueue, so the waitqueue can be re-initialized in the window between
      clearing the flushing flag and calling wake_up.  This is hard to hit, but
      the code is wrong anyway, so initialize the flushing/allocating waitqueues when
      creating the space info and leave them alone.  I haven't seen the panic since I've been
      using this patch.  Thanks,
      Signed-off-by: Josef Bacik <josef@redhat.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      b5cb1600
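      A minimal sketch of the idea (the struct and function names are illustrative, not the
      btrfs code): initialize the wait queue exactly once, when the space info is created,
      rather than re-initializing it whenever a flushing thread starts, which raced with the
      clear-flag-then-wake_up sequence.

      #include <linux/wait.h>

      struct space_info_example {
              wait_queue_head_t flush_wait;   /* waiters for flushing to finish */
              int flushing;                   /* set while a flush is in progress */
      };

      static void space_info_example_init(struct space_info_example *info)
      {
              init_waitqueue_head(&info->flush_wait); /* done once, never re-initialized */
              info->flushing = 0;
      }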
    • Btrfs: use add_to_page_cache_lru, use __page_cache_alloc · 28ecb609
      Nick Piggin committed
      Pagecache pages should be allocated with __page_cache_alloc, so they
      obey pagecache memory policies.
      
      add_to_page_cache_lru is exported, so it should be used.  Benefits over
      using a private pagevec: neater code, 128 fewer bytes of stack used, percpu
      lru ordering is preserved, and there is no need to flush the pagevec
      before returning, so batching may be shared with other LRU insertions.
      
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      28ecb609
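      A minimal sketch of the preferred pattern (the helper name and gfp mask are
      illustrative; the calls are the ones named above): allocate the page with
      __page_cache_alloc() so pagecache memory policies apply, then insert it with
      add_to_page_cache_lru() instead of batching through a private pagevec.

      #include <linux/pagemap.h>

      static struct page *readahead_page_example(struct address_space *mapping,
                                                 pgoff_t index)
      {
              /* Obeys the mapping's pagecache memory policy. */
              struct page *page = __page_cache_alloc(mapping_gfp_mask(mapping));

              if (!page)
                      return NULL;

              /* Adds the page to the radix tree and to the per-cpu LRU pagevec. */
              if (add_to_page_cache_lru(page, mapping, index, GFP_NOFS)) {
                      page_cache_release(page);       /* already present or failed */
                      return NULL;
              }
              return page;
      }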
  5. 05 April 2010, 6 commits
  6. 04 April 2010, 2 commits
  7. 01 April 2010, 2 commits
  8. 31 March 2010, 12 commits
  9. 30 March 2010, 2 commits
    • include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h · 5a0e3ad6
      Tejun Heo committed
      
      percpu.h is included by sched.h and module.h and thus ends up being
      included when building most .c files.  percpu.h includes slab.h which
      in turn includes gfp.h making everything defined by the two files
      universally available and complicating inclusion dependencies.
      
      The percpu.h -> slab.h dependency is about to be removed.  Prepare for
      this change by updating users of gfp and slab facilities to include those
      headers directly instead of assuming their availability.  As this conversion
      needs to touch a large number of source files, the following script is
      used as the basis of the conversion.
      
        http://userweb.kernel.org/~tj/misc/slabh-sweep.py
      
      The script does the following.
      
      * Scan files for gfp and slab usages and update includes such that
        only the necessary includes are there, i.e. gfp.h if only gfp is
        used, and slab.h if slab is used.
      
      * When the script inserts a new include, it looks at the include
        blocks and tries to put the new include such that its order conforms
        to its surroundings.  It is put in the include block that contains
        core kernel includes, in the same order that the rest are ordered -
        alphabetical, Christmas tree, reverse-Christmas-tree, or at the end
        if there doesn't seem to be any matching order.
      
      * If the script can't find a place to put a new include (mostly
        because the file doesn't have a fitting include block), it prints out
        an error message indicating which .h file needs to be added to the
        file.
      
      The conversion was done in the following steps.
      
      1. The initial automatic conversion of all .c files updated slightly
         over 4000 files, deleting around 700 includes and adding ~480 gfp.h
         and ~3000 slab.h inclusions.  The script emitted errors for ~400
         files.
      
      2. Each error was manually checked.  Some didn't need the inclusion,
         some needed manual addition, and for others it was more appropriate
         to add it to an implementation .h or embedding .c file.  This step
         added inclusions to around 150 files.
      
      3. The script was run again and the output was compared to the edits
         from #2 to make sure no file was left behind.
      
      4. Several build tests were done and a couple of problems were fixed.
         e.g. lib/decompress_*.c used malloc/free() wrappers around slab
         APIs requiring slab.h to be added manually.
      
      5. The script was run on all .h files, but without automatically
         editing them, as sprinkling gfp.h and slab.h inclusions around .h
         files could easily lead to inclusion dependency hell.  Most gfp.h
         inclusion directives were ignored as the contents of gfp.h are
         usually widely available and often used in preprocessor macros.
         Each slab.h inclusion directive was examined and added manually as
         necessary.
      
      6. percpu.h was updated not to include slab.h.
      
      7. Build tests were done on the following configurations and failures
         were fixed.  CONFIG_GCOV_KERNEL was turned off for all tests (as my
         distributed build env didn't work with gcov compiles) and a few
         more options had to be turned off depending on the arch to make things
         build (like ipr on powerpc/64, which failed due to a missing writeq).
      
         * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
         * powerpc and powerpc64 SMP allmodconfig
         * sparc and sparc64 SMP allmodconfig
         * ia64 SMP allmodconfig
         * s390 SMP allmodconfig
         * alpha SMP allmodconfig
         * um on x86_64 SMP allmodconfig
      
      8. percpu.h modifications were reverted so that it could be applied as
         a separate patch and serve as bisection point.
      
      Given the fact that I had only a couple of failures from tests on step
      6, I'm fairly confident about the coverage of this conversion patch.
      If there is a breakage, it's likely to be something in one of the arch
      headers, which should be easily discoverable on most builds of the
      specific arch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
      5a0e3ad6
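      An example of the kind of change the sweep above produces (the file contents are
      illustrative): code that uses slab and gfp facilities now includes the headers itself
      instead of picking them up transitively through percpu.h via sched.h or module.h.

      /* Explicit includes the script would add; previously these arrived
       * implicitly through percpu.h. */
      #include <linux/slab.h>         /* kmalloc(), kfree() */
      #include <linux/gfp.h>          /* GFP_KERNEL */

      static int *alloc_counters_example(size_t n)
      {
              return kmalloc(n * sizeof(int), GFP_KERNEL);
      }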
    • ext3: fix broken handling of EXT3_STATE_NEW · de329820
      Linus Torvalds committed
      In commit 9df93939 ("ext3: Use bitops to read/modify
      EXT3_I(inode)->i_state") ext3 changed its internal 'i_state' variable to
      use bitops for its state handling.  However, unlike the corresponding ext4
      change, it didn't actually rename the field when it changed its
      semantics.
      
      As a result, an old use of 'i_state' remained in fs/ext3/ialloc.c that
      initialized the field to EXT3_STATE_NEW.  And that does not work
      _at_all_ when we're now working with individually named bits rather than
      values that get masked.  So the code tried to mark the state to be new,
      but in actual fact set the field to EXT3_STATE_JDATA.  Which makes no
      sense at all, and screws up all the code that checks whether the inode
      was newly allocated.
      
      In particular, it made the xattr code unhappy, and caused various random
      behavior, like apparently
      
      	https://bugzilla.redhat.com/show_bug.cgi?id=577911
      
      So fix the initialization, and rename the field to match ext4 so that we
      don't have this happen again.
      
      Cc: James Morris <jmorris@namei.org>
      Cc: Stephen Smalley <sds@tycho.nsa.gov>
      Cc: Daniel J Walsh <dwalsh@redhat.com>
      Cc: Eric Paris <eparis@redhat.com>
      Cc: Jan Kara <jack@suse.cz>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      de329820
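      A hedged sketch of the bug and the fix (fragments, not the complete ialloc.c context):
      after the bitops conversion, EXT3_STATE_NEW is a bit number rather than a mask, so
      assigning it as a plain value sets a different bit entirely, and the state has to be
      set through the bitop helper instead.

      /* Broken: EXT3_STATE_NEW is now a bit number, so this assignment ends up
       * setting only bit 0, which is EXT3_STATE_JDATA, not the "new inode" bit. */
      ei->i_state = EXT3_STATE_NEW;

      /* Fixed: sets exactly the "newly allocated inode" bit via the bitop helper. */
      ext3_set_inode_state(inode, EXT3_STATE_NEW);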