1. 28 Jul 2008: 6 commits
  2. 27 Jul 2008: 1 commit
  3. 29 Apr 2008: 1 commit
  4. 18 Apr 2008: 4 commits
  5. 29 Feb 2008: 1 commit
  6. 07 Feb 2008: 8 commits
    • [XFS] add __init/__exit mark to specific init/cleanup functions · de2eeea6
      Committed by Lachlan McIlroy
      SGI-PV: 971186
      SGI-Modid: xfs-linux-melb:xfs-kern:30459a
      Signed-off-by: Lachlan McIlroy <lachlan@sgi.com>
      Signed-off-by: Denis Cheng <crquan@gmail.com>
    • [XFS] kill xfs_root · cbc89dcf
      Committed by Christoph Hellwig
      The only caller (xfs_fs_fill_super) can simply call igrab on the root
      inode.
      
      SGI-PV: 971186
      SGI-Modid: xfs-linux-melb:xfs-kern:30393a
      Signed-off-by: Christoph Hellwig <hch@infradead.org>
      Signed-off-by: Lachlan McIlroy <lachlan@sgi.com>
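
      A minimal sketch of the simplified caller described above, assuming a
      generic fill_super()-style function; root_inode, error and the
      fail_unmount label are illustrative placeholders, not the actual XFS
      code:

          /* Take the dcache's reference on the already-cached root inode directly. */
          struct inode *root = igrab(root_inode);
          if (!root) {
                  error = -ENOENT;                /* root inode is being torn down */
                  goto fail_unmount;
          }

          sb->s_root = d_alloc_root(root);        /* d_alloc_root() consumes the reference */
          if (!sb->s_root) {
                  iput(root);                     /* on failure, drop the igrab() reference */
                  error = -ENOMEM;
                  goto fail_unmount;
          }
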
    • [XFS] stop updating inode->i_blocks · 222096ae
      Committed by Christoph Hellwig
      The VFS doesn't use i_blocks; it's only used by generic_fillattr and the
      generic quota code, which XFS doesn't use. In XFS there is one use, to
      check whether we have an inline or out-of-line symlink, but we can
      replace that with a check of the XFS_IFINLINE inode flag.
      
      SGI-PV: 971186
      SGI-Modid: xfs-linux-melb:xfs-kern:30391a
      Signed-off-by: Christoph Hellwig <hch@infradead.org>
      Signed-off-by: Lachlan McIlroy <lachlan@sgi.com>
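
      A short sketch of the replacement check described above; the helper
      name xfs_symlink_is_inline is hypothetical, while XFS_IFINLINE and the
      data fork flags are existing XFS definitions:

          /*
           * True if the symlink target is stored inline in the inode's data
           * fork rather than in out-of-line extents, so callers no longer
           * need to infer this from i_blocks.
           */
          static inline int xfs_symlink_is_inline(struct xfs_inode *ip)
          {
                  return (ip->i_df.if_flags & XFS_IFINLINE) != 0;
          }
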
    • [XFS] Move AIL pushing into its own thread · 249a8c11
      Committed by David Chinner
      When many hundreds to thousands of threads all try to do simultaneous
      transactions and the log is in a tail-pushing situation (i.e. full), we
      can get multiple threads walking the AIL list and contending on the AIL
      lock.
      
      The AIL push is, in effect, a simple I/O dispatch algorithm complicated by
      the ordering constraints placed on it by the transaction subsystem. It
      really does not need multiple threads to push on it - even when only a
      single CPU is pushing the AIL, it can push the I/O out far faster than
      pretty much any disk subsystem can handle.
      
      So, to avoid contention problems stemming from multiple list walkers, move
      the list walk off into another thread and simply provide a "target" to
      push to. When a thread requires a push, it sets the target and wakes the
      push thread, then goes to sleep waiting for the required amount of space
      to become available in the log.
      
      This mechanism should also be a lot fairer under heavy load as the waiters
      will queue in arrival order, rather than queuing in "who completed a push
      first" order.
      
      Also, by moving the pushing to a separate thread we can do overload
      detection and prevention more effectively, as we can keep context from
      one loop iteration to the next. That is, we can push only part of the
      list each loop and not have to loop back to the start of the list every
      time we run. This should also help by reducing the number of items we
      try to lock, and by not repeatedly trying to push items that we cannot
      move.
      
      Note that this patch is not intended to solve the inefficiencies in the
      AIL structure and the associated issues with extremely large list
      contents. That needs to be addressed separately; parallel access would
      cause problems for any new structure as well, so I'm only aiming to
      isolate the structure from unbounded parallelism here.
      
      SGI-PV: 972759
      SGI-Modid: xfs-linux-melb:xfs-kern:30371a
      Signed-off-by: David Chinner <dgc@sgi.com>
      Signed-off-by: Lachlan McIlroy <lachlan@sgi.com>
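
      The target-and-wake handoff described above can be pictured with a
      rough sketch; the structure and function names below are illustrative
      stand-ins rather than the actual xfsaild implementation, and the push
      work itself is elided:

          struct ail_pusher {
                  xfs_lsn_t               target;         /* highest LSN any waiter needs pushed to */
                  xfs_lsn_t               last_pushed;    /* where the previous walk stopped */
                  wait_queue_head_t       wait;           /* push thread sleeps here when idle */
          };

          /* Called by a transaction thread that needs log space: publish a target, wake the pusher. */
          static void ail_push_request(struct ail_pusher *p, xfs_lsn_t threshold)
          {
                  if (XFS_LSN_CMP(threshold, p->target) > 0)
                          p->target = threshold;          /* the target only ever moves forward */
                  wake_up(&p->wait);
                  /* ...the caller then sleeps until enough log space becomes free... */
          }

          /* Body of the single dedicated push thread - the only walker of the AIL list. */
          static int ail_push_thread(void *data)
          {
                  struct ail_pusher *p = data;

                  while (!kthread_should_stop()) {
                          wait_event_interruptible(p->wait,
                                  XFS_LSN_CMP(p->target, p->last_pushed) > 0 ||
                                  kthread_should_stop());
                          /*
                           * Walk a bounded chunk of the AIL, pushing items with
                           * LSNs below p->target, and remember where this pass
                           * stopped so the next pass resumes there instead of
                           * rescanning from the head of the list.
                           */
                          p->last_pushed = p->target;     /* actual push work elided */
                  }
                  return 0;
          }
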
    • [XFS] Move platform specific mount option parsing out of core XFS code · a67d7c5f
      Committed by David Chinner
      Mount option parsing is platform specific. Move it out of core code into
      the platform specific superblock operation file.
      
      SGI-PV: 971186
      SGI-Modid: xfs-linux-melb:xfs-kern:30012a
      Signed-off-by: David Chinner <dgc@sgi.com>
      Signed-off-by: Christoph Hellwig <hch@infradead.org>
      Signed-off-by: Lachlan McIlroy <lachlan@sgi.com>
    • [XFS] kill xfs_freeze. · 9909c4aa
      Committed by Christoph Hellwig
      No need to have a wrapper just to call two more functions.
      
      SGI-PV: 971186
      SGI-Modid: xfs-linux-melb:xfs-kern:29816a
      Signed-off-by: Christoph Hellwig <hch@infradead.org>
      Signed-off-by: Donald Douwsma <donaldd@sgi.com>
      Signed-off-by: Tim Shimmin <tes@sgi.com>
    • [XFS] Kill off xfs_statvfs. · 4ca488eb
      Committed by Christoph Hellwig
      We were already filling the Linux struct statfs anyway, and doing this
      trivial task directly in xfs_fs_statfs makes the code quite a bit cleaner.
      While I was at it I also moved the copying of attributes that don't
      change over the lifetime of the filesystem outside the superblock lock.
      
      xfs_fs_fill_super used to get the magic number and blocksize through
      xfs_statvfs, but assigning them directly is a lot cleaner and will save
      some stack space during mount.
      
      SGI-PV: 971186
      SGI-Modid: xfs-linux-melb:xfs-kern:29802a
      Signed-off-by: Christoph Hellwig <hch@infradead.org>
      Signed-off-by: Tim Shimmin <tes@sgi.com>
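
      As an illustration of the direct assignment mentioned above, a
      fill_super-style function can set the constant fields straight from the
      mount structure; this is a hedged sketch, with mp standing in for the
      xfs_mount pointer:

          sb->s_magic = XFS_SB_MAGIC;                     /* 0x58465342, "XFSB" */
          sb->s_blocksize = mp->m_sb.sb_blocksize;        /* block size from the on-disk superblock */
          sb->s_blocksize_bits = ffs(sb->s_blocksize) - 1;
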
    • [XFS] clean up vnode/inode tracing · cf441eeb
      Committed by Lachlan McIlroy
      Simplify vnode tracing calls by embedding function name & return addr in
      the calling macro.
      
      Also do a lot of vnode->inode renaming for consistency, while we're at it.
      
      SGI-PV: 970335
      SGI-Modid: xfs-linux-melb:xfs-kern:29650a
      Signed-off-by: Eric Sandeen <sandeen@sandeen.net>
      Signed-off-by: Lachlan McIlroy <lachlan@sgi.com>
      Signed-off-by: Tim Shimmin <tes@sgi.com>
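
      A hedged sketch of the macro pattern described above: the wrapper
      captures the caller's function name and return address so individual
      call sites don't have to pass them explicitly (the exact form here is
      illustrative):

          /* __return_address is XFS shorthand for __builtin_return_address(0). */
          #define xfs_itrace_entry(ip)    \
                  _xfs_itrace_entry(ip, __func__, (inst_t *)__return_address)
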
  7. 17 Oct 2007: 1 commit
  8. 16 Oct 2007: 13 commits
  9. 15 Oct 2007: 2 commits
  10. 18 Sep 2007: 1 commit
  11. 18 Jul 2007: 1 commit
    • Freezer: make kernel threads nonfreezable by default · 83144186
      Committed by Rafael J. Wysocki
      Currently, the freezer treats all tasks as freezable, except for the kernel
      threads that explicitly set the PF_NOFREEZE flag for themselves.  This
      approach is problematic, since it requires every kernel thread to either
      set PF_NOFREEZE explicitly, or call try_to_freeze(), even if it doesn't
      care for the freezing of tasks at all.
      
      It seems better to only require the kernel threads that want to or need to
      be frozen to use some freezer-related code and to remove any
      freezer-related code from the other (nonfreezable) kernel threads, which is
      done in this patch.
      
      The patch causes all kernel threads to be nonfreezable by default
      (i.e. to have PF_NOFREEZE set by default) and introduces the
      set_freezable() function that should be called by freezable kernel
      threads in order to unset PF_NOFREEZE.  It also makes all of the
      currently freezable kernel threads call set_freezable(), so it
      shouldn't cause any (intentional) change of behaviour.  Additionally,
      it updates the documentation to describe the freezing of tasks more
      accurately.
      
      [akpm@linux-foundation.org: build fixes]
      Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
      Acked-by: Nigel Cunningham <nigel@nigel.suspend2.net>
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: Oleg Nesterov <oleg@tv-sign.ru>
      Cc: Gautham R Shenoy <ego@in.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
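
      A minimal sketch of the resulting contract for a freezable kernel
      thread, assuming a hypothetical worker loop; set_freezable() and
      try_to_freeze() are the interfaces the message describes:

          #include <linux/freezer.h>
          #include <linux/kthread.h>

          static int my_worker_thread(void *unused)
          {
                  set_freezable();                /* clear PF_NOFREEZE: opt in to being frozen */

                  while (!kthread_should_stop()) {
                          try_to_freeze();        /* park here while tasks are being frozen */

                          /* ...do one unit of work, then sleep briefly (placeholder)... */
                          schedule_timeout_interruptible(HZ);
                  }
                  return 0;
          }
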
  12. 14 Jul 2007: 1 commit