1. 19 Aug 2013, 1 commit
  2. 06 Jun 2012, 1 commit
  3. 24 Apr 2012, 2 commits
    • GFS2: Clean up log write code path · e8c92ed7
      Authored by Steven Whitehouse
      Prior to this patch, we have two ways of sending i/o to the log.
      One of those is used when we need to allocate both the data
      to be written itself and also a buffer head to submit it. This
      is done via sb_getblk and friends. This is used mostly for writing
      log headers.
      
      The other method is used when writing blocks which have some
      in-place counterpart. This is the case for all the metadata
      blocks which are journalled, and when journaled data is in use,
      for unescaped journalled data blocks.
      
This patch replaces both of those methods, and about half
      a dozen separate i/o submission points, with a single i/o
      submission function. We also go direct to bio rather than
      using buffer heads, since this allows us to build i/o
      requests of the maximum size for the block device in
      question. It also reduces the memory required for flushing
      the log, which can be very useful in low memory situations.
      Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
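      As an illustration of the bio-based submission described above, here is a
      minimal sketch using the block-layer API of that era (roughly pre-3.14);
      the function and its callers are hypothetical, not the actual GFS2 code:

        #include <linux/fs.h>
        #include <linux/bio.h>

        /* Sketch only: build one bio for a log page and submit it directly,
         * with no buffer head in between. */
        static void example_log_submit(struct super_block *sb, u64 blkno,
                                       struct page *page, unsigned int size,
                                       bio_end_io_t *end_io)
        {
                struct bio *bio = bio_alloc(GFP_NOIO, 1);

                /* pre-3.14 field names: bi_sector/bi_bdev live on the bio */
                bio->bi_sector = blkno * (sb->s_blocksize >> 9);
                bio->bi_bdev = sb->s_bdev;
                bio->bi_end_io = end_io;
                bio_add_page(bio, page, size, 0); /* single page, cannot fail */
                submit_bio(WRITE, bio);
        }

      Because the i/o is built up page by page, requests can grow to the
      device's maximum size before submission, which is where the efficiency
      and memory savings described above come from.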
    • GFS2: Use slab for block reservation memory · 36f5580b
      Authored by Bob Peterson
      This patch changes block reservations so it uses slab storage.
      Signed-off-by: Bob Peterson <rpeterso@redhat.com>
      Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
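      In outline, the change amounts to allocating reservations from a dedicated
      slab cache instead of a general-purpose allocation; the names below are
      placeholders, not the real GFS2 identifiers:

        #include <linux/slab.h>

        struct example_rsrv {
                unsigned int rs_requested;      /* stand-in for the real fields */
        };

        static struct kmem_cache *example_rsrv_cachep;

        static int __init example_rsrv_init(void)
        {
                example_rsrv_cachep = kmem_cache_create("example_rsrv",
                                                        sizeof(struct example_rsrv),
                                                        0, 0, NULL);
                return example_rsrv_cachep ? 0 : -ENOMEM;
        }

        static struct example_rsrv *example_rsrv_get(void)
        {
                return kmem_cache_zalloc(example_rsrv_cachep, GFP_NOFS);
        }

        static void example_rsrv_put(struct example_rsrv *rs)
        {
                kmem_cache_free(example_rsrv_cachep, rs);
        }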
  4. 08 Mar 2012, 1 commit
    • GFS2: Remove a __GFP_NOFAIL allocation · 75ca61c1
      Authored by Steven Whitehouse
      In order to ensure that we've got enough buffer heads for flushing
the journal, the original code used __GFP_NOFAIL when performing
      this allocation. Here we dispense with that in favour of using a
      mempool. This should improve efficiency in low memory conditions:
      since flushing the journal is a good way to get memory back, we
      don't want to be spinning, waiting on memory allocations. The
      buffers which are allocated via this mempool are fairly short lived,
      so that we'll recycle them pretty quickly.
      
      Although there are other memory allocations which occur during the
      journal flush process, this is the one which can potentially require
the most memory, so it is the most important one to fix.
      
      The amount of memory reserved is a fixed amount, and we should not need
      to scale it when there are a greater number of filesystems in use.
      Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
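      A minimal sketch of the substitution, assuming a kmalloc-backed pool and
      an illustrative reserve size; the real patch sizes and backs its pool
      differently:

        #include <linux/mm.h>
        #include <linux/mempool.h>

        #define EXAMPLE_POOL_MIN 64     /* buffers held in reserve (illustrative) */

        static mempool_t *example_log_pool;

        static int __init example_pool_init(void)
        {
                example_log_pool = mempool_create_kmalloc_pool(EXAMPLE_POOL_MIN,
                                                               PAGE_SIZE);
                return example_log_pool ? 0 : -ENOMEM;
        }

        static void *example_log_buf_get(void)
        {
                /* may sleep, but will not fail while the reserve holds out */
                return mempool_alloc(example_log_pool, GFP_NOFS);
        }

        static void example_log_buf_put(void *buf)
        {
                mempool_free(buf, example_log_pool); /* tops up the reserve first */
        }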
  5. 11 Jan 2012, 1 commit
  6. 22 Nov 2011, 1 commit
  7. 27 Jul 2011, 1 commit
  8. 15 Jul 2011, 1 commit
    • GFS2: Cache dir hash table in a contiguous buffer · 17d539f0
      Authored by Steven Whitehouse
      This patch adds a cache for the hash table to the directory code
      in order to help simplify the way in which the hash table is
      accessed. This is intended to be a first step towards introducing
      some performance improvements in the directory code.
      
      There are two follow ups that I'm hoping to see fairly shortly. One
      is to simplify the hash table reading code now that we always read the
      complete hash table, whether we want one entry or all of them. The
      other is to introduce readahead on the heads of the hash chains
      which are referred to from the table.
      
      The hash table is a maximum of 128k in size, so it is not worth trying
      to read it in small chunks.
      Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
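      The allocation strategy that makes this practical, in sketch form (the
      real code reads the table through the inode's metadata address space;
      the helpers below only show the contiguous-buffer part, using the
      three-argument __vmalloc() of kernels of that era):

        #include <linux/slab.h>
        #include <linux/mm.h>
        #include <linux/vmalloc.h>

        /* One buffer big enough for the whole table (at most 128k), preferring
         * kmalloc and quietly falling back to vmalloc under fragmentation. */
        static void *example_hash_buf_alloc(unsigned int hsize)
        {
                void *ht = kmalloc(hsize, GFP_NOFS | __GFP_NOWARN);

                if (ht == NULL)
                        ht = __vmalloc(hsize, GFP_NOFS, PAGE_KERNEL);
                return ht;
        }

        static void example_hash_buf_free(void *ht)
        {
                if (is_vmalloc_addr(ht))
                        vfree(ht);
                else
                        kfree(ht);
        }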
  9. 26 May 2011, 1 commit
    • gfs2: Drop __TIME__ usage · 8d2c50e3
      Authored by Michal Marek
      The kernel already prints its build timestamp during boot, no need to
      repeat it in random drivers and produce different object files each
      time.
      
      Cc: Steven Whitehouse <swhiteho@redhat.com>
      Cc: cluster-devel@redhat.com
      Signed-off-by: Michal Marek <mmarek@suse.cz>
  10. 20 Apr 2011, 1 commit
    • GFS2: Optimise glock lru and end of life inodes · f42ab085
      Authored by Steven Whitehouse
      The GLF_LRU flag introduced in the previous patch can be
      used to check if a glock is on the lru list when a new
      holder is queued and if so remove it, without having first
      to get the lru_lock.
      
      The main purpose of this patch however is to optimise the
      glocks left over when an inode at end of life is being
      evicted. Previously such glocks were left with the GLF_LFLUSH
      flag set, so that when reclaimed, each one required a log flush.
      This patch resets the GLF_LFLUSH flag when there is nothing
      left to flush thus preventing later log flushes as glocks are
      reused or demoted.
      
      In order to do this, we need to keep track of the number of
      revokes which are outstanding, and also to clear the GLF_LFLUSH
      bit after a log commit when only revokes have been processed.
      Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
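      A hedged sketch of the book-keeping described in the last paragraph; the
      counter, flag number and helper are named illustratively, and the real
      accounting is spread across the log code:

        #include <linux/atomic.h>
        #include <linux/bitops.h>

        #define EXF_LFLUSH 5    /* stand-in for the GLF_LFLUSH bit number */

        struct example_glock {
                unsigned long gl_flags;
                atomic_t gl_revokes;    /* revokes not yet written to the log */
        };

        /* Called once a revoke for this glock has been committed: when the
         * last one goes, the glock no longer needs a log flush before it can
         * be reclaimed, reused or demoted. */
        static void example_revoke_done(struct example_glock *gl)
        {
                if (atomic_dec_and_test(&gl->gl_revokes))
                        clear_bit(EXF_LFLUSH, &gl->gl_flags);
        }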
  11. 24 Feb 2011, 1 commit
    • mm: prevent concurrent unmap_mapping_range() on the same inode · 2aa15890
      Authored by Miklos Szeredi
      Michael Leun reported that running parallel opens on a fuse filesystem
      can trigger a "kernel BUG at mm/truncate.c:475"
      
      Gurudas Pai reported the same bug on NFS.
      
      The reason is, unmap_mapping_range() is not prepared for more than
      one concurrent invocation per inode.  For example:
      
        thread1: going through a big range, stops in the middle of a vma and
           stores the restart address in vm_truncate_count.
      
        thread2: comes in with a small (e.g. single page) unmap request on
           the same vma, somewhere before restart_address, finds that the
           vma was already unmapped up to the restart address and happily
           returns without doing anything.
      
      Another scenario would be two big unmap requests, both having to
      restart the unmapping and each one setting vm_truncate_count to its
      own value.  This could go on forever without any of them being able to
      finish.
      
      Truncate and hole punching already serialize with i_mutex.  Other
      callers of unmap_mapping_range() do not, and it's difficult to get
      i_mutex protection for all callers.  In particular ->d_revalidate(),
      which calls invalidate_inode_pages2_range() in fuse, may be called
      with or without i_mutex.
      
      This patch adds a new mutex to 'struct address_space' to prevent
      running multiple concurrent unmap_mapping_range() on the same mapping.
      
      [ We'll hopefully get rid of all this with the upcoming mm
        preemptibility series by Peter Zijlstra, the "mm: Remove i_mmap_mutex
        lockbreak" patch in particular.  But that is for 2.6.39 ]
      Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
      Reported-by: Michael Leun <lkml20101129@newton.leun.net>
      Reported-by: Gurudas Pai <gurudas.pai@oracle.com>
      Tested-by: Gurudas Pai <gurudas.pai@oracle.com>
      Acked-by: Hugh Dickins <hughd@google.com>
      Cc: stable@kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
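      The shape of the fix, heavily simplified; the commit text above does not
      name the new mutex, so treat "unmap_mutex" and the elided body as
      illustrative assumptions:

        #include <linux/fs.h>
        #include <linux/mm.h>
        #include <linux/mutex.h>

        void unmap_mapping_range(struct address_space *mapping, loff_t holebegin,
                                 loff_t holelen, int even_cows)
        {
                mutex_lock(&mapping->unmap_mutex);   /* one walker per mapping */
                /* ... existing vma walk and zapping, including the
                 *     vm_truncate_count restart logic, unchanged ... */
                mutex_unlock(&mapping->unmap_mutex);
        }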
  12. 17 Feb 2011, 1 commit
  13. 21 Jan 2011, 1 commit
    • GFS2: Use RCU for glock hash table · bc015cb8
      Authored by Steven Whitehouse
      This has a number of advantages:
      
       - Reduces contention on the hash table lock
       - Makes the code smaller and simpler
       - Should speed up glock dumps when under load
       - Removes ref count changing in examine_bucket
       - No longer need hash chain lock in glock_put() in common case
      
      There are some further changes which this enables and which
      we may do in the future. One is to look at using SLAB_RCU,
      and another is to look at using a per-cpu counter for the
      per-sb glock counter, since that is touched twice in the
      lifetime of each glock (but only used at umount time).
      Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
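      The reader side that this enables, sketched with the four-argument
      hlist_for_each_entry_rcu() of 2.6.3x kernels; the structure and function
      names are illustrative:

        #include <linux/types.h>
        #include <linux/rculist.h>
        #include <linux/rcupdate.h>

        struct example_glock {
                struct hlist_node gl_list;
                u64 gl_number;
        };

        static struct example_glock *example_find(struct hlist_head *bucket,
                                                  u64 number)
        {
                struct example_glock *gl;
                struct hlist_node *pos;

                rcu_read_lock();
                hlist_for_each_entry_rcu(gl, pos, bucket, gl_list) {
                        if (gl->gl_number == number) {
                                /* real code takes a reference here, while
                                 * still inside the RCU read-side section */
                                rcu_read_unlock();
                                return gl;
                        }
                }
                rcu_read_unlock();
                return NULL;
        }

      No per-chain spinlock is taken on lookup, which is where the reduced
      contention and the simpler glock_put() come from.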
  14. 11 Oct 2010, 1 commit
    • workqueue: add and use WQ_MEM_RECLAIM flag · 6370a6ad
      Authored by Tejun Heo
      Add WQ_MEM_RECLAIM flag which currently maps to WQ_RESCUER, mark
      WQ_RESCUER as internal and replace all external WQ_RESCUER usages to
      WQ_MEM_RECLAIM.
      
      This makes the API users express the intent of the workqueue instead
      of indicating the internal mechanism used to guarantee forward
      progress.  This is also to make it cleaner to add more semantics to
      WQ_MEM_RECLAIM.  For example, if deemed necessary, memory reclaim
      workqueues can be made highpri.
      
      This patch doesn't introduce any functional change.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Jeff Garzik <jgarzik@pobox.com>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Steven Whitehouse <swhiteho@redhat.com>
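      Before and after at a typical callsite, sketched with an illustrative
      queue name:

        #include <linux/workqueue.h>

        static struct workqueue_struct *example_wq;

        static int __init example_wq_init(void)
        {
                /* old spelling exposed the mechanism:
                 *   example_wq = alloc_workqueue("example", WQ_RESCUER, 0);
                 * new spelling states the requirement instead: */
                example_wq = alloc_workqueue("example", WQ_MEM_RECLAIM, 0);
                return example_wq ? 0 : -ENOMEM;
        }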
  15. 20 Sep 2010, 2 commits
    • GFS2: Make . and .. qstrs constant · 8d123585
      Authored by Steven Whitehouse
      Rather than calculating the qstrs for . and .. each time
we need them, it's better to keep a constant version of
      these and just refer to them when required.
      Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@infradead.org>
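      In sketch form (GFS2's directory code uses its own hash function, so
      treat full_name_hash() and the variable names below as illustrative):

        #include <linux/dcache.h>

        static struct qstr example_qdot = { .name = ".", .len = 1 };
        static struct qstr example_qdotdot = { .name = "..", .len = 2 };

        static void __init example_qstr_init(void)
        {
                /* compute the hashes once; later lookups simply reuse them */
                example_qdot.hash = full_name_hash(example_qdot.name,
                                                   example_qdot.len);
                example_qdotdot.hash = full_name_hash(example_qdotdot.name,
                                                      example_qdotdot.len);
        }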
    • GFS2: Use new workqueue scheme · 9fa0ea9f
      Authored by Steven Whitehouse
      The recovery workqueue can be freezable since
      we want it to finish what it is doing if the system is to
      be frozen (although why you'd want to freeze a cluster node
      is beyond me since it will result in it being ejected from
      the cluster). It does still make sense for single node
      GFS2 filesystems though.
      
      The glock workqueue will benefit from being able to run more
      work items concurrently. A test running postmark shows
      improved performance and multi-threaded workloads are likely
      to benefit even more. It needs to be high priority because
      the latency directly affects the latency of filesystem glock
      operations.
      
      The delete workqueue is similar to the recovery workqueue in
      that it must not get blocked by memory allocations, and may
      run for a long time.
      
      Potentially other GFS2 threads might also be converted to
      workqueues, but I'll leave that for a later patch.
      Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
      Acked-by: Tejun Heo <tj@kernel.org>
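      The three queue types described above, sketched with the cmwq flags;
      queue and variable names are illustrative, and WQ_FREEZABLE was still
      spelled WQ_FREEZEABLE in kernels of that era:

        #include <linux/workqueue.h>

        static struct workqueue_struct *recovery_wq, *glock_wq, *delete_wq;

        static int __init example_queues_init(void)
        {
                /* recovery: freezable, rescuer-backed so it runs in reclaim */
                recovery_wq = alloc_workqueue("example_recovery",
                                              WQ_MEM_RECLAIM | WQ_FREEZABLE, 0);
                /* glock: high priority, many work items may run concurrently */
                glock_wq = alloc_workqueue("example_glock",
                                           WQ_MEM_RECLAIM | WQ_HIGHPRI, 0);
                /* delete: rescuer-backed, allowed to run for a long time */
                delete_wq = alloc_workqueue("example_delete", WQ_MEM_RECLAIM, 0);

                if (!recovery_wq || !glock_wq || !delete_wq)
                        return -ENOMEM;         /* error unwinding omitted */
                return 0;
        }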
  16. 23 Jul 2010, 1 commit
    • gfs2: use workqueue instead of slow-work · 6ecd7c2d
      Authored by Tejun Heo
      Workqueue can now handle high concurrency.  Convert gfs to use
      workqueue instead of slow-work.
      
      * Steven pointed out that recovery path might be run from allocation
        path and thus requires forward progress guarantee without memory
        allocation.  Create and use gfs_recovery_wq with rescuer.  Please
        note that forward progress wasn't guaranteed with slow-work.
      
      * Updated to use non-reentrant workqueue.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Steven Whitehouse <swhiteho@redhat.com>
  17. 29 Mar 2010, 1 commit
  18. 01 Mar 2010, 1 commit
    • GFS2: Metadata address space clean up · 009d8518
      Authored by Steven Whitehouse
      Since the start of GFS2, an "extra" inode has been used to store
      the metadata belonging to each inode. The only reason for using
      this inode was to have an extra address space, the other fields
      were unused. This means that the memory usage was rather inefficient.
      
      The reason for keeping each inode's metadata in a separate address
      space is that when glocks are requested on remote nodes, we need to
be able to efficiently locate the data and metadata which relate
      to that glock (inode) in order to sync or sync and invalidate it
      (depending on the remotely requested lock mode).
      
This patch adds a new type of glock which, in addition to
      its normal fields, has an address space. This applies to all
      inode and rgrp glocks (but to no other glock types, which remain
      as before). As a result, we no longer need to have the second
      inode.
      
      This results in three major improvements:
       1. A saving of approx 25% of memory used in caching inodes
       2. A removal of the circular dependency between inodes and glocks
       3. No confusion between "normal" and "metadata" inodes in super.c
      
      Although the first of these is the more immediately apparent, the
      second is just as important as it now enables a number of clean
      ups at umount time. Those will be the subject of future patches.
      Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
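      The resulting layout, in outline; the structure and helper names here are
      illustrative stand-ins for the real GFS2 types:

        #include <linux/kernel.h>
        #include <linux/fs.h>

        struct example_glock {
                int gl_state;           /* stand-in for the real glock fields */
        };

        /* Inode and rgrp glocks are allocated in this larger form, so the
         * pages cached under the glock hang directly off it and the extra
         * "metadata inode" is no longer needed to carry an address space. */
        struct example_glock_aspace {
                struct example_glock glock;
                struct address_space mapping;
        };

        static inline struct address_space *
        example_glock2aspace(struct example_glock *gl)
        {
                /* valid only for glock types allocated as example_glock_aspace */
                return &container_of(gl, struct example_glock_aspace, glock)->mapping;
        }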
  19. 20 Nov 2009, 1 commit
  20. 19 May 2009, 1 commit
    • GFS2: Umount recovery race fix · fe64d517
      Authored by Steven Whitehouse
      This patch fixes a race condition where we can receive recovery
      requests part way through processing a umount. This was causing
      problems since the recovery thread had already gone away.
      
      Looking in more detail at the recovery code, it was really trying
      to implement a slight variation on a work queue, and that happens to
      align nicely with the recently introduced slow-work subsystem. As a
      result I've updated the code to use slow-work, rather than its own home
      grown variety of work queue.
      
      When using the wait_on_bit() function, I noticed that the wait function
      that was supplied as an argument was appearing in the WCHAN field, so
      I've updated the function names in order to produce more meaningful
      output.
      Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
  21. 24 Mar 2009, 2 commits
    • GFS2: Merge lock_dlm module into GFS2 · f057f6cd
      Authored by Steven Whitehouse
      This is the big patch that I've been working on for some time
      now. There are many reasons for wanting to make this change
      such as:
       o Reducing overhead by eliminating duplicated fields between structures
o Simplification of the code (reduces the code size by a fair bit)
       o The locking interface is now the DLM interface itself as proposed
         some time ago.
       o Fewer lookups of glocks when processing replies from the DLM
       o Fewer memory allocations/deallocations for each glock
       o Scope to do further optimisations in the future (but this patch is
         more than big enough for now!)
      
      Please note that (a) this patch relates to the lock_dlm module and
      not the DLM itself, that is still a separate module; and (b) that
we retain the ability to build GFS2 as a standalone single node
      filesystem without requiring the DLM.
      
This patch needs a lot of testing, hence I have kept it in since I
      restarted my -git tree after the last merge window. That way, this has
      the maximum exposure before it's merged. This is (modulo a few minor
      bug fixes) the same patch that I've been posting on and off for the
      last three months and it's passed a number of different tests so far.
      Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
    • GFS2: change gfs2_quota_scan into a shrinker · 0a7ab79c
      Authored by Abhijith Das
Deallocation of gfs2_quota_data objects now happens on-demand through a
      shrinker instead of being done routinely by the quotad daemon.
      Signed-off-by: Abhijith Das <adas@redhat.com>
      Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
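      Registering a shrinker, sketched against the single-callback shrinker
      interface of 2009-era kernels; the callback body and counter are
      illustrative, and the interface has changed shape several times since:

        #include <linux/mm.h>

        static atomic_t example_qd_count = ATOMIC_INIT(0);

        static int example_shrink_qd(int nr_to_scan, gfp_t gfp_mask)
        {
                if (nr_to_scan) {
                        if (!(gfp_mask & __GFP_FS))
                                return -1;      /* cannot recurse into the fs */
                        /* ... free up to nr_to_scan least-recently-used
                         *     quota data objects here ... */
                }
                return atomic_read(&example_qd_count);  /* reclaimable objects */
        }

        static struct shrinker example_qd_shrinker = {
                .shrink = example_shrink_qd,
                .seeks = DEFAULT_SEEKS,
        };

        /* register_shrinker(&example_qd_shrinker) at init time,
         * unregister_shrinker(&example_qd_shrinker) on teardown. */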
  22. 05 Jan 2009, 4 commits
    • GFS2: Kill two daemons with one patch · 97cc1025
      Authored by Steven Whitehouse
      This patch removes the two daemons, gfs2_scand and gfs2_glockd
      and replaces them with a shrinker which is called from the VM.
      
      The net result is that GFS2 responds better when there is memory
      pressure, since it shrinks the glock cache at the same rate
      as the VFS shrinks the dcache and icache. There are no longer
      any time based criteria for shrinking glocks, they are kept
      until such time as the VM asks for more memory and then we
      demote just as many glocks as required.
      
      There are potential future changes to this code, including the
      possibility of sorting the glocks which are to be written back
      into inode number order, to get a better I/O ordering. It would
      be very useful to have an elevator based workqueue implementation
      for this, as that would automatically deal with the read I/O cases
      at the same time.
      
      This patch is my answer to Andrew Morton's remark, made during
      the initial review of GFS2, asking why GFS2 needs so many kernel
      threads, the answer being that it doesn't :-) This patch is a
      net loss of about 200 lines of code.
      Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
    • GFS2: Fix "truncate in progress" hang · 813e0c46
      Authored by Steven Whitehouse
      Following on from the recent clean up of gfs2_quotad, this patch moves
      the processing of "truncate in progress" inodes from the glock workqueue
      into gfs2_quotad. This fixes a hang due to the "truncate in progress"
      processing requiring glocks in order to complete.
      
      It might seem odd to use gfs2_quotad for this particular item, but
      we have to use a pre-existing thread since creating a thread implies
      a GFP_KERNEL memory allocation which is not allowed from the glock
      workqueue context. Of the existing threads, gfs2_logd and gfs2_recoverd
      may deadlock if used for this operation. gfs2_scand and gfs2_glockd are
      both scheduled for removal at some (hopefully not too distant) future
      point. That leaves only gfs2_quotad whose workload is generally fairly
      light and is easily adapted for this extra task.
      
      Also, as a result of this change, it opens the way for a future patch to
      make the reading of the inode's information asynchronous with respect to
      the glock workqueue, which is another improvement that has been on the list
      for some time now.
      Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
    • GFS2: Clean up & move gfs2_quotad · 37b2c837
      Authored by Steven Whitehouse
      This patch is a clean up of gfs2_quotad prior to giving it an
      extra job to do in addition to the current portfolio of updating
      the quota and statfs information from time to time.
      
      As a result it has been moved into quota.c allowing one of the
      functions it calls to be made static. Also the clean up allows
      the two existing functions to have separate timeouts and also
      to coexist with its future role of dealing with the "truncate in
      progress" inode flag.
      
      The (pointless) setting of gfs2_quotad_secs is removed since we
      arrange to only wake up quotad when one of the two timers expires.
      
      In addition the struct gfs2_quota_data is moved into a slab cache,
      mainly for easier debugging. It should also be possible to use
      a shrinker in the future, rather than the current scheme of scanning
      the quota data entries from time to time.
      Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
    • GFS2: Rationalise header files · b2760583
      Authored by Steven Whitehouse
      Move the contents of some headers which contained very
      little into more sensible places, and remove the original
      header files. This should make it easier to find things.
      Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
  23. 27 Jul 2008, 1 commit
  24. 27 Jun 2008, 1 commit
    • [GFS2] Clean up the glock core · 6802e340
      Authored by Steven Whitehouse
      This patch implements a number of cleanups to the core of the
      GFS2 glock code. As a result a lot of code is removed. It looks
      like a really big change, but actually a large part of this patch
      is either removing or moving existing code.
      
      There are some new bits too though, such as the new run_queue()
      function which is considerably streamlined. Highlights of this
      patch include:
      
       o Fixes a cluster coherency bug during SH -> EX lock conversions
       o Removes the "glmutex" code in favour of a single bit lock
       o Removes the ->go_xmote_bh() for inodes since it was duplicating
         ->go_lock()
       o We now only use the ->lm_lock() function for both locks and
         unlocks (i.e. unlock is a lock with target mode LM_ST_UNLOCKED)
o The fast path is considerably shorter, giving performance gains
         especially with lock_nolock
       o The glock_workqueue is now used for all the callbacks from the DLM
         which allows us to simplify the lock_dlm module (see following patch)
       o The way is now open to make further changes such as eliminating the two
         threads (gfs2_glockd and gfs2_scand) in favour of a more efficient
         scheme.
      
      This patch has undergone extensive testing with various test suites
      so it should be pretty stable by now.
      Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
      Cc: Bob Peterson <rpeterso@redhat.com>
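      A hedged sketch of the "single bit lock" that replaces glmutex: one bit
      of the glock's flag word is taken with the locked bitops, and waiters
      sleep via the wait_on_bit machinery (bit number and helper names are
      illustrative):

        #include <linux/bitops.h>
        #include <linux/wait.h>

        #define EXF_LOCK 0      /* bit within gl_flags acting as the lock */

        static int example_glock_trylock(unsigned long *gl_flags)
        {
                return !test_and_set_bit_lock(EXF_LOCK, gl_flags);
        }

        static void example_glock_unlock(unsigned long *gl_flags)
        {
                clear_bit_unlock(EXF_LOCK, gl_flags);
                smp_mb__after_clear_bit();       /* pairs with the waiter's check */
                wake_up_bit(gl_flags, EXF_LOCK); /* wake anyone in wait_on_bit() */
        }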
  25. 31 Mar 2008, 1 commit
  26. 25 Jan 2008, 3 commits
    • [GFS2] Remove unneeded i_spin · 598278bd
      Authored by Bob Peterson
      This patch removes a vestigial variable "i_spin" from the gfs2_inode
      structure.  This not only saves us memory (>300000 of these in memory
      for the oom test) it also saves us time because we don't have to
      spend time initializing it (i.e. slightly better performance).
      Signed-off-by: Bob Peterson <rpeterso@redhat.com>
      Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
    • [GFS2] Reduce inode size by moving i_alloc out of line · 6dbd8224
      Authored by Steven Whitehouse
      It is possible to reduce the size of GFS2 inodes by taking the i_alloc
      structure out of the gfs2_inode. This patch allocates the i_alloc
structure whenever it's needed, and frees it afterward. This decreases
      the amount of low memory we use at the expense of requiring a memory
      allocation for each page or partial page that we write. A quick test
      with postmark shows that the overhead is not measurable and I also note
that OCFS2 uses the same approach.
      
      In the future I'd like to solve the problem by shrinking down the size
      of the members of the i_alloc structure, but for now, this reduces the
      immediate problem of using too much low-memory on x86 and doesn't add
      too much overhead.
      Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
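      The on-demand pattern in sketch form; the structure and helpers are
      hypothetical names, not the GFS2 ones:

        #include <linux/slab.h>

        struct example_alloc {
                unsigned int al_requested;  /* blocks requested for this op */
        };

        /* Allocated only for the duration of a write that needs it, instead
         * of being embedded in every in-core inode. */
        static struct example_alloc *example_alloc_get(void)
        {
                return kzalloc(sizeof(struct example_alloc), GFP_NOFS);
        }

        static void example_alloc_put(struct example_alloc *al)
        {
                kfree(al);
        }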
    • [GFS2] Remove useless i_cache from inodes · f91a0d3e
      Authored by Steven Whitehouse
      The i_cache was designed to keep references to the indirect blocks
      used during block mapping so that they didn't have to be looked
      up continually. The idea failed because there are too many places
      where the i_cache needs to be freed, and this has in the past been
      the cause of many bugs.
      
      In addition there was no performance benefit being gained since the
      disk blocks in question were cached anyway. So this patch removes
      it in order to simplify the code to prepare for other changes which
      would otherwise have had to add further support for this feature.
      Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
  27. 17 Oct 2007, 1 commit
  28. 10 Oct 2007, 1 commit
  29. 20 Jul 2007, 1 commit
    • mm: Remove slab destructors from kmem_cache_create(). · 20c2df83
      Authored by Paul Mundt
      Slab destructors were no longer supported after Christoph's
      c59def9f change. They've been
      BUGs for both slab and slub, and slob never supported them
      either.
      
      This rips out support for the dtor pointer from kmem_cache_create()
      completely and fixes up every single callsite in the kernel (there were
      about 224, not including the slab allocator definitions themselves,
      or the documentation references).
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
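      What a typical callsite change looks like, sketched with an illustrative
      cache (constructor prototype shown as it was in 2007, before it was later
      slimmed down as well):

        #include <linux/slab.h>

        struct example_obj {
                int field;
        };

        static void example_ctor(void *obj, struct kmem_cache *cachep,
                                 unsigned long flags)
        {
                /* constructor body unchanged by this patch */
        }

        static struct kmem_cache *example_cachep;

        static int __init example_cache_init(void)
        {
                /* before: kmem_cache_create("example", sizeof(struct example_obj),
                 *                           0, SLAB_HWCACHE_ALIGN, example_ctor,
                 *                           NULL);
                 * after, the trailing destructor argument is simply gone: */
                example_cachep = kmem_cache_create("example",
                                                   sizeof(struct example_obj), 0,
                                                   SLAB_HWCACHE_ALIGN, example_ctor);
                return example_cachep ? 0 : -ENOMEM;
        }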
  30. 17 May 2007, 1 commit
    • Remove SLAB_CTOR_CONSTRUCTOR · a35afb83
      Authored by Christoph Lameter
      SLAB_CTOR_CONSTRUCTOR is always specified. No point in checking it.
      Signed-off-by: NChristoph Lameter <clameter@sgi.com>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Jens Axboe <jens.axboe@oracle.com>
      Cc: Steven French <sfrench@us.ibm.com>
      Cc: Michael Halcrow <mhalcrow@us.ibm.com>
      Cc: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
      Cc: Miklos Szeredi <miklos@szeredi.hu>
      Cc: Steven Whitehouse <swhiteho@redhat.com>
      Cc: Roman Zippel <zippel@linux-m68k.org>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Dave Kleikamp <shaggy@austin.ibm.com>
      Cc: Trond Myklebust <trond.myklebust@fys.uio.no>
      Cc: "J. Bruce Fields" <bfields@fieldses.org>
      Cc: Anton Altaparmakov <aia21@cantab.net>
      Cc: Mark Fasheh <mark.fasheh@oracle.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Jan Kara <jack@ucw.cz>
      Cc: David Chinner <dgc@sgi.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
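      The constructor-side change, sketched with illustrative names and the
      constructor prototype of 2007-era kernels:

        #include <linux/slab.h>

        struct example_inode_obj {
                int initialised;
        };

        static void example_init_once(void *obj, struct kmem_cache *cachep,
                                      unsigned long flags)
        {
                struct example_inode_obj *ei = obj;

                /* before: the body was wrapped in
                 *   if (flags & SLAB_CTOR_CONSTRUCTOR) { ... }
                 * after: the flag is gone and the object is always set up */
                ei->initialised = 1;
        }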
  31. 08 May 2007, 1 commit
    • slab allocators: Remove SLAB_DEBUG_INITIAL flag · 50953fe9
      Authored by Christoph Lameter
      I have never seen a use of SLAB_DEBUG_INITIAL.  It is only supported by
      SLAB.
      
      I think its purpose was to have a callback after an object has been freed
      to verify that the state is the constructor state again?  The callback is
      performed before each freeing of an object.
      
      I would think that it is much easier to check the object state manually
before the free.  That also places the check near the code that
      manipulates the object.
      
      Also the SLAB_DEBUG_INITIAL callback is only performed if the kernel was
      compiled with SLAB debugging on.  If there would be code in a constructor
      handling SLAB_DEBUG_INITIAL then it would have to be conditional on
      SLAB_DEBUG otherwise it would just be dead code.  But there is no such code
in the kernel.  I think SLAB_DEBUG_INITIAL is too problematic to make real
      use of, difficult to understand and there are easier ways to accomplish the
      same effect (i.e.  add debug code before kfree).
      
      There is a related flag SLAB_CTOR_VERIFY that is frequently checked to be
      clear in fs inode caches.  Remove the pointless checks (they would even be
pointless without removal of SLAB_DEBUG_INITIAL) from the fs constructors.
      
      This is the last slab flag that SLUB did not support.  Remove the check for
      unimplemented flags from SLUB.
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  32. 01 May 2007, 1 commit
    • [GFS2] Fix bz 224480 and cleanup glock demotion code · 3b8249f6
      Authored by Steven Whitehouse
      This patch prevents the printing of a warning message in cases where
      the fs is functioning normally by handing off responsibility for
      unlinked, but still open inodes, to another node for eventual deallocation.
      Also, there is now an improved system for ensuring that such requests
      to other nodes do not get lost. The callback on the iopen lock is
      only ever called when i_nlink == 0 and when a node is unable to deallocate
      it due to it still being in use on another node. When a node receives
      the callback therefore, it knows that i_nlink must be zero, so we mark
      it as such (in gfs2_drop_inode) in order that it will then attempt
      deallocation of the inode itself.
      
      As an additional benefit, queuing a demote request no longer requires
      a memory allocation. This simplifies the code for dealing with gfs2_holders
      as it removes one special case.
      
      There are two new fields in struct gfs2_glock. gl_demote_state is the
      state which the remote node has requested and gl_demote_time is the
      time when the request came in. Both fields are only valid when the
      GLF_DEMOTE flag is set in gl_flags.
      Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
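      In outline, the two new fields read like this (field names as given
      above; the types shown and the surrounding members are illustrative):

        struct gfs2_glock {
                unsigned long gl_flags;       /* includes the GLF_DEMOTE bit */
                unsigned int gl_demote_state; /* state the remote node asked for */
                unsigned long gl_demote_time; /* jiffies when the request arrived */
                /* ... remaining glock fields ... */
        };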