1. 10 Aug 2010, 2 commits
  2. 19 May 2010, 1 commit
    • T
      Ocfs2: Optimize ocfs2 truncate to use ocfs2_remove_btree_range() instead. · 78f94673
      Committed by Tristan Ye
      Truncate is just a special case of punching holes(from new i_size to
      end), we therefore could take advantage of the existing
      ocfs2_remove_btree_range() to reduce the complexity and redundancy in
      alloc.c.  The goal here is to make truncate more generic and
      straightforward.
      
      Several functions only used by ocfs2_commit_truncate() will simply be
      removed.
      
      ocfs2_remove_btree_range() was originally used by the hole punching
      code, which didn't take refcount trees into account (definitely a bug).
      We therefore need to change that func a bit to handle refcount trees.
      It must take the refcount lock, calculate and reserve blocks for
      refcount tree changes, and decrease refcounts at the end.  We replace 
      ocfs2_lock_allocators() here by adding a new func
      ocfs2_reserve_blocks_for_rec_trunc() which accepts some extra blocks to
      reserve.  This will not hurt any other code using
      ocfs2_remove_btree_range() (such as dir truncate and hole punching).
      
      I merged the following steps into one patch since they are logically
      doing one thing, though I know it makes the patch a bit large to
      review.
      
      1). Remove redundant code used by ocfs2_commit_truncate(), since we're
          moving to ocfs2_remove_btree_range anyway.
      
      2). Add a new func ocfs2_reserve_blocks_for_rec_trunc() for purpose of
          accepting some extra blocks to reserve.
      
      3). Change ocfs2_prepare_refcount_change_for_del() a bit to fit our
          needs.  It's safe to do this since it's only being called by
          truncate.
      
      4). Change ocfs2_remove_btree_range() a bit to take refcount case into
          account.
      
      5). Finally, we change ocfs2_commit_truncate() to call
          ocfs2_remove_btree_range() in a proper way.
      
      The patch has been run through basic sanity testing; stress tests
      with a heavier workload are still to come.
      
      Based on this patch, fixing the punching holes bug will be fairly easy.
      Signed-off-by: Tristan Ye <tristan.ye@oracle.com>
      Acked-by: Mark Fasheh <mfasheh@suse.com>
      Signed-off-by: Joel Becker <joel.becker@oracle.com>
      78f94673
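      The commit above treats truncate as a hole punch over the extent tree.
      Below is a minimal, self-contained C sketch of that idea; the helper
      name, cluster size, and byte values are illustrative assumptions, not
      the actual fs/ocfs2/alloc.c code.

        #include <stdint.h>
        #include <stdio.h>

        #define CLUSTER_SHIFT 20                /* assume 1MB clusters */

        /* round a byte count up to whole clusters, as an allocator would */
        static uint32_t clusters_for_bytes(uint64_t bytes)
        {
                return (uint32_t)((bytes + (1ULL << CLUSTER_SHIFT) - 1)
                                  >> CLUSTER_SHIFT);
        }

        /* stand-in for ocfs2_remove_btree_range(); only reports the range */
        static void remove_btree_range(uint32_t cpos, uint32_t len)
        {
                printf("remove extents: cpos=%u, len=%u clusters\n", cpos, len);
        }

        int main(void)
        {
                uint64_t old_size = 10ULL << 20;   /* 10MB currently allocated */
                uint64_t new_size = 3ULL << 20;    /* truncating down to 3MB   */

                uint32_t first_removed = clusters_for_bytes(new_size);
                uint32_t total         = clusters_for_bytes(old_size);

                /* truncate is just "punch a hole from new i_size to the end" */
                if (total > first_removed)
                        remove_btree_range(first_removed, total - first_removed);
                return 0;
        }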
  3. 11 May 2010, 1 commit
    • J
      ocfs2: Wrap signal blocking in void functions. · e4b963f1
      Committed by Joel Becker
      ocfs2 sometimes needs to block signals around dlm operations, but it
      currently does it with sigprocmask().  Even worse, it's checking the
      error code of sigprocmask().  The in-kernel sigprocmask() can only error
      if you get the SIG_* argument wrong.  We don't.
      
      Wrap the sigprocmask() calls with ocfs2_[un]block_signals().  These
      functions are void, but they will BUG() if somehow sigprocmask() returns
      an error.
      Signed-off-by: Joel Becker <joel.becker@oracle.com>
      e4b963f1
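      A small userspace model of the ocfs2_[un]block_signals() idea from the
      commit above: void wrappers around sigprocmask() that treat failure as
      a programming error.  The kernel versions BUG(); this sketch abort()s.
      It is illustrative only, not the ocfs2 code.

        #include <signal.h>
        #include <stdio.h>
        #include <stdlib.h>

        static void block_signals(sigset_t *oldset)
        {
                sigset_t blocked;

                sigfillset(&blocked);
                /* sigprocmask() can only fail on a bogus 'how' argument */
                if (sigprocmask(SIG_BLOCK, &blocked, oldset))
                        abort();
        }

        static void unblock_signals(const sigset_t *oldset)
        {
                if (sigprocmask(SIG_SETMASK, oldset, NULL))
                        abort();
        }

        int main(void)
        {
                sigset_t saved;

                block_signals(&saved);
                puts("signals blocked around the critical section");
                unblock_signals(&saved);
                return 0;
        }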
  4. 06 May 2010, 3 commits
    • M
      ocfs2: use allocation reservations for directory data · e3b4a97d
      Committed by Mark Fasheh
      Use the reservations system for unindexed dir tree allocations. We don't
      bother with the indexed tree as reads from it are mostly random anyway.
      Directory reservations are marked separately, to allow the reservations code
      a chance to optimize their window sizes. This patch allocates only 8 bits
      for directory windows as they generally are not expected to grow as quickly
      as file data. Future improvements to dir window sizing can trivially be
      made.
      Signed-off-by: Mark Fasheh <mfasheh@suse.com>
      e3b4a97d
    • M
      ocfs2: use allocation reservations during file write · 4fe370af
      Committed by Mark Fasheh
      Add a per-inode reservations structure and pass it through to the
      reservations code.
      Signed-off-by: Mark Fasheh <mfasheh@suse.com>
      4fe370af
    • J
      ocfs2: Make ocfs2_journal_dirty() void. · ec20cec7
      Committed by Joel Becker
      jbd[2]_journal_dirty_metadata() only returns 0.  It's been returning 0
      since before the kernel moved to git.  There is no point in checking
      this error.
      
      ocfs2_journal_dirty() has been faithfully returning the status since the
      beginning.  All over ocfs2, we have blocks of code checking this can't-fail
      status.  In the past few years, we've tried to avoid adding these
      checks, because they are pointless.  But anyone who looks at our code
      assumes they are needed.
      
      Finally, ocfs2_journal_dirty() is made a void function.  All error
      checking is removed from other files.  We'll BUG_ON() the status of
      jbd2_journal_dirty_metadata() just in case they change it someday.  They
      won't.
      Signed-off-by: Joel Becker <joel.becker@oracle.com>
      ec20cec7
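      A compact model of the change in the commit above: the journal-dirty
      wrapper becomes void and asserts its can't-fail status internally, so
      call sites stop carrying pointless error paths.  The jbd2 call is a
      stub here; this is not the real ocfs2 or jbd2 code.

        #include <assert.h>
        #include <stdio.h>

        struct handle      { int id; };
        struct buffer_head { int blocknr; };

        /* stub standing in for jbd2_journal_dirty_metadata(); always 0 */
        static int dirty_metadata_stub(struct handle *h, struct buffer_head *bh)
        {
                (void)h; (void)bh;
                return 0;
        }

        /* void wrapper: the status check lives here, once, not at call sites */
        static void journal_dirty(struct handle *h, struct buffer_head *bh)
        {
                int status = dirty_metadata_stub(h, bh);

                assert(status == 0);   /* the kernel version would BUG_ON() */
        }

        int main(void)
        {
                struct handle h = { 1 };
                struct buffer_head bh = { 42 };

                journal_dirty(&h, &bh);
                puts("metadata dirtied; no error handling at the call site");
                return 0;
        }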
  5. 04 May 2010, 1 commit
  6. 24 Apr 2010, 2 commits
  7. 30 Mar 2010, 1 commit
    • T
      include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h · 5a0e3ad6
      Committed by Tejun Heo
      percpu.h is included by sched.h and module.h and thus ends up being
      included when building most .c files.  percpu.h includes slab.h which
      in turn includes gfp.h making everything defined by the two files
      universally available and complicating inclusion dependencies.
      
      The percpu.h -> slab.h dependency is about to be removed.  Prepare for
      this change by updating users of gfp and slab facilities to include those
      headers directly instead of assuming availability.  As this conversion
      needs to touch a large number of source files, the following script is
      used as the basis of the conversion.
      
        http://userweb.kernel.org/~tj/misc/slabh-sweep.py
      
      The script does the following:
      
      * Scan files for gfp and slab usages and update includes such that
        only the necessary includes are there, i.e. if only gfp is used,
        gfp.h; if slab is used, slab.h.
      
      * When the script inserts a new include, it looks at the include
        blocks and tries to put the new include so that its order conforms
        to its surroundings.  It's put in the include block which contains
        core kernel includes, in the same order that the rest are ordered -
        alphabetical, Christmas tree, reverse Christmas tree, or at the end
        if there doesn't seem to be any matching order.
      
      * If the script can't find a place to put a new include (mostly
        because the file doesn't have a fitting include block), it prints out
        an error message indicating which .h file needs to be added to the
        file.
      
      The conversion was done in the following steps.
      
      1. The initial automatic conversion of all .c files updated slightly
         over 4000 files, deleting around 700 includes and adding ~480 gfp.h
         and ~3000 slab.h inclusions.  The script emitted errors for ~400
         files.
      
      2. Each error was manually checked.  Some didn't need the inclusion,
         some needed manual addition, and for others adding it to an
         implementation .h or the embedding .c file was more appropriate.
         This step added inclusions to around 150 files.
      
      3. The script was run again and the output was compared to the edits
         from #2 to make sure no file was left behind.
      
      4. Several build tests were done and a couple of problems were fixed.
         e.g. lib/decompress_*.c used malloc/free() wrappers around slab
         APIs requiring slab.h to be added manually.
      
      5. The script was run on all .h files but without automatically
         editing them as sprinkling gfp.h and slab.h inclusions around .h
         files could easily lead to inclusion dependency hell.  Most gfp.h
         inclusion directives were ignored as stuff from gfp.h was usually
         widely available and often used in preprocessor macros.  Each
         slab.h inclusion directive was examined and added manually as
         necessary.
      
      6. percpu.h was updated not to include slab.h.
      
      7. Build tests were done on the following configurations and failures
         were fixed.  CONFIG_GCOV_KERNEL was turned off for all tests (as my
         distributed build env didn't work with gcov compiles) and a few
         more options had to be turned off depending on archs to make things
         build (like ipr on powerpc/64 which failed due to missing writeq).
      
         * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
         * powerpc and powerpc64 SMP allmodconfig
         * sparc and sparc64 SMP allmodconfig
         * ia64 SMP allmodconfig
         * s390 SMP allmodconfig
         * alpha SMP allmodconfig
         * um on x86_64 SMP allmodconfig
      
      8. percpu.h modifications were reverted so that it could be applied as
         a separate patch and serve as bisection point.
      
      Given the fact that I had only a couple of failures from tests on step
      6, I'm fairly confident about the coverage of this conversion patch.
      If there is a breakage, it's likely to be something in one of the arch
      headers which should be easily discoverable on most builds of
      the specific arch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
      5a0e3ad6
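      Sketched below is what the conversion typically does to a single .c
      file: gfp.h and slab.h are included explicitly instead of arriving
      implicitly through percpu.h.  The file and module are hypothetical and
      the snippet assumes the usual in-kernel build environment; it is not
      taken from the patch itself.

        /* example_driver.c, a hypothetical file used only for illustration */
        #include <linux/module.h>
        #include <linux/gfp.h>     /* added: GFP_KERNEL and friends            */
        #include <linux/slab.h>    /* added: kmalloc()/kfree() callers need it */
        #include <linux/percpu.h>  /* no longer drags slab.h in after the series */

        static void *example_buf;

        static int __init example_init(void)
        {
                /* kmalloc() is declared in slab.h; GFP_KERNEL comes from gfp.h */
                example_buf = kmalloc(128, GFP_KERNEL);
                return example_buf ? 0 : -ENOMEM;
        }

        static void __exit example_exit(void)
        {
                kfree(example_buf);
        }

        module_init(example_init);
        module_exit(example_exit);
        MODULE_LICENSE("GPL");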
  8. 24 Mar 2010, 1 commit
    • T
      Ocfs2: Handle deletion of reflinked orphan inodes correctly. · b54c2ca4
      Committed by Tristan Ye
      The rule is that all inodes in the orphan dir have ORPHANED_FL;
      otherwise we treat it as an ERROR.  This rule works well except
      for some rare cases of the reflink operation:
      
      http://oss.oracle.com/bugzilla/show_bug.cgi?id=1215
      
      The problem is caused by how reflink and our orphan_scan thread
      interact.
      
       * The orphan scan pulls the orphans into a queue first, then runs the
         queue at a later time.  We only hold the orphan_dir's lock
         during scanning.
      
       * Reflink creates an orphaned target in the orphan_dir as its first step.
         It removes the target and clears the flag as the final step.
         These two steps take the orphan_dir's lock, but it is not held for
         the duration.
      
      Based on the above semantics, a reflink inode can be moved out of the
      orphan dir and have its ORPHANED_FL cleared before the queue of orphans
      is run.  This leads to an ERROR in ocfs2_query_wipe_inode().
      
      This patch teaches ocfs2_query_wipe_inode() to detect previously
      orphaned reflink targets.  If a reflink fails or a crash occurs during
      the reflink operation, the inode will retain ORPHANED_FL and will be
      properly wiped.
      Signed-off-by: Tristan Ye <tristan.ye@oracle.com>
      Signed-off-by: Joel Becker <joel.becker@oracle.com>
      b54c2ca4
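      The race described above boils down to a decision: an inode found by
      the orphan scan may legitimately have lost ORPHANED_FL if it was a
      reflink target that completed in the meantime.  The self-contained C
      model below illustrates only that decision; the structure and function
      are hypothetical, not the real ocfs2_query_wipe_inode().

        #include <stdbool.h>
        #include <stdio.h>

        struct scanned_inode {
                bool orphaned_flag;       /* ORPHANED_FL still set on disk        */
                bool was_reflink_target;  /* created in the orphan dir by reflink */
        };

        /* true: wipe the inode; false: skip it without flagging an error */
        static bool should_wipe(const struct scanned_inode *ino)
        {
                if (ino->orphaned_flag)
                        return true;             /* ordinary orphan, wipe it   */
                if (ino->was_reflink_target)
                        return false;            /* reflink finished, no error */
                fprintf(stderr, "unexpected non-orphaned inode in orphan dir\n");
                return false;
        }

        int main(void)
        {
                struct scanned_inode finished_reflink = { false, true };
                struct scanned_inode real_orphan      = { true,  false };

                printf("finished reflink target: wipe=%d\n", should_wipe(&finished_reflink));
                printf("ordinary orphan:         wipe=%d\n", should_wipe(&real_orphan));
                return 0;
        }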
  9. 05 Mar 2010, 5 commits
    • C
      dquot: cleanup dquot initialize routine · 871a2931
      Committed by Christoph Hellwig
      Get rid of the initialize dquot operation - it is now always called from
      the filesystem, and if a filesystem really needs its own (which none
      currently does) it can just call into its own routine directly.
      
      Rename the now static low-level dquot_initialize helper to __dquot_initialize
      and vfs_dq_init to dquot_initialize to have a consistent namespace.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jan Kara <jack@suse.cz>
      871a2931
    • C
      dquot: move dquot initialization responsibility into the filesystem · 907f4554
      Committed by Christoph Hellwig
      Currently various places in the VFS call vfs_dq_init directly.  This means
      we tie the quota code into the VFS.  Get rid of that and make the
      filesystem responsible for the initialization.  For most metadata operations
      this is a straightforward move into the methods, but for truncate and
      open it's a bit more complicated.
      
      For truncate we currently only call vfs_dq_init for the sys_truncate case
      because open already takes care of it for ftruncate and open(O_TRUNC) - the
      new code causes an additional vfs_dq_init for those which is harmless.
      
      For open the initialization is moved from do_filp_open into the open method,
      which means it happens slightly earlier now, and only for regular files.
      The latter is fine because we don't need to initialize it for operations
      on special files, and we already do it as part of the namespace operations
      for directories.
      
      Add a dquot_file_open helper that filesystems that support generic quotas
      can use to fill in ->open.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jan Kara <jack@suse.cz>
      907f4554
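      As a sketch of what "moving the responsibility into the filesystem"
      looks like for a filesystem using generic quotas, the fragment below
      wires dquot_file_open() into ->open and calls dquot_initialize() from a
      metadata operation.  "examplefs" is hypothetical and the snippet
      assumes the in-kernel headers of that era; only the two quota helpers
      are taken from the commit message.

        #include <linux/fs.h>
        #include <linux/quotaops.h>

        static int examplefs_file_open(struct inode *inode, struct file *filp)
        {
                /* generic helper intended for ->open of regular files */
                return dquot_file_open(inode, filp);
        }

        static int examplefs_create(struct inode *dir, struct dentry *dentry,
                                    int mode, struct nameidata *nd)
        {
                /* the filesystem, not the VFS, now initializes quotas first */
                dquot_initialize(dir);
                /* ... allocate the inode, charge it, add the directory entry ... */
                return 0;
        }

        const struct file_operations examplefs_file_ops = {
                .open = examplefs_file_open,
                /* ... */
        };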
    • C
      dquot: cleanup dquot drop routine · 9f754758
      Committed by Christoph Hellwig
      Get rid of the drop dquot operation - it is now always called from
      the filesystem, and if a filesystem really needs its own (which none
      currently does) it can just call into its own routine directly.
      
      Rename the now static low-level dquot_drop helper to __dquot_drop
      and vfs_dq_drop to dquot_drop to have a consistent namespace.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jan Kara <jack@suse.cz>
      9f754758
    • C
      dquot: move dquot drop responsibility into the filesystem · 257ba15c
      Committed by Christoph Hellwig
      Currently clear_inode calls vfs_dq_drop directly.  This means
      we tie the quota code into the VFS.  Get rid of that and make the
      filesystem responsible for the drop inside the ->clear_inode
      superblock operation.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jan Kara <jack@suse.cz>
      257ba15c
    • C
      dquot: cleanup inode allocation / freeing routines · 63936dda
      Committed by Christoph Hellwig
      Get rid of the alloc_inode and free_inode dquot operations - they are
      always called from the filesystem, and if a filesystem really needs
      its own (which none currently does) it can just call into its
      own routine directly.
      
      Also get rid of the vfs_dq_alloc/vfs_dq_free wrappers and always
      call the low-level dquot_alloc_inode / dquot_free_inode routines
      directly, which now lose the number argument that was always 1.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jan Kara <jack@suse.cz>
      63936dda
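      A short sketch of the new call sites: the low-level helpers are called
      directly and no longer take an inode count, since it was always 1.  The
      "examplefs" wrappers are hypothetical; only dquot_alloc_inode() and
      dquot_free_inode() come from the commit message.

        #include <linux/fs.h>
        #include <linux/quotaops.h>

        static int examplefs_charge_new_inode(struct inode *inode)
        {
                /* was vfs_dq_alloc_inode(inode) through the dquot_operations table */
                return dquot_alloc_inode(inode);
        }

        static void examplefs_uncharge_inode(struct inode *inode)
        {
                /* was vfs_dq_free_inode(inode) */
                dquot_free_inode(inode);
        }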
  10. 26 Jan 2010, 1 commit
  11. 23 Sep 2009, 1 commit
    • T
      ocfs2: Call refcount tree remove process properly. · 8b2c0dba
      Committed by Tao Ma
      Now with xattr refcount support, we need to check whether any xattrs
      are refcounted before we remove the refcount tree.
      
      Now the mechanism is:
      1) Check whether i_clusters == 0; if not, exit.
      2) Check whether we have i_xattr_loc in the dinode; if yes, exit.
      3) Check whether we have inline xattrs stored outside; if yes, exit.
      4) Remove the tree.
      Signed-off-by: Tao Ma <tao.ma@oracle.com>
      8b2c0dba
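      The four checks above can be read as a simple gate before removing the
      tree.  The self-contained C model below mirrors that order; the field
      names are simplified stand-ins, not the on-disk ocfs2 dinode layout.

        #include <stdbool.h>
        #include <stdio.h>

        struct dinode_view {
                unsigned int i_clusters;            /* data clusters still allocated   */
                unsigned long long i_xattr_loc;     /* external xattr block, 0 if none */
                bool inline_xattr_stored_outside;   /* inline xattr with outside data  */
        };

        static bool can_remove_refcount_tree(const struct dinode_view *di)
        {
                if (di->i_clusters != 0)
                        return false;        /* 1) data still referenced        */
                if (di->i_xattr_loc != 0)
                        return false;        /* 2) external xattr block present */
                if (di->inline_xattr_stored_outside)
                        return false;        /* 3) inline xattr stored outside  */
                return true;                 /* 4) safe to remove the tree      */
        }

        int main(void)
        {
                struct dinode_view di = { 0, 0, false };

                printf("remove refcount tree: %s\n",
                       can_remove_refcount_tree(&di) ? "yes" : "no");
                return 0;
        }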
  12. 05 Sep 2009, 7 commits
  13. 23 Jun 2009, 1 commit
  14. 04 Apr 2009, 4 commits
    • W
      ocfs2: fix rare stale inode errors when exporting via nfs · 6ca497a8
      Committed by Wengang Wang
      For NFS exporting, ocfs2_get_dentry() returns the dentry for an fh.
      ocfs2_get_dentry() may read from disk when the inode is not in memory,
      without any cross-cluster lock.  This leads to the filesystem loading a
      stale inode.
      
      This patch fixes the above problem.
      
      The solution is that, when the inode is not in memory, we take the cluster
      lock (PR) of the alloc inode that the inode in question was allocated from
      (this forces the node on which the deletion is done to sync the alloc inode)
      before reading out the inode itself.  Then we check the bitmap in the group
      (that the inode in question was allocated from) to see if the bit is clear.
      If it's clear then the inode is stale.  If the bit is set, we then check the
      generation as the existing code does.
      
      We have to read out the inode in question from disk first to know its alloc
      slot and alloc bit.  And if it's not stale we read it out using ocfs2_iget().
      The second read should then be from cache.
      
      We also have to add a per-superblock nfs_sync_lock to cover the lock for the
      alloc inode and that for the inode in question, because ocfs2_get_dentry()
      and ocfs2_delete_inode() lock them in reverse order.  nfs_sync_lock is taken
      in EX mode in ocfs2_get_dentry() and in PR mode in ocfs2_delete_inode(), so
      that multiple ocfs2_delete_inode() calls can run concurrently in the normal case.
      
      [mfasheh@suse.com: build warning fixes and comment cleanups]
      Signed-off-by: Wengang Wang <wen.gang.wang@oracle.com>
      Acked-by: Joel Becker <joel.becker@oracle.com>
      Signed-off-by: Mark Fasheh <mfasheh@suse.com>
      6ca497a8
    • T
      ocfs2: Optimize inode allocation by remembering last group · 13821151
      Committed by Tao Ma
      In ocfs2, the inode block search looks for the "emptiest" inode
      group to allocate from. So if an inode alloc file has many equally
      (or almost equally) empty groups, new inodes will tend to get
      spread out amongst them, which in turn can put them all over the
      disk. This is undesirable because directory operations on conceptually
      "nearby" inodes force a large number of seeks.
      
      So we add ip_last_used_group in core directory inodes which records
      the last used allocation group. Another field named ip_last_used_slot
      is also added in case inode stealing happens.  When claiming a new inode,
      we pass in the directory's inode so that the allocation can use this
      information.
      For more details, please see
      http://oss.oracle.com/osswiki/OCFS2/DesignDocs/InodeAllocationStrategy.
      Signed-off-by: Tao Ma <tao.ma@oracle.com>
      Signed-off-by: Mark Fasheh <mfasheh@suse.com>
      13821151
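      The heuristic reads naturally as "reuse the directory's last group while
      it still has room, otherwise fall back to the old search".  The toy
      allocator below models only that part (it omits the slot handling);
      group counts and names are made up, and it is not the ocfs2 suballocator.

        #include <stdio.h>

        #define NR_GROUPS 8

        static long free_in_group[NR_GROUPS] = { 0, 10, 10, 9, 10, 10, 10, 10 };

        struct dir_hint {
                long last_used_group;   /* 0 means "no hint yet" */
        };

        static long pick_group(struct dir_hint *dir)
        {
                long g = dir->last_used_group;

                /* reuse the hint while it has space; keeps sibling inodes together */
                if (g != 0 && free_in_group[g] > 0)
                        return g;

                /* fall back to the old "emptiest group" search */
                long best = 1;
                for (long i = 1; i < NR_GROUPS; i++)
                        if (free_in_group[i] > free_in_group[best])
                                best = i;
                dir->last_used_group = best;
                return best;
        }

        int main(void)
        {
                struct dir_hint dir = { 0 };

                for (int i = 0; i < 3; i++) {
                        long g = pick_group(&dir);
                        free_in_group[g]--;
                        printf("new inode placed in group %ld\n", g);
                }
                return 0;
        }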
    • M
      ocfs2: Increase max links count · 198a1ca3
      Committed by Mark Fasheh
      Since we've now got a directory format capable of handling a large number of
      entries, we can increase the maximum link count supported. This only gets
      increased if the directory indexing feature is turned on.
      Signed-off-by: Mark Fasheh <mfasheh@suse.com>
      Acked-by: Joel Becker <joel.becker@oracle.com>
      198a1ca3
    • M
      ocfs2: Add a name indexed b-tree to directory inodes · 9b7895ef
      Committed by Mark Fasheh
      This patch makes use of Ocfs2's flexible btree code to add an additional
      tree to directory inodes. The new tree stores an array of small,
      fixed-length records in each leaf block. Each record stores a hash value,
      and pointer to a block in the traditional (unindexed) directory tree where a
      dirent with the given name hash resides. Lookup exclusively uses this tree
      to find dirents, thus providing us with constant time name lookups.
      
      Some of the hashing code was copied from ext3. Unfortunately, it has lots of
      unfixed checkpatch errors. I left that as-is so that tracking changes would
      be easier.
      Signed-off-by: Mark Fasheh <mfasheh@suse.com>
      Acked-by: Joel Becker <joel.becker@oracle.com>
      9b7895ef
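      The leaf records described above are essentially (name hash, block)
      pairs.  The sketch below shows that shape and a lookup over one leaf;
      the struct, the toy hash, and the block numbers are illustrative, not
      the on-disk ocfs2 format or the ext3-derived hash.

        #include <stdint.h>
        #include <stdio.h>

        struct dx_entry {
                uint32_t hash;        /* hash of the dirent name                */
                uint64_t dir_blkno;   /* unindexed dir block holding the dirent */
        };

        /* toy hash standing in for the real name hash */
        static uint32_t name_hash(const char *name)
        {
                uint32_t h = 5381;
                while (*name)
                        h = h * 33 + (unsigned char)*name++;
                return h;
        }

        int main(void)
        {
                struct dx_entry leaf[] = {
                        { name_hash("alpha.txt"), 2048 },
                        { name_hash("beta.txt"),  2051 },
                };
                uint32_t want = name_hash("beta.txt");

                /* the hash lookup replaces a linear scan of every dirent */
                for (unsigned i = 0; i < sizeof(leaf) / sizeof(leaf[0]); i++)
                        if (leaf[i].hash == want)
                                printf("dirent for \"beta.txt\" is in block %llu\n",
                                       (unsigned long long)leaf[i].dir_blkno);
                return 0;
        }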
  15. 06 Jan 2009, 7 commits
    • J
      ocfs2: Use metadata-specific ocfs2_journal_access_*() functions. · 13723d00
      Committed by Joel Becker
      The per-metadata-type ocfs2_journal_access_*() functions hook up jbd2
      commit triggers and allow us to compute metadata ecc right before the
      buffers are written out.  This commit provides ecc for inodes, extent
      blocks, group descriptors, and quota blocks.  It is not safe to use
      extended attributes and metaecc at the same time yet.
      
      The ocfs2_extent_tree and ocfs2_path abstractions in alloc.c both hide
      the type of block at their root.  Before, it didn't matter, but now the
      root block must use the appropriate ocfs2_journal_access_*() function.
      To keep this abstract, the structures now have a pointer to the matching
      journal_access function and a wrapper call to call it.
      
      A few places use naked ocfs2_write_block() calls instead of adding the
      blocks to the journal.  We make sure to calculate their checksum and ecc
      before the write.
      
      Since we pass around the journal_access functions, let's typedef them
      in ocfs2.h.
      Signed-off-by: Joel Becker <joel.becker@oracle.com>
      Signed-off-by: Mark Fasheh <mfasheh@suse.com>
      13723d00
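      The abstraction described above, a root-type-specific journal_access
      function stored behind the tree object, can be shown with a tiny
      function-pointer model.  The structures and names below are simplified
      illustrations, not the real ocfs2_extent_tree.

        #include <stdio.h>

        struct handle { int txn; };
        struct buffer { const char *what; };

        /* the per-metadata-type access function, typedef'd as in the commit */
        typedef int (*journal_access_fn)(struct handle *h, struct buffer *bh);

        static int access_dinode(struct handle *h, struct buffer *bh)
        {
                printf("txn %d: dirty %s with the inode ecc trigger\n",
                       h->txn, bh->what);
                return 0;
        }

        static int access_extent_block(struct handle *h, struct buffer *bh)
        {
                printf("txn %d: dirty %s with the extent-block ecc trigger\n",
                       h->txn, bh->what);
                return 0;
        }

        struct extent_tree {
                struct buffer *root_bh;
                journal_access_fn root_access;   /* hides the root block's type */
        };

        /* wrapper: callers never need to know what kind of block the root is */
        static int et_root_journal_access(struct extent_tree *et, struct handle *h)
        {
                return et->root_access(h, et->root_bh);
        }

        int main(void)
        {
                struct handle h = { 1 };
                struct buffer di_bh = { "dinode root" }, eb_bh = { "extent block root" };
                struct extent_tree inode_rooted = { &di_bh, access_dinode };
                struct extent_tree eb_rooted    = { &eb_bh, access_extent_block };

                et_root_journal_access(&inode_rooted, &h);
                et_root_journal_access(&eb_rooted, &h);
                return 0;
        }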
    • J
      ocfs2: block read meta ecc. · d6b32bbb
      Committed by Joel Becker
      Add block check calls to the read_block validate functions.  This is
      almost all of the read-side checking of metaecc.  Xattr buckets are not
      checked yet.  Writes are also unchecked, so a read-write mount will quickly fail.
      Signed-off-by: Joel Becker <joel.becker@oracle.com>
      Signed-off-by: Mark Fasheh <mfasheh@suse.com>
      d6b32bbb
    • J
      ocfs2: Add quota calls for allocation and freeing of inodes and space · a90714c1
      Committed by Jan Kara
      Add quota calls for allocation and freeing of inodes and space, also update
      estimates on number of needed credits for a transaction. Move out inode
      allocation from ocfs2_mknod_locked() because vfs_dq_init() must be called
      outside of a transaction.
      Signed-off-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Mark Fasheh <mfasheh@suse.com>
      a90714c1
    • J
      ocfs2: Mark system files as not subject to quota accounting · bbbd0eb3
      Committed by Jan Kara
      Mark system files as not subject to quota accounting. This prevents
      possible recursions into quota code and thus deadlocks.
      Signed-off-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Mark Fasheh <mfasheh@suse.com>
      bbbd0eb3
    • J
      1a224ad1
    • J
      ocfs2: Validate metadata only when it's read from disk. · 970e4936
      Committed by Joel Becker
      Add an optional validation hook to ocfs2_read_blocks().  Now the
      validation function is only called when a block was actually read off of
      disk.  It is not called when the buffer was in cache.
      
      We add a buffer state bit BH_NeedsValidate to flag these buffers.  It
      must always be one higher than the last JBD2 buffer state bit.
      
      The dinode, dirblock, extent_block, and xattr_block validators are
      lifted to this scheme directly.  The group_descriptor validator needs to
      be split into two pieces.  The first part only needs the gd buffer and
      is passed to ocfs2_read_block().  The second part requires the dinode as
      well, and is called every time.  It's only 3 compares, so it's tiny.
      This also allows us to clean up the non-fatal gd check used by resize.c.
      It now has no magic argument.
      Signed-off-by: Joel Becker <joel.becker@oracle.com>
      Signed-off-by: Mark Fasheh <mfasheh@suse.com>
      970e4936
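      The key property above, "validate only when the block really came off
      disk, never on a cache hit", is easy to model with a flag that plays
      the role of BH_NeedsValidate.  The code below is a self-contained
      illustration, not the ocfs2_read_blocks() interface.

        #include <stdbool.h>
        #include <stdio.h>

        struct buffer {
                bool uptodate;        /* already cached?                     */
                bool needs_validate;  /* plays the role of BH_NeedsValidate  */
                int  magic;
        };

        typedef int (*validate_fn)(struct buffer *bh);

        static int read_block(struct buffer *bh, validate_fn validate)
        {
                if (!bh->uptodate) {
                        bh->uptodate = true;        /* simulate the disk read */
                        bh->needs_validate = true;  /* freshly read: check it */
                }

                if (bh->needs_validate && validate) {
                        int rc = validate(bh);
                        if (rc)
                                return rc;
                        bh->needs_validate = false; /* cached copies skip this */
                }
                return 0;
        }

        /* toy signature check standing in for a dinode/dirblock validator */
        static int validate_dinode(struct buffer *bh)
        {
                return bh->magic == 0x0cf5 ? 0 : -1;
        }

        int main(void)
        {
                struct buffer bh = { false, false, 0x0cf5 };

                printf("first read (from disk): %d\n", read_block(&bh, validate_dinode));
                printf("second read (cache hit): %d\n", read_block(&bh, validate_dinode));
                return 0;
        }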
    • J
      ocfs2: Wrap inode block reads in a dedicated function. · b657c95c
      Committed by Joel Becker
      The ocfs2 code currently reads inodes off disk with a simple
      ocfs2_read_block() call.  Each place that does this has a different set
      of sanity checks it performs.  Some check only the signature.  A couple
      validate the block number (the block read vs di->i_blkno).  A couple
      others check for VALID_FL.  Only one place validates i_fs_generation.  A
      couple check nothing.  Even when an error is found, they don't all do
      the same thing.
      
      We wrap inode reading into ocfs2_read_inode_block().  This will validate
      all the above fields, going readonly if they are invalid (they never
      should be).  ocfs2_read_inode_block_full() is provided for the places
      that want to pass read_block flags.  Every caller is passing a struct
      inode with a valid ip_blkno, so we don't need a separate blkno argument
      either.
      
      We will remove the validation checks from the rest of the code in a
      later commit, as they are no longer necessary.
      Signed-off-by: Joel Becker <joel.becker@oracle.com>
      Signed-off-by: Mark Fasheh <mfasheh@suse.com>
      b657c95c
  16. 11 Nov 2008, 1 commit
  17. 15 Oct 2008, 1 commit