1. 05 Sep 2009 (3 commits)
  2. 23 Jun 2009 (1 commit)
  3. 22 Apr 2009 (2 commits)
  4. 04 Apr 2009 (4 commits)
    • ocfs2: fix rare stale inode errors when exporting via nfs · 6ca497a8
      Authored by Wengang Wang
      For nfs exporting, ocfs2_get_dentry() returns the dentry for a file handle.
      ocfs2_get_dentry() may read from disk when the inode is not in memory,
      without taking any cross-cluster lock. This can lead to the file system
      loading a stale inode.
      
      This patch fixes that problem.
      
      The solution: when the inode is not in memory, we take the cluster lock (PR)
      on the alloc inode that the inode in question was allocated from (this
      forces the node where the deletion happened to sync the alloc inode) before
      reading the inode itself. We then check the bitmap of the group the inode
      was allocated from to see whether its bit is clear. If the bit is clear, the
      inode is stale. If the bit is set, we check the generation as the existing
      code does.
      
      We have to read the inode in question from disk first to learn its alloc
      slot and alloc bit. If it is not stale, we then read it via ocfs2_iget();
      this second read should come from cache.
      
      We also have to add a per-superblock nfs_sync_lock to cover the lock on the
      alloc inode and the lock on the inode in question, because ocfs2_get_dentry()
      and ocfs2_delete_inode() take them in reverse order. nfs_sync_lock is taken
      in EX mode in ocfs2_get_dentry() and in PR mode in ocfs2_delete_inode(), so
      that multiple ocfs2_delete_inode() calls can still run concurrently in the
      common case.
      
      [mfasheh@suse.com: build warning fixes and comment cleanups]
      Signed-off-by: Wengang Wang <wen.gang.wang@oracle.com>
      Acked-by: Joel Becker <joel.becker@oracle.com>
      Signed-off-by: Mark Fasheh <mfasheh@suse.com>
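      As an illustration of the check order described above, here is a small
      user-space model (not the kernel code): the cluster locks are reduced to
      comments, and the struct layout and helper names are hypothetical. Only
      the ordering (sync with deleters, check the group bitmap, then fall back
      to the generation comparison) follows the commit message.
      ```c
      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>
      
      /* Hypothetical model; real ocfs2 uses cluster locks and on-disk
       * group descriptors, reduced here to plain structs. */
      struct disk_inode {
          uint64_t blkno;
          uint32_t generation;
          int      suballoc_bit;        /* bit inside the group's bitmap */
      };
      
      struct alloc_group {
          unsigned char bitmap[8];      /* 64 allocation bits */
      };
      
      static bool bit_is_set(const struct alloc_group *g, int bit)
      {
          return g->bitmap[bit / 8] & (1u << (bit % 8));
      }
      
      /* Returns true if the file handle still refers to a live inode. */
      static bool handle_is_valid(const struct disk_inode *di,
                                  const struct alloc_group *group,
                                  uint32_t handle_generation)
      {
          /* 1. nfs_sync_lock (EX) would be taken here, forcing a node that
           *    just freed the inode to sync its alloc inode to disk. */
      
          /* 2. alloc-inode cluster lock (PR), then look at the group bitmap. */
          if (!bit_is_set(group, di->suballoc_bit))
              return false;             /* bit clear: definitely stale */
      
          /* 3. bit still set: fall back to the existing generation check. */
          return di->generation == handle_generation;
      }
      
      int main(void)
      {
          struct disk_inode di = { .blkno = 1234, .generation = 7, .suballoc_bit = 5 };
          struct alloc_group g = { .bitmap = { 0x20 } };   /* only bit 5 is set */
      
          printf("live handle:  %d\n", handle_is_valid(&di, &g, 7));   /* 1 */
          printf("stale handle: %d\n", handle_is_valid(&di, &g, 3));   /* 0 */
          return 0;
      }
      ```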
    • ocfs2: Optimize inode group allocation by recording last used group. · feb473a6
      Authored by Tao Ma
      In ocfs2, the block group search looks for the "emptiest" group
      to allocate from. So if the allocator has many equally (or almost
      equally) empty groups, new block groups will tend to get spread
      out amongst them.
      
      So we add osb_inode_alloc_group in ocfs2_super to record the last
      used inode allocation group.
      For more details, please see
      http://oss.oracle.com/osswiki/OCFS2/DesignDocs/InodeAllocationStrategy.
      
      I have done some basic testing and the results are a ten-times improvement
      on some cold-cache stat workloads.
      Signed-off-by: Tao Ma <tao.ma@oracle.com>
      Signed-off-by: Mark Fasheh <mfasheh@suse.com>
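      The idea, in rough outline (osb_inode_alloc_group is the field named above;
      the surrounding type and the search fallback are invented stand-ins, not
      the real ocfs2 API):
      ```c
      #include <stdint.h>
      
      /* Illustrative model only. */
      struct ocfs2_super_model {
          uint64_t osb_inode_alloc_group;     /* 0 means "no hint yet" */
      };
      
      static uint64_t pick_inode_group(struct ocfs2_super_model *osb,
                                       uint64_t (*search_emptiest)(void))
      {
          uint64_t group = osb->osb_inode_alloc_group;
      
          if (!group)                         /* no hint: fall back to the search */
              group = search_emptiest();
      
          osb->osb_inode_alloc_group = group; /* remember it for the next inode */
          return group;
      }
      ```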
    • ocfs2: Allocate inode groups from global_bitmap. · 60ca81e8
      Authored by Tao Ma
      Inode groups used to be allocated from the local alloc file, but since
      we want all inodes to be contiguous enough, we now try to allocate them
      directly from the global_bitmap.
      Signed-off-by: Tao Ma <tao.ma@oracle.com>
      Signed-off-by: Mark Fasheh <mfasheh@suse.com>
    • ocfs2: Optimize inode allocation by remembering last group · 13821151
      Authored by Tao Ma
      In ocfs2, the inode block search looks for the "emptiest" inode
      group to allocate from. So if an inode alloc file has many equally
      (or almost equally) empty groups, new inodes will tend to get
      spread out amongst them, which in turn can put them all over the
      disk. This is undesirable because directory operations on conceptually
      "nearby" inodes force a large number of seeks.
      
      So we add ip_last_used_group in core directory inodes, which records
      the last used allocation group. Another field named ip_last_used_slot
      is also added in case inode stealing happens. When claiming a new inode,
      we pass in the directory's inode so that the allocation can use this
      information.
      For more details, please see
      http://oss.oracle.com/osswiki/OCFS2/DesignDocs/InodeAllocationStrategy.
      Signed-off-by: Tao Ma <tao.ma@oracle.com>
      Signed-off-by: Mark Fasheh <mfasheh@suse.com>
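      A rough sketch of how such a hint could be consulted (ip_last_used_group
      and ip_last_used_slot come from the commit; the types, the slot check and
      the fallback search are simplified assumptions, not the real ocfs2 code):
      ```c
      #include <stdint.h>
      
      /* Simplified stand-in for the directory inode's in-memory info. */
      struct dir_inode_model {
          uint64_t ip_last_used_group;    /* group the last child came from */
          uint16_t ip_last_used_slot;     /* alloc slot that group belongs to */
      };
      
      static uint64_t choose_group_for_new_inode(struct dir_inode_model *dir,
                                                 uint16_t alloc_slot,
                                                 uint64_t (*search_emptiest)(void))
      {
          uint64_t group;
      
          /* Reuse the hint only while we are still allocating from the same
           * slot; after inode stealing the cached group belongs to another
           * node's allocator and is meaningless here. */
          if (dir->ip_last_used_group && dir->ip_last_used_slot == alloc_slot)
              group = dir->ip_last_used_group;
          else
              group = search_emptiest();
      
          dir->ip_last_used_group = group;
          dir->ip_last_used_slot  = alloc_slot;
          return group;
      }
      ```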
  5. 06 Jan 2009 (7 commits)
    • ocfs2: Use metadata-specific ocfs2_journal_access_*() functions. · 13723d00
      Authored by Joel Becker
      The per-metadata-type ocfs2_journal_access_*() functions hook up jbd2
      commit triggers and allow us to compute metadata ecc right before the
      buffers are written out.  This commit provides ecc for inodes, extent
      blocks, group descriptors, and quota blocks.  It is not safe to use
      extended attributes and metaecc at the same time yet.
      
      The ocfs2_extent_tree and ocfs2_path abstractions in alloc.c both hide
      the type of block at their root.  Before, it didn't matter, but now the
      root block must use the appropriate ocfs2_journal_access_*() function.
      To keep this abstract, the structures now have a pointer to the matching
      journal_access function and a wrapper call to call it.
      
      A few places use naked ocfs2_write_block() calls instead of adding the
      blocks to the journal.  We make sure to calculate their checksum and ecc
      before the write.
      
      Since we pass around the journal_access functions, let's typedef them
      in ocfs2.h.
      Signed-off-by: Joel Becker <joel.becker@oracle.com>
      Signed-off-by: Mark Fasheh <mfasheh@suse.com>
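      The shape of the abstraction, as a hedged sketch: a typedef for the
      journal-access call plus a per-tree pointer to the variant matching the
      root block's type. The argument list is heavily simplified here; the real
      ocfs2 functions take more context.
      ```c
      struct buffer_head;                 /* opaque stand-ins */
      struct journal_handle;
      
      /* Simplified signature for illustration only. */
      typedef int (ocfs2_journal_access_func)(struct journal_handle *handle,
                                              struct buffer_head *bh,
                                              int type);
      
      struct extent_tree_model {
          struct buffer_head        *root_bh;
          ocfs2_journal_access_func *root_journal_access;  /* matches root type */
      };
      
      /* Generic code no longer needs to know whether the root is a dinode,
       * an xattr block, etc.; it calls through the stored pointer. */
      static int et_root_journal_access(struct extent_tree_model *et,
                                        struct journal_handle *handle, int type)
      {
          return et->root_journal_access(handle, et->root_bh, type);
      }
      ```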
    • ocfs2: block read meta ecc. · d6b32bbb
      Authored by Joel Becker
      Add block check calls to the read_block validate functions.  This is
      almost all of the read-side checking of metaecc.  xattr buckets are not
      checked yet.  Writes are also unchecked, so a read-write mount will quickly fail.
      Signed-off-by: Joel Becker <joel.becker@oracle.com>
      Signed-off-by: Mark Fasheh <mfasheh@suse.com>
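      Conceptually, each validate callback now verifies the checksum stored in
      the block before trusting anything else in it. A toy model follows; the
      trivial XOR checksum and all names are stand-ins (ocfs2 uses crc32/ecc via
      its block-check code).
      ```c
      #include <stdint.h>
      #include <stddef.h>
      
      struct meta_block {
          uint32_t check;         /* checksum stored in the block */
          uint8_t  payload[60];   /* the rest of the metadata */
      };
      
      static uint32_t block_checksum(const struct meta_block *b)
      {
          uint32_t sum = 0;
          for (size_t i = 0; i < sizeof(b->payload); i++)
              sum = (sum << 1) ^ b->payload[i];
          return sum;
      }
      
      static int validate_meta_block(const struct meta_block *b)
      {
          if (block_checksum(b) != b->check)
              return -1;          /* corrupt block: fail the read */
          /* type-specific signature and field checks would follow here */
          return 0;
      }
      ```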
    • ocfs2: Validate metadata only when it's read from disk. · 970e4936
      Authored by Joel Becker
      Add an optional validation hook to ocfs2_read_blocks().  Now the
      validation function is only called when a block was actually read off of
      disk.  It is not called when the buffer was in cache.
      
      We add a buffer state bit BH_NeedsValidate to flag these buffers.  It
      must always be one higher than the last JBD2 buffer state bit.
      
      The dinode, dirblock, extent_block, and xattr_block validators are
      lifted to this scheme directly.  The group_descriptor validator needs to
      be split into two pieces.  The first part only needs the gd buffer and
      is passed to ocfs2_read_block().  The second part requires the dinode as
      well, and is called every time.  It's only 3 compares, so it's tiny.
      This also allows us to clean up the non-fatal gd check used by resize.c.
      It now has no magic argument.
      Signed-off-by: Joel Becker <joel.becker@oracle.com>
      Signed-off-by: Mark Fasheh <mfasheh@suse.com>
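      The flow described above, as a simplified user-space model: the
      BH_NeedsValidate name comes from the commit, but the buffer structure,
      read helper and hook signature are assumptions for illustration.
      ```c
      #include <stdbool.h>
      #include <stddef.h>
      
      struct buffer_model {
          bool uptodate;          /* already valid in cache */
          bool needs_validate;    /* models the BH_NeedsValidate bit */
      };
      
      typedef int (*validate_fn)(struct buffer_model *bh);
      
      static void submit_read(struct buffer_model *bh)
      {
          /* pretend the block was just read from disk */
          bh->uptodate = true;
      }
      
      static int read_block(struct buffer_model *bh, validate_fn validate)
      {
          if (bh->uptodate)
              return 0;                            /* cache hit: no validation */
      
          bh->needs_validate = (validate != NULL); /* flag it before the I/O */
          submit_read(bh);
      
          if (bh->needs_validate) {                /* only after a real disk read */
              bh->needs_validate = false;
              return validate(bh);
          }
          return 0;
      }
      ```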
    • ocfs2: Morph the haphazard OCFS2_IS_VALID_GROUP_DESC() checks. · 42035306
      Authored by Joel Becker
      Random places in the code would check a group descriptor bh to see if it
      was valid. The previous commit unified descriptor block reads,
      validating all block reads in the same place.  Thus, these checks are no
      longer necessary.  Rather than eliminate them, however, we change them
      to BUG_ON() checks.  This ensures the assumptions remain true.  All of
      the code paths to these checks have been audited to ensure they come
      from a validated descriptor read.
      Signed-off-by: Joel Becker <joel.becker@oracle.com>
      Signed-off-by: Mark Fasheh <mfasheh@suse.com>
    • ocfs2: Wrap group descriptor reads in a dedicated function. · 68f64d47
      Authored by Joel Becker
      We have a clean call for validating group descriptors, but every place
      that wants one always does a read_block()+validate() call pair.  Create
      a top-level ocfs2_read_group_descriptor() that does the right
      thing.  This allows us to leverage the single call point later for
      fancier handling.  We also add validation of gd->bg_generation against
      the superblock and gd->bg_blkno against the block we thought we read.
      Signed-off-by: Joel Becker <joel.becker@oracle.com>
      Signed-off-by: Mark Fasheh <mfasheh@suse.com>
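      A hedged sketch of what such a single call point can do. bg_blkno and
      bg_generation are the on-disk fields mentioned above; the read helper,
      error values and everything else are placeholders, not the real ocfs2
      interface.
      ```c
      #include <stdint.h>
      
      struct group_desc_model {
          uint64_t bg_blkno;
          uint32_t bg_generation;
      };
      
      static int read_group_descriptor(uint64_t wanted_blkno,
                                       uint32_t fs_generation,
                                       int (*read_block)(uint64_t blkno,
                                                         struct group_desc_model *gd),
                                       struct group_desc_model *gd)
      {
          int ret = read_block(wanted_blkno, gd);
          if (ret)
              return ret;
      
          if (gd->bg_blkno != wanted_blkno)
              return -1;   /* descriptor claims to live somewhere else */
          if (gd->bg_generation != fs_generation)
              return -1;   /* left over from an older mkfs generation */
      
          return 0;        /* structural validation would follow here */
      }
      ```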
    • ocfs2: Consolidate validation of group descriptors. · 57e3e797
      Authored by Joel Becker
      Currently the validation of group descriptors is directly duplicated so
      that one version can error the filesystem and the other (resize) can
      just report the problem.  Consolidate to one function that takes a
      boolean.  Wrap that function with the old call for the old users.
      
      This is in preparation for lifting the read+validate step into a
      single function.
      Signed-off-by: Joel Becker <joel.becker@oracle.com>
      Signed-off-by: Mark Fasheh <mfasheh@suse.com>
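      The consolidation pattern, sketched with invented names: one checker takes
      a boolean deciding whether a bad descriptor is fatal, and a thin wrapper
      keeps the old calling convention for existing callers.
      ```c
      #include <stdbool.h>
      #include <stdio.h>
      
      struct gd_model { unsigned int signature; };   /* stand-in descriptor */
      #define GD_SIGNATURE 0x47524f55u
      
      static int check_group_descriptor(const struct gd_model *gd, bool fatal)
      {
          if (gd->signature != GD_SIGNATURE) {
              if (fatal)
                  fprintf(stderr, "bad group descriptor, erroring filesystem\n");
              return -1;          /* resize only wants the verdict */
          }
          return 0;
      }
      
      /* Old entry point, kept as a wrapper for the existing callers. */
      static int validate_group_descriptor(const struct gd_model *gd)
      {
          return check_group_descriptor(gd, true);
      }
      ```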
    • ocfs2: Morph the haphazard OCFS2_IS_VALID_DINODE() checks. · 10995aa2
      Authored by Joel Becker
      Random places in the code would check a dinode bh to see if it was
      valid.  Not only did they do different levels of validation, they
      handled errors in different ways.
      
      The previous commit unified inode block reads, validating all block
      reads in the same place.  Thus, these haphazard checks are no longer
      necessary.  Rather than eliminate them, however, we change them to
      BUG_ON() checks.  This ensures the assumptions remain true.  All of the
      code paths to these checks have been audited to ensure they come from a
      validated inode read.
      Signed-off-by: Joel Becker <joel.becker@oracle.com>
      Signed-off-by: Mark Fasheh <mfasheh@suse.com>
  6. 15 Oct 2008 (2 commits)
  7. 14 Oct 2008 (9 commits)
    • ocfs2: Don't check for NULL before brelse() · a81cb88b
      Authored by Mark Fasheh
      This is pointless as brelse() already does the check.
      
      Signed-off-by: Mark Fasheh
    • ocfs2: Add the 'inode64' mount option. · 12462f1d
      Authored by Joel Becker
      Now that ocfs2 limits inode numbers to 32 bits, add a mount option to
      disable the limit.  This parallels XFS.  64-bit systems can handle the
      larger inode numbers.
      
      [ Added description of inode64 mount option in ocfs2.txt. --Mark ]
      Signed-off-by: Joel Becker <joel.becker@oracle.com>
      Signed-off-by: Mark Fasheh <mfasheh@suse.com>
    • ocfs2: Limit inode allocation to 32bits. · 1187c968
      Authored by Joel Becker
      ocfs2 inode numbers are block numbers.  For any filesystem with less
      than 2^32 blocks, this is not a problem.  However, when ocfs2 starts
      using JBD2, it will be able to support filesystems with more than 2^32
      blocks.  This would result in inode numbers higher than 2^32.
      
      The problem is that stat(2) can't handle those numbers on 32bit
      machines.  The simple solution is to have ocfs2 allocate all inodes
      below that boundary.
      
      The suballoc code is changed to honor an optional block limit.  Only the
      inode suballocator sets that limit - all other allocations stay unlimited.
      
      The biggest trick is to grow the inode suballocator beneath that limit.
      There's no point in allocating block groups that are above the limit,
      then rejecting their elements later on.  We want to prevent the inode
      allocator from ever having block groups above the limit.  This involves
      a little gyration with the local alloc code.  If the local alloc window
      is above the limit, it signals the caller to try the global bitmap but
      does not disable the local alloc file (which can be used for other
      allocations).
      
      [ Minor cleanup - removed an ML_NOTICE comment. --Mark ]
      Signed-off-by: Joel Becker <joel.becker@oracle.com>
      Signed-off-by: Mark Fasheh <mfasheh@suse.com>
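      One way to picture the optional limit (the constant, field and helper
      names below are illustrative assumptions, not the ocfs2 suballoc API): a
      block group is only usable for inode allocation if all of its blocks lie
      below 2^32.
      ```c
      #include <stdint.h>
      #include <stdbool.h>
      
      #define INODE32_LIMIT  ((uint64_t)1 << 32)   /* illustrative constant */
      
      struct alloc_request_model {
          uint64_t max_block;     /* 0 = unlimited (all non-inode allocators) */
      };
      
      static bool group_usable(const struct alloc_request_model *req,
                               uint64_t group_first_block, uint64_t group_blocks)
      {
          if (!req->max_block)
              return true;
          /* Reject the whole group up front rather than allocating blocks
           * above the boundary and rejecting them later. */
          return group_first_block + group_blocks <= req->max_block;
      }
      ```
      In this model only the inode suballocator would set max_block (to
      INODE32_LIMIT unless inode64 is in effect); every other allocator leaves
      it at zero, matching "all other allocations stay unlimited" above.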
    • ocfs2: Make ocfs2_extent_tree the first-class representation of a tree. · f99b9b7c
      Authored by Joel Becker
      We now have three different kinds of extent trees in ocfs2: inode data
      (dinode), extended attributes (xattr_tree), and extended attribute
      values (xattr_value).  There is a nice abstraction for them,
      ocfs2_extent_tree, but it is hidden in alloc.c.  All the calling
      functions have to pick amongst a varied API and pass in type bits and
      often extraneous pointers.
      
      A better way is to make ocfs2_extent_tree a first-class object.
      Everyone converts their object to an ocfs2_extent_tree via the
      ocfs2_get_*_extent_tree() calls, then uses the ocfs2_extent_tree for all
      tree calls to alloc.c.
      
      This simplifies a lot of callers, making for better readability.  It also
      provides an easy way to add additional extent tree types, as they only
      need to be defined in alloc.c with an ocfs2_get_<new>_extent_tree()
      function.
      Signed-off-by: Joel Becker <joel.becker@oracle.com>
      Signed-off-by: Mark Fasheh <mfasheh@suse.com>
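      Very roughly, the calling pattern looks like this. This is a user-space
      model with invented names; the real calls are the ocfs2_get_*_extent_tree()
      helpers named above and the tree operations in alloc.c.
      ```c
      #include <stdio.h>
      
      /* Model: one handle type for every kind of tree root. */
      struct extent_tree_model {
          void       *root;        /* dinode, xattr block, xattr value root, ... */
          const char *root_type;   /* which get_* call built the handle */
      };
      
      static void get_dinode_extent_tree(struct extent_tree_model *et, void *dinode)
      {
          et->root = dinode;
          et->root_type = "dinode";
      }
      
      static void get_xattr_value_extent_tree(struct extent_tree_model *et, void *vr)
      {
          et->root = vr;
          et->root_type = "xattr_value";
      }
      
      /* The alloc.c side only ever sees the handle, never the concrete type. */
      static int tree_insert_extent(struct extent_tree_model *et, unsigned int cpos,
                                    unsigned long long blkno, unsigned int len)
      {
          printf("insert %u clusters at cpos %u into %s tree (start blkno %llu)\n",
                 len, cpos, et->root_type, blkno);
          return 0;
      }
      
      int main(void)
      {
          struct extent_tree_model et;
          int dinode, xattr_value_root;        /* placeholders for real objects */
      
          get_dinode_extent_tree(&et, &dinode);
          tree_insert_extent(&et, 0, 2048, 4);
      
          get_xattr_value_extent_tree(&et, &xattr_value_root);
          tree_insert_extent(&et, 0, 4096, 1);
          return 0;
      }
      ```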
    • ocfs2: Add extended attribute support · cf1d6c76
      Authored by Tiger Yang
      This patch implements storing extended attributes either in the inode or in
      a single external block. We only store EAs in-inode when the blocksize is
      greater than 512 or the inode block has free space for them. When an EA's
      value is larger than 80 bytes, we store the value via a b-tree outside the
      inode or block.
      Signed-off-by: Tiger Yang <tiger.yang@oracle.com>
      Signed-off-by: Mark Fasheh <mfasheh@suse.com>
    • ocfs2: Add extent tree operation for xattr value btrees · f56654c4
      Authored by Tao Ma
      Add some thin wrappers around ocfs2_insert_extent() for each of the 3
      different btree types: ocfs2_dinode_insert_extent(),
      ocfs2_xattr_value_insert_extent() and ocfs2_xattr_tree_insert_extent(). The
      last is for the xattr index btree, which will be used in a followup patch.
      
      All the old callers in file.c etc. will call ocfs2_dinode_insert_extent(),
      while the other two handle the xattr cases. The initialization of the
      extent tree is also handled by these functions.
      
      When storing xattr value which is too large, we will allocate some clusters
      for it and here ocfs2_extent_list and ocfs2_extent_rec will also be used. In
      order to re-use the b-tree operation code, a new parameter named "private"
      is added into ocfs2_extent_tree and it is used to indicate the root of the
      ocfs2_extent_list. The reason is that we can't deduce the root from the
      buffer_head now. It may be in an inode, an ocfs2_xattr_block or even worse,
      in any place in an ocfs2_xattr_bucket.
      Signed-off-by: Tao Ma <tao.ma@oracle.com>
      Signed-off-by: Mark Fasheh <mfasheh@suse.com>
    • ocfs2: Abstract ocfs2_extent_tree in b-tree operations. · e7d4cb6b
      Authored by Tao Ma
      In the old extent tree operation code, we assumed that the
      ocfs2_extent_list in the ocfs2_dinode was the tree root.
      As xattr will also use an ocfs2_extent_list to store large values
      for xattr entries, we refactor the tree operations so that xattr
      can use them directly.
      
      The refactoring includes 4 steps:
      1. Abstract set/get of last_eb_blk and update_clusters since they may
         be stored in different locations for the dinode and xattr.
      2. Add a new structure named ocfs2_extent_tree to indicate the
         extent tree the operation will work on.
      3. Remove all the use of fe_bh and di, use root_bh and root_el in
         extent tree instead. So now all the fe_bh is replaced with
         et->root_bh, el with root_el accordingly.
      4. Make ocfs2_lock_allocators generic. Currently it is limited to file
         extend allocation, but the whole function is useful when we want
         to store large EAs.
      
      Note: This patch doesn't touch ocfs2_commit_truncate() since it is not used
      for anything other than truncate inode data btrees.
      Signed-off-by: Tao Ma <tao.ma@oracle.com>
      Signed-off-by: Mark Fasheh <mfasheh@suse.com>
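      Steps 1 and 2 can be pictured as an operations table plus a small tree
      object. This is a hedged sketch with simplified types and field names,
      not the actual ocfs2 structures.
      ```c
      #include <stdint.h>
      
      struct buffer_model;                     /* stand-in for a buffer_head */
      struct extent_tree_model;
      
      struct extent_tree_ops {
          void     (*set_last_eb_blk)(struct extent_tree_model *et, uint64_t blkno);
          uint64_t (*get_last_eb_blk)(struct extent_tree_model *et);
          void     (*update_clusters)(struct extent_tree_model *et, uint32_t add);
      };
      
      struct extent_tree_model {
          const struct extent_tree_ops *ops;
          struct buffer_model *root_bh;   /* replaces the old fe_bh */
          void                *root_el;   /* replaces direct use of the dinode's list */
      };
      
      /* Generic b-tree code calls through the ops table and never touches
       * the dinode (or xattr root) directly. */
      static void tree_grow(struct extent_tree_model *et, uint64_t new_last_eb,
                            uint32_t added_clusters)
      {
          et->ops->set_last_eb_blk(et, new_last_eb);
          et->ops->update_clusters(et, added_clusters);
      }
      ```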
    • ocfs2: Use ocfs2_extent_list instead of ocfs2_dinode. · 811f933d
      Authored by Tao Ma
      ocfs2_extend_meta_needed(), ocfs2_calc_extend_credits() and
      ocfs2_reserve_new_metadata() are all useful for extent tree operations. But
      they are all limited to an inode btree because they use a struct
      ocfs2_dinode parameter. Change their parameter to struct ocfs2_extent_list
      (the part of an ocfs2_dinode they actually use) so that the xattr btree code
      can use these functions.
      Signed-off-by: Tao Ma <tao.ma@oracle.com>
      Signed-off-by: Mark Fasheh <mfasheh@suse.com>
    • ocfs2: throttle back local alloc when low on disk space · 9c7af40b
      Authored by Mark Fasheh
      Ocfs2's local allocator disables itself for the duration of a mount point
      when it has trouble allocating a large enough area from the primary bitmap.
      That can cause performance problems, especially for disks which were only
      temporarily full or fragmented. This patch allows the allocator to
      shrink its window first, before being disabled. Later, it can also be
      re-enabled so that any performance drop is minimized.
      
      To do this, we allow the value of osb->local_alloc_bits to be shrunk when
      needed. The default value is recorded in a mostly read-only variable so that
      we can re-initialize when required.
      
      Locking had to be updated so that we could protect changes to
      local_alloc_bits. Mostly this involves protecting various local alloc values
      with the osb spinlock. A new state is also added, OCFS2_LA_THROTTLED, which
      is used when the local allocator has shrunk but is not disabled. If the
      available space dips below 1 megabyte, the local alloc file is disabled. In
      either case, local alloc is re-enabled 30 seconds after the event, or when
      an appropriate amount of bits is seen in the primary bitmap.
      Signed-off-by: Mark Fasheh <mfasheh@suse.com>
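      A toy model of that behaviour: the OCFS2_LA_THROTTLED name and the 1 MB /
      30 second numbers come from the description above, while the structure,
      helpers, thresholds and time source are assumptions for illustration.
      ```c
      #include <stdint.h>
      
      enum la_state { LA_ENABLED, LA_THROTTLED, LA_DISABLED };
      
      struct local_alloc_model {
          enum la_state state;
          unsigned int  bits;            /* current window size */
          unsigned int  default_bits;    /* saved so the window can be restored */
          uint64_t      retry_time;      /* when to try re-enabling, seconds */
      };
      
      static void la_reservation_failed(struct local_alloc_model *la,
                                        uint64_t free_bytes_in_main_bitmap,
                                        uint64_t now)
      {
          if (free_bytes_in_main_bitmap < (1024 * 1024)) {
              la->state = LA_DISABLED;           /* below ~1 MB: give up for now */
          } else {
              la->bits /= 2;                     /* shrink the window instead */
              la->state = LA_THROTTLED;
          }
          la->retry_time = now + 30;             /* revisit after 30 seconds */
      }
      
      static void la_maybe_recover(struct local_alloc_model *la,
                                   unsigned int free_bits_in_main_bitmap,
                                   uint64_t now)
      {
          if (la->state == LA_ENABLED)
              return;
          /* Re-enable once enough time has passed or the main bitmap looks
           * healthy again (threshold here is a guess, not the real heuristic). */
          if (now >= la->retry_time ||
              free_bits_in_main_bitmap >= la->default_bits) {
              la->bits  = la->default_bits;      /* restore the full window */
              la->state = LA_ENABLED;
          }
      }
      ```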
  8. 18 Apr 2008 (3 commits)
  9. 03 Feb 2008 (1 commit)
  10. 26 Jan 2008 (3 commits)
  11. 21 Sep 2007 (1 commit)
    • ocfs2: Allow smaller allocations during large writes · 415cb800
      Authored by Mark Fasheh
      The ocfs2 write code loops through a page much like the block code, except
      that ocfs2 allocation units can be any size, including larger than page
      size. Typically it's equal to or larger than page size - most kernels run 4k
      pages, the minimum ocfs2 allocation (cluster) size.
      
      Some changes introduced during 2.6.23 changed the way writes to pages are
      handled, and inadvertently broke support for > 4k page size. Instead of just
      writing one cluster at a time, we now handle the whole page in one pass.
      
      This means that multiple (small) separate allocations might happen in the
      same pass. The allocation code however typically optimizes by getting the
      maximum which was reserved. This triggered a BUG_ON in the extend code where
      it'd ask for a single bit (for one part of a > 4k page) and get back more
      than it asked for.
      
      Fix this by providing a variant of the high level allocation function which
      allows the caller to specify a maximum. The traditional function remains and
      just calls the new one with a maximum determined from the initial
      reservation.
      Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
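      The fix can be sketched like this (invented names, not the real ocfs2
      allocation entry points): a new variant takes an explicit cap, and the
      traditional function becomes a wrapper whose cap is simply the size of
      the reservation.
      ```c
      /* New variant: never hand back more bits than the caller can use. */
      static int claim_bits_max(unsigned int wanted, unsigned int max,
                                unsigned int reserved, unsigned int *got)
      {
          unsigned int give = reserved;   /* old optimisation: hand out the
                                           * whole reservation when possible */
          if (give > max)
              give = max;                 /* new: respect the caller's cap */
          if (give < wanted)
              return -1;                  /* cannot satisfy the request */
          *got = give;
          return 0;
      }
      
      /* Traditional entry point: the cap is whatever was reserved. */
      static int claim_bits(unsigned int wanted, unsigned int reserved,
                            unsigned int *got)
      {
          return claim_bits_max(wanted, reserved, reserved, got);
      }
      ```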
  12. 11 Jul 2007 (3 commits)
  13. 03 May 2007 (1 commit)