1. 26 January 2008 (13 commits)
    • ocfs2: Support commit= mount option · d147b3d6
      Committed by Mark Fasheh
      Mostly taken from ext3. This allows the user to set the jbd commit
      interval in seconds. The default of 5 seconds stays the same, but users
      can now easily increase the commit interval, typically to improve
      performance at the expense of data safety.
      Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
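      For context, the knob is exposed as a standard mount option (for
      example, something like: mount -t ocfs2 -o commit=15 /dev/sdX /mnt),
      and internally a seconds value has to become a jiffies interval for
      jbd. The sketch below is an illustrative assumption, not the actual
      ocfs2 option-parsing code.

      #include <linux/jiffies.h>

      /*
       * Hedged sketch: convert a commit= value (seconds) into the jiffies
       * interval jbd works with.  Only the seconds-based option and the
       * 5-second default come from the commit; the helper is illustrative.
       */
      static unsigned long sketch_commit_interval(unsigned int commit_secs)
      {
              if (!commit_secs)
                      commit_secs = 5;        /* default stays at 5 seconds */
              return (unsigned long)commit_secs * HZ; /* seconds -> jiffies */
      }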
    • ocfs2: Add missing permission checks · 0957f007
      Committed by Mark Fasheh
      Check that an online resize is being driven by a user with permission to
      change system resource limits.
      Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
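      A hedged sketch of the kind of check described above: capable() and
      CAP_SYS_RESOURCE are the standard kernel interfaces for "may change
      system resource limits", but the wrapper function is illustrative
      rather than the actual ocfs2 resize path.

      #include <linux/capability.h>
      #include <linux/errno.h>

      /*
       * Hedged sketch: reject an online-resize request unless the caller is
       * allowed to change system resource limits.  Only capable() and
       * CAP_SYS_RESOURCE are real kernel APIs; the wrapper is illustrative.
       */
      static int sketch_resize_permission(void)
      {
              if (!capable(CAP_SYS_RESOURCE))
                      return -EPERM;
              return 0;
      }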
    • [PATCH 2/2] ocfs2: Implement group add for online resize · 7909f2bf
      Committed by Tao Ma
      This patch adds the ability for a userspace program to request that a
      properly formatted cluster group be added to the main allocation bitmap for
      an Ocfs2 file system. The request is made via an ioctl, OCFS2_IOC_GROUP_ADD.
      At a high level this is similar to ext3, but a different ioctl is used
      because the structure that has to be passed in is different.
      
      During an online resize, tunefs.ocfs2 will format any new cluster groups
      which must be added to complete the resize, and call OCFS2_IOC_GROUP_ADD on
      each one. The kernel verifies that the core cluster group information is
      valid and then links the group into the global allocation bitmap.
      Signed-off-by: Tao Ma <tao.ma@oracle.com>
      Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
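      A hedged userspace sketch of how a tool such as tunefs.ocfs2 might
      submit one pre-formatted group through the ioctl. Only the name
      OCFS2_IOC_GROUP_ADD comes from the commit; the payload struct below is
      a hypothetical stand-in for the real structure defined in the kernel's
      ocfs2 headers, and the request code is supplied by the caller rather
      than invented here.

      #include <fcntl.h>
      #include <stdint.h>
      #include <stdio.h>
      #include <sys/ioctl.h>
      #include <unistd.h>

      /* Hypothetical payload layout - the real structure comes from the
       * kernel's ocfs2 headers and will differ. */
      struct sketch_new_group {
              uint64_t group_blkno;   /* block of the formatted group descriptor */
              uint32_t clusters;      /* clusters covered by the new group */
              uint32_t chain;         /* chain in the global allocation bitmap */
      };

      /* Hedged sketch: hand one formatted cluster group to the kernel, which
       * validates it and links it into the global allocation bitmap.  The
       * request code (OCFS2_IOC_GROUP_ADD) must come from the kernel headers
       * and is passed in by the caller. */
      static int sketch_group_add(const char *mountpoint, unsigned long request,
                                  const struct sketch_new_group *grp)
      {
              int fd = open(mountpoint, O_RDONLY);
              int rc;

              if (fd < 0) {
                      perror("open");
                      return -1;
              }
              rc = ioctl(fd, request, grp);
              if (rc < 0)
                      perror("OCFS2_IOC_GROUP_ADD");
              close(fd);
              return rc;
      }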
    • [PATCH 1/2] ocfs2: Add group extend for online resize · d659072f
      Committed by Tao Ma
      This patch adds the ability for a userspace program to request an
      extension of the last cluster group on an Ocfs2 file system. The request
      is made via an ioctl, OCFS2_IOC_GROUP_EXTEND. This is derived from
      EXT3_IOC_GROUP_EXTEND, but is obviously Ocfs2 specific.
      
      tunefs.ocfs2 would call this for an online-resize operation if the last
      cluster group isn't full.
      Signed-off-by: Tao Ma <tao.ma@oracle.com>
      Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
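      Taken together with the previous entry, the userspace side of an online
      resize looks roughly like the sketch below. This is a hedged outline
      only: the request codes come from the kernel's ocfs2 headers, and the
      assumption that OCFS2_IOC_GROUP_EXTEND takes a pointer to the requested
      new cluster count mirrors the ext3 ioctl it is derived from and should
      be verified against those headers.

      #include <sys/ioctl.h>

      /*
       * Hedged sketch of the tunefs.ocfs2-style flow described in these two
       * commits.  Request codes and payloads are supplied by the caller; the
       * control flow is the only thing this sketch asserts.
       */
      static int sketch_online_resize(int fd, int last_group_full, int new_clusters,
                                      unsigned long ioc_extend, unsigned long ioc_add,
                                      const void **groups, int ngroups)
      {
              int i;

              /* 1. If the last cluster group isn't full, extend it first. */
              if (!last_group_full &&
                  ioctl(fd, ioc_extend, &new_clusters) < 0)
                      return -1;

              /* 2. Link each freshly formatted cluster group into the global
               *    allocation bitmap, one OCFS2_IOC_GROUP_ADD per group. */
              for (i = 0; i < ngroups; i++)
                      if (ioctl(fd, ioc_add, groups[i]) < 0)
                              return -1;

              return 0;
      }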
    • ocfs2: Initialize bitmap_cpg of ocfs2_super to be the maximum. · e9d578a8
      Committed by Tao Ma
      This value is initialized from global_bitmap->id2.i_chain.cl_cpg. If
      there is only one cluster group, it will equal the total number of
      clusters in the volume, so an online resize would have to change it on
      every node in the cluster. That is not easy to do, and there is no
      corresponding lock protecting it.

      bitmap_cpg is only used in two places:
      1. Checking whether a suballocator request is too large to be satisfied
         from the global bitmap. With the current suballocator size of 2048,
         this situation is rarely met and the check is almost never needed.
      2. Calculating which cluster group a cluster belongs to. This is used
         during truncate to figure out which cluster group an extent belongs
         to. Increasing bitmap_cpg is safe here because the calculated cluster
         group shouldn't change: a bitmap_cpg smaller than the maximum only
         ever occurs on file systems with a single cluster group.
      Signed-off-by: Tao Ma <tao.ma@oracle.com>
      Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
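      To make use (2) concrete: with bitmap_cpg clusters per group, the
      owning group is just an integer division. The helper below is a hedged
      illustration of that arithmetic, not the actual ocfs2 routine.

      /*
       * Hedged sketch of use (2): map a cluster number to its cluster group.
       * The helper name is illustrative, not the real ocfs2 function.
       */
      static inline unsigned int sketch_cluster_to_group(unsigned int cluster,
                                                         unsigned int bitmap_cpg)
      {
              return cluster / bitmap_cpg;    /* 0-based cluster group index */
      }

      Because a bitmap_cpg smaller than the maximum only happens on
      single-group volumes, where every valid cluster number is below either
      divisor, the division yields group 0 either way, which is why
      initializing bitmap_cpg to the maximum is safe.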
    • ocfs2: Documentation update · 1252c434
      Committed by Mark Fasheh
      Remove 'readpages' from the list in ocfs2.txt. Instead of having two
      identical lists, I just removed the list in the OCFS2 section of fs/Kconfig
      and added a pointer to Documentation/filesystems/ocfs2.txt.
      Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
    • ocfs2: Readpages support · 628a24f5
      Committed by Mark Fasheh
      Add ->readpages support to Ocfs2. This is rather trivial - all it
      required was a small update to ocfs2_get_block() (to map full extents
      via b_size) and an ocfs2_readpages() function which partially mirrors
      ocfs2_readpage().
      Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
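      For orientation, a minimal ->readpages built on a b_size-aware
      get_block typically looks like the sketch below. This is the generic
      mpage pattern under stated assumptions, not the actual
      ocfs2_readpages(), which the commit says partially mirrors
      ocfs2_readpage() instead.

      #include <linux/buffer_head.h>
      #include <linux/fs.h>
      #include <linux/mpage.h>

      /* ocfs2's get_block_t, declared in its own headers; repeated here only
       * so the sketch is self-contained. */
      int ocfs2_get_block(struct inode *inode, sector_t iblock,
                          struct buffer_head *bh_result, int create);

      /*
       * Hedged sketch: a minimal ->readpages built on the generic mpage
       * helper.  ocfs2_get_block() is named in the commit; everything else
       * here illustrates the common pattern rather than the ocfs2 code.
       */
      static int sketch_readpages(struct file *filp, struct address_space *mapping,
                                  struct list_head *pages, unsigned nr_pages)
      {
              return mpage_readpages(mapping, pages, nr_pages, ocfs2_get_block);
      }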
    • ocfs2: Rename ocfs2_meta_[un]lock · e63aecb6
      Committed by Mark Fasheh
      Call this the "inode_lock" now, since it covers both data and meta data.
      This patch makes no functional changes.
      Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
    • ocfs2: Remove data locks · c934a92d
      Committed by Mark Fasheh
      The meta lock now covers both meta data and data, so this just removes the
      now-redundant data lock.
      
      Combining the locks saves us a round of lock mastery per inode and means
      one less lock to ping between nodes during read/write.

      We don't lose much - since the meta lock was always held before the data
      lock (and at the same level), ordered writeout mode (the default) already
      ensured that flushing for the meta data lock pushed out the data anyway.
      Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
    • ocfs2: Add data downconvert worker to inode lock · f1f54068
      Committed by Mark Fasheh
      In order to extend inode lock coverage to inode data, we reuse the same
      data downconvert worker, with a small modification so that it only does
      work for regular files.
      Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
    • ocfs2: Remove mount/unmount votes · 34d024f8
      Committed by Mark Fasheh
      The node maps that are set/unset by these votes are no longer relevant, thus
      we can remove the mount and umount votes. Since those are the last two
      remaining votes, we can also remove the entire vote infrastructure.
      
      The vote thread has been renamed to the downconvert thread, and the small
      amount of functionality related to managing it has been moved into
      fs/ocfs2/dlmglue.c. All references to votes have been removed or updated.
      Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
    • ocfs2: Remove fs dependency on ocfs2_heartbeat module · 6f7b056e
      Committed by Mark Fasheh
      Now that the dlm exposes domain information to us, we don't need generic
      node up / node down callbacks. And since the DLM is only telling us when a
      node goes down unexpectedly, we no longer need to optimize away node down
      callbacks via the umount map.
      Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
    • ocfs2_dlm: Call node eviction callbacks from heartbeat handler · 6561168c
      Committed by Mark Fasheh
      With this, a dlm client can take advantage of the group protocol in the dlm
      to get full notification whenever a node within the dlm domain leaves
      unexpectedly.
      Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
  2. 25 January 2008 (27 commits)