1. 13 Mar 2015, 1 commit
    • ocfs2: make append_dio an incompat feature · 18d585f0
      Authored by Mark Fasheh
      It turns out that making this feature ro_compat isn't quite enough to
      prevent accidental corruption on mount from older kernels. Ocfs2 (like
      other file systems) will process orphaned inodes even when the user
      mounts in 'ro' mode. So on a kernel that does not know about the
      append_dio feature, merely mounting the filesystem could delete
      orphaned-for-dio files, which we clearly don't want.
      
      So instead, turn this into an incompat flag.
      
      Btw, this is kind of my fault - initially I asked that we add a flag to
      cover the feature and even suggested that we use an ro flag.  It wasn't
      until I was looking through our commits for v4.0-rc1 that I realized we
      actually want this to be incompat.
      Signed-off-by: Mark Fasheh <mfasheh@suse.de>
      Cc: Joseph Qi <joseph.qi@huawei.com>
      Cc: Joel Becker <jlbec@evilplan.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
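
      For context, a minimal sketch of the ext2-style feature-flag
      convention this relies on, assuming illustrative names and flag
      values rather than the exact ocfs2 code: an unknown ro_compat flag
      only blocks read-write mounts, while an unknown incompat flag blocks
      the mount entirely.

          /* Sketch of the mount-time feature check; names and values are
           * illustrative, not the exact ocfs2 code. */
          #include <errno.h>

          struct sb_features {
                  unsigned int incompat;   /* on-disk incompat flags */
                  unsigned int ro_compat;  /* on-disk ro_compat flags */
          };

          #define SUPP_INCOMPAT  0x00ffu   /* flags this kernel understands */
          #define SUPP_RO_COMPAT 0x000fu

          static int check_features(const struct sb_features *f, int readonly)
          {
                  /* Unknown incompat flag: refuse the mount outright. This
                   * is why append_dio must be incompat; even an 'ro' mount
                   * replays orphaned inodes, so an old kernel could still
                   * corrupt the volume. */
                  if (f->incompat & ~SUPP_INCOMPAT)
                          return -EINVAL;

                  /* Unknown ro_compat flag: forbid only read-write mounts. */
                  if (!readonly && (f->ro_compat & ~SUPP_RO_COMPAT))
                          return -EROFS;

                  return 0;
          }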
  2. 17 Feb 2015, 2 commits
  3. 27 Apr 2011, 1 commit
  4. 31 Mar 2011, 1 commit
  5. 22 Dec 2010, 1 commit
  6. 08 Oct 2010, 1 commit
    • ocfs2: Add support for heartbeat=global mount option · 2c442719
      Authored by Sunil Mushran
      
      Adds support for the heartbeat=global mount option, ensuring that the
      heartbeat mode passed at mount time matches the one enabled on disk.
      Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com>
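
      The check this implies at mount time can be sketched as follows; the
      enum and function names are stand-ins, not the actual ocfs2 code:

          /* Reject a mount whose heartbeat= option does not match the
           * heartbeat mode enabled on disk (illustrative names). */
          #include <errno.h>

          enum hb_mode { HB_LOCAL, HB_GLOBAL };

          static int verify_heartbeat_mode(enum hb_mode requested,
                                           enum hb_mode on_disk)
          {
                  /* e.g. the mount passed heartbeat=global but the disk
                   * has local heartbeat enabled: fail the mount */
                  if (requested != on_disk)
                          return -EINVAL;
                  return 0;
          }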
  7. 10 Oct 2010, 1 commit
    • ocfs2: Add an incompat feature flag OCFS2_FEATURE_INCOMPAT_CLUSTERINFO · 98f486f2
      Authored by Sunil Mushran
      
      OCFS2_FEATURE_INCOMPAT_CLUSTERINFO allows us to use sb->s_cluster_info
      for both userspace and o2cb cluster stacks. It also allows us to
      extend the cluster info to include stack flags.

      This patch adds stack flags to sb->s_cluster_info and introduces a
      cluster info flag, OCFS2_CLUSTER_O2CB_GLOBAL_HEARTBEAT, to denote
      that global heartbeat mode is enabled.
      
      This incompat flag can be set/cleared using tunefs.ocfs2 --fs-features. The
      clusterinfo flag is set/cleared using tunefs.ocfs2 --update-cluster-stack.
      Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com>
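
      The on-disk layout this describes looks roughly like the sketch
      below; it is simplified from ocfs2_fs.h, with sizes, flag values,
      and reserved fields approximated, so treat it as illustrative:

          /* Approximate shape of the on-disk cluster info after this
           * change (simplified; not the authoritative layout). */
          #include <stdint.h>

          #define STACK_LABEL_LEN   4    /* e.g. "o2cb" */
          #define CLUSTER_NAME_LEN  16

          /* bit in ci_stackflags; value is illustrative */
          #define CLUSTER_O2CB_GLOBAL_HEARTBEAT 0x01

          struct cluster_info {
                  uint8_t ci_stack[STACK_LABEL_LEN];     /* stack name */
                  uint8_t ci_stackflags;                 /* new: stack flags */
                  uint8_t ci_reserved[3];
                  uint8_t ci_cluster[CLUSTER_NAME_LEN];  /* cluster name */
          };

          /* Global heartbeat is on when the flag bit is set. */
          static inline int global_heartbeat_enabled(const struct cluster_info *ci)
          {
                  return ci->ci_stackflags & CLUSTER_O2CB_GLOBAL_HEARTBEAT;
          }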
  8. 24 Sep 2010, 1 commit
  9. 10 Sep 2010, 1 commit
    • ocfs2: Cache system inodes of other slots · b4d693fc
      Authored by Tao Ma
      During an orphan scan, if we are slot 0 and we are replaying
      orphan_dir:0001, the general process for every file in that dir is:
      1. We iget orphan_dir:0001; since there is no in-memory inode for
         it, we have to create an inode and read it from the disk.
      2. Do the normal work, such as delete_inode, and remove it from
         the dir if that is allowed.
      3. Call iput on orphan_dir:0001 when we are done. Since we have no
         dcache entry for this inode, i_count reaches 0, so VFS calls
         clear_inode, and in ocfs2_clear_inode we checkpoint the inode,
         which sets ocfs2_cmt and journald to work.
      4. Loop back to 1 for the next file.
      
      So for every deleted file we have to read the orphan dir from the
      disk and checkpoint the journal. That is very time consuming and
      causes a lot of journal checkpoint I/O.
      A better solution is to hold another reference to these inodes in
      ocfs2_super. Then, as long as there is no race among nodes (which
      would force dlmglue to checkpoint the inode), clear_inode is not
      called in step 3, and in step 1 we only need to read the inode from
      disk the first time. This is a big win for us.

      So this patch caches the system inodes of other slots, giving us one
      more reference to these inodes and avoiding the extra inode reads
      and journal checkpoints.
      Signed-off-by: Tao Ma <tao.ma@oracle.com>
      Signed-off-by: Joel Becker <joel.becker@oracle.com>
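
      The core of the fix can be sketched as below; the array layout,
      constants, and helper names are stand-ins for the real ocfs2 code:

          /* Cache one reference per (slot, system-inode type) so the
           * orphan scan's iput() never drops the last reference. */
          #define MAX_SLOTS          8   /* illustrative */
          #define NUM_SYSTEM_INODES 16   /* illustrative */

          struct inode { int i_count; };

          static struct inode *igrab_ref(struct inode *in) /* stand-in igrab() */
          {
                  if (in)
                          in->i_count++;
                  return in;
          }

          struct my_super {
                  /* one pinned reference per slot and inode type */
                  struct inode *sys_cache[MAX_SLOTS][NUM_SYSTEM_INODES];
          };

          /* Return the system inode, reading from disk only on first use.
           * The cache keeps a reference of its own, so a later iput() by
           * the orphan scan cannot push i_count to 0, and clear_inode
           * (with the journal checkpoint it causes) is avoided. */
          static struct inode *get_system_inode(struct my_super *osb,
                          struct inode *(*read_from_disk)(int slot, int type),
                          int slot, int type)
          {
                  struct inode **cached = &osb->sys_cache[slot][type];

                  if (!*cached)
                          *cached = read_from_disk(slot, type);
                  return igrab_ref(*cached);
          }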
  10. 06 May 2010, 1 commit
    • ocfs2: increase the default size of local alloc windows · 6b82021b
      Authored by Mark Fasheh
      I have observed that the current default of 8M leads to pretty heavy
      fragmentation on multi-threaded workloads which do lots of writes.
      
      Generally, I can increase the size of local alloc windows and observe
      a marked decrease in fragmentation, even up to and beyond window
      sizes of 512 megabytes. This makes sense for a couple of reasons: a
      larger local alloc means more room for reservation windows, and on
      multi-node workloads a larger local alloc helps because we don't
      have to do window slides as often.
      
      Also, I removed the OCFS2_DEFAULT_LOCAL_ALLOC_SIZE constant as it is no
      longer used and the comment above it was out of date.
      
      To test fragmentation, I used a workload which launched 4 threads that did
      4k writes into a series of about 140 alternating files.
      
      With resv_level=2, and a 4k/4k file system I observed the following average
      fragmentation for various localalloc= parameters:
      
      localalloc= (MB)    avg. fragmentation
              8                   48
             32                   16
             64                   10
            120                    7
      
      On larger cluster sizes, the difference is more dramatic.
      
      The new default size tops out at 256M, which we'll only get for
      cluster sizes of 32K and above.
      Signed-off-by: Mark Fasheh <mfasheh@suse.com>
      Signed-off-by: Joel Becker <joel.becker@oracle.com>
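
      As a back-of-the-envelope illustration of why the default scales
      with cluster size and tops out at 256M: a local alloc window covers
      a fixed number of bitmap bits, and each bit covers one cluster. The
      constant below (8192 bits per window) is hypothetical, chosen only
      so the arithmetic lands on the numbers above; this is not the actual
      ocfs2 heuristic:

          /* Hypothetical sizing: window bytes = bits-per-window * cluster
           * size, capped at 256 MB. */
          #include <stdio.h>

          #define WINDOW_CAP_MB    256u
          #define BITS_PER_WINDOW 8192u   /* illustrative constant */

          static unsigned int default_la_window_mb(unsigned int cluster_bytes)
          {
                  unsigned long long bytes =
                          (unsigned long long)BITS_PER_WINDOW * cluster_bytes;
                  unsigned int mb = (unsigned int)(bytes >> 20);

                  return mb > WINDOW_CAP_MB ? WINDOW_CAP_MB : mb;
          }

          int main(void)
          {
                  /* 4K clusters -> 32 MB; 32K clusters -> 256 MB */
                  printf("%u %u\n", default_la_window_mb(4096),
                         default_la_window_mb(32768));
                  return 0;
          }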
  11. 18 Mar 2010, 1 commit
  12. 17 May 2010, 1 commit
  13. 13 Apr 2010, 1 commit
  14. 26 Mar 2010, 1 commit
  15. 13 Apr 2010, 2 commits
  16. 03 Mar 2010, 1 commit
  17. 26 Jan 2010, 1 commit
  18. 18 Dec 2009, 1 commit
  19. 23 Sep 2009, 4 commits
  20. 04 Apr 2009, 6 commits
  21. 13 Mar 2009, 1 commit
  22. 06 Jan 2009, 7 commits
  23. 17 Dec 2008, 1 commit
  24. 11 Nov 2008, 1 commit