1. 04 Jun 2018, 1 commit
  2. 09 Mar 2018, 1 commit
  3. 23 Jan 2018, 2 commits
  4. 19 Jan 2018, 3 commits
  5. 28 Nov 2017, 1 commit
  6. 16 Nov 2017, 3 commits
  7. 10 Aug 2017, 1 commit
    • gfs2: forcibly flush ail to relieve memory pressure · b066a4ee
      Abhi Das authored
      On systems with low memory, it is possible for gfs2 to infinitely
      loop in balance_dirty_pages() under heavy IO (creating sparse files).
      
      balance_dirty_pages() attempts to write out the dirty pages via
      gfs2_writepages() but none are found because these dirty pages are
      being used by the journaling code in the ail. Normally, the journal
      has an upper threshold which, when hit, triggers an automatic flush
      of the ail. But this threshold can be higher than the number of
      allowable dirty pages, with the result that the ail is never flushed.
      
      This patch forces an ail flush when gfs2_writepages() fails to write
      anything. This is a good indication that the ail might be holding
      some dirty pages.
      Signed-off-by: Abhi Das <adas@redhat.com>
      Signed-off-by: Bob Peterson <rpeterso@redhat.com>
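
      A minimal sketch of the idea, reconstructed from the description above;
      this is not the verbatim patch (the real change may use a flag checked
      by the log daemon), and gfs2_get_block_noalloc / NORMAL_FLUSH are names
      assumed from the surrounding gfs2 code:

          /* Sketch: if writeback made no progress but dirty pages remain,
           * assume they are pinned on the ail and force a journal flush so
           * that balance_dirty_pages() can make progress. */
          static int gfs2_writepages(struct address_space *mapping,
                                     struct writeback_control *wbc)
          {
                  struct gfs2_sbd *sdp = GFS2_SB(mapping->host);
                  int ret = mpage_writepages(mapping, wbc, gfs2_get_block_noalloc);

                  if (ret == 0 && mapping_tagged(mapping, PAGECACHE_TAG_DIRTY))
                          gfs2_log_flush(sdp, NULL, NORMAL_FLUSH);

                  return ret;
          }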
  8. 03 Feb 2017, 1 commit
  9. 11 Dec 2016, 1 commit
    • fix gfs2_stuffed_write_end() on short copies · 43388b21
      Al Viro authored
      a) the page is uptodate - ->write_begin() would either fail (in which
      case we don't reach ->write_end()), or unstuff the inode, or find the
      page already uptodate, or make a successful call to stuffed_readpage(),
      which would've made it uptodate

      b) zeroing the tail in pagecache is wrong.  kill -9 at the right time
      while writing unmodified file contents to the same file should _not_
      leave us in a situation where read() from the file reports it as full
      of zeroes.  Especially since that effect will be transient - at some
      later point the page will be evicted and then we'll be back to the
      real file contents.
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
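
      A hedged sketch of the corrected ->write_end() path for a stuffed
      inode: because the page is already uptodate, exactly `copied` bytes are
      moved into the inline data area and nothing is zeroed on a short copy
      (the signature and locals approximate fs/gfs2/aops.c, not the verbatim
      hunk):

          static int gfs2_stuffed_write_end(struct inode *inode,
                                            struct buffer_head *dibh,
                                            loff_t pos, unsigned copied,
                                            struct page *page)
          {
                  unsigned char *buf = dibh->b_data + sizeof(struct gfs2_dinode);
                  void *kaddr = kmap_atomic(page);

                  /* Copy only what was actually copied in; the old memset()
                   * of the tail is gone, since the page already holds valid
                   * data. */
                  memcpy(buf + pos, kaddr + pos, copied);
                  flush_dcache_page(page);
                  kunmap_atomic(kaddr);

                  unlock_page(page);
                  put_page(page);

                  if (copied && i_size_read(inode) < pos + copied)
                          i_size_write(inode, pos + copied);

                  return copied;
          }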
  10. 18 Aug 2016, 1 commit
  11. 03 Aug 2016, 1 commit
  12. 27 Jun 2016, 1 commit
    • gfs2: writeout truncated pages · fd4c5748
      Benjamin Marzinski authored
      When gfs2 attempts to write a page to a file that is being truncated,
      and notices that the page is completely outside of the file size, it
      tries to invalidate it.  However, this may require a transaction for
      journaled data files to revoke any buffers from the page on the active
      items list. Unfortunately, this can happen inside a log flush, where a
      transaction cannot be started. Also, gfs2 may need to be able to remove
      the buffer from the ail1 list before it can finish the log flush.
      
      To deal with this, when writing a page of a file with data journalling
      enabled gfs2 now skips the check to see if the write is outside the file
      size, and simply writes it anyway. This situation can only occur when
      the truncate code still has the file locked exclusively, and hasn't
      marked this block as free in the metadata (which happens later in
      trunc_dealloc).  After gfs2 writes this page out, the truncation code
      will shortly invalidate it and write out any revokes if necessary.
      
      To do this, gfs2 now implements its own version of block_write_full_page
      without the check, and calls the newly exported __block_write_full_page.
      It also no longer calls gfs2_writepage_common from gfs2_jdata_writepage.
      Signed-off-by: Benjamin Marzinski <bmarzins@redhat.com>
      Signed-off-by: Bob Peterson <rpeterso@redhat.com>
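
      A hedged sketch of the new helper, reconstructed from the description:
      it mirrors block_write_full_page() minus the "page entirely beyond
      i_size, invalidate it" path, zeroes the tail of a page that straddles
      i_size, and hands off to the newly exported __block_write_full_page():

          static int gfs2_write_full_page(struct page *page, get_block_t *get_block,
                                          struct writeback_control *wbc)
          {
                  struct inode * const inode = page->mapping->host;
                  loff_t i_size = i_size_read(inode);
                  const pgoff_t end_index = i_size >> PAGE_SHIFT;
                  unsigned offset = i_size & (PAGE_SIZE - 1);

                  /* Zero the part of a straddling page beyond i_size, but do
                   * not invalidate pages fully beyond i_size - just write
                   * them out and let the truncate code clean up later. */
                  if (page->index == end_index && offset)
                          zero_user_segment(page, offset, PAGE_SIZE);

                  return __block_write_full_page(inode, page, get_block, wbc,
                                                 end_buffer_async_write);
          }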
  13. 07 May 2016, 1 commit
  14. 02 May 2016, 1 commit
  15. 20 Apr 2016, 1 commit
  16. 05 Apr 2016, 1 commit
    • mm, fs: get rid of PAGE_CACHE_* and page_cache_{get,release} macros · 09cbfeaf
      Kirill A. Shutemov authored
      The PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} macros were introduced a *long*
      time ago with the promise that one day it would be possible to
      implement the page cache with chunks bigger than PAGE_SIZE.

      This promise never materialized, and it is unlikely that it ever will.

      We have many places where PAGE_CACHE_SIZE is assumed to be equal to
      PAGE_SIZE, and it's a constant source of confusion whether the
      PAGE_CACHE_* or PAGE_* constant should be used in a particular case,
      especially on the border between fs and mm.

      Globally switching to PAGE_CACHE_SIZE != PAGE_SIZE would cause too much
      breakage to be doable.

      Let's stop pretending that pages in the page cache are special.  They
      are not.

      The changes are pretty straightforward:
      
       - <foo> << (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
      
       - <foo> >> (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
      
       - PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} -> PAGE_{SIZE,SHIFT,MASK,ALIGN};
      
       - page_cache_get() -> get_page();
      
       - page_cache_release() -> put_page();
      
      This patch contains automated changes generated with coccinelle using
      the script below.  For some reason, coccinelle doesn't patch header
      files; I've called spatch on them manually.

      The only adjustment after coccinelle is reverting the change to the
      PAGE_CACHE_ALIGN definition: we are going to drop it later.

      There are a few places in the code that coccinelle didn't reach.  I'll
      fix them manually in a separate patch.  Comments and documentation will
      also be addressed in a separate patch.
      
      virtual patch
      
      @@
      expression E;
      @@
      - E << (PAGE_CACHE_SHIFT - PAGE_SHIFT)
      + E
      
      @@
      expression E;
      @@
      - E >> (PAGE_CACHE_SHIFT - PAGE_SHIFT)
      + E
      
      @@
      @@
      - PAGE_CACHE_SHIFT
      + PAGE_SHIFT
      
      @@
      @@
      - PAGE_CACHE_SIZE
      + PAGE_SIZE
      
      @@
      @@
      - PAGE_CACHE_MASK
      + PAGE_MASK
      
      @@
      expression E;
      @@
      - PAGE_CACHE_ALIGN(E)
      + PAGE_ALIGN(E)
      
      @@
      expression E;
      @@
      - page_cache_get(E)
      + get_page(E)
      
      @@
      expression E;
      @@
      - page_cache_release(E)
      + put_page(E)
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
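
      A typical before/after of what the semantic patch does to filesystem
      code (illustrative fragment only, not taken from a specific file):

          /* before */
          pgoff_t index = pos >> PAGE_CACHE_SHIFT;
          unsigned from = pos & (PAGE_CACHE_SIZE - 1);
          page_cache_get(page);
          page_cache_release(page);

          /* after */
          pgoff_t index = pos >> PAGE_SHIFT;
          unsigned from = pos & (PAGE_SIZE - 1);
          get_page(page);
          put_page(page);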
  17. 15 Mar 2016, 1 commit
    • GFS2: Fix direct IO write rounding error · 2df6f471
      Bob Peterson authored
      The fsx test in xfstests was failing because its direct IO writes were
      going through a bad calculation. The code used
      loff_t lstart = offset & (PAGE_CACHE_SIZE - 1); when it should be
      loff_t lstart = offset & ~(PAGE_CACHE_SIZE - 1);
      Thus, the write at offset 0x67e00 was calculating lstart to be
      0xe00, the address of our corruption. Instead, it should have been
      0x67000. This patch fixes the calculation.
      Signed-off-by: Bob Peterson <rpeterso@redhat.com>
      Acked-by: Steven Whitehouse <swhiteho@redhat.com>
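
      Worked through for the offset mentioned above (assuming
      PAGE_CACHE_SIZE == 4096):

          loff_t offset = 0x67e00;
          loff_t bad  = offset &  (PAGE_CACHE_SIZE - 1);  /* 0x00e00 - offset within the page      */
          loff_t good = offset & ~(PAGE_CACHE_SIZE - 1);  /* 0x67000 - page-aligned start of range */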
  18. 24 Nov 2015, 1 commit
    • GFS2: Extract quota data from reservations structure (revert 5407e242) · b54e9a0b
      Bob Peterson authored
      This patch basically reverts the majority of patch 5407e242.
      That patch eliminated the gfs2_qadata structure in favor of just
      using the reservations structure. The problem with doing that is that
      it increases the size of the reservations structure. That is not an
      issue until it comes time to fold the reservations structure into the
      inode in memory so we know it's always there. By separating out the
      quota structure again, we aren't punishing the non-quota users by
      making all the inodes bigger, requiring more slab space. This patch
      creates a new slab area to allocate the quota stuff so it's managed
      a little more sanely.
      Signed-off-by: Bob Peterson <rpeterso@redhat.com>
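
      A hedged sketch of the shape of the change (field list abridged; treat
      the names as approximations of the gfs2 code rather than the exact
      hunk): the quota bookkeeping lives in its own structure again and is
      carved from a dedicated slab only when quotas are actually used, so
      non-quota inodes stay small.

          struct gfs2_qadata {                       /* quota allocation data */
                  struct gfs2_quota_data *qa_qd[2 * MAXQUOTAS];
                  struct gfs2_holder qa_qd_ghs[2 * MAXQUOTAS];
                  unsigned int qa_qd_num;
          };

          /* separate slab for the quota data */
          gfs2_qadata_cachep = kmem_cache_create("gfs2_qadata",
                                                 sizeof(struct gfs2_qadata),
                                                 0, 0, NULL);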
  19. 06 May 2015, 1 commit
    • gfs2: kerneldoc warning fixes · 1272574b
      Fabian Frederick authored
      Fixes the following kernel-doc warnings:
      Warning(fs/gfs2/aops.c:180): No description found for parameter 'wbc'
      Warning(fs/gfs2/aops.c:236): No description found for parameter 'end'
      Warning(fs/gfs2/aops.c:236): No description found for parameter 'done_index'
      Warning(fs/gfs2/aops.c:236): Excess function parameter 'writepage' description in 'gfs2_write_jdata_pagevec'
      Warning(fs/gfs2/aops.c:346): Excess function parameter 'writepage' description in 'gfs2_write_cache_jdata'
      Warning(fs/gfs2/aops.c:346): Excess function parameter 'data' description in 'gfs2_write_cache_jdata'
      Warning(fs/gfs2/aops.c:605): No description found for parameter 'file'
      Warning(fs/gfs2/aops.c:605): No description found for parameter 'mapping'
      Warning(fs/gfs2/aops.c:605): No description found for parameter 'pages'
      Warning(fs/gfs2/aops.c:605): No description found for parameter 'nr_pages'
      Warning(fs/gfs2/aops.c:870): No description found for parameter 'copied'
      Signed-off-by: Fabian Frederick <fabf@skynet.be>
      Signed-off-by: Bob Peterson <rpeterso@redhat.com>
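
      Each warning corresponds to a missing or stale @param line in a
      kernel-doc block; the fixes are all of this form (illustrative example,
      not the exact hunk):

          /**
           * gfs2_writepage - Write page for writeback mappings
           * @page: The page to be written
           * @wbc: The writeback control
           *
           * Returns: errno
           */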
  20. 12 Apr 2015, 3 commits
  21. 26 Mar 2015, 1 commit
  22. 19 Mar 2015, 1 commit
    • gfs2: perform quota checks against allocation parameters · b8fbf471
      Abhi Das authored
      Use struct gfs2_alloc_parms as an argument to gfs2_quota_check()
      and gfs2_quota_lock_check() to check for quota violations while
      accounting for the new blocks requested by the current operation
      in ap->target.
      
      Previously, the number of new blocks requested during an operation
      was not accounted for during quota_check, which allowed these
      operations to exceed quota. This was not very apparent since most
      operations allocated only 1 block at a time and quotas would get
      violated in the next operation, i.e. the quota excess would only be
      1 block or so. With fallocate (where we allocate a bunch of blocks
      at once), the quota excess is non-trivial and is addressed by this
      patch.
      Signed-off-by: Abhi Das <adas@redhat.com>
      Signed-off-by: Bob Peterson <rpeterso@redhat.com>
      Acked-by: Steven Whitehouse <swhiteho@redhat.com>
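
      In a caller this looks roughly like the following (a sketch; the exact
      block counts depend on the call site):

          struct gfs2_alloc_parms ap = { .target = data_blocks + ind_blocks };

          /* the quota check now accounts for the blocks about to be allocated */
          error = gfs2_quota_lock_check(ip, &ap);
          if (error)
                  goto out_unlock;

          error = gfs2_inplace_reserve(ip, &ap);
          if (error)
                  goto out_qunlock;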
  23. 21 Jan 2015, 1 commit
  24. 05 Jun 2014, 1 commit
    • mm: non-atomically mark page accessed during page cache allocation where possible · 2457aec6
      Mel Gorman authored
      aops->write_begin may allocate a new page and make it visible only to
      have mark_page_accessed called almost immediately after.  Once the page
      is visible the atomic operations are necessary, which is noticeable
      overhead when writing to an in-memory filesystem like tmpfs but should
      also be noticeable with fast storage.  The objective of the patch is to
      initialise the accessed information with non-atomic operations before
      the page is visible.
      
      The bulk of filesystems directly or indirectly use
      grab_cache_page_write_begin or find_or_create_page for the initial
      allocation of a page cache page.  This patch adds an init_page_accessed()
      helper which behaves like the first call to mark_page_accessed() but may
      be called before the page is visible and can be done non-atomically.
      
      The primary APIs of concern in this case are the following and are used
      by most filesystems.
      
      	find_get_page
      	find_lock_page
      	find_or_create_page
      	grab_cache_page_nowait
      	grab_cache_page_write_begin
      
      All of them are very similar in detail, so the patch creates a core
      helper, pagecache_get_page(), which takes a flags parameter that affects
      its behaviour, such as whether the page should be marked accessed or
      not.  The old API is preserved but is basically a thin wrapper around
      this core function.
      
      Each of the filesystems is then updated to avoid calling
      mark_page_accessed when it is known that the VM interfaces have already
      done the job.  There is a slight snag in that the timing of
      mark_page_accessed() has now changed, so in rare cases it's possible a
      page gets to the end of the LRU as PageReferenced whereas previously it
      might have been repromoted.  This is expected to be rare, but it's worth
      the filesystem people thinking about it in case they see a problem with
      the timing change.  It is also the case that some filesystems may now be
      marking pages accessed that they previously did not, but it makes sense
      that filesystems have consistent behaviour in this regard.
      
      The test case used to evaluate this is a simple dd of a large file done
      multiple times with the file deleted on each iteration.  The size of the
      file is 1/10th of physical memory to avoid dirty page balancing.  In the
      async case it is possible that the workload completes without even
      hitting the disk and will have variable results, but it highlights the
      impact of mark_page_accessed for async IO.  The sync results are
      expected to be more stable.  The exception is tmpfs, where the normal
      case is for the "IO" to not hit the disk.
      
      The test machine was single socket and UMA to avoid any scheduling or
      NUMA artifacts.  Throughput and wall times are presented for sync IO;
      only wall times are shown for async as the granularity reported by dd
      and the variability are unsuitable for comparison.  As async results
      were variable due to writeback timings, I'm only reporting the maximum
      figures.  The sync results were stable enough to make the mean and
      stddev uninteresting.
      
      The performance results are reported based on a run with no profiling.
      Profile data is based on a separate run with oprofile running.
      
      async dd
                                          3.15.0-rc3            3.15.0-rc3
                                             vanilla           accessed-v2
      ext3    Max      elapsed     13.9900 (  0.00%)     11.5900 ( 17.16%)
      tmpfs	Max      elapsed      0.5100 (  0.00%)      0.4900 (  3.92%)
      btrfs   Max      elapsed     12.8100 (  0.00%)     12.7800 (  0.23%)
      ext4	Max      elapsed     18.6000 (  0.00%)     13.3400 ( 28.28%)
      xfs	Max      elapsed     12.5600 (  0.00%)      2.0900 ( 83.36%)
      
      The XFS figure is a bit strange as it managed to avoid a worst case by
      sheer luck but the average figures looked reasonable.
      
              samples percentage
      ext3       86107    0.9783  vmlinux-3.15.0-rc4-vanilla        mark_page_accessed
      ext3       23833    0.2710  vmlinux-3.15.0-rc4-accessed-v3r25 mark_page_accessed
      ext3        5036    0.0573  vmlinux-3.15.0-rc4-accessed-v3r25 init_page_accessed
      ext4       64566    0.8961  vmlinux-3.15.0-rc4-vanilla        mark_page_accessed
      ext4        5322    0.0713  vmlinux-3.15.0-rc4-accessed-v3r25 mark_page_accessed
      ext4        2869    0.0384  vmlinux-3.15.0-rc4-accessed-v3r25 init_page_accessed
      xfs        62126    1.7675  vmlinux-3.15.0-rc4-vanilla        mark_page_accessed
      xfs         1904    0.0554  vmlinux-3.15.0-rc4-accessed-v3r25 init_page_accessed
      xfs          103    0.0030  vmlinux-3.15.0-rc4-accessed-v3r25 mark_page_accessed
      btrfs      10655    0.1338  vmlinux-3.15.0-rc4-vanilla        mark_page_accessed
      btrfs       2020    0.0273  vmlinux-3.15.0-rc4-accessed-v3r25 init_page_accessed
      btrfs        587    0.0079  vmlinux-3.15.0-rc4-accessed-v3r25 mark_page_accessed
      tmpfs      59562    3.2628  vmlinux-3.15.0-rc4-vanilla        mark_page_accessed
      tmpfs       1210    0.0696  vmlinux-3.15.0-rc4-accessed-v3r25 init_page_accessed
      tmpfs         94    0.0054  vmlinux-3.15.0-rc4-accessed-v3r25 mark_page_accessed
      
      [akpm@linux-foundation.org: don't run init_page_accessed() against an uninitialised pointer]
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Theodore Ts'o <tytso@mit.edu>
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Tested-by: Prabhakar Lad <prabhakar.csengg@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
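
      On the filesystem side the change is typically of this shape
      (illustrative sketch of a ->write_begin() fragment, not a specific
      file):

          /* before: mark_page_accessed() on a page that is already visible,
           * which requires atomic operations */
          page = grab_cache_page_write_begin(mapping, index, flags);
          if (!page)
                  return -ENOMEM;
          mark_page_accessed(page);

          /* after: the explicit call is dropped; the accessed state was
           * already initialised non-atomically (init_page_accessed()) by the
           * page cache helper before the page became visible */
          page = grab_cache_page_write_begin(mapping, index, flags);
          if (!page)
                  return -ENOMEM;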
  25. 14 May 2014, 1 commit
    • GFS2: remove transaction glock · 24972557
      Benjamin Marzinski authored
      GFS2 has a transaction glock, which must be grabbed for every
      transaction, whose purpose is to deal with freezing the filesystem.
      Aside from this involving a large amount of locking, it is very easy to
      make the current fsfreeze code hang on unfreezing.
      
      This patch rewrites how gfs2 handles freezing the filesystem. The
      transaction glock is removed. In its place is a freeze glock, which is
      cached (but not held) in a shared state by every node in the cluster
      when the filesystem is mounted. This lock only needs to be grabbed on
      freezing, and by actions which need to be safe from freezing, like
      recovery.

      When a node wants to freeze the filesystem, it grabs this glock
      exclusively.  When the freeze glock state changes on the nodes (either
      from shared to unlocked, or shared to exclusive), the filesystem does a
      special log flush.  gfs2_log_flush() does all the work of flushing out
      and shutting down the incore log, and then it tries to grab the
      freeze glock in a shared state again.  Since the filesystem is stuck in
      gfs2_log_flush, no new transaction can start, and nothing can be written
      to disk. Unfreezing the filesystem simply involves dropping the freeze
      glock, allowing gfs2_log_flush() to grab and then release the shared
      lock, so it is cached for next time.
      
      However, in order for the unfreezing ioctl to occur, gfs2 needs to get a
      shared lock on the filesystem root directory inode to check permissions.
      If that glock has already been grabbed exclusively, fsfreeze will be
      unable to get the shared lock and unfreeze the filesystem.
      
      In order to allow the unfreeze, this patch makes gfs2 grab a shared lock
      on the filesystem root directory during the freeze, and hold it until it
      unfreezes the filesystem.  The functions which need to grab a shared
      lock in order to allow the unfreeze ioctl to be issued now use the lock
      grabbed by the freeze code instead.
      
      The freeze and unfreeze code take care to make sure that this shared
      lock will not be dropped while another process is using it.
      Signed-off-by: Benjamin Marzinski <bmarzins@redhat.com>
      Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
  26. 07 May 2014, 3 commits
  27. 06 Feb 2014, 1 commit
    • GFS2: journal data writepages update · 774016b2
      Steven Whitehouse authored
      GFS2 has carried what is more or less a copy of
      write_cache_pages() for some time. It seems that this
      copy has slipped behind the core code over time. This
      patch brings it back up to date, and in addition adds the
      tracepoint which would otherwise be missing.
      
      We could go further, and eliminate some or all of the
      code duplication here. The issue is that if we do that,
      then the function we need to split out from the existing
      write_cache_pages(), which will look a lot like
      gfs2_jdata_write_pagevec(), would land up putting quite a
      lot of extra variables on the stack. I know that has been
      a problem in the past in the writeback code path, which
      is why I've hesitated to do it here.
      Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
  28. 15 Jan 2014, 1 commit
  29. 03 Jan 2014, 1 commit
    • GFS2: Clean up releasepage · e4f29206
      Steven Whitehouse authored
      For historical reasons, we drop and retake the log lock in
      ->releasepage(); however, since there is no reason why we cannot hold
      the log lock over the whole function, this allows some simplification.
      In particular, pinning a buffer is only ever done under the log lock,
      so it is possible here to remove the test for pinned buffers in the
      second loop, since it is impossible for that to happen (it is also
      tested in the first loop).
      
      As a result, two tests made later in the second loop become constants
      and can also be reduced to the only possible branch. So the net result
      is to remove various bits of unreachable code and make this more
      readable.
      Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
  30. 20 Dec 2013, 1 commit
  31. 02 Oct 2013, 1 commit
    • GFS2: Add allocation parameters structure · 7b9cff46
      Steven Whitehouse authored
      This patch adds a structure to contain allocation parameters, with
      the intention of future expansion of this structure. The idea is
      that we should be able to add more information about the allocation
      in the future in order to allow the allocator to do a better job
      of placing the requests on-disk.

      There is no functional difference from applying this patch.
      Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
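
      At this point the structure is intentionally minimal; a hedged sketch
      of the structure and a typical call site (fields abridged to what the
      description implies):

          struct gfs2_alloc_parms {
                  u64 target;     /* number of blocks requested */
                  u32 aflags;     /* allocation flags */
          };

          /* callers describe the request and pass it down to the allocator */
          struct gfs2_alloc_parms ap = { .target = data_blocks + ind_blocks };
          error = gfs2_inplace_reserve(ip, &ap);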