1. 18 Jun, 2017 (1 commit)
  2. 15 Jun, 2017 (2 commits)
    • ufs_truncate_blocks(): fix the case when size is in the last direct block · a8fad984
      Committed by Al Viro
      The logic for deciding whether we need to do anything with direct blocks
      is broken when the new size is within the last direct block.  It's better
      to find the path to the last byte _not_ to be removed and use that
      instead of the path to the beginning of the first block to be freed...
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
    • ufs: avoid grabbing ->truncate_mutex if possible · 09bf4f5b
      Committed by Al Viro
      Tail unpacking is done in the wrong place; the resulting deadlocks
      are best dealt with by doing it in ->write_iter() (and switching
      to iomap while we are at it), but that's rather painful to
      backport.  The trouble comes from grabbing pages that cover
      the beginning of the tail from inside ufs_new_fragments(); ongoing
      pageout of any of those pages will deadlock on ->truncate_mutex
      with a process that got around to extending the tail while holding
      that mutex and waiting for the page to get unlocked, while
      ->writepage() on that page is waiting on ->truncate_mutex.
      
      The thing is, we don't need ->truncate_mutex when the fragment
      we are trying to map is within the tail - the damn thing is
      already allocated (the tail can't contain holes).
      
      Let's do a plain lookup, and if the fragment is present we can
      just pretend that we won the race in almost all cases.  The
      only exception is a fragment between the end of the tail and the
      end of the block containing the tail.
      
      Protect ->i_lastfrag with ->meta_lock - read_seqlock_excl() is
      sufficient.
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  3. 11 Jun, 2017 (1 commit)
  4. 10 Jun, 2017 (4 commits)
  5. 25 Dec, 2016 (1 commit)
  6. 23 Dec, 2016 (1 commit)
  7. 05 Nov, 2016 (1 commit)
  8. 28 Sep, 2016 (1 commit)
  9. 22 Sep, 2016 (1 commit)
  10. 05 Apr, 2016 (1 commit)
    • mm, fs: get rid of PAGE_CACHE_* and page_cache_{get,release} macros · 09cbfeaf
      Committed by Kirill A. Shutemov
      The PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} macros were introduced a *long*
      time ago with the promise that one day it would be possible to implement
      the page cache with bigger chunks than PAGE_SIZE.
      
      This promise never materialized, and likely never will.
      
      We have many places where PAGE_CACHE_SIZE is assumed to be equal to
      PAGE_SIZE, and it's a constant source of confusion as to whether the
      PAGE_CACHE_* or PAGE_* constant should be used in a particular case,
      especially on the border between fs and mm.
      
      Globally switching to PAGE_CACHE_SIZE != PAGE_SIZE would cause too much
      breakage to be doable.
      
      Let's stop pretending that pages in the page cache are special.  They
      are not.
      
      The changes are pretty straightforward:
      
       - <foo> << (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
      
       - <foo> >> (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
      
       - PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} -> PAGE_{SIZE,SHIFT,MASK,ALIGN};
      
       - page_cache_get() -> get_page();
      
       - page_cache_release() -> put_page();
      
      This patch contains automated changes generated with coccinelle using
      the script below.  For some reason, coccinelle doesn't patch header
      files, so I've called spatch for them manually.
      
      The only adjustment after coccinelle is a revert of the changes to the
      PAGE_CACHE_ALIGN definition: we are going to drop it later.
      
      There are a few places in the code that coccinelle didn't reach.  I'll
      fix them manually in a separate patch.  Comments and documentation will
      also be addressed in a separate patch.
      
      virtual patch
      
      @@
      expression E;
      @@
      - E << (PAGE_CACHE_SHIFT - PAGE_SHIFT)
      + E
      
      @@
      expression E;
      @@
      - E >> (PAGE_CACHE_SHIFT - PAGE_SHIFT)
      + E
      
      @@
      @@
      - PAGE_CACHE_SHIFT
      + PAGE_SHIFT
      
      @@
      @@
      - PAGE_CACHE_SIZE
      + PAGE_SIZE
      
      @@
      @@
      - PAGE_CACHE_MASK
      + PAGE_MASK
      
      @@
      expression E;
      @@
      - PAGE_CACHE_ALIGN(E)
      + PAGE_ALIGN(E)
      
      @@
      expression E;
      @@
      - page_cache_get(E)
      + get_page(E)
      
      @@
      expression E;
      @@
      - page_cache_release(E)
      + put_page(E)
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  11. 09 Dec, 2015 (1 commit)
    • don't put symlink bodies in pagecache into highmem · 21fc61c7
      Committed by Al Viro
      kmap() in page_follow_link_light() needed to go - holding an
      arbitrary number of kmaps for long stretches is a great way to
      deadlock the system.
      
      The new helper (inode_nohighmem(inode)) needs to be used for the
      inodes of pagecache symlinks; this is done for all in-tree cases.
      page_follow_link_light() is instrumented to yell about anything
      missed.
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  12. 07 Dec, 2015 (1 commit)
    • ufs: get rid of ->setattr() for symlinks · 9cdce3c0
      Committed by Al Viro
      It was only needed for a couple of months in 2010, until UFS
      quota support got dropped.  Since then it has been equivalent to
      simple_setattr() (i.e. the default) for everything except
      regular files.  Dropping it there allows converting all UFS
      symlinks to {page,simple}_symlink_inode_operations, getting
      rid of fs/ufs/symlink.c completely.
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  13. 07 Jul, 2015 (24 commits)