1. 28 Jul, 2016 (1 commit)
  2. 01 Jun, 2016 (2 commits)
    • ceph: improve fscache revalidation · f7f7e7a0
      Authored by Yan, Zheng
      There are several issues in the fscache revalidation code.
      - In ceph_revalidate_work(), fscache_invalidate() is called when
        fscache_check_consistency() returns 0. This is completely wrong,
        because 0 means the cache is valid.
      - handle_cap_grant() calls ceph_queue_revalidate() if the client
        already has CAP_FILE_CACHE. This code is confusing; the client
        should revalidate the cache each time it is granted CAP_FILE_CACHE
        anew.
      - In handle_cap_grant(), fscache_invalidate() is called when the MDS
        revokes CAP_FILE_CACHE. This is inconsistent with the case where
        the inode gets evicted: in the latter case the cache is not
        discarded, and the client may use it when the inode is reloaded.

      This patch moves the fscache revalidation into ceph_get_caps().
      The client revalidates the cache after it gets CAP_FILE_CACHE.
      i_rdcache_gen should stay constant while CAP_FILE_CACHE is in use.
      If i_fscache_gen is not equal to i_rdcache_gen, the client needs to
      check the cache's consistency.
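
      A minimal, self-contained sketch of the generation-check pattern described
      above. The field names mirror i_rdcache_gen/i_fscache_gen, but the struct,
      helpers, and user-space framing are hypothetical, not the actual fs/ceph code:

      #include <stdbool.h>
      #include <stdio.h>

      /* Hypothetical stand-ins for the relevant ceph inode fields. */
      struct toy_ceph_inode {
              unsigned long i_rdcache_gen;  /* bumped when cached data is invalidated */
              unsigned long i_fscache_gen;  /* generation at the last fscache check */
      };

      /* Pretend consistency check: 0 means the cached object is still valid. */
      static int toy_check_consistency(struct toy_ceph_inode *ci)
      {
              (void)ci;
              return 0;
      }

      /* Called after the client is granted CAP_FILE_CACHE: only consult fscache
       * when the generations have diverged, and invalidate only on a non-zero
       * (inconsistent) result. */
      static void toy_revalidate(struct toy_ceph_inode *ci)
      {
              if (ci->i_fscache_gen == ci->i_rdcache_gen)
                      return;                      /* cache already known to be in sync */
              if (toy_check_consistency(ci) != 0)
                      puts("cache inconsistent: invalidate it");
              ci->i_fscache_gen = ci->i_rdcache_gen;
      }

      int main(void)
      {
              struct toy_ceph_inode ci = { .i_rdcache_gen = 2, .i_fscache_gen = 1 };
              toy_revalidate(&ci);                 /* generations differ -> check consistency */
              return 0;
      }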
      Signed-off-by: Yan, Zheng <zyan@redhat.com>
      f7f7e7a0
    • ceph: avoid unnecessary fscache invalidation/revalidation · 14649758
      Authored by Yan, Zheng
      ceph_fill_file_size() has already called ceph_fscache_invalidate()
      if it returns true, so the caller does not need to invalidate the
      fscache again in that case.
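
      A toy model of that contract (purely illustrative; the struct and helpers
      below are made up, not the fs/ceph implementation): the fill helper reports
      whether it already invalidated the cache, so the caller only invalidates on
      its own paths.

      #include <stdbool.h>
      #include <stdio.h>

      struct toy_inode { long size; bool cache_valid; };

      /* Returns true when the size changed; in that case it has already
       * invalidated the cache, mirroring the contract described above. */
      static bool toy_fill_file_size(struct toy_inode *inode, long new_size)
      {
              if (new_size == inode->size)
                      return false;                /* nothing changed, cache untouched */
              inode->size = new_size;
              inode->cache_valid = false;          /* invalidation already done here */
              return true;
      }

      int main(void)
      {
              struct toy_inode inode = { .size = 100, .cache_valid = true };
              if (!toy_fill_file_size(&inode, 200))
                      inode.cache_valid = false;   /* caller-side invalidation only when needed */
              printf("cache_valid=%d\n", (int)inode.cache_valid);
              return 0;
      }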
      Signed-off-by: Yan, Zheng <zyan@redhat.com>
      14649758
  3. 26 May, 2016 (2 commits)
  4. 05 Apr, 2016 (1 commit)
    • mm, fs: get rid of PAGE_CACHE_* and page_cache_{get,release} macros · 09cbfeaf
      Authored by Kirill A. Shutemov
      The PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} macros were introduced a *long* time
      ago with the promise that one day it would be possible to implement the
      page cache with chunks bigger than PAGE_SIZE.

      This promise never materialized, and it is unlikely it ever will.
      
      We have many places where PAGE_CACHE_SIZE is assumed to be equal to
      PAGE_SIZE, and it is a constant source of confusion whether a PAGE_CACHE_*
      or PAGE_* constant should be used in a particular case, especially on the
      border between fs and mm.

      Globally switching to PAGE_CACHE_SIZE != PAGE_SIZE would cause too much
      breakage to be doable.
      
      Let's stop pretending that pages in the page cache are special. They are
      not.
      
      The changes are pretty straightforward (see the illustrative before/after
      fragment following this list):
      
       - <foo> << (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
      
       - <foo> >> (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
      
       - PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} -> PAGE_{SIZE,SHIFT,MASK,ALIGN};
      
       - page_cache_get() -> get_page();
      
       - page_cache_release() -> put_page();
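
      As an illustration, a hypothetical filesystem fragment (not taken from the
      patch itself) is converted like this:

      /* before */
      index  = pos >> PAGE_CACHE_SHIFT;
      offset = pos & ~PAGE_CACHE_MASK;
      page_cache_release(page);

      /* after */
      index  = pos >> PAGE_SHIFT;
      offset = pos & ~PAGE_MASK;
      put_page(page);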
      
      This patch contains automated changes generated with coccinelle using the
      script below. For some reason coccinelle doesn't patch header files, so
      I've called spatch for them manually.

      The only adjustment after coccinelle is reverting the changes to the
      PAGE_CACHE_ALIGN definition: we are going to drop it later.

      There are a few places in the code that coccinelle didn't reach. I'll fix
      them manually in a separate patch. Comments and documentation will also be
      addressed in a separate patch.
      
      virtual patch
      
      @@
      expression E;
      @@
      - E << (PAGE_CACHE_SHIFT - PAGE_SHIFT)
      + E
      
      @@
      expression E;
      @@
      - E >> (PAGE_CACHE_SHIFT - PAGE_SHIFT)
      + E
      
      @@
      @@
      - PAGE_CACHE_SHIFT
      + PAGE_SHIFT
      
      @@
      @@
      - PAGE_CACHE_SIZE
      + PAGE_SIZE
      
      @@
      @@
      - PAGE_CACHE_MASK
      + PAGE_MASK
      
      @@
      expression E;
      @@
      - PAGE_CACHE_ALIGN(E)
      + PAGE_ALIGN(E)
      
      @@
      expression E;
      @@
      - page_cache_get(E)
      + get_page(E)
      
      @@
      expression E;
      @@
      - page_cache_release(E)
      + put_page(E)
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      09cbfeaf
  5. 26 Mar, 2016 (1 commit)
  6. 05 Mar, 2016 (1 commit)
  7. 23 Jan, 2016 (1 commit)
    • wrappers for ->i_mutex access · 5955102c
      Authored by Al Viro
      Parallel to mutex_{lock,unlock,trylock,is_locked,lock_nested}, add
      inode_foo(inode) wrappers that are equivalent to mutex_foo(&inode->i_mutex).

      Please use these for access to ->i_mutex; over the coming cycle ->i_mutex
      will become an rwsem, with ->lookup() done with it held only shared.
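
      The wrappers are thin static inlines along the following lines (a sketch of
      the pattern; the commit adds the full set in include/linux/fs.h):

      static inline void inode_lock(struct inode *inode)
      {
              mutex_lock(&inode->i_mutex);
      }

      static inline void inode_unlock(struct inode *inode)
      {
              mutex_unlock(&inode->i_mutex);
      }

      static inline int inode_trylock(struct inode *inode)
      {
              return mutex_trylock(&inode->i_mutex);
      }

      static inline int inode_is_locked(struct inode *inode)
      {
              return mutex_is_locked(&inode->i_mutex);
      }

      Callers then write inode_lock(inode) instead of mutex_lock(&inode->i_mutex),
      which keeps the eventual switch of ->i_mutex to an rwsem out of individual
      filesystems.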
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      5955102c
  8. 03 Nov, 2015 (2 commits)
  9. 09 Sep, 2015 (1 commit)
  10. 31 Jul, 2015 (1 commit)
    • ceph: always re-send cap flushes when MDS recovers · fc927cd3
      Authored by Yan, Zheng
      Commit e548e9b9 makes the kclient re-send a cap flush only once during MDS
      failover. If the kclient sends a cap flush after the MDS enters the
      reconnect stage but before the MDS recovers, the kclient will skip
      re-sending the same cap flush when the MDS recovers.

      This causes problems for newly created inodes. The MDS handles cap flushes
      before replaying unsafe requests, so it's possible that the MDS finds the
      corresponding inode missing while handling a cap flush. The fix is to
      revert to the old behaviour: always re-send cap flushes when the MDS
      recovers.
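
      A minimal model of that behaviour change (purely illustrative; the name
      kick_flushing_caps() echoes the kernel code, but this is not the actual
      fs/ceph/caps.c implementation):

      #include <stdbool.h>
      #include <stdio.h>

      struct cap_flush {
              unsigned long tid;   /* transaction id of the flush */
              bool sent;           /* already sent before the MDS failed over */
      };

      /* On MDS recovery, re-send every pending flush, including ones sent before
       * the failover: the recovering MDS may never have processed them. */
      static void kick_flushing_caps(struct cap_flush *flushes, int n)
      {
              for (int i = 0; i < n; i++) {
                      printf("re-sending cap flush tid %lu\n", flushes[i].tid);
                      flushes[i].sent = true;
              }
      }

      int main(void)
      {
              struct cap_flush pending[] = { { 1, true }, { 2, false } };
              kick_flushing_caps(pending, 2);   /* tid 1 is re-sent even though sent == true */
              return 0;
      }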
      Signed-off-by: Yan, Zheng <zyan@redhat.com>
      Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
      fc927cd3
  11. 25 Jun, 2015 (15 commits)
  12. 20 Apr, 2015 (3 commits)
  13. 16 Apr, 2015 (1 commit)
  14. 19 Feb, 2015 (4 commits)
  15. 18 Dec, 2014 (4 commits)