1. 26 May 2016, 1 commit
  2. 05 Apr 2016, 1 commit
    • mm, fs: get rid of PAGE_CACHE_* and page_cache_{get,release} macros · 09cbfeaf
      Committed by Kirill A. Shutemov
      The PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} macros were introduced a *long*
      time ago with the promise that one day it would be possible to implement
      the page cache with bigger chunks than PAGE_SIZE.
      
      This promise never materialized, and it is unlikely it ever will.
      
      We have many places where PAGE_CACHE_SIZE is assumed to be equal to
      PAGE_SIZE, and it is a constant source of confusion whether the
      PAGE_CACHE_* or the PAGE_* constant should be used in a particular case,
      especially on the border between fs and mm.
      
      Globally switching to PAGE_CACHE_SIZE != PAGE_SIZE would cause too much
      breakage to be doable.
      
      Let's stop pretending that pages in page cache are special.  They are
      not.
      
      The changes are pretty straightforward (a before/after sketch follows the list):
      
       - <foo> << (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
      
       - <foo> >> (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
      
       - PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} -> PAGE_{SIZE,SHIFT,MASK,ALIGN};
      
       - page_cache_get() -> get_page();
      
       - page_cache_release() -> put_page();
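      
      A minimal before/after sketch of these rules, in kernel-style C (the
      function names and call site are illustrative assumptions, not code
      from a specific file):
      
      static unsigned long nr_pages_before(struct page *page, pgoff_t index,
                                           loff_t size)
      {
              /* index expressed in PAGE_SIZE units: a no-op shift today */
              unsigned long start = index << (PAGE_CACHE_SHIFT - PAGE_SHIFT);
              loff_t aligned = PAGE_CACHE_ALIGN(size);
      
              page_cache_get(page);           /* take a reference */
              /* ... touch page contents ... */
              page_cache_release(page);       /* drop the reference */
      
              return start + (aligned >> PAGE_CACHE_SHIFT);
      }
      
      static unsigned long nr_pages_after(struct page *page, pgoff_t index,
                                          loff_t size)
      {
              unsigned long start = index;    /* the shift disappears */
              loff_t aligned = PAGE_ALIGN(size);
      
              get_page(page);
              /* ... touch page contents ... */
              put_page(page);
      
              return start + (aligned >> PAGE_SHIFT);
      }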
      
      This patch contains automated changes generated with coccinelle using
      the script below.  For some reason, coccinelle doesn't patch header
      files, so I've called spatch on them manually.
      
      The only adjustment after coccinelle is a revert of the changes to the
      PAGE_CACHE_ALIGN definition: we are going to drop it later.
      
      There are a few places in the code that coccinelle didn't reach.  I'll
      fix them manually in a separate patch.  Comments and documentation will
      also be addressed in a separate patch.
      
      virtual patch
      
      @@
      expression E;
      @@
      - E << (PAGE_CACHE_SHIFT - PAGE_SHIFT)
      + E
      
      @@
      expression E;
      @@
      - E >> (PAGE_CACHE_SHIFT - PAGE_SHIFT)
      + E
      
      @@
      @@
      - PAGE_CACHE_SHIFT
      + PAGE_SHIFT
      
      @@
      @@
      - PAGE_CACHE_SIZE
      + PAGE_SIZE
      
      @@
      @@
      - PAGE_CACHE_MASK
      + PAGE_MASK
      
      @@
      expression E;
      @@
      - PAGE_CACHE_ALIGN(E)
      + PAGE_ALIGN(E)
      
      @@
      expression E;
      @@
      - page_cache_get(E)
      + get_page(E)
      
      @@
      expression E;
      @@
      - page_cache_release(E)
      + put_page(E)
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  3. 26 Mar 2016, 1 commit
  4. 05 Mar 2016, 1 commit
  5. 23 Jan 2016, 1 commit
    • wrappers for ->i_mutex access · 5955102c
      Committed by Al Viro
      Parallel to mutex_{lock,unlock,trylock,is_locked,lock_nested}, add
      inode_foo(inode) wrappers, each being mutex_foo(&inode->i_mutex).
      
      Please use those for access to ->i_mutex; over the coming cycle
      ->i_mutex will become an rwsem, with ->lookup() done with it held
      only shared. A sketch of the wrapper shape follows.
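      
      A minimal sketch of the wrapper pattern the commit describes (shapes
      inferred from the description above; the canonical definitions live in
      the kernel's include/linux/fs.h):
      
      static inline void inode_lock(struct inode *inode)
      {
              mutex_lock(&inode->i_mutex);
      }
      
      static inline void inode_unlock(struct inode *inode)
      {
              mutex_unlock(&inode->i_mutex);
      }
      
      static inline int inode_trylock(struct inode *inode)
      {
              return mutex_trylock(&inode->i_mutex);
      }
      
      static inline int inode_is_locked(struct inode *inode)
      {
              return mutex_is_locked(&inode->i_mutex);
      }
      
      static inline void inode_lock_nested(struct inode *inode, unsigned subclass)
      {
              mutex_lock_nested(&inode->i_mutex, subclass);
      }
      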
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  6. 03 Nov 2015, 2 commits
  7. 09 Sep 2015, 1 commit
  8. 31 Jul 2015, 1 commit
    • ceph: always re-send cap flushes when MDS recovers · fc927cd3
      Committed by Yan, Zheng
      Commit e548e9b9 makes the kclient
      re-send each cap flush only once during MDS failover. If the kclient
      sends a cap flush after the MDS enters the reconnect stage but before
      the MDS recovers, the kclient will skip re-sending that same cap flush
      once the MDS has recovered.
      
      This causes a problem for newly created inodes. The MDS handles cap
      flushes before replaying unsafe requests, so it is possible that the
      MDS finds the corresponding inode missing when handling a cap flush.
      The fix is to revert to the old behaviour: always re-send when the MDS
      recovers.
      Signed-off-by: Yan, Zheng <zyan@redhat.com>
      Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
  9. 25 Jun 2015, 15 commits
  10. 20 Apr 2015, 3 commits
  11. 16 Apr 2015, 1 commit
  12. 19 Feb 2015, 4 commits
  13. 18 Dec 2014, 5 commits
  14. 14 Nov 2014, 1 commit
    • ceph: fix flush tid comparison · 3231300b
      Committed by Yan, Zheng
      The TID of a cap flush ack is 64 bits, but ceph_inode_info::flushing_cap_tid
      is only 16 bits. 16 bits should be plenty to keep the cap flush pipeline
      moving, but we need to cast in the proper direction when comparing these
      differently sized values. So downcast the 64-bit one to 16 bits; a minimal
      illustration follows.
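      
      A minimal illustration of the comparison direction (plain stdint types;
      the helper name and signature are assumptions for illustration, not the
      actual ceph code):
      
      #include <stdbool.h>
      #include <stdint.h>
      
      /* Downcast the wide TID before comparing: zero-extending the 16-bit
       * side instead would never match once the full TID exceeds 65535. */
      static bool flush_tid_matches(uint64_t ack_tid, uint16_t flushing_cap_tid)
      {
              return (uint16_t)ack_tid == flushing_cap_tid;
      }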
      
      Reflects ceph.git commit a5184cf46a6e867287e24aeb731634828467cd98.
      Signed-off-by: Yan, Zheng <zyan@redhat.com>
      Reviewed-by: Ilya Dryomov <idryomov@redhat.com>
  15. 15 Oct 2014, 2 commits
    • ceph: fix bool assignments · ab6c2c3e
      Committed by Fabian Frederick
      Fix some coccinelle warnings (an example of the fix pattern follows the list):
      fs/ceph/caps.c:2400:6-10: WARNING: Assignment of bool to 0/1
      fs/ceph/caps.c:2401:6-15: WARNING: Assignment of bool to 0/1
      fs/ceph/caps.c:2402:6-17: WARNING: Assignment of bool to 0/1
      fs/ceph/caps.c:2403:6-22: WARNING: Assignment of bool to 0/1
      fs/ceph/caps.c:2404:6-22: WARNING: Assignment of bool to 0/1
      fs/ceph/caps.c:2405:6-19: WARNING: Assignment of bool to 0/1
      fs/ceph/caps.c:2440:4-20: WARNING: Assignment of bool to 0/1
      fs/ceph/caps.c:2469:3-16: WARNING: Assignment of bool to 0/1
      fs/ceph/caps.c:2490:2-18: WARNING: Assignment of bool to 0/1
      fs/ceph/caps.c:2519:3-7: WARNING: Assignment of bool to 0/1
      fs/ceph/caps.c:2549:3-12: WARNING: Assignment of bool to 0/1
      fs/ceph/caps.c:2575:2-6: WARNING: Assignment of bool to 0/1
      fs/ceph/caps.c:2589:3-7: WARNING: Assignment of bool to 0/1
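      
      The pattern behind these warnings, as a generic example (not the exact
      fs/ceph/caps.c lines):
      
      #include <stdbool.h>
      
      static bool example(void)
      {
              bool delayed = 0;       /* flagged: bool assigned 0/1 */
              bool flushed = 1;       /* flagged */
      
              delayed = false;        /* fixed: assign false/true instead */
              flushed = true;
              return delayed || flushed;
      }
      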
      Signed-off-by: Fabian Frederick <fabf@skynet.be>
      Signed-off-by: Ilya Dryomov <idryomov@redhat.com>
    • ceph: move ceph_find_inode() outside the s_mutex · 6cd3bcad
      Committed by Yan, Zheng
      ceph_find_inode() may wait on a freeing inode, so using it while holding
      the s_mutex may cause a deadlock: the freeing inode is waiting for an OSD
      read reply, but the dispatch thread is blocked on the s_mutex. A sketch
      of the resulting ordering follows.
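      
      A hedged sketch of the ordering after the change (the handler shape and
      variable names are assumptions for illustration; only ceph_find_inode()
      and s_mutex come from the commit message):
      
      static void handle_message(struct super_block *sb, struct ceph_vino vino,
                                 struct ceph_mds_session *session)
      {
              /* Look the inode up before taking s_mutex: the lookup may sleep
               * waiting on a freeing inode, and it must not do so while
               * holding the mutex the dispatch path needs. */
              struct inode *inode = ceph_find_inode(sb, vino);
      
              mutex_lock(&session->s_mutex);
              /* ... process the message with the session locked ... */
              mutex_unlock(&session->s_mutex);
      
              iput(inode);    /* drop the reference taken by the lookup */
      }
      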
      Signed-off-by: Yan, Zheng <zyan@redhat.com>
      Reviewed-by: Sage Weil <sage@redhat.com>