1. 19 Feb 2015 (3 commits)
  2. 18 Dec 2014 (5 commits)
  3. 14 Nov 2014 (1 commit)
    • ceph: fix flush tid comparision · 3231300b
      Committed by Yan, Zheng
      TID of cap flush ack is 64 bits, but ceph_inode_info::flushing_cap_tid
      is only 16 bits. 16 bits should be plenty to let the cap flush updates
      pipeline appropriately, but we need to cast in the proper direction when
      comparing these differently-sized values. So downcast the 64-bit one
      to 16 bits.
      
      Reflects ceph.git commit a5184cf46a6e867287e24aeb731634828467cd98.
      Signed-off-by: Yan, Zheng <zyan@redhat.com>
      Reviewed-by: Ilya Dryomov <idryomov@redhat.com>
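      A minimal, self-contained sketch of the width mismatch described above; the
      struct, field, and function names are illustrative, not the actual fs/ceph
      definitions:

      ```c
      #include <stdint.h>
      #include <stdio.h>

      struct inode_info_sketch {
          uint16_t flushing_cap_tid;  /* 16-bit flush TID kept by the client */
      };

      /* Downcast the 64-bit wire TID to 16 bits before comparing, so the check
       * is done at the width the client actually stores. */
      static int ack_matches_flush(uint64_t ack_tid,
                                   const struct inode_info_sketch *ci)
      {
          return (uint16_t)ack_tid == ci->flushing_cap_tid;
      }

      int main(void)
      {
          struct inode_info_sketch ci = { .flushing_cap_tid = 5 };
          /* 0x10005 truncates to 5 in 16 bits, so the ack still matches. */
          printf("%d\n", ack_matches_flush(0x10005ULL, &ci));
          return 0;
      }
      ```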
  4. 15 Oct 2014 (2 commits)
    • ceph: fix bool assignments · ab6c2c3e
      Committed by Fabian Frederick
      Fix some coccinelle warnings:
      fs/ceph/caps.c:2400:6-10: WARNING: Assignment of bool to 0/1
      fs/ceph/caps.c:2401:6-15: WARNING: Assignment of bool to 0/1
      fs/ceph/caps.c:2402:6-17: WARNING: Assignment of bool to 0/1
      fs/ceph/caps.c:2403:6-22: WARNING: Assignment of bool to 0/1
      fs/ceph/caps.c:2404:6-22: WARNING: Assignment of bool to 0/1
      fs/ceph/caps.c:2405:6-19: WARNING: Assignment of bool to 0/1
      fs/ceph/caps.c:2440:4-20: WARNING: Assignment of bool to 0/1
      fs/ceph/caps.c:2469:3-16: WARNING: Assignment of bool to 0/1
      fs/ceph/caps.c:2490:2-18: WARNING: Assignment of bool to 0/1
      fs/ceph/caps.c:2519:3-7: WARNING: Assignment of bool to 0/1
      fs/ceph/caps.c:2549:3-12: WARNING: Assignment of bool to 0/1
      fs/ceph/caps.c:2575:2-6: WARNING: Assignment of bool to 0/1
      fs/ceph/caps.c:2589:3-7: WARNING: Assignment of bool to 0/1
      Signed-off-by: Fabian Frederick <fabf@skynet.be>
      Signed-off-by: Ilya Dryomov <idryomov@redhat.com>
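      A generic illustration of the coccinelle cleanup above, assigning true/false
      instead of 0/1 to bool variables; this is a standalone example, not the
      actual fs/ceph/caps.c code:

      ```c
      #include <stdbool.h>
      #include <stdio.h>

      /* Before the cleanup: a bool assigned integer literals. */
      static bool was_delayed_old(int err)
      {
          bool delayed = 0;
          if (err)
              delayed = 1;
          return delayed;
      }

      /* After the cleanup: the same logic using the bool literals. */
      static bool was_delayed_new(int err)
      {
          bool delayed = false;
          if (err)
              delayed = true;
          return delayed;
      }

      int main(void)
      {
          printf("%d %d\n", was_delayed_old(1), was_delayed_new(1));
          return 0;
      }
      ```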
    • ceph: move ceph_find_inode() outside the s_mutex · 6cd3bcad
      Committed by Yan, Zheng
      ceph_find_inode() may wait on a freeing inode; calling it while holding the
      s_mutex may cause a deadlock (the freeing inode is waiting for an OSD read
      reply, but the dispatch thread is blocked on the s_mutex).
      Signed-off-by: Yan, Zheng <zyan@redhat.com>
      Reviewed-by: Sage Weil <sage@redhat.com>
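      A minimal sketch of the lock-ordering idea above: do the possibly-blocking
      inode lookup before taking the session mutex, so the thread that completes
      the lookup never waits behind the mutex holder. All names are illustrative,
      not the kernel code:

      ```c
      #include <pthread.h>
      #include <stdio.h>

      static pthread_mutex_t s_mutex_sketch = PTHREAD_MUTEX_INITIALIZER;

      struct inode_sketch { int ino; };
      static struct inode_sketch table[4] = { {1}, {2}, {3}, {4} };

      /* Stand-in for ceph_find_inode(): a lookup that may block. */
      static struct inode_sketch *find_inode_sketch(int ino)
      {
          return (ino >= 1 && ino <= 4) ? &table[ino - 1] : NULL;
      }

      static void handle_message(int ino)
      {
          /* Do the possibly-blocking lookup first, outside the mutex... */
          struct inode_sketch *in = find_inode_sketch(ino);

          /* ...then take the mutex only for the work that really needs it. */
          pthread_mutex_lock(&s_mutex_sketch);
          if (in)
              printf("handling cap message for inode %d\n", in->ino);
          pthread_mutex_unlock(&s_mutex_sketch);
      }

      int main(void)
      {
          handle_message(2);
          return 0;
      }
      ```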
  5. 24 Jul 2014 (1 commit)
  6. 06 Jun 2014 (5 commits)
  7. 29 Apr 2014 (1 commit)
  8. 05 Apr 2014 (2 commits)
  9. 21 Jan 2014 (6 commits)
  10. 17 Jan 2014 (1 commit)
  11. 01 Jan 2014 (1 commit)
  12. 24 Nov 2013 (2 commits)
  13. 07 Sep 2013 (2 commits)
  14. 28 Aug 2013 (1 commit)
  15. 16 Aug 2013 (2 commits)
    • ceph: fix request max size · 3871cbb9
      Committed by Yan, Zheng
      ceph_check_caps() requests a new max size only when the Fw cap is held.
      If we call check_max_size() while there is no Fw cap, it updates
      i_wanted_max_size and calls ceph_check_caps(), but ceph_check_caps()
      does nothing. Later, when the Fw cap is issued, we call check_max_size()
      again, but i_wanted_max_size is already equal to 'endoff' at that point,
      so check_max_size() doesn't call ceph_check_caps() and we end up waiting
      for the new max size forever.
      
      The fix is to duplicate ceph_check_caps()'s "request max size" code in
      check_max_size(), and to make try_get_cap_refs() wait for the Fw cap
      before retrying the request for a new max size.
      
      This patch also removes the "endoff > (inode->i_size << 1)" check
      in check_max_size(). It's useless because there is no corresponding
      logic in ceph_check_caps().
      Reviewed-by: Sage Weil <sage@inktank.com>
      Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
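      A heavily simplified sketch of the fixed behaviour described above: the
      wanted size is always recorded, but a new max size is only requested while
      the Fw cap is held, so the caller retries once Fw is issued. The names,
      flag bit, and flow are assumptions for illustration only:

      ```c
      #include <stdio.h>

      #define CAP_FW 0x1  /* stand-in for the "file write" cap bit */

      struct inode_sketch {
          int caps_issued;
          long long wanted_max_size;
      };

      /* Stand-in for the fixed check_max_size(): remember the wanted size, but
       * only ask the MDS for a bigger max_size while the Fw cap is held. */
      static void check_max_size_sketch(struct inode_sketch *ci, long long endoff)
      {
          if (endoff > ci->wanted_max_size)
              ci->wanted_max_size = endoff;
          if (ci->caps_issued & CAP_FW)
              printf("requesting max_size %lld from the MDS\n", ci->wanted_max_size);
          else
              printf("no Fw cap yet; retry once it is issued\n");
      }

      int main(void)
      {
          struct inode_sketch ci = { .caps_issued = 0, .wanted_max_size = 0 };
          check_max_size_sketch(&ci, 8192);  /* no Fw: only records the wish */
          ci.caps_issued |= CAP_FW;          /* Fw issued later */
          check_max_size_sketch(&ci, 8192);  /* retry now actually asks the MDS */
          return 0;
      }
      ```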
    • ceph: introduce i_truncate_mutex · b0d7c223
      Committed by Yan, Zheng
      I encountered the deadlock below when running fsstress:
      
      wmtruncate work      truncate                 MDS
      ---------------  ------------------  --------------------------
                         lock i_mutex
                                            <- truncate file
      lock i_mutex (blocked)
                                            <- revoking Fcb (filelock to MIX)
                         send request ->
                                               handle request (xlock filelock)
      
      Initially, there are some dirty pages in the page cache. When the
      kclient receives the truncate message, it reduces the inode size and
      creates some 'out of i_size' dirty pages. The wmtruncate work can't
      truncate these dirty pages because it is blocked by the i_mutex. Later,
      when the kclient receives the cap message that revokes the Fcb caps, it
      can't flush all dirty pages because writepages() only flushes dirty
      pages within the inode size.
      
      When the MDS handles the 'truncate' request from the kclient, it waits
      for the filelock to become stable. But the filelock is stuck in an
      unstable state because the revocation of the kclient's Fcb caps can't
      complete.
      
      The truncate pagecache locking has already caused lots of trouble for
      us. I think it's time to simplify it by introducing a new mutex. We use
      the new mutex to prevent concurrent truncate_inode_pages(). There is no
      need to worry about races between buffered writes and
      truncate_inode_pages(), because our "get caps" mechanism prevents them
      from running concurrently.
      Reviewed-by: Sage Weil <sage@inktank.com>
      Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
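      A minimal sketch of the idea above: a dedicated mutex that serializes only
      pagecache truncation, so it can be taken without holding i_mutex. Names are
      illustrative, not the actual fs/ceph code:

      ```c
      #include <pthread.h>
      #include <stdio.h>

      struct ceph_inode_sketch {
          pthread_mutex_t i_truncate_mutex;  /* serializes pagecache truncation only */
          long long truncate_size;
      };

      static void do_pending_vmtruncate_sketch(struct ceph_inode_sketch *ci)
      {
          /* Only one thread truncates the pagecache at a time; buffered writes
           * are excluded by the "get caps" machinery, not by this lock, so the
           * mutex never has to nest inside i_mutex. */
          pthread_mutex_lock(&ci->i_truncate_mutex);
          printf("truncating pagecache to %lld\n", ci->truncate_size);
          pthread_mutex_unlock(&ci->i_truncate_mutex);
      }

      int main(void)
      {
          struct ceph_inode_sketch ci = { .truncate_size = 4096 };

          pthread_mutex_init(&ci.i_truncate_mutex, NULL);
          do_pending_vmtruncate_sketch(&ci);
          pthread_mutex_destroy(&ci.i_truncate_mutex);
          return 0;
      }
      ```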
  16. 10 Aug 2013 (1 commit)
  17. 04 Jul 2013 (4 commits)
    • ceph: fix race between cap issue and revoke · 6ee6b953
      Committed by Yan, Zheng
      If we receive new caps from the auth MDS while the non-auth MDS is
      revoking the newly issued caps, we should release the caps held from
      the non-auth MDS. The scenario is that the filelock's state changes
      from SYNC to LOCK: the non-auth MDS revokes the Fc cap while the client
      gets the Fc cap from the auth MDS at the same time.
      Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
      Reviewed-by: Sage Weil <sage@inktank.com>
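      A rough sketch of the rule described above: if the auth MDS issues caps that
      the non-auth MDS is currently revoking, stop counting the non-auth copy as
      held. Bit values and names are hypothetical:

      ```c
      #include <stdio.h>

      #define CAP_FC 0x4  /* pretend "file cache" cap bit */

      int main(void)
      {
          int newly_issued_by_auth = CAP_FC;
          int held_from_nonauth = CAP_FC;
          int revoking_on_nonauth = CAP_FC;

          /* If the auth MDS re-issues what the non-auth MDS is taking away,
           * drop the overlapping caps held from the non-auth MDS. */
          int overlap = newly_issued_by_auth & revoking_on_nonauth;
          held_from_nonauth &= ~overlap;

          printf("caps still held from non-auth MDS: %#x\n", held_from_nonauth);
          return 0;
      }
      ```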
    • ceph: fix cap revoke race · b1530f57
      Committed by Yan, Zheng
      If caps are being revoked by the auth MDS, don't consider them issued
      even if they are still issued by a non-auth MDS. The non-auth MDS
      should also be revoking/exporting these caps; the client just hasn't
      received the cap revoke/export message yet.
      
      The race I encountered is: when caps are being exported to a new MDS,
      the client receives a cap import message and a cap revoke message from
      the new MDS, then receives a cap export message from the old MDS. When
      the client receives the cap revoke message from the new MDS, the
      revoking caps are still issued by the old MDS, so the client does
      nothing. Later, when the cap export message is received, the client
      removes the caps issued by the old MDS. (Another way to fix the race
      would be to call ceph_check_caps() in handle_cap_export().)
      Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
      Reviewed-by: Sage Weil <sage@inktank.com>
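      A rough sketch of the fix described above: when computing the effectively
      issued caps, exclude caps that the auth MDS is in the middle of revoking,
      even if another MDS still lists them as issued. The structures and bit
      values are illustrative, not the kernel ones:

      ```c
      #include <stdio.h>

      #define CAP_FC 0x4

      struct cap_sketch {
          int mds;       /* which MDS granted this cap */
          int issued;    /* caps this MDS says we have */
          int revoking;  /* caps this MDS is taking back */
          int is_auth;   /* is this the auth MDS for the inode? */
      };

      static int caps_issued_sketch(const struct cap_sketch *caps, int n)
      {
          int have = 0, auth_revoking = 0, i;

          for (i = 0; i < n; i++) {
              have |= caps[i].issued;
              if (caps[i].is_auth)
                  auth_revoking |= caps[i].revoking;
          }
          /* Caps being revoked by the auth MDS don't count, even if another
           * MDS hasn't told us about the revoke/export yet. */
          return have & ~auth_revoking;
      }

      int main(void)
      {
          struct cap_sketch caps[] = {
              { .mds = 0, .issued = CAP_FC, .revoking = 0,      .is_auth = 0 }, /* old MDS */
              { .mds = 1, .issued = 0,      .revoking = CAP_FC, .is_auth = 1 }, /* new auth MDS */
          };
          printf("effective caps: %#x\n", caps_issued_sketch(caps, 2));
          return 0;
      }
      ```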
    • ceph: fix pending vmtruncate race · b415bf4f
      Committed by Yan, Zheng
      The locking order for pending vmtruncate is wrong; it can lead to the
      following race:
      
              write                  wmtruncate work
      ------------------------    ----------------------
      lock i_mutex
      check i_truncate_pending   check i_truncate_pending
      truncate_inode_pages()     lock i_mutex (blocked)
      copy data to page cache
      unlock i_mutex
                                 truncate_inode_pages()
      
      The fix is to take i_mutex before calling __ceph_do_pending_vmtruncate().
      
      Fixes: http://tracker.ceph.com/issues/5453
      Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
      Reviewed-by: Sage Weil <sage@inktank.com>
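      A minimal sketch of the ordering fix above: take i_mutex first, then run the
      pending-vmtruncate work, so the "check pending, then truncate" sequence can
      no longer interleave with a write. Names are illustrative; this is not the
      kernel code:

      ```c
      #include <pthread.h>
      #include <stdio.h>

      static pthread_mutex_t i_mutex_sketch = PTHREAD_MUTEX_INITIALIZER;
      static int truncate_pending = 1;

      /* Stand-in for __ceph_do_pending_vmtruncate(): caller holds i_mutex_sketch. */
      static void do_pending_vmtruncate_sketch(void)
      {
          if (truncate_pending) {
              printf("truncating pagecache\n");
              truncate_pending = 0;
          }
      }

      static void write_path_sketch(void)
      {
          pthread_mutex_lock(&i_mutex_sketch);  /* take i_mutex first... */
          do_pending_vmtruncate_sketch();       /* ...then flush the pending truncate */
          printf("copying data to page cache\n");
          pthread_mutex_unlock(&i_mutex_sketch);
      }

      int main(void)
      {
          write_path_sketch();
          return 0;
      }
      ```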
    • ceph: Reconstruct the func ceph_reserve_caps. · 93faca6e
      Committed by majianpeng
      Drop ignored return value.  Fix allocation failure case to not leak.
      Signed-off-by: Jianpeng Ma <majianpeng@gmail.com>
      Reviewed-by: Sage Weil <sage@inktank.com>
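      A generic sketch of the two changes described above: make the reservation
      function void (callers ignored its return value) and unwind partially
      allocated entries on failure instead of leaking them. Everything here is
      illustrative and not the actual ceph_reserve_caps():

      ```c
      #include <stdio.h>
      #include <stdlib.h>

      struct cap_sketch { int id; };

      /* Returns nothing: on failure it frees whatever it already allocated. */
      static void reserve_caps_sketch(struct cap_sketch **pool, int need)
      {
          int i, got = 0;

          for (i = 0; i < need; i++) {
              pool[i] = malloc(sizeof(**pool));
              if (!pool[i])
                  goto out_free;  /* unwind the partial reservation */
              pool[i]->id = i;
              got++;
          }
          printf("reserved %d caps\n", got);
          return;

      out_free:
          while (got--) {
              free(pool[got]);
              pool[got] = NULL;
          }
          printf("allocation failed; nothing leaked\n");
      }

      int main(void)
      {
          struct cap_sketch *pool[8] = { 0 };
          int i;

          reserve_caps_sketch(pool, 8);
          for (i = 0; i < 8; i++)  /* release the reservation again */
              free(pool[i]);
          return 0;
      }
      ```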