1. 07 Sep 2017, 3 commits
  2. 07 Jul 2017, 3 commits
  3. 04 May 2017, 2 commits
  4. 02 Mar 2017, 1 commit
  5. 25 Feb 2017, 1 commit
  6. 20 Feb 2017, 6 commits
  7. 19 Jan 2017, 1 commit
  8. 13 Dec 2016, 8 commits
    • ceph: properly set issue_seq for cap release · dc24de82
      Committed by Yan, Zheng
      Signed-off-by: Yan, Zheng <zyan@redhat.com>
    • ceph: add flags parameter to send_cap_msg · 1e4ef0c6
      Committed by Jeff Layton
      Add a flags parameter to send_cap_msg, so we can request expedited
      service from the MDS when we know we'll be waiting on the result.
      
      Set that flag in the case of try_flush_caps. The callers of that
      function generally wait synchronously on the result, so it's beneficial
      to ask the server to expedite it (a sketch of the idea follows this
      entry).
      Signed-off-by: Jeff Layton <jlayton@redhat.com>
      Reviewed-by: Yan, Zheng <zyan@redhat.com>
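      As a rough illustration of the change above, here is a self-contained
      C sketch. The pared-down signature and the flag's numeric value are
      assumptions for illustration (the real send_cap_msg() carries far more
      state); CEPH_CLIENT_CAPS_SYNC is the upstream flag name.

          #include <stdint.h>
          #include <stdio.h>

          /* Illustrative value; the kernel headers define the real one. */
          #define CEPH_CLIENT_CAPS_SYNC (1u << 0)

          /* Stand-in for the real send_cap_msg(); only the new flags
           * parameter matters for this sketch. */
          static void send_cap_msg(uint32_t flags)
          {
                  printf("cap msg sent, flags=%#x%s\n", (unsigned)flags,
                         (flags & CEPH_CLIENT_CAPS_SYNC) ? " (expedited)" : "");
          }

          static void try_flush_caps(void)
          {
                  /* Callers wait synchronously on the flush, so ask the
                   * MDS for expedited service. */
                  send_cap_msg(CEPH_CLIENT_CAPS_SYNC);
          }

          int main(void)
          {
                  try_flush_caps();
                  return 0;
          }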
    • ceph: update cap message struct version to 10 · 43b29673
      Committed by Jeff Layton
      The userland ceph has MClientCaps at struct version 10. This brings the
      kernel up to the same version.

      For now, all of the new stuff is set to default values, including the
      flags field, which will be conditionally set in a later patch (see the
      sketch after this entry).

      Note that we don't need to set the change_attr and btime to anything,
      since we aren't currently setting the feature flag; the MDS should
      ignore those values.
      Signed-off-by: Jeff Layton <jlayton@redhat.com>
      Reviewed-by: Yan, Zheng <zyan@redhat.com>
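      As a rough sketch of encoding new fields at their defaults, the
      self-contained example below bumps a version byte and zero-fills the
      new fields. The layout and field sizes are assumptions, not the real
      MClientCaps wire format.

          #include <stdint.h>
          #include <stdio.h>

          struct buf { uint8_t data[64]; size_t len; };

          static void put_le64(struct buf *b, uint64_t v)
          {
                  for (int i = 0; i < 8; i++)
                          b->data[b->len++] = (uint8_t)(v >> (8 * i));
          }

          static void encode_cap_msg(struct buf *b)
          {
                  b->data[b->len++] = 10;  /* struct version bumped to 10 */
                  put_le64(b, 0);          /* change_attr: default; the MDS
                                              ignores it while the feature
                                              flag is unset */
                  put_le64(b, 0);          /* btime placeholder, same idea */
                  put_le64(b, 0);          /* flags: set conditionally in a
                                              later patch */
          }

          int main(void)
          {
                  struct buf b = { .len = 0 };
                  encode_cap_msg(&b);
                  printf("encoded %zu bytes, version %u\n", b.len, b.data[0]);
                  return 0;
          }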
    • ceph: define new argument structure for send_cap_msg · 0ff8bfb3
      Committed by Jeff Layton
      When we get to this many arguments, it's hard to work with positional
      parameters: send_cap_msg is already at 25 arguments, with more needed.

      Define a new args structure and pass a pointer to it to send_cap_msg
      (a sketch follows this entry). Eventually it might make sense to embed
      one of these inside ceph_cap_snap instead of tracking individual fields.
      Signed-off-by: Jeff Layton <jlayton@redhat.com>
      Reviewed-by: Yan, Zheng <zyan@redhat.com>
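      A minimal user-space sketch of the refactor. The field list here is a
      made-up subset for illustration; upstream names the structure
      struct cap_msg_args.

          #include <stdint.h>
          #include <stdio.h>

          struct cap_msg_args {
                  uint64_t ino;
                  uint64_t size;
                  uint32_t caps;
                  uint32_t wanted;
                  uint32_t flags;
                  /* ...the remaining fields that used to be positional... */
          };

          static void send_cap_msg(const struct cap_msg_args *arg)
          {
                  printf("ino=%llu caps=%#x flags=%#x\n",
                         (unsigned long long)arg->ino,
                         (unsigned)arg->caps, (unsigned)arg->flags);
          }

          int main(void)
          {
                  /* Designated initializers make each value self-documenting,
                   * which is the readability win over 25 positional args. */
                  struct cap_msg_args arg = { .ino = 1, .caps = 0x5 };
                  send_cap_msg(&arg);
                  return 0;
          }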
    • ceph: move xattr initialization before the encoding past the ceph_mds_caps · 9670079f
      Committed by Jeff Layton
      Just for clarity: this part is inside the header, so it makes sense to
      group it with the rest of the header fields.
      Signed-off-by: Jeff Layton <jlayton@redhat.com>
      Reviewed-by: Yan, Zheng <zyan@redhat.com>
    • ceph: fix minor typo in unsafe_request_wait · 4945a084
      Committed by Jeff Layton
      Signed-off-by: Jeff Layton <jlayton@redhat.com>
      Reviewed-by: Yan, Zheng <zyan@redhat.com>
    • ceph: try getting buffer capability for readahead/fadvise · 2b1ac852
      Committed by Yan, Zheng
      For readahead/fadvise cases, the caller of ceph_readpages does not
      hold the buffer capability, so pages can be added to the page cache
      while no buffer capability is held. This can cause data integrity
      issues (a sketch of the fix's shape follows this entry).
      Signed-off-by: Yan, Zheng <zyan@redhat.com>
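      A hypothetical kernel-style sketch of the fix's shape: apart from
      CEPH_CAP_FILE_CACHE, ceph_inode() and ceph_put_cap_refs(), the helper
      names below are placeholders rather than the exact functions this
      patch adds.

          static int start_read(struct inode *inode, struct list_head *pages)
          {
                  struct ceph_inode_info *ci = ceph_inode(inode);
                  int got = 0;

                  /* Readahead/fadvise callers arrive without caps, so take
                   * a non-blocking CEPH_CAP_FILE_CACHE reference before any
                   * page is inserted into the page cache. */
                  if (try_get_file_cache_cap(ci, &got) <= 0)
                          return 0;  /* skip readahead rather than leave
                                        cached pages uncovered */

                  /* ...submit the OSD read; the completion callback drops
                   * the reference with ceph_put_cap_refs(ci, got)... */
                  return 0;
          }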
    • ceph: fix scheduler warning due to nested blocking · 5c341ee3
      Committed by Nikolay Borisov
      try_get_cap_refs can be used as a condition in wait_event* calls.
      This is all fine until it has to call __ceph_do_pending_vmtruncate,
      which in turn acquires the i_truncate_mutex. This leads to a situation
      in which a task's state is !TASK_RUNNING while it is trying to acquire
      a sleeping primitive; in essence, nested sleeping primitives are being
      used. This causes the following warning:
      
      WARNING: CPU: 22 PID: 11064 at kernel/sched/core.c:7631 __might_sleep+0x9f/0xb0()
      do not call blocking ops when !TASK_RUNNING; state=1 set at [<ffffffff8109447d>] prepare_to_wait_event+0x5d/0x110
       ipmi_msghandler tcp_scalable ib_qib dca ib_mad ib_core ib_addr ipv6
      CPU: 22 PID: 11064 Comm: fs_checker.pl Tainted: G           O    4.4.20-clouder2 #6
      Hardware name: Supermicro X10DRi/X10DRi, BIOS 1.1a 10/16/2015
       0000000000000000 ffff8838b416fa88 ffffffff812f4409 ffff8838b416fad0
       ffffffff81a034f2 ffff8838b416fac0 ffffffff81052b46 ffffffff81a0432c
       0000000000000061 0000000000000000 0000000000000000 ffff88167bda54a0
      Call Trace:
       [<ffffffff812f4409>] dump_stack+0x67/0x9e
       [<ffffffff81052b46>] warn_slowpath_common+0x86/0xc0
       [<ffffffff81052bcc>] warn_slowpath_fmt+0x4c/0x50
       [<ffffffff8109447d>] ? prepare_to_wait_event+0x5d/0x110
       [<ffffffff8109447d>] ? prepare_to_wait_event+0x5d/0x110
       [<ffffffff8107767f>] __might_sleep+0x9f/0xb0
       [<ffffffff81612d30>] mutex_lock+0x20/0x40
       [<ffffffffa04eea14>] __ceph_do_pending_vmtruncate+0x44/0x1a0 [ceph]
       [<ffffffffa04fa692>] try_get_cap_refs+0xa2/0x320 [ceph]
       [<ffffffffa04fd6f5>] ceph_get_caps+0x255/0x2b0 [ceph]
       [<ffffffff81094370>] ? wait_woken+0xb0/0xb0
       [<ffffffffa04f2c11>] ceph_write_iter+0x2b1/0xde0 [ceph]
       [<ffffffff81613f22>] ? schedule_timeout+0x202/0x260
       [<ffffffff8117f01a>] ? kmem_cache_free+0x1ea/0x200
       [<ffffffff811b46ce>] ? iput+0x9e/0x230
       [<ffffffff81077632>] ? __might_sleep+0x52/0xb0
       [<ffffffff81156147>] ? __might_fault+0x37/0x40
       [<ffffffff8119e123>] ? cp_new_stat+0x153/0x170
       [<ffffffff81198cfa>] __vfs_write+0xaa/0xe0
       [<ffffffff81199369>] vfs_write+0xa9/0x190
       [<ffffffff811b6d01>] ? set_close_on_exec+0x31/0x70
       [<ffffffff8119a056>] SyS_write+0x46/0xa0
      
      This happens because wait_event_interruptible can interfere with the
      mutex locking code: both fiddle with the task state.

      Fix the issue by using the newly-added nested blocking infrastructure
      from 61ada528 ("sched/wait: Provide infrastructure to deal with
      nested blocking"); a sketch of the pattern follows this entry.
      
      Link: https://lwn.net/Articles/628628/
      Signed-off-by: Nikolay Borisov <kernel@kyup.com>
      Signed-off-by: Yan, Zheng <zyan@redhat.com>
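      The wait_woken pattern from 61ada528, sketched in simplified form.
      The real change lives in ceph_get_caps(); the signature given for
      try_get_cap_refs() is an approximation, and error handling is omitted.

          #include <linux/wait.h>
          #include <linux/sched.h>

          static int wait_on_caps(struct ceph_inode_info *ci, int need,
                                  int want, loff_t endoff, int *got, int *err)
          {
                  DEFINE_WAIT_FUNC(wait, woken_wake_function);

                  add_wait_queue(&ci->i_cap_wq, &wait);
                  /*
                   * The condition is evaluated with the task in TASK_RUNNING,
                   * so it may safely take i_truncate_mutex; only wait_woken()
                   * itself flips the task state while actually sleeping.
                   */
                  while (!try_get_cap_refs(ci, need, want, endoff, got, err))
                          wait_woken(&wait, TASK_INTERRUPTIBLE,
                                     MAX_SCHEDULE_TIMEOUT);
                  remove_wait_queue(&ci->i_cap_wq, &wait);
                  return *err;
          }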
  9. 09 Aug 2016, 1 commit
  10. 28 Jul 2016, 12 commits
  11. 01 Jun 2016, 2 commits
    • ceph: improve fscache revalidation · f7f7e7a0
      Committed by Yan, Zheng
      There are several issues in the fscache revalidation code.
      - In ceph_revalidate_work(), fscache_invalidate() is called when
        fscache_check_consistency() returns 0. This is completely wrong,
        because 0 means the cache is valid.
      - handle_cap_grant() calls ceph_queue_revalidate() if the client
        already has CAP_FILE_CACHE. This code is confusing: the client
        should revalidate the cache each time it gets CAP_FILE_CACHE anew.
      - In handle_cap_grant(), fscache_invalidate() is called if the MDS
        revokes CAP_FILE_CACHE. This is inconsistent with the case where
        the inode gets evicted: in the latter case the cache is not
        discarded, and the client may use it when the inode is reloaded.

      This patch moves the fscache revalidation into ceph_get_caps().
      The client revalidates the cache after it gets CAP_FILE_CACHE.
      i_rdcache_gen should stay constant while CAP_FILE_CACHE is held;
      if i_fscache_gen is not equal to i_rdcache_gen, the client needs to
      check the cache's consistency (see the sketch after this entry).
      Signed-off-by: Yan, Zheng <zyan@redhat.com>
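      A condensed sketch of the described check. The function name is
      hypothetical, and the real code also handles locking and defers the
      consistency check to a workqueue.

          static void maybe_revalidate_fscache(struct ceph_inode_info *ci)
          {
                  /* i_rdcache_gen stays constant while CAP_FILE_CACHE is
                   * held, so matching generations mean the cache was
                   * already validated for this generation. */
                  if (ci->i_fscache_gen == ci->i_rdcache_gen)
                          return;

                  /* fscache_check_consistency() returns 0 when the cached
                   * copy is still valid, so invalidate only on mismatch. */
                  if (fscache_check_consistency(ci->fscache) != 0)
                          fscache_invalidate(ci->fscache);

                  ci->i_fscache_gen = ci->i_rdcache_gen;
          }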
    • ceph: avoid unnecessary fscache invalidation/revalidation · 14649758
      Committed by Yan, Zheng
      ceph_fill_file_size() has already called ceph_fscache_invalidate()
      if it returns true, so callers need not invalidate again (see the
      caller-side sketch after this entry).
      Signed-off-by: Yan, Zheng <zyan@redhat.com>
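      The idea in miniature, as a caller-side sketch; the surrounding
      variable names and the exact ceph_fill_file_size() parameter list
      are approximations.

          /* Inside the cap-grant handler: the helper that detects a size
           * change performs the invalidation itself. */
          queue_trunc = ceph_fill_file_size(inode, issued, truncate_seq,
                                            truncate_size, size);
          /* No extra ceph_fscache_invalidate(inode) here:
           * ceph_fill_file_size() already invalidated the cache when it
           * returned true. */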