1. 16 Sep 2019, 11 commits
  2. 22 Aug 2019, 1 commit
    • ceph: fix buffer free while holding i_ceph_lock in __ceph_build_xattrs_blob() · 12fe3dda
      Authored by Luis Henriques
      Calling ceph_buffer_put() in __ceph_build_xattrs_blob() may result in
      freeing the i_xattrs.blob buffer while holding the i_ceph_lock.  This is
      fixed by having the function return the old blob buffer instead, so that
      its callers can free it once the lock has been released (a minimal
      sketch of the pattern follows this entry).
      
      The following backtrace was triggered by fstests generic/117.
      
        BUG: sleeping function called from invalid context at mm/vmalloc.c:2283
        in_atomic(): 1, irqs_disabled(): 0, pid: 649, name: fsstress
        4 locks held by fsstress/649:
         #0: 00000000a7478e7e (&type->s_umount_key#19){++++}, at: iterate_supers+0x77/0xf0
         #1: 00000000f8de1423 (&(&ci->i_ceph_lock)->rlock){+.+.}, at: ceph_check_caps+0x7b/0xc60
         #2: 00000000562f2b27 (&s->s_mutex){+.+.}, at: ceph_check_caps+0x3bd/0xc60
         #3: 00000000f83ce16a (&mdsc->snap_rwsem){++++}, at: ceph_check_caps+0x3ed/0xc60
        CPU: 1 PID: 649 Comm: fsstress Not tainted 5.2.0+ #439
        Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.12.1-0-ga5cab58-prebuilt.qemu.org 04/01/2014
        Call Trace:
         dump_stack+0x67/0x90
         ___might_sleep.cold+0x9f/0xb1
         vfree+0x4b/0x60
         ceph_buffer_release+0x1b/0x60
         __ceph_build_xattrs_blob+0x12b/0x170
         __send_cap+0x302/0x540
         ? __lock_acquire+0x23c/0x1e40
         ? __mark_caps_flushing+0x15c/0x280
         ? _raw_spin_unlock+0x24/0x30
         ceph_check_caps+0x5f0/0xc60
         ceph_flush_dirty_caps+0x7c/0x150
         ? __ia32_sys_fdatasync+0x20/0x20
         ceph_sync_fs+0x5a/0x130
         iterate_supers+0x8f/0xf0
         ksys_sync+0x4f/0xb0
         __ia32_sys_sync+0xa/0x10
         do_syscall_64+0x50/0x1c0
         entry_SYSCALL_64_after_hwframe+0x49/0xbe
        RIP: 0033:0x7fc6409ab617
      Signed-off-by: Luis Henriques <lhenriques@suse.com>
      Reviewed-by: Jeff Layton <jlayton@kernel.org>
      Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
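      The deferred-free pattern the fix describes, as a minimal caller-side
      sketch (illustrative only; the exact signatures in the patch may
      differ):

        struct ceph_buffer *old_blob;

        spin_lock(&ci->i_ceph_lock);
        /* builds the new xattrs blob and returns the one it replaced */
        old_blob = __ceph_build_xattrs_blob(ci);
        spin_unlock(&ci->i_ceph_lock);

        /* sleeping is allowed here; the release may end up in vfree() */
        ceph_buffer_put(old_blob);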
  3. 08 Jul 2019, 7 commits
  4. 06 Jun 2019, 2 commits
    • ceph: fix error handling in ceph_get_caps() · 7b2f936f
      Authored by Yan, Zheng
      The function returned 0 even when it was interrupted or when
      try_get_cap_refs() returned an error (a simplified sketch of the
      corrected flow follows this entry).
      
      Fixes: 1199d7da ("ceph: simplify arguments and return semantics of try_get_cap_refs")
      Signed-off-by: N"Yan, Zheng" <zyan@redhat.com>
      Reviewed-by: NJeff Layton <jlayton@redhat.com>
      Signed-off-by: NIlya Dryomov <idryomov@gmail.com>
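      A simplified sketch of the corrected control flow; wait_for_caps() is
      a hypothetical stand-in for the function's wait logic, not a name from
      the patch:

        while (true) {
                ret = try_get_cap_refs(ci, need, want, endoff, flags, &_got);
                if (ret == -EAGAIN)
                        continue;                /* raced, try again */
                if (ret < 0)
                        return ret;              /* was silently turned into 0 */
                if (ret > 0)
                        break;                   /* got the caps */
                ret = wait_for_caps(ci);         /* hypothetical helper */
                if (ret == -ERESTARTSYS)
                        return ret;              /* interrupted: report it */
        }
        return 0;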
    • ceph: avoid iput_final() while holding mutex or in dispatch thread · 3e1d0452
      Authored by Yan, Zheng
      iput_final() may wait for readahead pages. The wait can cause deadlock.
      For example:
      
        Workqueue: ceph-msgr ceph_con_workfn [libceph]
          Call Trace:
           schedule+0x36/0x80
           io_schedule+0x16/0x40
           __lock_page+0x101/0x140
           truncate_inode_pages_range+0x556/0x9f0
           truncate_inode_pages_final+0x4d/0x60
           evict+0x182/0x1a0
           iput+0x1d2/0x220
           iterate_session_caps+0x82/0x230 [ceph]
           dispatch+0x678/0xa80 [ceph]
           ceph_con_workfn+0x95b/0x1560 [libceph]
           process_one_work+0x14d/0x410
           worker_thread+0x4b/0x460
           kthread+0x105/0x140
           ret_from_fork+0x22/0x40
      
        Workqueue: ceph-msgr ceph_con_workfn [libceph]
          Call Trace:
           __schedule+0x3d6/0x8b0
           schedule+0x36/0x80
           schedule_preempt_disabled+0xe/0x10
           mutex_lock+0x2f/0x40
           ceph_check_caps+0x505/0xa80 [ceph]
           ceph_put_wrbuffer_cap_refs+0x1e5/0x2c0 [ceph]
           writepages_finish+0x2d3/0x410 [ceph]
           __complete_request+0x26/0x60 [libceph]
           handle_reply+0x6c8/0xa10 [libceph]
           dispatch+0x29a/0xbb0 [libceph]
           ceph_con_workfn+0x95b/0x1560 [libceph]
           process_one_work+0x14d/0x410
           worker_thread+0x4b/0x460
           kthread+0x105/0x140
           ret_from_fork+0x22/0x40
      
      In the above example, truncate_inode_pages_range() waits for readahead
      pages while holding s_mutex, and ceph_check_caps() waits for s_mutex,
      blocking the OSD dispatch thread. Later OSD replies (for the readahead)
      can't be handled.
      
      ceph_check_caps() may also take snap_rwsem for read, so a similar
      deadlock can happen if iput_final() is called while holding snap_rwsem.
      
      In general, it's not good to call iput_final() inside MDS/OSD dispatch
      threads or while holding any mutex.
      
      The fix introduces ceph_async_iput(), which defers the final iput() to
      a workqueue (a sketch of the idea follows this entry).
      Signed-off-by: N"Yan, Zheng" <zyan@redhat.com>
      Reviewed-by: NJeff Layton <jlayton@redhat.com>
      Signed-off-by: NIlya Dryomov <idryomov@gmail.com>
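      A sketch of how such a helper can look. The per-client inode_wq
      workqueue and the i_work item on the ceph inode are assumptions here,
      not details taken from the commit message:

        void ceph_async_iput(struct inode *inode)
        {
                if (!inode)
                        return;
                for (;;) {
                        /* fast path: drop a reference that is not the last one */
                        if (atomic_add_unless(&inode->i_count, -1, 1))
                                break;
                        /* last reference: let the workqueue do the iput() */
                        if (queue_work(ceph_inode_to_client(inode)->inode_wq,
                                       &ceph_inode(inode)->i_work))
                                break;
                        /* queue_work() failed: work already pending, retry */
                }
        }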
  5. 08 May 2019, 4 commits
  6. 06 Mar 2019, 4 commits
  7. 21 Jan 2019, 1 commit
  8. 26 Dec 2018, 4 commits
  9. 22 Oct 2018, 3 commits
  10. 13 Aug 2018, 2 commits
  11. 03 Aug 2018, 1 commit