1. 10 October 2014, 1 commit
  2. 03 October 2014, 1 commit
  3. 26 September 2014, 1 commit
  4. 07 August 2014, 1 commit
    • ocfs2: race between umount and unfinished remastering during recovery · bba1cb17
      Committed by Tariq Saeed
      Orabug: 19074140
      
      When umount is issued during recovery on the new master that has not
      finished remastering locks, it triggers BUG() in
      dlm_send_mig_lockres_msg().  Here is the situation:
      
       1) node A has a lock on resource X mastered by node B.
      
       2) node B dies ->  node A sets recovering flag for res X
      
       3) Node C becomes the new master for resources owned by the
          dead node and is remastering locks of the dead node but
          has not finished the remastering process yet.
      
       4) umount is issued on node C.
      
       5) During processing of umount, ignoring the unfinished recovery,
          node C attempts to migrate resource X to node A.
      
       6) node A finds res X in DLM_LOCK_RES_RECOVERING state, considers
          it a logic error and sends back -EFAULT.
      
       7) node C asserts BUG() upon seeing the -EFAULT response from node A.
      
      The fix is to delay migrating res X until remastering is finished, at
      which point the recovering flag will be cleared on both A and C. A
      simplified sketch of this check follows the entry.
      Signed-off-by: Tariq Saeed <tariq.x.saeed@oracle.com>
      Cc: Mark Fasheh <mfasheh@suse.com>
      Cc: Joel Becker <jlbec@evilplan.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      bba1cb17
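
      A minimal user-space sketch of the idea behind the fix, not the kernel
      code itself: the shutdown path skips (and later retries) any lock
      resource still flagged as recovering instead of migrating it. The
      struct, flag value, and helper names are simplified stand-ins invented
      for this illustration.

      #include <stdbool.h>
      #include <stdio.h>

      #define RES_FLAG_RECOVERING 0x1   /* stand-in for DLM_LOCK_RES_RECOVERING */

      struct lockres {
          const char *name;
          unsigned int flags;
      };

      /* Return true if the resource was handed off; false means "retry later". */
      static bool try_migrate(struct lockres *res)
      {
          if (res->flags & RES_FLAG_RECOVERING) {
              /* Remastering is not finished: migrating now would hit the
               * -EFAULT / BUG() path described above, so back off. */
              printf("%s: still recovering, deferring migration\n", res->name);
              return false;
          }
          printf("%s: migrating to a new owner\n", res->name);
          return true;
      }

      int main(void)
      {
          struct lockres x = { "resX", RES_FLAG_RECOVERING };

          while (!try_migrate(&x)) {
              /* The real umount path waits until recovery clears the flag
               * on both nodes; here we just simulate that event. */
              x.flags &= ~RES_FLAG_RECOVERING;
          }
          return 0;
      }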
  5. 24 June 2014, 2 commits
  6. 05 June 2014, 1 commit
  7. 24 May 2014, 1 commit
  8. 13 November 2013, 2 commits
  9. 12 September 2013, 1 commit
  10. 26 February 2013, 1 commit
  11. 25 July 2011, 3 commits
  12. 26 May 2011, 1 commit
    • ocfs2/dlm: Do not migrate resource to a node that is leaving the domain · 66effd3c
      Committed by Sunil Mushran
      During dlm domain shutdown, o2dlm has to free all the lock resources.
      Those that have neither locks nor references are freed directly. Those
      that still have locks and/or references are migrated to another node.
      
      The first task in migration is finding a target. Currently we scan the lock
      resource and find one node that either has a lock or a reference. This is not
      very efficient in a parallel umount case as we might end up migrating the
      lock resource to a node which itself may have to migrate it to a third node.
      
      The patch scans dlm->exit_domain_map to ensure that the target node is
      not itself leaving the domain. If no valid target node is found, o2dlm
      does not migrate the resource; instead it waits for the unlock and deref
      messages that will allow it to free the resource. The selection rule is
      sketched after this entry.
      Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com>
      Signed-off-by: Joel Becker <jlbec@evilplan.org>
      66effd3c
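
      An illustration-only sketch of that target-selection rule, using plain
      bitmasks instead of the kernel's node bitmaps; the function and
      parameter names are invented for the example. A node qualifies only if
      it holds a lock or a reference and is not set in the exit-domain map.

      #include <stdio.h>

      #define NO_TARGET -1

      /* Pick the lowest-numbered interested node that is not leaving the domain. */
      static int pick_migration_target(unsigned long interested_nodes,
                                       unsigned long exiting_nodes,
                                       int max_nodes)
      {
          for (int node = 0; node < max_nodes; node++) {
              unsigned long bit = 1UL << node;

              if ((interested_nodes & bit) && !(exiting_nodes & bit))
                  return node;
          }
          /* No valid target: keep the resource and wait for the unlock and
           * deref messages so it can simply be freed. */
          return NO_TARGET;
      }

      int main(void)
      {
          /* Nodes 2 and 5 hold locks/refs, but node 2 is also leaving. */
          int target = pick_migration_target((1UL << 2) | (1UL << 5),
                                             1UL << 2, 8);

          printf("migration target: %d\n", target);   /* prints 5 */
          return 0;
      }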
  13. 24 May 2011, 1 commit
  14. 14 May 2011, 1 commit
  15. 31 March 2011, 1 commit
  16. 21 February 2011, 1 commit
    • ocfs2: Remove ENTRY from masklog. · ef6b689b
      Committed by Tao Ma
      ENTRY is used to record the entry of a function. But because it is
      added to so many functions, enabling it fills the system logs quickly
      and causes too much I/O, so in practice nobody can turn it on for a
      production system or even for a test.
      
      So for mlog_entry_void() we simply remove the call. For mlog_entry(...)
      we replace it with mlog(0, ...); these will in turn be replaced by
      trace events later. The change is illustrated after this entry.
      Signed-off-by: Tao Ma <boyu.mt@taobao.com>
      ef6b689b
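
      A tiny user-space illustration of the mechanical change. The mlog()
      and mlog_entry() definitions below are simplified fakes so the snippet
      compiles on its own; the real macros live in fs/ocfs2/cluster/masklog.h.

      #include <stdio.h>

      /* Simplified stand-ins for the masklog macros. */
      #define mlog(mask, fmt, ...)   printf(fmt, ##__VA_ARGS__)
      #define mlog_entry(fmt, ...)   printf("ENTRY: " fmt, ##__VA_ARGS__)
      #define mlog_entry_void()      printf("ENTRY\n")

      /* Before the patch: entry logging on every call. */
      static void old_style(int arg)
      {
          mlog_entry("(arg = %d)\n", arg);
      }

      /* After the patch: mlog_entry_void() calls are simply dropped, and
       * mlog_entry(...) becomes mlog(0, ...), to be converted to trace
       * events in a later series. */
      static void new_style(int arg)
      {
          mlog(0, "(arg = %d)\n", arg);
      }

      int main(void)
      {
          old_style(1);
          new_style(1);
          return 0;
      }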
  17. 10 December 2010, 1 commit
    • ocfs2/dlm: Migrate lockres with no locks if it has a reference · 388c4bcb
      Committed by Sunil Mushran
      o2dlm was not migrating resources with zero locks because it assumed
      that the resource would get purged by dlm_thread. However, some usage
      patterns involve creating and dropping locks at a high rate, leading to
      the migrate thread seeing zero locks while the purge thread sees an
      active reference. When this happens, dlm_thread cannot purge the
      resource and the migrate thread sees no reason to migrate it. The spell
      is broken only when the migrate thread catches the resource with a lock.
      
      The fix is to make the migrate thread also consider the reference map,
      as sketched after this entry.
      
      This usage pattern can be triggered by userspace on userdlm locks and flocks.
      Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com>
      Signed-off-by: Joel Becker <joel.becker@oracle.com>
      388c4bcb
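
      A simplified sketch (not the kernel code) of the changed decision: a
      resource with zero locks but a non-empty reference map must still be
      migrated, because dlm_thread cannot purge it. The struct and helper
      below are invented for this illustration.

      #include <stdbool.h>
      #include <stdio.h>

      struct lockres {
          int nr_locks;           /* locks held locally on this resource        */
          unsigned long refmap;   /* bitmap of remote nodes holding a reference */
      };

      static bool needs_migration(const struct lockres *res)
      {
          /* Old behaviour checked only "res->nr_locks > 0", so a resource
           * with zero locks but a live reference was neither purged nor
           * migrated. Considering the refmap as well breaks that stalemate. */
          return res->nr_locks > 0 || res->refmap != 0;
      }

      int main(void)
      {
          struct lockres res = { .nr_locks = 0, .refmap = 1UL << 3 };

          printf("migrate? %s\n", needs_migration(&res) ? "yes" : "no");
          return 0;
      }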
  18. 24 September 2010, 1 commit
  19. 08 August 2010, 2 commits
    • ocfs2/dlm: remove potential deadlock -V3 · b11f1f1a
      Committed by Wengang Wang
      When we need to take both dlm_domain_lock and dlm->spinlock, we should
      take them in this order: dlm_domain_lock first, then dlm->spinlock.
      
      There are paths that disobey this order. One is calling
      dlm_lockres_put() with dlm->spinlock held in dlm_run_purge_list():
      when the last reference is dropped, dlm_lockres_put() ends up calling
      dlm_put(), and dlm_put() locks dlm_domain_lock.
      
      Fix:
      Don't grab/put the dlm refcount when initialising/releasing a lockres.
      That grab is not required because we don't call dlm_unregister_domain()
      based on the refcount. The ordering hazard is sketched after this entry.
      Signed-off-by: Wengang Wang <wen.gang.wang@oracle.com>
      Cc: stable@kernel.org
      Signed-off-by: Joel Becker <joel.becker@oracle.com>
      b11f1f1a
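
      A user-space sketch of the ordering hazard only, with pthread mutexes
      standing in for dlm_domain_lock (outer) and dlm->spinlock (inner); all
      names here are stand-ins, not the kernel code. Dropping the last
      reference while holding the inner lock re-enters the outer lock and
      inverts the documented order.

      #include <pthread.h>
      #include <stdio.h>

      static pthread_mutex_t domain_lock = PTHREAD_MUTEX_INITIALIZER;  /* ~ dlm_domain_lock */
      static pthread_mutex_t dlm_spinlock = PTHREAD_MUTEX_INITIALIZER; /* ~ dlm->spinlock   */

      static int dlm_refcount = 1;

      /* Stand-in for dlm_put(): takes the outer lock to drop the refcount. */
      static void fake_dlm_put(void)
      {
          pthread_mutex_lock(&domain_lock);
          dlm_refcount--;
          pthread_mutex_unlock(&domain_lock);
      }

      static void purge_list_safe(void)
      {
          pthread_mutex_lock(&dlm_spinlock);
          /* ... walk the purge list and drop resources ... */
          pthread_mutex_unlock(&dlm_spinlock);

          /* The broken path effectively called fake_dlm_put() while still
           * holding dlm_spinlock, nesting the outer lock inside the inner
           * one. The actual patch removes this get/put pairing from lockres
           * init/release altogether; calling it outside the spinlock, as
           * done here, also respects the ordering. */
          fake_dlm_put();
      }

      int main(void)
      {
          purge_list_safe();
          printf("dlm refcount now %d\n", dlm_refcount);
          return 0;
      }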
    • ocfs2/dlm: fix a dead lock · 6d98c3cc
      Committed by Wengang Wang
      When we have to take both dlm->master_lock and lockres->spinlock, take
      them in this order: lockres->spinlock first, then dlm->master_lock.
      
      The patch fixes a violation of that rule. We can simply move the taking
      of dlm->master_lock to after res->spinlock has been dropped, since
      accessing res->state and freeing the mle memory do not need
      master_lock's protection. The ordering is sketched after this entry.
      Signed-off-by: Wengang Wang <wen.gang.wang@oracle.com>
      Cc: stable@kernel.org
      Signed-off-by: Joel Becker <joel.becker@oracle.com>
      6d98c3cc
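
      A pthread-based sketch of the ordering rule only, not the kernel code:
      res->spinlock is taken first and dlm->master_lock second, and the fix
      satisfies the rule trivially by never nesting the two. All names below
      are stand-ins for this illustration.

      #include <pthread.h>
      #include <stdio.h>

      static pthread_mutex_t res_spinlock = PTHREAD_MUTEX_INITIALIZER; /* ~ res->spinlock    */
      static pthread_mutex_t master_lock  = PTHREAD_MUTEX_INITIALIZER; /* ~ dlm->master_lock */

      static int res_state;      /* stand-in for res->state     */
      static int mle_allocated;  /* stand-in for the mle memory */

      static void fixed_path(void)
      {
          /* Work that genuinely needs res->spinlock. Accessing res->state
           * and freeing the mle do not need master_lock's protection. */
          pthread_mutex_lock(&res_spinlock);
          res_state = 0;
          mle_allocated = 0;
          pthread_mutex_unlock(&res_spinlock);

          /* master_lock work happens afterwards, never nested the wrong
           * way round inside res->spinlock. */
          pthread_mutex_lock(&master_lock);
          /* ... mle list manipulation that does need master_lock ... */
          pthread_mutex_unlock(&master_lock);
      }

      int main(void)
      {
          fixed_path();
          puts("lock ordering respected");
          return 0;
      }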
  20. 16 July 2010, 1 commit
  21. 19 May 2010, 1 commit
  22. 06 May 2010, 1 commit
  23. 24 March 2010, 1 commit
  24. 26 January 2010, 1 commit
  25. 04 December 2009, 1 commit
  26. 24 September 2009, 1 commit
  27. 04 April 2009, 9 commits