- 27 June 2006, 4 commits
-
Committed by Daniel Phillips
This allows us to have a hash table larger than a single page, which greatly improves dlm performance on some tests.

Signed-off-by: Daniel Phillips <phillips@google.com>
Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
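The commit message doesn't show the mechanism, but one way to grow a hash table past a single page without resorting to a fragile high-order allocation is to allocate one page per chunk and map bucket indices onto the right page. A minimal sketch in kernel-style C; every name here is illustrative, not the actual OCFS2 symbol:

```c
#include <linux/errno.h>
#include <linux/gfp.h>
#include <linux/list.h>
#include <linux/mm.h>

/* Illustrative sizing; the real symbols live in the dlm headers. */
#define HASH_PAGES		4
#define BUCKETS_PER_PAGE	(PAGE_SIZE / sizeof(struct hlist_head))
#define HASH_BUCKETS		(HASH_PAGES * BUCKETS_PER_PAGE)

/* One page pointer per chunk, so no high-order allocation is needed. */
static struct hlist_head *hash_pages[HASH_PAGES];

/* Map a bucket index onto the page that holds it. */
static inline struct hlist_head *lockres_hash(unsigned int idx)
{
	return hash_pages[idx / BUCKETS_PER_PAGE] + (idx % BUCKETS_PER_PAGE);
}

static int alloc_lockres_hash(void)
{
	unsigned int i, j;

	for (i = 0; i < HASH_PAGES; i++) {
		hash_pages[i] = (struct hlist_head *)__get_free_page(GFP_KERNEL);
		if (!hash_pages[i])
			goto fail;
		for (j = 0; j < BUCKETS_PER_PAGE; j++)
			INIT_HLIST_HEAD(&hash_pages[i][j]);
	}
	return 0;
fail:
	while (i--)
		free_page((unsigned long)hash_pages[i]);
	return -ENOMEM;
}
```

Splitting the table across individually allocated pages keeps every allocation order-0, so growing the table does not become less reliable under memory fragmentation.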
-
Committed by Mark Fasheh
It's called on every lookup, so this might help performance a bit.

Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
-
Committed by Mark Fasheh
Fixes a performance bug, pointed out by Andrew.

Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
-
Committed by Mark Fasheh
Gains us a bit of performance on loads which heavily hit the lockres hash. Patch suggested by Daniel Phillips <phillips@google.com>.

Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
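The message doesn't spell out the change, but a common way to relieve a heavily-hit lockres hash is to compute the name hash once when the resource is initialized and reuse it on every insert and lookup, so the hot path compares a cached integer before falling back to memcmp(). A hedged sketch; the struct layout and helper names are invented for illustration:

```c
#include <linux/list.h>
#include <linux/string.h>

struct lockres {
	struct hlist_node node;
	unsigned int	  hash;		/* computed once, reused on every lookup */
	unsigned int	  namelen;
	char		  name[64];
};

/* Any cheap string hash works for the sketch (djb2 here). */
static unsigned int name_hash(const char *name, unsigned int len)
{
	unsigned int h = 5381;

	while (len--)
		h = h * 33 + (unsigned char)*name++;
	return h;
}

/* Sketch: assumes len <= sizeof(res->name). */
static void lockres_init(struct lockres *res, const char *name,
			 unsigned int len)
{
	memcpy(res->name, name, len);
	res->namelen = len;
	res->hash = name_hash(name, len);	/* pay for the hash once */
}

/* Lookup compares the cached hash before the expensive memcmp(). */
static struct lockres *lockres_find(struct hlist_head *bucket,
				    const char *name, unsigned int len,
				    unsigned int hash)
{
	struct lockres *res;

	hlist_for_each_entry(res, bucket, node)
		if (res->hash == hash && res->namelen == len &&
		    !memcmp(res->name, name, len))
			return res;
	return NULL;
}
```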
-
- 25 March 2006, 2 commits
-
Committed by Kurt Hackel
Signed-off-by: Kurt Hackel <kurt.hackel@oracle.com>
Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
-
Committed by Kurt Hackel
When starting lock mastery (excepting the recovery lock), wait on any nodes needing recovery. Fix one instance where lock resources were left attached to the recovery list after recovery completed. Ensure that the node_down code is run uniformly regardless of which node found the dead node first.

Signed-off-by: Kurt Hackel <kurt.hackel@oracle.com>
Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
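As a sketch of the first fix, "wait on any nodes needing recovery" can be expressed as blocking on a waitqueue until the recovery bitmap empties, with recovery completion waking the waiters. All structure and helper names below are hypothetical:

```c
#include <linux/bitmap.h>
#include <linux/bitops.h>
#include <linux/spinlock.h>
#include <linux/wait.h>

#define MAX_NODES 255

struct domain {
	spinlock_t		lock;
	wait_queue_head_t	reco_waitq;
	DECLARE_BITMAP(recovery_map, MAX_NODES);
};

static bool nodes_need_recovery(struct domain *dlm)
{
	bool busy;

	spin_lock(&dlm->lock);
	busy = !bitmap_empty(dlm->recovery_map, MAX_NODES);
	spin_unlock(&dlm->lock);
	return busy;
}

/* Called before starting mastery of anything but the recovery lock. */
static void wait_for_recovery(struct domain *dlm)
{
	wait_event(dlm->reco_waitq, !nodes_need_recovery(dlm));
}

/* Recovery completion clears the node's bit and wakes the waiters. */
static void node_recovered(struct domain *dlm, u8 node)
{
	spin_lock(&dlm->lock);
	clear_bit(node, dlm->recovery_map);
	spin_unlock(&dlm->lock);
	wake_up(&dlm->reco_waitq);
}
```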
-
- 02 March 2006, 1 commit
-
Committed by Mark Fasheh
Switch from list_head to hlist_head. Make the size of the hash dependent upon the allocated area, rather than a constant.

Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
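Two details here are worth making concrete: an hlist_head is a single pointer where a list_head is two, so the same allocation holds twice as many buckets; and deriving the bucket count from sizeof() means the table automatically tracks the allocated area. A small sketch under those assumptions, with illustrative names:

```c
#include <linux/list.h>
#include <linux/mm.h>

/*
 * list_head carries two pointers per bucket; hlist_head carries one,
 * so one page now holds twice as many buckets.  Deriving the count
 * from the allocation size replaces the old hard-coded constant.
 */
#define HASH_AREA_SIZE	PAGE_SIZE
#define NUM_BUCKETS	(HASH_AREA_SIZE / sizeof(struct hlist_head))

static void init_hash(struct hlist_head *hash)
{
	unsigned int i;

	for (i = 0; i < NUM_BUCKETS; i++)
		INIT_HLIST_HEAD(&hash[i]);
}
```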
-
- 17 February 2006, 1 commit
-
Committed by Kurt Hackel
* Add a dlm_wait_for_node_death function to be used after receiving a network error. This will wait for the given timeout to allow the heartbeat callbacks to update the domain map. Without this, some paths may spin and consume enough CPU that the heartbeat gets starved and never updates.

Signed-off-by: Kurt Hackel <kurt.hackel@oracle.com>
Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
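A hedged sketch of what such a bounded wait can look like: sleep until the heartbeat callbacks clear the node from the domain map, or give up after the timeout so the caller can retry, instead of spinning and starving the heartbeat thread of CPU. Names and layout are illustrative, not the actual function:

```c
#include <linux/bitmap.h>
#include <linux/errno.h>
#include <linux/jiffies.h>
#include <linux/wait.h>

#define MAX_NODES 255

struct domain {
	wait_queue_head_t	node_waitq;
	DECLARE_BITMAP(domain_map, MAX_NODES);	/* updated by heartbeat */
};

/*
 * Sleep until heartbeat drops the node from the domain map, or the
 * timeout expires.  Sleeping (rather than spinning) leaves the CPU
 * free so the heartbeat thread can actually run its callbacks.
 */
static int wait_for_node_death(struct domain *dlm, u8 node,
			       unsigned int timeout_ms)
{
	long left;

	left = wait_event_timeout(dlm->node_waitq,
				  !test_bit(node, dlm->domain_map),
				  msecs_to_jiffies(timeout_ms));
	return left ? 0 : -ETIMEDOUT;
}
```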
-
- 04 February 2006, 1 commit
-
Committed by Kurt Hackel
* Fix a hang which can occur during shutdown migration.
* Do not allow nodes to join during recovery.
* When restarting lock mastery, do not ignore nodes which come up.
* Fix a case where more than one node could become recovery master.
* Sleep to allow some time for heartbeat state to catch up to the network.
* Add extra debug info for bad recovery state problems.
* Make DLM_RECO_NODE_DATA_DONE a valid state for non-master recovery nodes.
* Prune all locks from dead nodes on $RECOVERY lock resources.
* Do NOT automatically add new nodes to mle nodemaps until they have properly joined the domain.
* Make sure dlm_pick_recovery_master only exits when all nodes have synced.
* Properly handle dlmunlock errors in dlm_pick_recovery_master.
* Do not propagate network errors in dlm_send_begin_reco_message (sketched below).
* Fix a case where dead nodes were not being put in the recovery map.
* Fix dlmunlock failing to clear the unlock actions on DLM_DENIED.

Signed-off-by: Kurt Hackel <kurt.hackel@oracle.com>
Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
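As one concrete example from the list above, the usual shape of the "do not propagate network errors" fix is: a send failure to a node that is already dying should not abort recovery, because heartbeat will report the node down and the recovery path handles it from there. A hypothetical sketch; do_send_message() and struct dlm_domain stand in for the real transport and types:

```c
#include <linux/errno.h>
#include <linux/printk.h>
#include <linux/types.h>

struct dlm_domain;				/* opaque for the sketch */

/* Assumed transport hook; stands in for the real message send. */
static int do_send_message(struct dlm_domain *dlm, u8 node);

static int send_begin_reco(struct dlm_domain *dlm, u8 node)
{
	int ret = do_send_message(dlm, node);

	if (ret < 0) {
		/*
		 * Swallow the error: heartbeat owns the node-down event,
		 * and recovery picks the node up from the domain map.
		 * Propagating it here would abort recovery needlessly.
		 */
		pr_warn("begin_reco to node %u failed (%d), ignoring\n",
			node, ret);
		ret = 0;
	}
	return ret;
}
```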
-
- 04 January 2006, 1 commit
-
Committed by Kurt Hackel
A distributed lock manager built with the cluster file system use case in mind. The OCFS2 dlm exposes a VMS-style API, though things have been simplified internally. The only lock levels currently implemented are NLMODE, PRMODE and EXMODE.

Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
Signed-off-by: Kurt Hackel <kurt.hackel@oracle.com>
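The three modes follow the classic VMS compatibility rules: NL (no-lock) conflicts with nothing, PR (protected read) is shared among readers, and EX (exclusive) is compatible only with NL. A small self-contained sketch of such a compatibility check; the table encodes the standard semantics, not code quoted from the driver:

```c
#include <stdbool.h>

/* VMS-style lock levels: no-lock, protected read, exclusive. */
enum lock_mode { NLMODE = 0, PRMODE = 1, EXMODE = 2 };

/*
 * compat[held][want]: NL is compatible with everything, PR is
 * compatible with PR, and EX is compatible with nothing but NL.
 */
static const bool compat[3][3] = {
	/*            NL     PR     EX   */
	/* NL */  { true,  true,  true  },
	/* PR */  { true,  true,  false },
	/* EX */  { true,  false, false },
};

static bool modes_compatible(enum lock_mode held, enum lock_mode want)
{
	return compat[held][want];
}
```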
-