- 26 January 2010, 2 commits
-
-
Committed by Sunil Mushran
During lock resource migration, o2dlm fills the packet with an LVB from the first valid lock. For sanity, it ensures that the other valid locks have the same LVB; if not, it BUGs. The valid locks are the ones granted at EX or PR lock levels and sitting on either the Granted or Converting list. Locks on the Blocked list cannot have a valid LVB. This patch ensures that we skip the locks on the Blocked list. Fixes oss bugzilla #1202: http://oss.oracle.com/bugzilla/show_bug.cgi?id=1202 Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com> Signed-off-by: Joel Becker <joel.becker@oracle.com>
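A minimal C sketch of the rule being enforced; the queue indices, LVB length, and the empty-LVB test here are simplified stand-ins, not the actual o2dlm code:

```c
#include <linux/string.h>
#include <linux/bug.h>

/* Simplified stand-ins for the real o2dlm queue indices and LVB size. */
enum { EX_GRANTED_LIST, EX_CONVERTING_LIST, EX_BLOCKED_LIST };
#define EX_LVB_LEN 64

/* Copy the first valid LVB into the migration packet and sanity-check
 * the rest; locks on the blocked queue are skipped outright. */
static void fill_migration_lvb(char *mres_lvb, const char *lock_lvb,
                               int queue_idx)
{
        if (queue_idx == EX_BLOCKED_LIST)
                return;                 /* blocked locks never hold a valid LVB */

        if (mres_lvb[0] == '\0') {      /* simplified "LVB still empty" test */
                memcpy(mres_lvb, lock_lvb, EX_LVB_LEN);
                return;
        }
        /* every other valid lock must carry the same LVB */
        BUG_ON(memcmp(mres_lvb, lock_lvb, EX_LVB_LEN) != 0);
}
```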
-
Committed by Sunil Mushran
This patch removes trailing whitespace. Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com> Signed-off-by: Joel Becker <joel.becker@oracle.com>
-
- 03 December 2009, 1 commit
-
-
Committed by Tiger Yang
We used to return a positive EAGAIN to indicate that a retry is needed in dlm_begin_reco_handler(). Now we return a negative -EAGAIN to remove the confusion caused by this error code. Signed-off-by: Tiger Yang <tiger.yang@oracle.com> Signed-off-by: Joel Becker <joel.becker@oracle.com>
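A minimal sketch of the error-code convention, with a made-up handler name; in-kernel callers expect failures, including retry requests, as negative errno values:

```c
#include <linux/errno.h>

/* Illustrative only: ask the caller to back off and retry. */
static int example_begin_reco_handler(int recovery_still_in_progress)
{
        if (recovery_still_in_progress)
                return -EAGAIN;         /* negative errno, not bare EAGAIN */
        return 0;
}
```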
-
- 24 September 2009, 1 commit
-
-
Committed by Alexey Dobriyan
* Remove the asm/atomic.h inclusion from linux/utsname.h -- not needed after the kref conversion. * Remove the linux/utsname.h inclusion from files which do not need it. NOTE: it looks like fs/binfmt_elf.c does not need utsname.h, however due to some personality stuff it _is_ needed -- cowardly leave ELF-related headers and files alone. Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 09 July 2009, 1 commit
-
-
Committed by Jeff Liu
In dlmrecovery.c:1121, replace 'migrate' with 'migration' to keep the log message consistent with other, similar messages in the same file. Signed-off-by: Jeff Liu <jeff.liu@oracle.com> Signed-off-by: Joel Becker <joel.becker@oracle.com>
-
- 11 March 2008, 4 commits
-
-
Committed by Sunil Mushran
Knowing the dlm recovery master helps in debugging recovery issues. This patch prints a message on the recovery master node. Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com> Signed-off-by: Joel Becker <joel.becker@oracle.com> Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
-
Committed by Sunil Mushran
During migration, the recovery master node may be asked to master a lockres it may not know about. In that case, it not only has to create the lockres and add it to the hash, but must also remember to do the _put_ corresponding to the kref_init in dlm_init_lockres() as soon as the migration is completed. Yes, we don't wait for dlm_purge_lockres() to do that matching put. Note that the ref added for being in the hash protects the lockres from being freed prematurely. This patch adds that missing put, as described above, to plug a memory leak. Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com> Signed-off-by: Joel Becker <joel.becker@oracle.com> Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
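A hedged sketch of the missing put, not the patch itself; the flag name is invented for the example, and only dlm_lockres_put() and the kref_init() in dlm_init_lockres() come from the commit message (the dlm_lock_resource type is assumed to come from the o2dlm headers):

```c
/* Drop the kref_init() reference on a lockres that was created here
 * purely to receive an incoming migration, once that migration is done.
 * Without this put, the lockres memory is leaked. */
static void example_migration_done(struct dlm_lock_resource *res,
                                   int created_here_for_migration)
{
        if (created_here_for_migration)
                dlm_lockres_put(res);   /* matches kref_init() in dlm_init_lockres() */
}
```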
-
Committed by Sunil Mushran
Normally, locks for remote nodes are freed when that node sends an UNLOCK message to the master. The master node tags a DLM_UNLOCK_FREE_LOCK action to do an extra put on the lock at the end. However, there are times when the master node has to free the locks for the remote nodes forcibly. Two cases when this happens are: 1. When the master has migrated the lockres plus all locks to another node. 2. When the master is clearing all the locks of a dead node. It was in these two conditions that the dlm was missing the extra put. Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com> Signed-off-by: Joel Becker <joel.becker@oracle.com> Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
-
Committed by Tao Ma
__dlm_print_one_lock_resource must be called with res->spinlock held. In some cases we used it without meeting this precondition, leading to an assert_spin_locked failure. So call dlm_print_one_lock_resource instead. Signed-off-by: Tao Ma <tao.ma@oracle.com> Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
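A sketch of the locked/unlocked convention the message relies on, assuming the usual o2dlm pattern where the double-underscore variant requires res->spinlock to already be held:

```c
/* Plain variant: safe to call without the lock held; it takes and
 * drops res->spinlock around the __-prefixed worker, which asserts
 * that the spinlock is held. */
void example_print_one_lock_resource(struct dlm_lock_resource *res)
{
        spin_lock(&res->spinlock);
        __dlm_print_one_lock_resource(res);
        spin_unlock(&res->spinlock);
}
```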
-
- 26 January 2008, 2 commits
-
-
Committed by Tao Ma
Currently the dlm join process consists of 2 steps: query join and assert join. After the query join, the already-joined node sets its joining_node. So if the joining node happens to panic before the 2nd step, the joined node fails to clear its joining_node flag because that node isn't in the domain map. This causes at least 2 problems: 1. All new join requests fail, so no new node can mount the volume. 2. The joined node can't umount the volume, since during the umount process it has to wait for the joining_node to become unknown, so the umount hangs. The solution is to clear the joining_node before we check the domain map. Signed-off-by: Tao Ma <tao.ma@oracle.com> Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
-
Committed by Mark Fasheh
With this, a dlm client can take advantage of the group protocol in the dlm to get full notification whenever a node within the dlm domain leaves unexpectedly. Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
-
- 20 October 2007, 1 commit
-
-
Committed by Pavel Emelyanov
The task_struct->pid member is going to be deprecated, so start using the helpers (task_pid_nr/task_pid_vnr/task_pid_nr_ns) in the kernel. The first thing to start with is the pid printed to dmesg - in this case we may safely use task_pid_nr(). Besides, printks account for more (much more) than half of all the explicit pid usage. [akpm@linux-foundation.org: git-drm went and changed lots of stuff] Signed-off-by: Pavel Emelyanov <xemul@openvz.org> Cc: Dave Airlie <airlied@linux.ie> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
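A minimal sketch of the helper in use; the function and message are made up, only task_pid_nr() and current come from the commit:

```c
#include <linux/sched.h>
#include <linux/printk.h>

/* Print the current task's pid via the helper instead of current->pid. */
static void example_log_current_pid(void)
{
        pr_info("dlm: running as pid %d\n", task_pid_nr(current));
}
```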
-
- 11 July 2007, 2 commits
-
-
Committed by Shani Moideen
Replace memset(<addr>, 0, PAGE_SIZE) with clear_page() in fs/ocfs2/dlm/dlmrecovery.c. Signed-off-by: Shani Moideen <shani.moideen@wipro.com> Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
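A minimal sketch of the substitution, assuming a page-aligned, page-sized buffer; the wrapper function is made up:

```c
#include <linux/mm.h>
#include <linux/string.h>

/* clear_page() states the intent and lets the architecture use its
 * optimized page-clearing routine. */
static void example_zero_page(void *page_addr)
{
        /* before: memset(page_addr, 0, PAGE_SIZE); */
        clear_page(page_addr);
}
```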
-
Committed by Christoph Hellwig
Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
-
- 03 May 2007, 1 commit
-
-
Committed by Mark Fasheh
Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
-
- 27 April 2007, 1 commit
-
-
Committed by Srinivas Eeda
There is a possibility that dlm_remaster_locks could overwrite node->state with DLM_RECO_NODE_DATA_REQUESTED after dlm_reco_data_done_handler has set node->state to DLM_RECO_NODE_DATA_DONE. This could lead to recovery getting stuck and require a cluster reboot. Synchronize with the dlm_reco_state_lock spinlock. Signed-off-by: Srinivas Eeda <srinivas.eeda@oracle.com> Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
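A hedged sketch of the synchronization, with trimmed-down stand-ins for the recovery-state enum, node-data struct, and lock; the point is that every writer of the per-node state takes the same spinlock:

```c
#include <linux/spinlock.h>

enum { EX_RECO_NODE_DATA_REQUESTED, EX_RECO_NODE_DATA_DONE };

struct ex_reco_node_data {
        int state;
};

static DEFINE_SPINLOCK(ex_reco_state_lock);

/* With both writers under the lock, a stale REQUESTED store can no
 * longer overwrite a DONE transition. */
static void ex_mark_node_data_done(struct ex_reco_node_data *ndata)
{
        spin_lock(&ex_reco_state_lock);
        ndata->state = EX_RECO_NODE_DATA_DONE;
        spin_unlock(&ex_reco_state_lock);
}
```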
-
- 08 February 2007, 8 commits
-
-
Committed by Kurt Hackel
Currently o2net allows one handler function per message type. This patch adds the ability to have another function called after the handler has returned the message to the other node. Handlers are now given the option of returning a context (in the form of a void **) which is passed back into the post-message handler function. Signed-off-by: Kurt Hackel <kurt.hackel@oracle.com> Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com> Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
-
Committed by Kurt Hackel
The dlm encodes the node number and a sequence number in the lock cookie. It also stores the cookie in the lockres in big-endian format to avoid swapping 8 bytes on each lock request. The bug here was that it assumed the cookie to be in cpu format when decoding it to print an error message. This patch swaps the bytes before the print. Signed-off-by: Kurt Hackel <kurt.hackel@oracle.com> Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com> Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
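A minimal sketch of the byte-swap before printing; the node/sequence split used here (node number in the top 8 bits) is an assumption for the example, not taken from the dlm sources:

```c
#include <linux/types.h>
#include <linux/printk.h>
#include <asm/byteorder.h>

/* The cookie is stored big-endian, so swap it to cpu order before
 * decoding any fields for a log message. */
static void example_print_cookie(__be64 stored_cookie)
{
        u64 c = be64_to_cpu(stored_cookie);

        printk(KERN_ERR "lock cookie: node %u, seq %llu\n",
               (unsigned int)(c >> 56),
               (unsigned long long)(c & ((1ULL << 56) - 1)));
}
```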
-
Committed by Kurt Hackel
The dlm was not waking up threads waiting on the lockres wait queue for the lockres to no longer be in the DLM_LOCK_RES_IN_PROGRESS and DLM_LOCK_RES_MIGRATING states. Signed-off-by: Kurt Hackel <kurt.hackel@oracle.com> Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com> Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
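A hedged sketch of the wake-up pattern; the lockres field names (res->spinlock, res->state, res->wq) are assumed from the o2dlm headers, and only the flag name comes from the commit message. The point is that whoever clears a blocking flag must also wake the waiters:

```c
/* Clear the migrating flag under the spinlock, then wake any threads
 * sleeping on the lockres wait queue so they can re-check the state. */
static void example_clear_migrating_and_wake(struct dlm_lock_resource *res)
{
        spin_lock(&res->spinlock);
        res->state &= ~DLM_LOCK_RES_MIGRATING;
        spin_unlock(&res->spinlock);

        wake_up(&res->wq);
}
```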
-
Committed by Kurt Hackel
dlm_dispatch_work was not processing the queued-up tasks at the first sign of the node leaving the domain, leading not only to incomplete tasks but also to a mismatch in the dlm refcount. Signed-off-by: Kurt Hackel <kurt.hackel@oracle.com> Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com> Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
-
Committed by Kurt Hackel
Signed-off-by: Kurt Hackel <kurt.hackel@oracle.com> Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com> Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
-
Committed by Kurt Hackel
The migrate lockres handler was searching for its lock on the migrated lockres only on the expected queue. This could be problematic, as the new master could also have issued a convert request during the migration and thus moved the lock to the convert queue. We now search for the lock on all three queues. Signed-off-by: Kurt Hackel <kurt.hackel@oracle.com> Signed-off-by: Sunil Mushran <Sunil.Mushran@oracle.com> Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
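A hedged sketch of searching all three queues; the lockres/lock field names (granted, converting, blocked, lock->list, lock->ml.cookie) follow the o2dlm structures but are assumptions here, not taken from the patch:

```c
#include <linux/list.h>
#include <linux/types.h>

/* Walk granted, converting and blocked in turn and match on the cookie,
 * instead of trusting the single queue the sender named. */
static struct dlm_lock *example_find_lock(struct dlm_lock_resource *res,
                                          __be64 cookie)
{
        struct list_head *queues[] = {
                &res->granted, &res->converting, &res->blocked,
        };
        struct dlm_lock *lock;
        int i;

        for (i = 0; i < 3; i++) {
                list_for_each_entry(lock, queues[i], list) {
                        if (lock->ml.cookie == cookie)
                                return lock;
                }
        }
        return NULL;
}
```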
-
Committed by Kurt Hackel
dlmunlock() was not waiting for migration to complete before releasing locks on locally mastered lock resources. Signed-off-by: Kurt Hackel <kurt.hackel@oracle.com> Signed-off-by: Sunil Mushran <Sunil.Mushran@oracle.com> Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
-
Committed by Kurt Hackel
This was previously broken, and migration of some locks had to be temporarily disabled. We use a new (and backward-incompatible) set of network messages to account for all references to a lock resource held across the cluster. Once these are all freed, the master node may free the lock resource memory once its local references are dropped. Signed-off-by: Kurt Hackel <kurt.hackel@oracle.com> Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
-
- 14 December 2006, 1 commit
-
-
Committed by Robert P. J. Day
All kcalloc() calls of the form "kcalloc(1,...)" are converted to the equivalent kzalloc() calls, and a few kcalloc() calls with the incorrect ordering of the first two arguments are fixed. Signed-off-by: Robert P. J. Day <rpjday@mindspring.com> Cc: Jeff Garzik <jeff@garzik.org> Cc: Alan Cox <alan@lxorguk.ukuu.org.uk> Cc: Dominik Brodowski <linux@dominikbrodowski.net> Cc: Adam Belay <ambx1@neo.rr.com> Cc: James Bottomley <James.Bottomley@steeleye.com> Cc: Greg KH <greg@kroah.com> Cc: Mark Fasheh <mark.fasheh@oracle.com> Cc: Trond Myklebust <trond.myklebust@fys.uio.no> Cc: Neil Brown <neilb@suse.de> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
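A minimal before/after sketch with a made-up struct; allocating a single zeroed object is exactly what kzalloc() expresses:

```c
#include <linux/slab.h>

struct example_obj {
        int a;
        void *b;
};

static struct example_obj *example_alloc(gfp_t gfp)
{
        /* before: kcalloc(1, sizeof(struct example_obj), gfp); */
        return kzalloc(sizeof(struct example_obj), gfp);
}
```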
-
- 22 November 2006, 1 commit
-
-
Committed by David Howells
Fix up for make allyesconfig. Signed-Off-By: David Howells <dhowells@redhat.com>
-
- 25 September 2006, 1 commit
-
-
Committed by Mark Fasheh
The OCFS2 DLM uses strlen() to determine lock name length, which excludes the possibility of putting binary values in the name string. Fix this by requiring that the string length be passed in as a parameter. Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
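A minimal sketch of the interface change with made-up names; passing the length explicitly allows embedded NUL bytes that strlen() would truncate:

```c
#include <linux/string.h>

struct example_name {
        unsigned int len;
        char name[32];
};

/* before (assumed): length derived internally, so binary names break */
static void example_set_name_strlen(struct example_name *n, const char *name)
{
        n->len = strlen(name);
        memcpy(n->name, name, n->len);
}

/* after: caller supplies the length; name may contain arbitrary bytes
 * (caller must keep namelen <= sizeof(n->name)) */
static void example_set_name(struct example_name *n,
                             const char *name, unsigned int namelen)
{
        n->len = namelen;
        memcpy(n->name, name, namelen);
}
```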
-
- 30 June 2006, 1 commit
-
-
Committed by Adrian Bunk
dlm_lockres_master_requery() became global without any external users. Signed-off-by: Adrian Bunk <bunk@stusta.de> Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
-
- 28 June 2006, 1 commit
-
-
Committed by Ingo Molnar
Locking init cleanups: convert " = SPIN_LOCK_UNLOCKED" to spin_lock_init() or DEFINE_SPINLOCK(), and convert rwlocks in a similar manner. This patch was generated automatically. Motivation: cleanliness; lockdep needs control of lock initialization, which the open-coded variants do not give; it's also useful for -rt and for lock debugging in general. Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Arjan van de Ven <arjan@linux.intel.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
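A minimal sketch of the two initialization styles the cleanup converts to; the struct and names are made up:

```c
#include <linux/spinlock.h>

/* static/global lock: initialize at definition time */
static DEFINE_SPINLOCK(example_static_lock);

/* lock embedded in a dynamically allocated structure */
struct example_ctx {
        spinlock_t lock;
};

static void example_ctx_init(struct example_ctx *ctx)
{
        /* before: ctx->lock = SPIN_LOCK_UNLOCKED; */
        spin_lock_init(&ctx->lock);
}
```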
-
- 27 June 2006, 11 commits
-
-
Committed by Adrian Bunk
This patch #if 0's the no longer used dlm_dump_lock_resources(). Since this makes dlmdebug.h empty, this patch also removes this header. Additionally, the needlessly global dlm_is_node_recovered() is made static. Signed-off-by: Adrian Bunk <bunk@stusta.de> Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
-
Committed by Kurt Hackel
The work that is done can block for long periods of time and so is not appropriate for keventd. Signed-off-by: Kurt Hackel <kurt.hackel@oracle.com> Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
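A minimal sketch of the general move off keventd, with made-up names and the current workqueue API (the original predates it); long-blocking work gets its own queue instead of schedule_work():

```c
#include <linux/workqueue.h>
#include <linux/errno.h>

static struct workqueue_struct *example_wq;
static struct work_struct example_work;

static void example_work_func(struct work_struct *work)
{
        /* potentially long-blocking processing goes here */
}

static int example_init(void)
{
        example_wq = create_singlethread_workqueue("example_wq");
        if (!example_wq)
                return -ENOMEM;

        INIT_WORK(&example_work, example_work_func);
        /* before: schedule_work(&example_work), which shares keventd */
        queue_work(example_wq, &example_work);
        return 0;
}
```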
-
Committed by Kurt Hackel
Signed-off-by: Kurt Hackel <kurt.hackel@oracle.com> Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
-
Committed by Kurt Hackel
Signed-off-by: Kurt Hackel <kurt.hackel@oracle.com> Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
-
Committed by Kurt Hackel
Signed-off-by: Kurt Hackel <kurt.hackel@oracle.com> Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
-
Committed by Kurt Hackel
Signed-off-by: Kurt Hackel <kurt.hackel@oracle.com> Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
-
Committed by Kurt Hackel
Signed-off-by: Kurt Hackel <kurt.hackel@oracle.com> Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
-
Committed by Kurt Hackel
Signed-off-by: Kurt Hackel <kurt.hackel@oracle.com> Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
-
Committed by Kurt Hackel
We cannot restart recovery. Once we begin to recover a node, keep the state of the recovery intact and follow through, regardless of any other node deaths that may occur. Signed-off-by: Kurt Hackel <kurt.hackel@oracle.com> Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
-
Committed by Kurt Hackel
If the previous master of the recovery lock dies, let calc_usage take it down completely and let the caller completely redo the dlmlock() call. Otherwise, there will never be an opportunity to re-master the lockres and recovery won't be able to progress. Signed-off-by: Kurt Hackel <kurt.hackel@oracle.com> Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
-
Committed by Kurt Hackel
Signed-off-by: Kurt Hackel <kurt.hackel@oracle.com> Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
-