commit 0ba95977
Author: Guoqing Jiang
Committer: Shaohua Li

md-cluster: use sync way to handle METADATA_UPDATED msg

Previously, when a node received the METADATA_UPDATED msg, it just
needed to wake up mddev->thread; md_reload_sb would then be called
eventually.

We took the asynchronous approach to avoid a deadlock: the deadlock
could happen when one node is receiving the METADATA_UPDATED msg
(which wants reconfig_mutex) while also trying to run the path:

md_check_recovery -> mddev_trylock (holds reconfig_mutex)
                  -> md_update_sb -> metadata_update_start
		     (wants EX on token, but the token is
		      held by the sending node)

Since we will support resizing for clustered raid, the metadata
update handling needs to be synchronous so that the initiating node
can detect failure. So we need to change the way the METADATA_UPDATED
msg is handled.

But we obviously need to avoid the above deadlock in the synchronous
approach. To make this happen, we chose not to hold reconfig_mutex to
call md_reload_sb: if some other thread has already taken
reconfig_mutex and is waiting for the 'token', then
process_recvd_msg() can safely call md_reload_sb() without taking the
mutex. This is because we can be certain that no other thread will
take the mutex, and also certain that the actions performed by
md_reload_sb() won't interfere with anything that the other thread is
in the middle of.

To make this more concrete, we added a new cinfo->state bit

        MD_CLUSTER_HOLDING_MUTEX_FOR_RECVD

which is set in lock_token() just before dlm_lock_sync() is called,
and cleared just after. As lock_token() is always called with
reconfig_mutex held (the exception, resync_info_update, was
distinguished in the previous patch), if process_recvd_msg() finds
that the new bit is set, then the mutex must be held by some other
thread, and that thread will keep waiting for the token until the
received message has been processed.

So process_metadata_update() can call md_reload_sb() if either
mddev_trylock() succeeds or MD_CLUSTER_HOLDING_MUTEX_FOR_RECVD is
set. The tricky bit is what to do if neither applies: we need to
wait. Fortunately mddev_unlock() always calls wake_up() on
mddev->thread->wqueue, so we can have lock_token() call wake_up() on
the same wqueue when it sets the bit.

There are also some related changes in this commit:
1. Remove the RELOAD_SB-related code, since it is no longer valid.
2. Add mddev to md_cluster_info so we can get the mddev inside
   lock_token.
3. Add a new parameter to lock_token to indicate whether
   reconfig_mutex is held or not.

And we need to set MD_CLUSTER_HOLDING_MUTEX_FOR_RECVD in the
following places:
1. Set it before unregistering the threads, otherwise a deadlock can
   appear when stopping a resyncing array.
   This is because md_unregister_thread(&cinfo->recv_thread) is
   blocked by recv_daemon -> process_recvd_msg
			  -> process_metadata_update.
   To resolve the issue, MD_CLUSTER_HOLDING_MUTEX_FOR_RECVD also
   needs to be set before unregistering the threads.
2. Set it in metadata_update_start to fix another deadlock:
	a. Node A sends a METADATA_UPDATED msg (holding the Token lock).
	b. Node B wants to do a resync and is blocked since it can't
	   get the Token lock, but MD_CLUSTER_HOLDING_MUTEX_FOR_RECVD
	   is not set, since the call chain
	   (md_do_sync -> sync_request
        	       -> resync_info_update
		       -> sendmsg
		       -> lock_comm -> lock_token)
	   doesn't hold reconfig_mutex.
	c. Node B tries to update the sb (holding reconfig_mutex), but
	   is stopped at the wait_event() in metadata_update_start
	   since the MD_CLUSTER_SEND_LOCK flag was set in lock_comm
	   (step b).
	d. Node B then receives the METADATA_UPDATED msg from A, and
	   recv_daemon is blocked forever.
   Since metadata_update_start always calls lock_token with
   reconfig_mutex held, we need to set MD_CLUSTER_HOLDING_MUTEX_FOR_RECVD
   here as well; lock_token doesn't need to set it twice unless it is
   invoked from lock_comm.

Finally, thanks to Neil for his great idea and help!
Reviewed-by: NeilBrown <neilb@suse.com>
Signed-off-by: Guoqing Jiang <gqjiang@suse.com>
Signed-off-by: Shaohua Li <shli@fb.com>
parent d11507e1
--- a/drivers/md/md-cluster.c
+++ b/drivers/md/md-cluster.c
@@ -67,9 +67,10 @@ struct resync_info {
 				 * set up all the related infos such as bitmap and personality */
 #define	MD_CLUSTER_ALREADY_IN_CLUSTER	6
 #define	MD_CLUSTER_PENDING_RECV_EVENT	7
+#define	MD_CLUSTER_HOLDING_MUTEX_FOR_RECVD	8
 
 struct md_cluster_info {
+	struct mddev *mddev; /* the md device which md_cluster_info belongs to */
 	/* dlm lock space and resources for clustered raid. */
 	dlm_lockspace_t *lockspace;
 	int slot_number;
@@ -523,11 +524,17 @@ static void process_add_new_disk(struct mddev *mddev, struct cluster_msg *cmsg)
 static void process_metadata_update(struct mddev *mddev, struct cluster_msg *msg)
 {
+	int got_lock = 0;
 	struct md_cluster_info *cinfo = mddev->cluster_info;
 	mddev->good_device_nr = le32_to_cpu(msg->raid_slot);
-	set_bit(MD_RELOAD_SB, &mddev->flags);
+
 	dlm_lock_sync(cinfo->no_new_dev_lockres, DLM_LOCK_CR);
-	md_wakeup_thread(mddev->thread);
+	wait_event(mddev->thread->wqueue,
+		   (got_lock = mddev_trylock(mddev)) ||
+		    test_bit(MD_CLUSTER_HOLDING_MUTEX_FOR_RECVD, &cinfo->state));
+	md_reload_sb(mddev, mddev->good_device_nr);
+	if (got_lock)
+		mddev_unlock(mddev);
 }
 
 static void process_remove_disk(struct mddev *mddev, struct cluster_msg *msg)
@@ -646,11 +653,29 @@ static void recv_daemon(struct md_thread *thread)
  * Takes the lock on the TOKEN lock resource so no other
  * node can communicate while the operation is underway.
  */
-static int lock_token(struct md_cluster_info *cinfo)
+static int lock_token(struct md_cluster_info *cinfo, bool mddev_locked)
 {
-	int error;
+	int error, set_bit = 0;
+	struct mddev *mddev = cinfo->mddev;
 
+	/*
+	 * If resync thread run after raid1d thread, then process_metadata_update
+	 * could not continue if raid1d held reconfig_mutex (and raid1d is blocked
+	 * since another node already got EX on Token and waitting the EX of Ack),
+	 * so let resync wake up thread in case flag is set.
+	 */
+	if (mddev_locked && !test_bit(MD_CLUSTER_HOLDING_MUTEX_FOR_RECVD,
+				      &cinfo->state)) {
+		error = test_and_set_bit_lock(MD_CLUSTER_HOLDING_MUTEX_FOR_RECVD,
+					      &cinfo->state);
+		WARN_ON_ONCE(error);
+		md_wakeup_thread(mddev->thread);
+		set_bit = 1;
+	}
 	error = dlm_lock_sync(cinfo->token_lockres, DLM_LOCK_EX);
+	if (set_bit)
+		clear_bit_unlock(MD_CLUSTER_HOLDING_MUTEX_FOR_RECVD, &cinfo->state);
 	if (error)
 		pr_err("md-cluster(%s:%d): failed to get EX on TOKEN (%d)\n",
 				__func__, __LINE__, error);
@@ -663,12 +688,12 @@ static int lock_token(struct md_cluster_info *cinfo)
 /* lock_comm()
  * Sets the MD_CLUSTER_SEND_LOCK bit to lock the send channel.
  */
-static int lock_comm(struct md_cluster_info *cinfo)
+static int lock_comm(struct md_cluster_info *cinfo, bool mddev_locked)
 {
 	wait_event(cinfo->wait,
 		   !test_and_set_bit(MD_CLUSTER_SEND_LOCK, &cinfo->state));
 
-	return lock_token(cinfo);
+	return lock_token(cinfo, mddev_locked);
 }
 
 static void unlock_comm(struct md_cluster_info *cinfo)
@@ -743,11 +768,12 @@ static int __sendmsg(struct md_cluster_info *cinfo, struct cluster_msg *cmsg)
 	return error;
 }
 
-static int sendmsg(struct md_cluster_info *cinfo, struct cluster_msg *cmsg)
+static int sendmsg(struct md_cluster_info *cinfo, struct cluster_msg *cmsg,
+		   bool mddev_locked)
 {
 	int ret;
 
-	lock_comm(cinfo);
+	lock_comm(cinfo, mddev_locked);
 	ret = __sendmsg(cinfo, cmsg);
 	unlock_comm(cinfo);
 	return ret;
@@ -834,6 +860,7 @@ static int join(struct mddev *mddev, int nodes)
 	mutex_init(&cinfo->recv_mutex);
 	mddev->cluster_info = cinfo;
+	cinfo->mddev = mddev;
 
 	memset(str, 0, 64);
 	sprintf(str, "%pU", mddev->uuid);
@@ -908,6 +935,7 @@ static int join(struct mddev *mddev, int nodes)
 	return 0;
 err:
+	set_bit(MD_CLUSTER_HOLDING_MUTEX_FOR_RECVD, &cinfo->state);
 	md_unregister_thread(&cinfo->recovery_thread);
 	md_unregister_thread(&cinfo->recv_thread);
 	lockres_free(cinfo->message_lockres);
@@ -943,7 +971,7 @@ static void resync_bitmap(struct mddev *mddev)
 	int err;
 
 	cmsg.type = cpu_to_le32(BITMAP_NEEDS_SYNC);
-	err = sendmsg(cinfo, &cmsg);
+	err = sendmsg(cinfo, &cmsg, 1);
 	if (err)
 		pr_err("%s:%d: failed to send BITMAP_NEEDS_SYNC message (%d)\n",
 			__func__, __LINE__, err);
@@ -963,6 +991,7 @@ static int leave(struct mddev *mddev)
 	if (cinfo->slot_number > 0 && mddev->recovery_cp != MaxSector)
 		resync_bitmap(mddev);
 
+	set_bit(MD_CLUSTER_HOLDING_MUTEX_FOR_RECVD, &cinfo->state);
 	md_unregister_thread(&cinfo->recovery_thread);
 	md_unregister_thread(&cinfo->recv_thread);
 	lockres_free(cinfo->message_lockres);
@@ -997,16 +1026,30 @@ static int slot_number(struct mddev *mddev)
 static int metadata_update_start(struct mddev *mddev)
 {
 	struct md_cluster_info *cinfo = mddev->cluster_info;
+	int ret;
+
+	/*
+	 * metadata_update_start is always called with the protection of
+	 * reconfig_mutex, so set WAITING_FOR_TOKEN here.
+	 */
+	ret = test_and_set_bit_lock(MD_CLUSTER_HOLDING_MUTEX_FOR_RECVD,
+				    &cinfo->state);
+	WARN_ON_ONCE(ret);
+	md_wakeup_thread(mddev->thread);
 
 	wait_event(cinfo->wait,
 		   !test_and_set_bit(MD_CLUSTER_SEND_LOCK, &cinfo->state) ||
 		   test_and_clear_bit(MD_CLUSTER_SEND_LOCKED_ALREADY, &cinfo->state));
 
 	/* If token is already locked, return 0 */
-	if (cinfo->token_lockres->mode == DLM_LOCK_EX)
+	if (cinfo->token_lockres->mode == DLM_LOCK_EX) {
+		clear_bit_unlock(MD_CLUSTER_HOLDING_MUTEX_FOR_RECVD, &cinfo->state);
 		return 0;
+	}
 
-	return lock_token(cinfo);
+	ret = lock_token(cinfo, 1);
+	clear_bit_unlock(MD_CLUSTER_HOLDING_MUTEX_FOR_RECVD, &cinfo->state);
+	return ret;
 }
 
 static int metadata_update_finish(struct mddev *mddev)
@@ -1069,7 +1112,14 @@ static int resync_info_update(struct mddev *mddev, sector_t lo, sector_t hi)
 	cmsg.low = cpu_to_le64(lo);
 	cmsg.high = cpu_to_le64(hi);
 
-	return sendmsg(cinfo, &cmsg);
+	/*
+	 * mddev_lock is held if resync_info_update is called from
+	 * resync_finish (md_reap_sync_thread -> resync_finish)
+	 */
+	if (lo == 0 && hi == 0)
+		return sendmsg(cinfo, &cmsg, 1);
+	else
+		return sendmsg(cinfo, &cmsg, 0);
 }
 
 static int resync_finish(struct mddev *mddev)
@@ -1119,7 +1169,7 @@ static int add_new_disk(struct mddev *mddev, struct md_rdev *rdev)
 	cmsg.type = cpu_to_le32(NEWDISK);
 	memcpy(cmsg.uuid, uuid, 16);
 	cmsg.raid_slot = cpu_to_le32(rdev->desc_nr);
-	lock_comm(cinfo);
+	lock_comm(cinfo, 1);
 	ret = __sendmsg(cinfo, &cmsg);
 	if (ret)
 		return ret;
@@ -1179,7 +1229,7 @@ static int remove_disk(struct mddev *mddev, struct md_rdev *rdev)
 	struct md_cluster_info *cinfo = mddev->cluster_info;
 	cmsg.type = cpu_to_le32(REMOVE);
 	cmsg.raid_slot = cpu_to_le32(rdev->desc_nr);
-	return sendmsg(cinfo, &cmsg);
+	return sendmsg(cinfo, &cmsg, 1);
 }
 
 static int lock_all_bitmaps(struct mddev *mddev)
@@ -1243,7 +1293,7 @@ static int gather_bitmaps(struct md_rdev *rdev)
 	cmsg.type = cpu_to_le32(RE_ADD);
 	cmsg.raid_slot = cpu_to_le32(rdev->desc_nr);
-	err = sendmsg(cinfo, &cmsg);
+	err = sendmsg(cinfo, &cmsg, 1);
 	if (err)
 		goto out;
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -8385,7 +8385,6 @@ void md_check_recovery(struct mddev *mddev)
 		(mddev->sb_flags & ~ (1<<MD_SB_CHANGE_PENDING)) ||
 		test_bit(MD_RECOVERY_NEEDED, &mddev->recovery) ||
 		test_bit(MD_RECOVERY_DONE, &mddev->recovery) ||
-		test_bit(MD_RELOAD_SB, &mddev->flags) ||
 		(mddev->external == 0 && mddev->safemode == 1) ||
 		(mddev->safemode == 2 && ! atomic_read(&mddev->writes_pending)
 		 && !mddev->in_sync && mddev->recovery_cp == MaxSector)
@@ -8434,9 +8433,6 @@ void md_check_recovery(struct mddev *mddev)
 			    rdev->raid_disk < 0)
 				md_kick_rdev_from_array(rdev);
 		}
-
-		if (test_and_clear_bit(MD_RELOAD_SB, &mddev->flags))
-			md_reload_sb(mddev, mddev->good_device_nr);
 	}
 
 	if (!mddev->external) {
--- a/drivers/md/md.h
+++ b/drivers/md/md.h
@@ -219,9 +219,6 @@ enum mddev_flags {
 				  * it then */
 	MD_JOURNAL_CLEAN,	/* A raid with journal is already clean */
 	MD_HAS_JOURNAL,		/* The raid array has journal feature set */
-	MD_RELOAD_SB,		/* Reload the superblock because another node
-				 * updated it.
-				 */
 	MD_CLUSTER_RESYNC_LOCKED, /* cluster raid only, which means node
 				   * already took resync lock, need to
 				   * release the lock */