Commit fdd514e1 authored by Tejun Heo, committed by Jens Axboe

block: make disk_block_events() properly wait for work cancellation

disk_block_events() should guarantee that the event work is not in
flight on return and once blocked it shouldn't issue further
cancellations.

Because there was no synchronization between the first blocker doing
cancel_delayed_work_sync() and the following blockers, the following
blockers could finish before cancellation was complete, which broke
both guarantees - event work could be in flight and cancellation could
happen after return.

This bug triggered WARN_ON_ONCE() in disk_clear_events() reported in
bug#34662.

  https://bugzilla.kernel.org/show_bug.cgi?id=34662

Fix it by adding an outer mutex which protects both block count
manipulation and work cancellation.
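
For illustration only, a minimal userland sketch of the same pattern, with
pthread mutexes standing in for the kernel spinlock and delayed work; the
fake_* names are hypothetical stand-ins, not kernel APIs:

  #include <pthread.h>
  #include <stdbool.h>

  struct fake_events {
          pthread_mutex_t block_mutex;  /* outer mutex, serializes blockers */
          pthread_mutex_t lock;         /* stands in for ev->lock */
          int             block;        /* event blocking depth */
  };

  /* Stands in for cancel_delayed_work_sync(): returns only once the
   * event work is no longer running. */
  static void fake_cancel_work_sync(struct fake_events *ev)
  {
          (void)ev;  /* elided: wait for the work item to go idle */
  }

  static void fake_block_events(struct fake_events *ev)
  {
          bool cancel;

          /*
           * Without block_mutex, a second caller could see block > 0,
           * skip the cancel and return while the first caller is still
           * waiting in fake_cancel_work_sync() -- the race above.
           */
          pthread_mutex_lock(&ev->block_mutex);

          pthread_mutex_lock(&ev->lock);
          cancel = !ev->block++;
          pthread_mutex_unlock(&ev->lock);

          if (cancel)
                  fake_cancel_work_sync(ev);

          pthread_mutex_unlock(&ev->block_mutex);
  }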

-v2: Use outer mutex instead of bit waitqueue per Linus.
Signed-off-by: Tejun Heo <tj@kernel.org>
Tested-by: Sitsofe Wheeler <sitsofe@yahoo.com>
Reported-by: Sitsofe Wheeler <sitsofe@yahoo.com>
Reported-by: Borislav Petkov <bp@alien8.de>
Reported-by: Meelis Roos <mroos@linux.ee>
Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Kay Sievers <kay.sievers@vrfy.org>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
Parent c3af54af
block/genhd.c
@@ -1371,6 +1371,7 @@ struct disk_events {
 	struct gendisk	*disk;		/* the associated disk */
 	spinlock_t	lock;
 
+	struct mutex	block_mutex;	/* protects blocking */
 	int		block;		/* event blocking depth */
 	unsigned int	pending;	/* events already sent out */
 	unsigned int	clearing;	/* events being cleared */
@@ -1438,12 +1439,20 @@ void disk_block_events(struct gendisk *disk)
 	if (!ev)
 		return;
 
+	/*
+	 * Outer mutex ensures that the first blocker completes canceling
+	 * the event work before further blockers are allowed to finish.
+	 */
+	mutex_lock(&ev->block_mutex);
+
 	spin_lock_irqsave(&ev->lock, flags);
 	cancel = !ev->block++;
 	spin_unlock_irqrestore(&ev->lock, flags);
 
 	if (cancel)
 		cancel_delayed_work_sync(&disk->ev->dwork);
+
+	mutex_unlock(&ev->block_mutex);
 }
 
 static void __disk_unblock_events(struct gendisk *disk, bool check_now)
@@ -1751,6 +1760,7 @@ static void disk_add_events(struct gendisk *disk)
 	INIT_LIST_HEAD(&ev->node);
 	ev->disk = disk;
 	spin_lock_init(&ev->lock);
+	mutex_init(&ev->block_mutex);
 	ev->block = 1;
 	ev->poll_msecs = -1;
 	INIT_DELAYED_WORK(&ev->dwork, disk_events_workfn);