Commit f71ad62a authored by Michael Holzheu, committed by Martin Schwidefsky

[S390] tape: Fix race condition in tape block device driver

Due to an incorrect function call sequence, a tape block request can be
finished before it has been taken off the block request queue.

The following sequence leads to that condition:
 * tapeblock_start_request() -> start CCW program
 * Request finishes -> IO interrupt
 * tapeblock_end_request()
 * end_that_request_last()

If blkdev_dequeue_request() has not been called before end_that_request_last(),
a kernel bug is triggered in end_that_request_last() because the request is
still queued. To solve that problem blkdev_dequeue_request() has to be called
before starting the CCW program.
Signed-off-by: Michael Holzheu <holzheu@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Parent 97195d6b
@@ -179,11 +179,11 @@ tapeblock_requeue(struct work_struct *work) {
 			tapeblock_end_request(req, -EIO);
 			continue;
 		}
+		blkdev_dequeue_request(req);
+		nr_queued++;
 		spin_unlock_irq(&device->blk_data.request_queue_lock);
 		rc = tapeblock_start_request(device, req);
 		spin_lock_irq(&device->blk_data.request_queue_lock);
-		blkdev_dequeue_request(req);
-		nr_queued++;
 	}
 	spin_unlock_irq(&device->blk_data.request_queue_lock);
 	atomic_set(&device->blk_data.requeue_scheduled, 0);