Commit c9059598 authored by Linus Torvalds

Merge branch 'for-2.6.31' of git://git.kernel.dk/linux-2.6-block

* 'for-2.6.31' of git://git.kernel.dk/linux-2.6-block: (153 commits)
  block: add request clone interface (v2)
  floppy: fix hibernation
  ramdisk: remove long-deprecated "ramdisk=" boot-time parameter
  fs/bio.c: add missing __user annotation
  block: prevent possible io_context->refcount overflow
  Add serial number support for virtio_blk, V4a
  block: Add missing bounce_pfn stacking and fix comments
  Revert "block: Fix bounce limit setting in DM"
  cciss: decode unit attention in SCSI error handling code
  cciss: Remove no longer needed sendcmd reject processing code
  cciss: change SCSI error handling routines to work with interrupts enabled.
  cciss: separate error processing and command retrying code in sendcmd_withirq_core()
  cciss: factor out fix target status processing code from sendcmd functions
  cciss: simplify interface of sendcmd() and sendcmd_withirq()
  cciss: factor out core of sendcmd_withirq() for use by SCSI error handling code
  cciss: Use schedule_timeout_uninterruptible in SCSI error handling code
  block: needs to set the residual length of a bidi request
  Revert "block: implement blkdev_readpages"
  block: Fix bounce limit setting in DM
  Removed reference to non-existing file Documentation/PCI/PCI-DMA-mapping.txt
  ...

Manually fix conflicts with tracing updates in:
	block/blk-sysfs.c
	drivers/ide/ide-atapi.c
	drivers/ide/ide-cd.c
	drivers/ide/ide-floppy.c
	drivers/ide/ide-tape.c
	include/trace/events/block.h
	kernel/trace/blktrace.c
@@ -60,3 +60,62 @@ Description:
Indicates whether the block layer should automatically
generate checksums for write requests bound for
devices that support receiving integrity metadata.
What: /sys/block/<disk>/alignment_offset
Date: April 2009
Contact: Martin K. Petersen <martin.petersen@oracle.com>
Description:
Storage devices may report a physical block size that is
bigger than the logical block size (for instance a drive
with 4KB physical sectors exposing 512-byte logical
blocks to the operating system). This parameter
indicates how many bytes the beginning of the device is
offset from the disk's natural alignment.
What: /sys/block/<disk>/<partition>/alignment_offset
Date: April 2009
Contact: Martin K. Petersen <martin.petersen@oracle.com>
Description:
Storage devices may report a physical block size that is
bigger than the logical block size (for instance a drive
with 4KB physical sectors exposing 512-byte logical
blocks to the operating system). This parameter
indicates how many bytes the beginning of the partition
is offset from the disk's natural alignment.
What: /sys/block/<disk>/queue/logical_block_size
Date: May 2009
Contact: Martin K. Petersen <martin.petersen@oracle.com>
Description:
This is the smallest unit the storage device can
address. It is typically 512 bytes.
What: /sys/block/<disk>/queue/physical_block_size
Date: May 2009
Contact: Martin K. Petersen <martin.petersen@oracle.com>
Description:
This is the smallest unit the storage device can write
without resorting to read-modify-write operation. It is
usually the same as the logical block size but may be
bigger. One example is SATA drives with 4KB sectors
that expose a 512-byte logical block size to the
operating system.
What: /sys/block/<disk>/queue/minimum_io_size
Date: April 2009
Contact: Martin K. Petersen <martin.petersen@oracle.com>
Description:
Storage devices may report a preferred minimum I/O size,
which is the smallest request the device can perform
without incurring a read-modify-write penalty. For disk
drives this is often the physical block size. For RAID
arrays it is often the stripe chunk size.
What: /sys/block/<disk>/queue/optimal_io_size
Date: April 2009
Contact: Martin K. Petersen <martin.petersen@oracle.com>
Description:
Storage devices may report an optimal I/O size, which is
the device's preferred unit of receiving I/O. This is
rarely reported for disk drives. For RAID devices it is
usually the stripe width or the internal block size.
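For illustration only (not part of this merge): a minimal userspace sketch,
in C, that reads the attributes documented above; the device name "sda" is
an assumption for the example.

/* Print the topology attributes exported by the 2.6.31 block layer. */
#include <stdio.h>

static long read_sysfs_long(const char *path)
{
	FILE *f = fopen(path, "r");
	long val = -1;

	if (!f)
		return -1;		/* attribute missing or unreadable */
	if (fscanf(f, "%ld", &val) != 1)
		val = -1;
	fclose(f);
	return val;
}

int main(void)
{
	printf("alignment_offset:    %ld\n",
	       read_sysfs_long("/sys/block/sda/alignment_offset"));
	printf("logical_block_size:  %ld\n",
	       read_sysfs_long("/sys/block/sda/queue/logical_block_size"));
	printf("physical_block_size: %ld\n",
	       read_sysfs_long("/sys/block/sda/queue/physical_block_size"));
	printf("minimum_io_size:     %ld\n",
	       read_sysfs_long("/sys/block/sda/queue/minimum_io_size"));
	printf("optimal_io_size:     %ld\n",
	       read_sysfs_long("/sys/block/sda/queue/optimal_io_size"));
	return 0;
}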
Where: /sys/bus/pci/devices/<dev>/ccissX/cXdY/model
Date: March 2009
Kernel Version: 2.6.30
Contact: iss_storagedev@hp.com
Description: Displays the SCSI INQUIRY page 0 model for logical drive
Y of controller X.
Where: /sys/bus/pci/devices/<dev>/ccissX/cXdY/rev
Date: March 2009
Kernel Version: 2.6.30
Contact: iss_storagedev@hp.com
Description: Displays the SCSI INQUIRY page 0 revision for logical
drive Y of controller X.
Where: /sys/bus/pci/devices/<dev>/ccissX/cXdY/unique_id
Date: March 2009
Kernel Version: 2.6.30
Contact: iss_storagedev@hp.com
Description: Displays the SCSI INQUIRY page 83 serial number for logical
drive Y of controller X.
Where: /sys/bus/pci/devices/<dev>/ccissX/cXdY/vendor
Date: March 2009
Kernel Version: 2.6.30
Contact: iss_storagedev@hp.com
Description: Displays the SCSI INQUIRY page 0 vendor for logical drive
Y of controller X.
Where: /sys/bus/pci/devices/<dev>/ccissX/cXdY/block:cciss!cXdY
Date: March 2009
Kernel Version: 2.6.30
Contact: iss_storagedev@hp.com
Description: A symbolic link to /sys/block/cciss!cXdY
@@ -186,7 +186,7 @@ a virtual address mapping (unlike the earlier scheme of virtual address
do not have a corresponding kernel virtual address space mapping) and
low-memory pages.
Note: Please refer to Documentation/PCI/PCI-DMA-mapping.txt for a discussion
Note: Please refer to Documentation/DMA-mapping.txt for a discussion
on PCI high mem DMA aspects and mapping of scatter gather lists, and support
for 64 bit PCI.
...
...@@ -147,24 +147,40 @@ static int __mbox_msg_send(struct omap_mbox *mbox, mbox_msg_t msg, void *arg) ...@@ -147,24 +147,40 @@ static int __mbox_msg_send(struct omap_mbox *mbox, mbox_msg_t msg, void *arg)
return ret; return ret;
} }
struct omap_msg_tx_data {
mbox_msg_t msg;
void *arg;
};
static void omap_msg_tx_end_io(struct request *rq, int error)
{
kfree(rq->special);
__blk_put_request(rq->q, rq);
}
int omap_mbox_msg_send(struct omap_mbox *mbox, mbox_msg_t msg, void* arg) int omap_mbox_msg_send(struct omap_mbox *mbox, mbox_msg_t msg, void* arg)
{ {
struct omap_msg_tx_data *tx_data;
struct request *rq; struct request *rq;
struct request_queue *q = mbox->txq->queue; struct request_queue *q = mbox->txq->queue;
int ret = 0;
tx_data = kmalloc(sizeof(*tx_data), GFP_ATOMIC);
if (unlikely(!tx_data))
return -ENOMEM;
rq = blk_get_request(q, WRITE, GFP_ATOMIC); rq = blk_get_request(q, WRITE, GFP_ATOMIC);
if (unlikely(!rq)) { if (unlikely(!rq)) {
ret = -ENOMEM; kfree(tx_data);
goto fail; return -ENOMEM;
} }
rq->data = (void *)msg; tx_data->msg = msg;
blk_insert_request(q, rq, 0, arg); tx_data->arg = arg;
rq->end_io = omap_msg_tx_end_io;
blk_insert_request(q, rq, 0, tx_data);
schedule_work(&mbox->txq->work); schedule_work(&mbox->txq->work);
fail: return 0;
return ret;
} }
EXPORT_SYMBOL(omap_mbox_msg_send); EXPORT_SYMBOL(omap_mbox_msg_send);
...@@ -178,22 +194,28 @@ static void mbox_tx_work(struct work_struct *work) ...@@ -178,22 +194,28 @@ static void mbox_tx_work(struct work_struct *work)
struct request_queue *q = mbox->txq->queue; struct request_queue *q = mbox->txq->queue;
while (1) { while (1) {
struct omap_msg_tx_data *tx_data;
spin_lock(q->queue_lock); spin_lock(q->queue_lock);
rq = elv_next_request(q); rq = blk_fetch_request(q);
spin_unlock(q->queue_lock); spin_unlock(q->queue_lock);
if (!rq) if (!rq)
break; break;
ret = __mbox_msg_send(mbox, (mbox_msg_t) rq->data, rq->special); tx_data = rq->special;
ret = __mbox_msg_send(mbox, tx_data->msg, tx_data->arg);
if (ret) { if (ret) {
enable_mbox_irq(mbox, IRQ_TX); enable_mbox_irq(mbox, IRQ_TX);
spin_lock(q->queue_lock);
blk_requeue_request(q, rq);
spin_unlock(q->queue_lock);
return; return;
} }
spin_lock(q->queue_lock); spin_lock(q->queue_lock);
if (__blk_end_request(rq, 0, 0)) __blk_end_request_all(rq, 0);
BUG();
spin_unlock(q->queue_lock); spin_unlock(q->queue_lock);
} }
} }
...@@ -218,16 +240,13 @@ static void mbox_rx_work(struct work_struct *work) ...@@ -218,16 +240,13 @@ static void mbox_rx_work(struct work_struct *work)
while (1) { while (1) {
spin_lock_irqsave(q->queue_lock, flags); spin_lock_irqsave(q->queue_lock, flags);
rq = elv_next_request(q); rq = blk_fetch_request(q);
spin_unlock_irqrestore(q->queue_lock, flags); spin_unlock_irqrestore(q->queue_lock, flags);
if (!rq) if (!rq)
break; break;
msg = (mbox_msg_t) rq->data; msg = (mbox_msg_t)rq->special;
blk_end_request_all(rq, 0);
if (blk_end_request(rq, 0, 0))
BUG();
mbox->rxq->callback((void *)msg); mbox->rxq->callback((void *)msg);
} }
} }
...@@ -264,7 +283,6 @@ static void __mbox_rx_interrupt(struct omap_mbox *mbox) ...@@ -264,7 +283,6 @@ static void __mbox_rx_interrupt(struct omap_mbox *mbox)
goto nomem; goto nomem;
msg = mbox_fifo_read(mbox); msg = mbox_fifo_read(mbox);
rq->data = (void *)msg;
if (unlikely(mbox_seq_test(mbox, msg))) { if (unlikely(mbox_seq_test(mbox, msg))) {
pr_info("mbox: Illegal seq bit!(%08x)\n", msg); pr_info("mbox: Illegal seq bit!(%08x)\n", msg);
...@@ -272,7 +290,7 @@ static void __mbox_rx_interrupt(struct omap_mbox *mbox) ...@@ -272,7 +290,7 @@ static void __mbox_rx_interrupt(struct omap_mbox *mbox)
mbox->err_notify(); mbox->err_notify();
} }
blk_insert_request(q, rq, 0, NULL); blk_insert_request(q, rq, 0, (void *)msg);
if (mbox->ops->type == OMAP_MBOX_TYPE1) if (mbox->ops->type == OMAP_MBOX_TYPE1)
break; break;
} }
...@@ -329,16 +347,15 @@ omap_mbox_read(struct device *dev, struct device_attribute *attr, char *buf) ...@@ -329,16 +347,15 @@ omap_mbox_read(struct device *dev, struct device_attribute *attr, char *buf)
while (1) { while (1) {
spin_lock_irqsave(q->queue_lock, flags); spin_lock_irqsave(q->queue_lock, flags);
rq = elv_next_request(q); rq = blk_fetch_request(q);
spin_unlock_irqrestore(q->queue_lock, flags); spin_unlock_irqrestore(q->queue_lock, flags);
if (!rq) if (!rq)
break; break;
*p = (mbox_msg_t) rq->data; *p = (mbox_msg_t)rq->special;
if (blk_end_request(rq, 0, 0)) blk_end_request_all(rq, 0);
BUG();
if (unlikely(mbox_seq_test(mbox, *p))) { if (unlikely(mbox_seq_test(mbox, *p))) {
pr_info("mbox: Illegal seq bit!(%08x) ignored\n", *p); pr_info("mbox: Illegal seq bit!(%08x) ignored\n", *p);
...
@@ -250,7 +250,7 @@ axon_ram_probe(struct of_device *device, const struct of_device_id *device_id)
set_capacity(bank->disk, bank->size >> AXON_RAM_SECTOR_SHIFT);
blk_queue_make_request(bank->disk->queue, axon_ram_make_request);
blk_queue_hardsect_size(bank->disk->queue, AXON_RAM_SECTOR_SIZE);
blk_queue_logical_block_size(bank->disk->queue, AXON_RAM_SECTOR_SIZE);
add_disk(bank->disk);
bank->irq_id = irq_of_parse_and_map(device->node, 0);
...
...@@ -451,23 +451,6 @@ static void do_ubd_request(struct request_queue * q); ...@@ -451,23 +451,6 @@ static void do_ubd_request(struct request_queue * q);
/* Only changed by ubd_init, which is an initcall. */ /* Only changed by ubd_init, which is an initcall. */
static int thread_fd = -1; static int thread_fd = -1;
static void ubd_end_request(struct request *req, int bytes, int error)
{
blk_end_request(req, error, bytes);
}
/* Callable only from interrupt context - otherwise you need to do
* spin_lock_irq()/spin_lock_irqsave() */
static inline void ubd_finish(struct request *req, int bytes)
{
if(bytes < 0){
ubd_end_request(req, 0, -EIO);
return;
}
ubd_end_request(req, bytes, 0);
}
static LIST_HEAD(restart); static LIST_HEAD(restart);
/* XXX - move this inside ubd_intr. */ /* XXX - move this inside ubd_intr. */
...@@ -475,7 +458,6 @@ static LIST_HEAD(restart); ...@@ -475,7 +458,6 @@ static LIST_HEAD(restart);
static void ubd_handler(void) static void ubd_handler(void)
{ {
struct io_thread_req *req; struct io_thread_req *req;
struct request *rq;
struct ubd *ubd; struct ubd *ubd;
struct list_head *list, *next_ele; struct list_head *list, *next_ele;
unsigned long flags; unsigned long flags;
...@@ -492,10 +474,7 @@ static void ubd_handler(void) ...@@ -492,10 +474,7 @@ static void ubd_handler(void)
return; return;
} }
rq = req->req; blk_end_request(req->req, 0, req->length);
rq->nr_sectors -= req->length >> 9;
if(rq->nr_sectors == 0)
ubd_finish(rq, rq->hard_nr_sectors << 9);
kfree(req); kfree(req);
} }
reactivate_fd(thread_fd, UBD_IRQ); reactivate_fd(thread_fd, UBD_IRQ);
...@@ -1243,27 +1222,26 @@ static void do_ubd_request(struct request_queue *q) ...@@ -1243,27 +1222,26 @@ static void do_ubd_request(struct request_queue *q)
{ {
struct io_thread_req *io_req; struct io_thread_req *io_req;
struct request *req; struct request *req;
int n, last_sectors; sector_t sector;
int n;
while(1){ while(1){
struct ubd *dev = q->queuedata; struct ubd *dev = q->queuedata;
if(dev->end_sg == 0){ if(dev->end_sg == 0){
struct request *req = elv_next_request(q); struct request *req = blk_fetch_request(q);
if(req == NULL) if(req == NULL)
return; return;
dev->request = req; dev->request = req;
blkdev_dequeue_request(req);
dev->start_sg = 0; dev->start_sg = 0;
dev->end_sg = blk_rq_map_sg(q, req, dev->sg); dev->end_sg = blk_rq_map_sg(q, req, dev->sg);
} }
req = dev->request; req = dev->request;
last_sectors = 0; sector = blk_rq_pos(req);
while(dev->start_sg < dev->end_sg){ while(dev->start_sg < dev->end_sg){
struct scatterlist *sg = &dev->sg[dev->start_sg]; struct scatterlist *sg = &dev->sg[dev->start_sg];
req->sector += last_sectors;
io_req = kmalloc(sizeof(struct io_thread_req), io_req = kmalloc(sizeof(struct io_thread_req),
GFP_ATOMIC); GFP_ATOMIC);
if(io_req == NULL){ if(io_req == NULL){
...@@ -1272,10 +1250,10 @@ static void do_ubd_request(struct request_queue *q) ...@@ -1272,10 +1250,10 @@ static void do_ubd_request(struct request_queue *q)
return; return;
} }
prepare_request(req, io_req, prepare_request(req, io_req,
(unsigned long long) req->sector << 9, (unsigned long long)sector << 9,
sg->offset, sg->length, sg_page(sg)); sg->offset, sg->length, sg_page(sg));
last_sectors = sg->length >> 9; sector += sg->length >> 9;
n = os_write_file(thread_fd, &io_req, n = os_write_file(thread_fd, &io_req,
sizeof(struct io_thread_req *)); sizeof(struct io_thread_req *));
if(n != sizeof(struct io_thread_req *)){ if(n != sizeof(struct io_thread_req *)){
...
@@ -26,6 +26,7 @@ if BLOCK
config LBD
bool "Support for large block devices and files"
depends on !64BIT
default y
help
Enable block devices or files of size 2TB and larger.
@@ -38,11 +39,13 @@ config LBD
The ext4 filesystem requires that this feature be enabled in
order to support filesystems that have the huge_file feature
enabled. Otherwise, it will refuse to mount any filesystems
that use the huge_file feature, which is enabled by default
by mke2fs.ext4. The GFS2 filesystem also requires this feature.
If unsure, say N.
enabled. Otherwise, it will refuse to mount in the read-write
mode any filesystems that use the huge_file feature, which is
enabled by default by mke2fs.ext4.
The GFS2 filesystem also requires this feature.
If unsure, say Y.
config BLK_DEV_BSG
bool "Block layer SG support v4 (EXPERIMENTAL)"
...
...@@ -306,8 +306,8 @@ as_choose_req(struct as_data *ad, struct request *rq1, struct request *rq2) ...@@ -306,8 +306,8 @@ as_choose_req(struct as_data *ad, struct request *rq1, struct request *rq2)
data_dir = rq_is_sync(rq1); data_dir = rq_is_sync(rq1);
last = ad->last_sector[data_dir]; last = ad->last_sector[data_dir];
s1 = rq1->sector; s1 = blk_rq_pos(rq1);
s2 = rq2->sector; s2 = blk_rq_pos(rq2);
BUG_ON(data_dir != rq_is_sync(rq2)); BUG_ON(data_dir != rq_is_sync(rq2));
...@@ -566,13 +566,15 @@ static void as_update_iohist(struct as_data *ad, struct as_io_context *aic, ...@@ -566,13 +566,15 @@ static void as_update_iohist(struct as_data *ad, struct as_io_context *aic,
as_update_thinktime(ad, aic, thinktime); as_update_thinktime(ad, aic, thinktime);
/* Calculate read -> read seek distance */ /* Calculate read -> read seek distance */
if (aic->last_request_pos < rq->sector) if (aic->last_request_pos < blk_rq_pos(rq))
seek_dist = rq->sector - aic->last_request_pos; seek_dist = blk_rq_pos(rq) -
aic->last_request_pos;
else else
seek_dist = aic->last_request_pos - rq->sector; seek_dist = aic->last_request_pos -
blk_rq_pos(rq);
as_update_seekdist(ad, aic, seek_dist); as_update_seekdist(ad, aic, seek_dist);
} }
aic->last_request_pos = rq->sector + rq->nr_sectors; aic->last_request_pos = blk_rq_pos(rq) + blk_rq_sectors(rq);
set_bit(AS_TASK_IOSTARTED, &aic->state); set_bit(AS_TASK_IOSTARTED, &aic->state);
spin_unlock(&aic->lock); spin_unlock(&aic->lock);
} }
...@@ -587,7 +589,7 @@ static int as_close_req(struct as_data *ad, struct as_io_context *aic, ...@@ -587,7 +589,7 @@ static int as_close_req(struct as_data *ad, struct as_io_context *aic,
{ {
unsigned long delay; /* jiffies */ unsigned long delay; /* jiffies */
sector_t last = ad->last_sector[ad->batch_data_dir]; sector_t last = ad->last_sector[ad->batch_data_dir];
sector_t next = rq->sector; sector_t next = blk_rq_pos(rq);
sector_t delta; /* acceptable close offset (in sectors) */ sector_t delta; /* acceptable close offset (in sectors) */
sector_t s; sector_t s;
...@@ -981,7 +983,7 @@ static void as_move_to_dispatch(struct as_data *ad, struct request *rq) ...@@ -981,7 +983,7 @@ static void as_move_to_dispatch(struct as_data *ad, struct request *rq)
* This has to be set in order to be correctly updated by * This has to be set in order to be correctly updated by
* as_find_next_rq * as_find_next_rq
*/ */
ad->last_sector[data_dir] = rq->sector + rq->nr_sectors; ad->last_sector[data_dir] = blk_rq_pos(rq) + blk_rq_sectors(rq);
if (data_dir == BLK_RW_SYNC) { if (data_dir == BLK_RW_SYNC) {
struct io_context *ioc = RQ_IOC(rq); struct io_context *ioc = RQ_IOC(rq);
...@@ -1312,12 +1314,8 @@ static void as_merged_requests(struct request_queue *q, struct request *req, ...@@ -1312,12 +1314,8 @@ static void as_merged_requests(struct request_queue *q, struct request *req,
static void as_work_handler(struct work_struct *work) static void as_work_handler(struct work_struct *work)
{ {
struct as_data *ad = container_of(work, struct as_data, antic_work); struct as_data *ad = container_of(work, struct as_data, antic_work);
struct request_queue *q = ad->q;
unsigned long flags;
spin_lock_irqsave(q->queue_lock, flags); blk_run_queue(ad->q);
blk_start_queueing(q);
spin_unlock_irqrestore(q->queue_lock, flags);
} }
static int as_may_queue(struct request_queue *q, int rw) static int as_may_queue(struct request_queue *q, int rw)
...
...@@ -106,10 +106,7 @@ bool blk_ordered_complete_seq(struct request_queue *q, unsigned seq, int error) ...@@ -106,10 +106,7 @@ bool blk_ordered_complete_seq(struct request_queue *q, unsigned seq, int error)
*/ */
q->ordseq = 0; q->ordseq = 0;
rq = q->orig_bar_rq; rq = q->orig_bar_rq;
__blk_end_request_all(rq, q->orderr);
if (__blk_end_request(rq, q->orderr, blk_rq_bytes(rq)))
BUG();
return true; return true;
} }
...@@ -166,7 +163,7 @@ static inline bool start_ordered(struct request_queue *q, struct request **rqp) ...@@ -166,7 +163,7 @@ static inline bool start_ordered(struct request_queue *q, struct request **rqp)
* For an empty barrier, there's no actual BAR request, which * For an empty barrier, there's no actual BAR request, which
* in turn makes POSTFLUSH unnecessary. Mask them off. * in turn makes POSTFLUSH unnecessary. Mask them off.
*/ */
if (!rq->hard_nr_sectors) { if (!blk_rq_sectors(rq)) {
q->ordered &= ~(QUEUE_ORDERED_DO_BAR | q->ordered &= ~(QUEUE_ORDERED_DO_BAR |
QUEUE_ORDERED_DO_POSTFLUSH); QUEUE_ORDERED_DO_POSTFLUSH);
/* /*
...@@ -183,7 +180,7 @@ static inline bool start_ordered(struct request_queue *q, struct request **rqp) ...@@ -183,7 +180,7 @@ static inline bool start_ordered(struct request_queue *q, struct request **rqp)
} }
/* stash away the original request */ /* stash away the original request */
elv_dequeue_request(q, rq); blk_dequeue_request(rq);
q->orig_bar_rq = rq; q->orig_bar_rq = rq;
rq = NULL; rq = NULL;
...@@ -221,7 +218,7 @@ static inline bool start_ordered(struct request_queue *q, struct request **rqp) ...@@ -221,7 +218,7 @@ static inline bool start_ordered(struct request_queue *q, struct request **rqp)
} else } else
skip |= QUEUE_ORDSEQ_PREFLUSH; skip |= QUEUE_ORDSEQ_PREFLUSH;
if ((q->ordered & QUEUE_ORDERED_BY_DRAIN) && q->in_flight) if ((q->ordered & QUEUE_ORDERED_BY_DRAIN) && queue_in_flight(q))
rq = NULL; rq = NULL;
else else
skip |= QUEUE_ORDSEQ_DRAIN; skip |= QUEUE_ORDSEQ_DRAIN;
...@@ -251,10 +248,8 @@ bool blk_do_ordered(struct request_queue *q, struct request **rqp) ...@@ -251,10 +248,8 @@ bool blk_do_ordered(struct request_queue *q, struct request **rqp)
* Queue ordering not supported. Terminate * Queue ordering not supported. Terminate
* with prejudice. * with prejudice.
*/ */
elv_dequeue_request(q, rq); blk_dequeue_request(rq);
if (__blk_end_request(rq, -EOPNOTSUPP, __blk_end_request_all(rq, -EOPNOTSUPP);
blk_rq_bytes(rq)))
BUG();
*rqp = NULL; *rqp = NULL;
return false; return false;
} }
...@@ -329,7 +324,7 @@ int blkdev_issue_flush(struct block_device *bdev, sector_t *error_sector) ...@@ -329,7 +324,7 @@ int blkdev_issue_flush(struct block_device *bdev, sector_t *error_sector)
/* /*
* The driver must store the error location in ->bi_sector, if * The driver must store the error location in ->bi_sector, if
* it supports it. For non-stacked drivers, this should be copied * it supports it. For non-stacked drivers, this should be copied
* from rq->sector. * from blk_rq_pos(rq).
*/ */
if (error_sector) if (error_sector)
*error_sector = bio->bi_sector; *error_sector = bio->bi_sector;
...@@ -393,10 +388,10 @@ int blkdev_issue_discard(struct block_device *bdev, ...@@ -393,10 +388,10 @@ int blkdev_issue_discard(struct block_device *bdev,
bio->bi_sector = sector; bio->bi_sector = sector;
if (nr_sects > q->max_hw_sectors) { if (nr_sects > queue_max_hw_sectors(q)) {
bio->bi_size = q->max_hw_sectors << 9; bio->bi_size = queue_max_hw_sectors(q) << 9;
nr_sects -= q->max_hw_sectors; nr_sects -= queue_max_hw_sectors(q);
sector += q->max_hw_sectors; sector += queue_max_hw_sectors(q);
} else { } else {
bio->bi_size = nr_sects << 9; bio->bi_size = nr_sects << 9;
nr_sects = 0; nr_sects = 0;
...
(This diff is collapsed.)
@@ -51,7 +51,6 @@ void blk_execute_rq_nowait(struct request_queue *q, struct gendisk *bd_disk,
int where = at_head ? ELEVATOR_INSERT_FRONT : ELEVATOR_INSERT_BACK;
rq->rq_disk = bd_disk;
rq->cmd_flags |= REQ_NOMERGE;
rq->end_io = done;
WARN_ON(irqs_disabled());
spin_lock_irq(q->queue_lock);
...
@@ -340,7 +340,7 @@ int blk_integrity_register(struct gendisk *disk, struct blk_integrity *template)
kobject_uevent(&bi->kobj, KOBJ_ADD);
bi->flags |= INTEGRITY_FLAG_READ | INTEGRITY_FLAG_WRITE;
bi->sector_size = disk->queue->hardsect_size;
bi->sector_size = queue_logical_block_size(disk->queue);
disk->integrity = bi;
} else
bi = disk->integrity;
...
@@ -35,9 +35,9 @@ int put_io_context(struct io_context *ioc)
if (ioc == NULL)
return 1;
BUG_ON(atomic_read(&ioc->refcount) == 0);
if (atomic_dec_and_test(&ioc->refcount)) {
BUG_ON(atomic_long_read(&ioc->refcount) == 0);
if (atomic_long_dec_and_test(&ioc->refcount)) {
rcu_read_lock();
if (ioc->aic && ioc->aic->dtor)
ioc->aic->dtor(ioc->aic);
@@ -90,7 +90,7 @@ struct io_context *alloc_io_context(gfp_t gfp_flags, int node)
ret = kmem_cache_alloc_node(iocontext_cachep, gfp_flags, node);
if (ret) {
atomic_set(&ret->refcount, 1);
atomic_long_set(&ret->refcount, 1);
atomic_set(&ret->nr_tasks, 1);
spin_lock_init(&ret->lock);
ret->ioprio_changed = 0;
@@ -151,7 +151,7 @@ struct io_context *get_io_context(gfp_t gfp_flags, int node)
ret = current_io_context(gfp_flags, node);
if (unlikely(!ret))
break;
} while (!atomic_inc_not_zero(&ret->refcount));
} while (!atomic_long_inc_not_zero(&ret->refcount));
return ret;
}
@@ -163,8 +163,8 @@ void copy_io_context(struct io_context **pdst, struct io_context **psrc)
struct io_context *dst = *pdst;
if (src) {
BUG_ON(atomic_read(&src->refcount) == 0);
atomic_inc(&src->refcount);
BUG_ON(atomic_long_read(&src->refcount) == 0);
atomic_long_inc(&src->refcount);
put_io_context(dst);
*pdst = src;
}
...
...@@ -20,11 +20,10 @@ int blk_rq_append_bio(struct request_queue *q, struct request *rq, ...@@ -20,11 +20,10 @@ int blk_rq_append_bio(struct request_queue *q, struct request *rq,
rq->biotail->bi_next = bio; rq->biotail->bi_next = bio;
rq->biotail = bio; rq->biotail = bio;
rq->data_len += bio->bi_size; rq->__data_len += bio->bi_size;
} }
return 0; return 0;
} }
EXPORT_SYMBOL(blk_rq_append_bio);
static int __blk_rq_unmap_user(struct bio *bio) static int __blk_rq_unmap_user(struct bio *bio)
{ {
...@@ -116,7 +115,7 @@ int blk_rq_map_user(struct request_queue *q, struct request *rq, ...@@ -116,7 +115,7 @@ int blk_rq_map_user(struct request_queue *q, struct request *rq,
struct bio *bio = NULL; struct bio *bio = NULL;
int ret; int ret;
if (len > (q->max_hw_sectors << 9)) if (len > (queue_max_hw_sectors(q) << 9))
return -EINVAL; return -EINVAL;
if (!len) if (!len)
return -EINVAL; return -EINVAL;
...@@ -156,7 +155,7 @@ int blk_rq_map_user(struct request_queue *q, struct request *rq, ...@@ -156,7 +155,7 @@ int blk_rq_map_user(struct request_queue *q, struct request *rq,
if (!bio_flagged(bio, BIO_USER_MAPPED)) if (!bio_flagged(bio, BIO_USER_MAPPED))
rq->cmd_flags |= REQ_COPY_USER; rq->cmd_flags |= REQ_COPY_USER;
rq->buffer = rq->data = NULL; rq->buffer = NULL;
return 0; return 0;
unmap_rq: unmap_rq:
blk_rq_unmap_user(bio); blk_rq_unmap_user(bio);
...@@ -235,7 +234,7 @@ int blk_rq_map_user_iov(struct request_queue *q, struct request *rq, ...@@ -235,7 +234,7 @@ int blk_rq_map_user_iov(struct request_queue *q, struct request *rq,
blk_queue_bounce(q, &bio); blk_queue_bounce(q, &bio);
bio_get(bio); bio_get(bio);
blk_rq_bio_prep(q, rq, bio); blk_rq_bio_prep(q, rq, bio);
rq->buffer = rq->data = NULL; rq->buffer = NULL;
return 0; return 0;
} }
EXPORT_SYMBOL(blk_rq_map_user_iov); EXPORT_SYMBOL(blk_rq_map_user_iov);
...@@ -282,7 +281,8 @@ EXPORT_SYMBOL(blk_rq_unmap_user); ...@@ -282,7 +281,8 @@ EXPORT_SYMBOL(blk_rq_unmap_user);
* *
* Description: * Description:
* Data will be mapped directly if possible. Otherwise a bounce * Data will be mapped directly if possible. Otherwise a bounce
* buffer is used. * buffer is used. Can be called multple times to append multple
* buffers.
*/ */
int blk_rq_map_kern(struct request_queue *q, struct request *rq, void *kbuf, int blk_rq_map_kern(struct request_queue *q, struct request *rq, void *kbuf,
unsigned int len, gfp_t gfp_mask) unsigned int len, gfp_t gfp_mask)
...@@ -290,8 +290,9 @@ int blk_rq_map_kern(struct request_queue *q, struct request *rq, void *kbuf, ...@@ -290,8 +290,9 @@ int blk_rq_map_kern(struct request_queue *q, struct request *rq, void *kbuf,
int reading = rq_data_dir(rq) == READ; int reading = rq_data_dir(rq) == READ;
int do_copy = 0; int do_copy = 0;
struct bio *bio; struct bio *bio;
int ret;
if (len > (q->max_hw_sectors << 9)) if (len > (queue_max_hw_sectors(q) << 9))
return -EINVAL; return -EINVAL;
if (!len || !kbuf) if (!len || !kbuf)
return -EINVAL; return -EINVAL;
...@@ -311,9 +312,15 @@ int blk_rq_map_kern(struct request_queue *q, struct request *rq, void *kbuf, ...@@ -311,9 +312,15 @@ int blk_rq_map_kern(struct request_queue *q, struct request *rq, void *kbuf,
if (do_copy) if (do_copy)
rq->cmd_flags |= REQ_COPY_USER; rq->cmd_flags |= REQ_COPY_USER;
blk_rq_bio_prep(q, rq, bio); ret = blk_rq_append_bio(q, rq, bio);
if (unlikely(ret)) {
/* request is too big */
bio_put(bio);
return ret;
}
blk_queue_bounce(q, &rq->bio); blk_queue_bounce(q, &rq->bio);
rq->buffer = rq->data = NULL; rq->buffer = NULL;
return 0; return 0;
} }
EXPORT_SYMBOL(blk_rq_map_kern); EXPORT_SYMBOL(blk_rq_map_kern);
...@@ -9,35 +9,6 @@ ...@@ -9,35 +9,6 @@
#include "blk.h" #include "blk.h"
void blk_recalc_rq_sectors(struct request *rq, int nsect)
{
if (blk_fs_request(rq) || blk_discard_rq(rq)) {
rq->hard_sector += nsect;
rq->hard_nr_sectors -= nsect;
/*
* Move the I/O submission pointers ahead if required.
*/
if ((rq->nr_sectors >= rq->hard_nr_sectors) &&
(rq->sector <= rq->hard_sector)) {
rq->sector = rq->hard_sector;
rq->nr_sectors = rq->hard_nr_sectors;
rq->hard_cur_sectors = bio_cur_sectors(rq->bio);
rq->current_nr_sectors = rq->hard_cur_sectors;
rq->buffer = bio_data(rq->bio);
}
/*
* if total number of sectors is less than the first segment
* size, something has gone terribly wrong
*/
if (rq->nr_sectors < rq->current_nr_sectors) {
printk(KERN_ERR "blk: request botched\n");
rq->nr_sectors = rq->current_nr_sectors;
}
}
}
static unsigned int __blk_recalc_rq_segments(struct request_queue *q, static unsigned int __blk_recalc_rq_segments(struct request_queue *q,
struct bio *bio) struct bio *bio)
{ {
...@@ -61,11 +32,12 @@ static unsigned int __blk_recalc_rq_segments(struct request_queue *q, ...@@ -61,11 +32,12 @@ static unsigned int __blk_recalc_rq_segments(struct request_queue *q,
* never considered part of another segment, since that * never considered part of another segment, since that
* might change with the bounce page. * might change with the bounce page.
*/ */
high = page_to_pfn(bv->bv_page) > q->bounce_pfn; high = page_to_pfn(bv->bv_page) > queue_bounce_pfn(q);
if (high || highprv) if (high || highprv)
goto new_segment; goto new_segment;
if (cluster) { if (cluster) {
if (seg_size + bv->bv_len > q->max_segment_size) if (seg_size + bv->bv_len
> queue_max_segment_size(q))
goto new_segment; goto new_segment;
if (!BIOVEC_PHYS_MERGEABLE(bvprv, bv)) if (!BIOVEC_PHYS_MERGEABLE(bvprv, bv))
goto new_segment; goto new_segment;
...@@ -120,7 +92,7 @@ static int blk_phys_contig_segment(struct request_queue *q, struct bio *bio, ...@@ -120,7 +92,7 @@ static int blk_phys_contig_segment(struct request_queue *q, struct bio *bio,
return 0; return 0;
if (bio->bi_seg_back_size + nxt->bi_seg_front_size > if (bio->bi_seg_back_size + nxt->bi_seg_front_size >
q->max_segment_size) queue_max_segment_size(q))
return 0; return 0;
if (!bio_has_data(bio)) if (!bio_has_data(bio))
...@@ -163,7 +135,7 @@ int blk_rq_map_sg(struct request_queue *q, struct request *rq, ...@@ -163,7 +135,7 @@ int blk_rq_map_sg(struct request_queue *q, struct request *rq,
int nbytes = bvec->bv_len; int nbytes = bvec->bv_len;
if (bvprv && cluster) { if (bvprv && cluster) {
if (sg->length + nbytes > q->max_segment_size) if (sg->length + nbytes > queue_max_segment_size(q))
goto new_segment; goto new_segment;
if (!BIOVEC_PHYS_MERGEABLE(bvprv, bvec)) if (!BIOVEC_PHYS_MERGEABLE(bvprv, bvec))
...@@ -199,8 +171,9 @@ int blk_rq_map_sg(struct request_queue *q, struct request *rq, ...@@ -199,8 +171,9 @@ int blk_rq_map_sg(struct request_queue *q, struct request *rq,
if (unlikely(rq->cmd_flags & REQ_COPY_USER) && if (unlikely(rq->cmd_flags & REQ_COPY_USER) &&
(rq->data_len & q->dma_pad_mask)) { (blk_rq_bytes(rq) & q->dma_pad_mask)) {
unsigned int pad_len = (q->dma_pad_mask & ~rq->data_len) + 1; unsigned int pad_len =
(q->dma_pad_mask & ~blk_rq_bytes(rq)) + 1;
sg->length += pad_len; sg->length += pad_len;
rq->extra_len += pad_len; rq->extra_len += pad_len;
...@@ -233,8 +206,8 @@ static inline int ll_new_hw_segment(struct request_queue *q, ...@@ -233,8 +206,8 @@ static inline int ll_new_hw_segment(struct request_queue *q,
{ {
int nr_phys_segs = bio_phys_segments(q, bio); int nr_phys_segs = bio_phys_segments(q, bio);
if (req->nr_phys_segments + nr_phys_segs > q->max_hw_segments if (req->nr_phys_segments + nr_phys_segs > queue_max_hw_segments(q) ||
|| req->nr_phys_segments + nr_phys_segs > q->max_phys_segments) { req->nr_phys_segments + nr_phys_segs > queue_max_phys_segments(q)) {
req->cmd_flags |= REQ_NOMERGE; req->cmd_flags |= REQ_NOMERGE;
if (req == q->last_merge) if (req == q->last_merge)
q->last_merge = NULL; q->last_merge = NULL;
...@@ -255,11 +228,11 @@ int ll_back_merge_fn(struct request_queue *q, struct request *req, ...@@ -255,11 +228,11 @@ int ll_back_merge_fn(struct request_queue *q, struct request *req,
unsigned short max_sectors; unsigned short max_sectors;
if (unlikely(blk_pc_request(req))) if (unlikely(blk_pc_request(req)))
max_sectors = q->max_hw_sectors; max_sectors = queue_max_hw_sectors(q);
else else
max_sectors = q->max_sectors; max_sectors = queue_max_sectors(q);
if (req->nr_sectors + bio_sectors(bio) > max_sectors) { if (blk_rq_sectors(req) + bio_sectors(bio) > max_sectors) {
req->cmd_flags |= REQ_NOMERGE; req->cmd_flags |= REQ_NOMERGE;
if (req == q->last_merge) if (req == q->last_merge)
q->last_merge = NULL; q->last_merge = NULL;
...@@ -279,12 +252,12 @@ int ll_front_merge_fn(struct request_queue *q, struct request *req, ...@@ -279,12 +252,12 @@ int ll_front_merge_fn(struct request_queue *q, struct request *req,
unsigned short max_sectors; unsigned short max_sectors;
if (unlikely(blk_pc_request(req))) if (unlikely(blk_pc_request(req)))
max_sectors = q->max_hw_sectors; max_sectors = queue_max_hw_sectors(q);
else else
max_sectors = q->max_sectors; max_sectors = queue_max_sectors(q);
if (req->nr_sectors + bio_sectors(bio) > max_sectors) { if (blk_rq_sectors(req) + bio_sectors(bio) > max_sectors) {
req->cmd_flags |= REQ_NOMERGE; req->cmd_flags |= REQ_NOMERGE;
if (req == q->last_merge) if (req == q->last_merge)
q->last_merge = NULL; q->last_merge = NULL;
...@@ -315,7 +288,7 @@ static int ll_merge_requests_fn(struct request_queue *q, struct request *req, ...@@ -315,7 +288,7 @@ static int ll_merge_requests_fn(struct request_queue *q, struct request *req,
/* /*
* Will it become too large? * Will it become too large?
*/ */
if ((req->nr_sectors + next->nr_sectors) > q->max_sectors) if ((blk_rq_sectors(req) + blk_rq_sectors(next)) > queue_max_sectors(q))
return 0; return 0;
total_phys_segments = req->nr_phys_segments + next->nr_phys_segments; total_phys_segments = req->nr_phys_segments + next->nr_phys_segments;
...@@ -327,10 +300,10 @@ static int ll_merge_requests_fn(struct request_queue *q, struct request *req, ...@@ -327,10 +300,10 @@ static int ll_merge_requests_fn(struct request_queue *q, struct request *req,
total_phys_segments--; total_phys_segments--;
} }
if (total_phys_segments > q->max_phys_segments) if (total_phys_segments > queue_max_phys_segments(q))
return 0; return 0;
if (total_phys_segments > q->max_hw_segments) if (total_phys_segments > queue_max_hw_segments(q))
return 0; return 0;
/* Merge is OK... */ /* Merge is OK... */
...@@ -345,7 +318,7 @@ static void blk_account_io_merge(struct request *req) ...@@ -345,7 +318,7 @@ static void blk_account_io_merge(struct request *req)
int cpu; int cpu;
cpu = part_stat_lock(); cpu = part_stat_lock();
part = disk_map_sector_rcu(req->rq_disk, req->sector); part = disk_map_sector_rcu(req->rq_disk, blk_rq_pos(req));
part_round_stats(cpu, part); part_round_stats(cpu, part);
part_dec_in_flight(part); part_dec_in_flight(part);
...@@ -366,7 +339,7 @@ static int attempt_merge(struct request_queue *q, struct request *req, ...@@ -366,7 +339,7 @@ static int attempt_merge(struct request_queue *q, struct request *req,
/* /*
* not contiguous * not contiguous
*/ */
if (req->sector + req->nr_sectors != next->sector) if (blk_rq_pos(req) + blk_rq_sectors(req) != blk_rq_pos(next))
return 0; return 0;
if (rq_data_dir(req) != rq_data_dir(next) if (rq_data_dir(req) != rq_data_dir(next)
...@@ -398,7 +371,7 @@ static int attempt_merge(struct request_queue *q, struct request *req, ...@@ -398,7 +371,7 @@ static int attempt_merge(struct request_queue *q, struct request *req,
req->biotail->bi_next = next->bio; req->biotail->bi_next = next->bio;
req->biotail = next->biotail; req->biotail = next->biotail;
req->nr_sectors = req->hard_nr_sectors += next->hard_nr_sectors; req->__data_len += blk_rq_bytes(next);
elv_merge_requests(q, req, next); elv_merge_requests(q, req, next);
...
...@@ -134,7 +134,7 @@ void blk_queue_make_request(struct request_queue *q, make_request_fn *mfn) ...@@ -134,7 +134,7 @@ void blk_queue_make_request(struct request_queue *q, make_request_fn *mfn)
q->backing_dev_info.state = 0; q->backing_dev_info.state = 0;
q->backing_dev_info.capabilities = BDI_CAP_MAP_COPY; q->backing_dev_info.capabilities = BDI_CAP_MAP_COPY;
blk_queue_max_sectors(q, SAFE_MAX_SECTORS); blk_queue_max_sectors(q, SAFE_MAX_SECTORS);
blk_queue_hardsect_size(q, 512); blk_queue_logical_block_size(q, 512);
blk_queue_dma_alignment(q, 511); blk_queue_dma_alignment(q, 511);
blk_queue_congestion_threshold(q); blk_queue_congestion_threshold(q);
q->nr_batching = BLK_BATCH_REQ; q->nr_batching = BLK_BATCH_REQ;
...@@ -179,16 +179,16 @@ void blk_queue_bounce_limit(struct request_queue *q, u64 dma_mask) ...@@ -179,16 +179,16 @@ void blk_queue_bounce_limit(struct request_queue *q, u64 dma_mask)
*/ */
if (b_pfn < (min_t(u64, 0xffffffffUL, BLK_BOUNCE_HIGH) >> PAGE_SHIFT)) if (b_pfn < (min_t(u64, 0xffffffffUL, BLK_BOUNCE_HIGH) >> PAGE_SHIFT))
dma = 1; dma = 1;
q->bounce_pfn = max_low_pfn; q->limits.bounce_pfn = max_low_pfn;
#else #else
if (b_pfn < blk_max_low_pfn) if (b_pfn < blk_max_low_pfn)
dma = 1; dma = 1;
q->bounce_pfn = b_pfn; q->limits.bounce_pfn = b_pfn;
#endif #endif
if (dma) { if (dma) {
init_emergency_isa_pool(); init_emergency_isa_pool();
q->bounce_gfp = GFP_NOIO | GFP_DMA; q->bounce_gfp = GFP_NOIO | GFP_DMA;
q->bounce_pfn = b_pfn; q->limits.bounce_pfn = b_pfn;
} }
} }
EXPORT_SYMBOL(blk_queue_bounce_limit); EXPORT_SYMBOL(blk_queue_bounce_limit);
...@@ -211,14 +211,23 @@ void blk_queue_max_sectors(struct request_queue *q, unsigned int max_sectors) ...@@ -211,14 +211,23 @@ void blk_queue_max_sectors(struct request_queue *q, unsigned int max_sectors)
} }
if (BLK_DEF_MAX_SECTORS > max_sectors) if (BLK_DEF_MAX_SECTORS > max_sectors)
q->max_hw_sectors = q->max_sectors = max_sectors; q->limits.max_hw_sectors = q->limits.max_sectors = max_sectors;
else { else {
q->max_sectors = BLK_DEF_MAX_SECTORS; q->limits.max_sectors = BLK_DEF_MAX_SECTORS;
q->max_hw_sectors = max_sectors; q->limits.max_hw_sectors = max_sectors;
} }
} }
EXPORT_SYMBOL(blk_queue_max_sectors); EXPORT_SYMBOL(blk_queue_max_sectors);
void blk_queue_max_hw_sectors(struct request_queue *q, unsigned int max_sectors)
{
if (BLK_DEF_MAX_SECTORS > max_sectors)
q->limits.max_hw_sectors = BLK_DEF_MAX_SECTORS;
else
q->limits.max_hw_sectors = max_sectors;
}
EXPORT_SYMBOL(blk_queue_max_hw_sectors);
/**
* blk_queue_max_phys_segments - set max phys segments for a request for this queue
* @q: the request queue for the device
@@ -238,7 +247,7 @@ void blk_queue_max_phys_segments(struct request_queue *q,
__func__, max_segments);
}
q->max_phys_segments = max_segments;
q->limits.max_phys_segments = max_segments;
}
EXPORT_SYMBOL(blk_queue_max_phys_segments);
@@ -262,7 +271,7 @@ void blk_queue_max_hw_segments(struct request_queue *q,
__func__, max_segments);
}
q->max_hw_segments = max_segments;
q->limits.max_hw_segments = max_segments;
}
EXPORT_SYMBOL(blk_queue_max_hw_segments);
@@ -283,26 +292,110 @@ void blk_queue_max_segment_size(struct request_queue *q, unsigned int max_size)
__func__, max_size);
}
q->max_segment_size = max_size;
q->limits.max_segment_size = max_size;
}
EXPORT_SYMBOL(blk_queue_max_segment_size);
/**
* blk_queue_hardsect_size - set hardware sector size for the queue
* blk_queue_logical_block_size - set logical block size for the queue
* @q: the request queue for the device
* @size: the hardware sector size, in bytes
* @size: the logical block size, in bytes
*
* Description:
* This should typically be set to the lowest possible sector size
* that the hardware can operate on (possible without reverting to
* even internal read-modify-write operations). Usually the default
* of 512 covers most hardware.
* This should be set to the lowest possible block size that the
* storage device can address. The default of 512 covers most
* hardware.
**/
void blk_queue_hardsect_size(struct request_queue *q, unsigned short size)
void blk_queue_logical_block_size(struct request_queue *q, unsigned short size)
{
q->limits.logical_block_size = size;
if (q->limits.physical_block_size < size)
q->limits.physical_block_size = size;
if (q->limits.io_min < q->limits.physical_block_size)
q->limits.io_min = q->limits.physical_block_size;
}
EXPORT_SYMBOL(blk_queue_logical_block_size);
/**
* blk_queue_physical_block_size - set physical block size for the queue
* @q: the request queue for the device
* @size: the physical block size, in bytes
*
* Description:
* This should be set to the lowest possible sector size that the
* hardware can operate on without reverting to read-modify-write
* operations.
*/
void blk_queue_physical_block_size(struct request_queue *q, unsigned short size)
{
q->limits.physical_block_size = size;
if (q->limits.physical_block_size < q->limits.logical_block_size)
q->limits.physical_block_size = q->limits.logical_block_size;
if (q->limits.io_min < q->limits.physical_block_size)
q->limits.io_min = q->limits.physical_block_size;
}
EXPORT_SYMBOL(blk_queue_physical_block_size);
/**
* blk_queue_alignment_offset - set physical block alignment offset
* @q: the request queue for the device
* @alignment: alignment offset in bytes
*
* Description:
* Some devices are naturally misaligned to compensate for things like
* the legacy DOS partition table 63-sector offset. Low-level drivers
* should call this function for devices whose first sector is not
* naturally aligned.
*/
void blk_queue_alignment_offset(struct request_queue *q, unsigned int offset)
{
q->hardsect_size = size;
q->limits.alignment_offset =
offset & (q->limits.physical_block_size - 1);
q->limits.misaligned = 0;
}
EXPORT_SYMBOL(blk_queue_hardsect_size);
EXPORT_SYMBOL(blk_queue_alignment_offset);
/**
* blk_queue_io_min - set minimum request size for the queue
* @q: the request queue for the device
* @io_min: smallest I/O size in bytes
*
* Description:
* Some devices have an internal block size bigger than the reported
* hardware sector size. This function can be used to signal the
* smallest I/O the device can perform without incurring a performance
* penalty.
*/
void blk_queue_io_min(struct request_queue *q, unsigned int min)
{
q->limits.io_min = min;
if (q->limits.io_min < q->limits.logical_block_size)
q->limits.io_min = q->limits.logical_block_size;
if (q->limits.io_min < q->limits.physical_block_size)
q->limits.io_min = q->limits.physical_block_size;
}
EXPORT_SYMBOL(blk_queue_io_min);
/**
* blk_queue_io_opt - set optimal request size for the queue
* @q: the request queue for the device
* @io_opt: optimal request size in bytes
*
* Description:
* Drivers can call this function to set the preferred I/O request
* size for devices that report such a value.
*/
void blk_queue_io_opt(struct request_queue *q, unsigned int opt)
{
q->limits.io_opt = opt;
}
EXPORT_SYMBOL(blk_queue_io_opt);
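A minimal sketch (not part of this diff) of how a low-level driver might use
the setters added above for a drive with 4KB physical sectors behind a
512-byte logical interface; the function name and the numbers are assumptions
for the example.

#include <linux/blkdev.h>

/* Hypothetical example: describe a 512e drive (4KB physical sectors,
 * 512-byte logical blocks, first LBA naturally aligned). */
static void example_describe_512e_disk(struct request_queue *q)
{
	blk_queue_logical_block_size(q, 512);	/* smallest addressable unit */
	blk_queue_physical_block_size(q, 4096);	/* no RMW at or above this   */
	blk_queue_alignment_offset(q, 0);	/* device starts aligned     */
	blk_queue_io_min(q, 4096);		/* avoid sub-4KB writes      */
	blk_queue_io_opt(q, 0);			/* no preferred I/O size     */
}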
/* /*
* Returns the minimum that is _not_ zero, unless both are zero. * Returns the minimum that is _not_ zero, unless both are zero.
...@@ -317,14 +410,27 @@ EXPORT_SYMBOL(blk_queue_hardsect_size); ...@@ -317,14 +410,27 @@ EXPORT_SYMBOL(blk_queue_hardsect_size);
void blk_queue_stack_limits(struct request_queue *t, struct request_queue *b) void blk_queue_stack_limits(struct request_queue *t, struct request_queue *b)
{ {
/* zero is "infinity" */ /* zero is "infinity" */
t->max_sectors = min_not_zero(t->max_sectors, b->max_sectors); t->limits.max_sectors = min_not_zero(queue_max_sectors(t),
t->max_hw_sectors = min_not_zero(t->max_hw_sectors, b->max_hw_sectors); queue_max_sectors(b));
t->seg_boundary_mask = min_not_zero(t->seg_boundary_mask, b->seg_boundary_mask);
t->limits.max_hw_sectors = min_not_zero(queue_max_hw_sectors(t),
queue_max_hw_sectors(b));
t->limits.seg_boundary_mask = min_not_zero(queue_segment_boundary(t),
queue_segment_boundary(b));
t->limits.max_phys_segments = min_not_zero(queue_max_phys_segments(t),
queue_max_phys_segments(b));
t->limits.max_hw_segments = min_not_zero(queue_max_hw_segments(t),
queue_max_hw_segments(b));
t->limits.max_segment_size = min_not_zero(queue_max_segment_size(t),
queue_max_segment_size(b));
t->limits.logical_block_size = max(queue_logical_block_size(t),
queue_logical_block_size(b));
t->max_phys_segments = min_not_zero(t->max_phys_segments, b->max_phys_segments);
t->max_hw_segments = min_not_zero(t->max_hw_segments, b->max_hw_segments);
t->max_segment_size = min_not_zero(t->max_segment_size, b->max_segment_size);
t->hardsect_size = max(t->hardsect_size, b->hardsect_size);
if (!t->queue_lock) if (!t->queue_lock)
WARN_ON_ONCE(1); WARN_ON_ONCE(1);
else if (!test_bit(QUEUE_FLAG_CLUSTER, &b->queue_flags)) { else if (!test_bit(QUEUE_FLAG_CLUSTER, &b->queue_flags)) {
...@@ -336,6 +442,109 @@ void blk_queue_stack_limits(struct request_queue *t, struct request_queue *b) ...@@ -336,6 +442,109 @@ void blk_queue_stack_limits(struct request_queue *t, struct request_queue *b)
} }
EXPORT_SYMBOL(blk_queue_stack_limits); EXPORT_SYMBOL(blk_queue_stack_limits);
/**
* blk_stack_limits - adjust queue_limits for stacked devices
* @t: the stacking driver limits (top)
* @b: the underlying queue limits (bottom)
* @offset: offset to beginning of data within component device
*
* Description:
* Merges two queue_limit structs. Returns 0 if alignment didn't
* change. Returns -1 if adding the bottom device caused
* misalignment.
*/
int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
sector_t offset)
{
t->max_sectors = min_not_zero(t->max_sectors, b->max_sectors);
t->max_hw_sectors = min_not_zero(t->max_hw_sectors, b->max_hw_sectors);
t->bounce_pfn = min_not_zero(t->bounce_pfn, b->bounce_pfn);
t->seg_boundary_mask = min_not_zero(t->seg_boundary_mask,
b->seg_boundary_mask);
t->max_phys_segments = min_not_zero(t->max_phys_segments,
b->max_phys_segments);
t->max_hw_segments = min_not_zero(t->max_hw_segments,
b->max_hw_segments);
t->max_segment_size = min_not_zero(t->max_segment_size,
b->max_segment_size);
t->logical_block_size = max(t->logical_block_size,
b->logical_block_size);
t->physical_block_size = max(t->physical_block_size,
b->physical_block_size);
t->io_min = max(t->io_min, b->io_min);
t->no_cluster |= b->no_cluster;
/* Bottom device offset aligned? */
if (offset &&
(offset & (b->physical_block_size - 1)) != b->alignment_offset) {
t->misaligned = 1;
return -1;
}
/* If top has no alignment offset, inherit from bottom */
if (!t->alignment_offset)
t->alignment_offset =
b->alignment_offset & (b->physical_block_size - 1);
/* Top device aligned on logical block boundary? */
if (t->alignment_offset & (t->logical_block_size - 1)) {
t->misaligned = 1;
return -1;
}
return 0;
}
EXPORT_SYMBOL(blk_stack_limits);
/**
* disk_stack_limits - adjust queue limits for stacked drivers
* @disk: MD/DM gendisk (top)
* @bdev: the underlying block device (bottom)
* @offset: offset to beginning of data within component device
*
* Description:
* Merges the limits for two queues. Returns 0 if alignment
* didn't change. Returns -1 if adding the bottom device caused
* misalignment.
*/
void disk_stack_limits(struct gendisk *disk, struct block_device *bdev,
sector_t offset)
{
struct request_queue *t = disk->queue;
struct request_queue *b = bdev_get_queue(bdev);
offset += get_start_sect(bdev) << 9;
if (blk_stack_limits(&t->limits, &b->limits, offset) < 0) {
char top[BDEVNAME_SIZE], bottom[BDEVNAME_SIZE];
disk_name(disk, 0, top);
bdevname(bdev, bottom);
printk(KERN_NOTICE "%s: Warning: Device %s is misaligned\n",
top, bottom);
}
if (!t->queue_lock)
WARN_ON_ONCE(1);
else if (!test_bit(QUEUE_FLAG_CLUSTER, &b->queue_flags)) {
unsigned long flags;
spin_lock_irqsave(t->queue_lock, flags);
if (!test_bit(QUEUE_FLAG_CLUSTER, &b->queue_flags))
queue_flag_clear(QUEUE_FLAG_CLUSTER, t);
spin_unlock_irqrestore(t->queue_lock, flags);
}
}
EXPORT_SYMBOL(disk_stack_limits);
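A minimal sketch (not part of this diff) of how a stacking driver could use
disk_stack_limits() above to fold each component device's limits and data
offset into its own queue; the function and parameter names are hypothetical.

/* Hypothetical stacking driver: combine limits from all components. */
static void example_stack_component_limits(struct gendisk *top,
					    struct block_device **component,
					    sector_t *data_offset,
					    unsigned int nr)
{
	unsigned int i;

	for (i = 0; i < nr; i++)
		/* disk_stack_limits() warns if a component leaves the
		 * top device misaligned. */
		disk_stack_limits(top, component[i], data_offset[i]);
}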
/** /**
* blk_queue_dma_pad - set pad mask * blk_queue_dma_pad - set pad mask
* @q: the request queue for the device * @q: the request queue for the device
...@@ -396,11 +605,11 @@ int blk_queue_dma_drain(struct request_queue *q, ...@@ -396,11 +605,11 @@ int blk_queue_dma_drain(struct request_queue *q,
dma_drain_needed_fn *dma_drain_needed, dma_drain_needed_fn *dma_drain_needed,
void *buf, unsigned int size) void *buf, unsigned int size)
{ {
if (q->max_hw_segments < 2 || q->max_phys_segments < 2) if (queue_max_hw_segments(q) < 2 || queue_max_phys_segments(q) < 2)
return -EINVAL; return -EINVAL;
/* make room for appending the drain */ /* make room for appending the drain */
--q->max_hw_segments; blk_queue_max_hw_segments(q, queue_max_hw_segments(q) - 1);
--q->max_phys_segments; blk_queue_max_phys_segments(q, queue_max_phys_segments(q) - 1);
q->dma_drain_needed = dma_drain_needed; q->dma_drain_needed = dma_drain_needed;
q->dma_drain_buffer = buf; q->dma_drain_buffer = buf;
q->dma_drain_size = size; q->dma_drain_size = size;
...@@ -422,7 +631,7 @@ void blk_queue_segment_boundary(struct request_queue *q, unsigned long mask) ...@@ -422,7 +631,7 @@ void blk_queue_segment_boundary(struct request_queue *q, unsigned long mask)
__func__, mask); __func__, mask);
} }
q->seg_boundary_mask = mask; q->limits.seg_boundary_mask = mask;
} }
EXPORT_SYMBOL(blk_queue_segment_boundary); EXPORT_SYMBOL(blk_queue_segment_boundary);
...
@@ -95,21 +95,36 @@ queue_ra_store(struct request_queue *q, const char *page, size_t count)
static ssize_t queue_max_sectors_show(struct request_queue *q, char *page)
{
int max_sectors_kb = q->max_sectors >> 1;
int max_sectors_kb = queue_max_sectors(q) >> 1;
return queue_var_show(max_sectors_kb, (page));
}
static ssize_t queue_hw_sector_size_show(struct request_queue *q, char *page)
static ssize_t queue_logical_block_size_show(struct request_queue *q, char *page)
{
return queue_var_show(q->hardsect_size, page);
return queue_var_show(queue_logical_block_size(q), page);
}
static ssize_t queue_physical_block_size_show(struct request_queue *q, char *page)
{
return queue_var_show(queue_physical_block_size(q), page);
}
static ssize_t queue_io_min_show(struct request_queue *q, char *page)
{
return queue_var_show(queue_io_min(q), page);
}
static ssize_t queue_io_opt_show(struct request_queue *q, char *page)
{
return queue_var_show(queue_io_opt(q), page);
}
static ssize_t
queue_max_sectors_store(struct request_queue *q, const char *page, size_t count)
{
unsigned long max_sectors_kb,
max_hw_sectors_kb = q->max_hw_sectors >> 1,
max_hw_sectors_kb = queue_max_hw_sectors(q) >> 1,
page_kb = 1 << (PAGE_CACHE_SHIFT - 10);
ssize_t ret = queue_var_store(&max_sectors_kb, page, count);
@@ -117,7 +132,7 @@ queue_max_sectors_store(struct request_queue *q, const char *page, size_t count)
return -EINVAL;
spin_lock_irq(q->queue_lock);
q->max_sectors = max_sectors_kb << 1;
blk_queue_max_sectors(q, max_sectors_kb << 1);
spin_unlock_irq(q->queue_lock);
return ret;
@@ -125,7 +140,7 @@ queue_max_sectors_store(struct request_queue *q, const char *page, size_t count)
static ssize_t queue_max_hw_sectors_show(struct request_queue *q, char *page)
{
int max_hw_sectors_kb = q->max_hw_sectors >> 1;
int max_hw_sectors_kb = queue_max_hw_sectors(q) >> 1;
return queue_var_show(max_hw_sectors_kb, (page));
}
@@ -249,7 +264,27 @@ static struct queue_sysfs_entry queue_iosched_entry = {
static struct queue_sysfs_entry queue_hw_sector_size_entry = {
.attr = {.name = "hw_sector_size", .mode = S_IRUGO },
.show = queue_hw_sector_size_show,
.show = queue_logical_block_size_show,
};
static struct queue_sysfs_entry queue_logical_block_size_entry = {
.attr = {.name = "logical_block_size", .mode = S_IRUGO },
.show = queue_logical_block_size_show,
};
static struct queue_sysfs_entry queue_physical_block_size_entry = {
.attr = {.name = "physical_block_size", .mode = S_IRUGO },
.show = queue_physical_block_size_show,
};
static struct queue_sysfs_entry queue_io_min_entry = {
.attr = {.name = "minimum_io_size", .mode = S_IRUGO },
.show = queue_io_min_show,
};
static struct queue_sysfs_entry queue_io_opt_entry = {
.attr = {.name = "optimal_io_size", .mode = S_IRUGO },
.show = queue_io_opt_show,
}; };
static struct queue_sysfs_entry queue_nonrot_entry = { static struct queue_sysfs_entry queue_nonrot_entry = {
...@@ -283,6 +318,10 @@ static struct attribute *default_attrs[] = { ...@@ -283,6 +318,10 @@ static struct attribute *default_attrs[] = {
&queue_max_sectors_entry.attr, &queue_max_sectors_entry.attr,
&queue_iosched_entry.attr, &queue_iosched_entry.attr,
&queue_hw_sector_size_entry.attr, &queue_hw_sector_size_entry.attr,
&queue_logical_block_size_entry.attr,
&queue_physical_block_size_entry.attr,
&queue_io_min_entry.attr,
&queue_io_opt_entry.attr,
&queue_nonrot_entry.attr, &queue_nonrot_entry.attr,
&queue_nomerges_entry.attr, &queue_nomerges_entry.attr,
&queue_rq_affinity_entry.attr, &queue_rq_affinity_entry.attr,
...@@ -394,16 +433,15 @@ int blk_register_queue(struct gendisk *disk) ...@@ -394,16 +433,15 @@ int blk_register_queue(struct gendisk *disk)
if (ret) if (ret)
return ret; return ret;
if (!q->request_fn) ret = kobject_add(&q->kobj, kobject_get(&dev->kobj), "%s", "queue");
return 0;
ret = kobject_add(&q->kobj, kobject_get(&dev->kobj),
"%s", "queue");
if (ret < 0) if (ret < 0)
return ret; return ret;
kobject_uevent(&q->kobj, KOBJ_ADD); kobject_uevent(&q->kobj, KOBJ_ADD);
if (!q->request_fn)
return 0;
ret = elv_register_queue(q); ret = elv_register_queue(q);
if (ret) { if (ret) {
kobject_uevent(&q->kobj, KOBJ_REMOVE); kobject_uevent(&q->kobj, KOBJ_REMOVE);
......
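The blk-sysfs.c changes above switch the existing attributes over to the queue_max_hw_sectors()/blk_queue_max_sectors() helpers and export four new read-only topology attributes (logical_block_size, physical_block_size, minimum_io_size, optimal_io_size). As a rough userspace illustration of what this ends up looking like, here is a minimal sketch that reads those files for a hypothetical disk; the helper name, the hard-coded "sda" and the error handling are mine, not part of the patch.

/*
 * Minimal userspace sketch: read the new queue topology attributes that
 * this series exports under /sys/block/<disk>/queue/.  The helper name,
 * the hard-coded disk name and the error handling are illustrative only.
 */
#include <stdio.h>

static long read_queue_attr(const char *disk, const char *attr)
{
	char path[256];
	long val = -1;
	FILE *f;

	snprintf(path, sizeof(path), "/sys/block/%s/queue/%s", disk, attr);
	f = fopen(path, "r");
	if (!f)
		return -1;
	if (fscanf(f, "%ld", &val) != 1)
		val = -1;
	fclose(f);
	return val;
}

int main(void)
{
	static const char *attrs[] = {
		"logical_block_size", "physical_block_size",
		"minimum_io_size", "optimal_io_size",
	};
	unsigned int i;

	for (i = 0; i < sizeof(attrs) / sizeof(attrs[0]); i++)
		printf("%s = %ld\n", attrs[i], read_queue_attr("sda", attrs[i]));
	return 0;
}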
...@@ -336,7 +336,7 @@ EXPORT_SYMBOL(blk_queue_end_tag); ...@@ -336,7 +336,7 @@ EXPORT_SYMBOL(blk_queue_end_tag);
int blk_queue_start_tag(struct request_queue *q, struct request *rq) int blk_queue_start_tag(struct request_queue *q, struct request *rq)
{ {
struct blk_queue_tag *bqt = q->queue_tags; struct blk_queue_tag *bqt = q->queue_tags;
unsigned max_depth, offset; unsigned max_depth;
int tag; int tag;
if (unlikely((rq->cmd_flags & REQ_QUEUED))) { if (unlikely((rq->cmd_flags & REQ_QUEUED))) {
...@@ -355,13 +355,16 @@ int blk_queue_start_tag(struct request_queue *q, struct request *rq) ...@@ -355,13 +355,16 @@ int blk_queue_start_tag(struct request_queue *q, struct request *rq)
* to starve sync IO on behalf of flooding async IO. * to starve sync IO on behalf of flooding async IO.
*/ */
max_depth = bqt->max_depth; max_depth = bqt->max_depth;
if (rq_is_sync(rq)) if (!rq_is_sync(rq) && max_depth > 1) {
offset = 0; max_depth -= 2;
else if (!max_depth)
offset = max_depth >> 2; max_depth = 1;
if (q->in_flight[0] > max_depth)
return 1;
}
do { do {
tag = find_next_zero_bit(bqt->tag_map, max_depth, offset); tag = find_first_zero_bit(bqt->tag_map, max_depth);
if (tag >= max_depth) if (tag >= max_depth)
return 1; return 1;
...@@ -374,7 +377,7 @@ int blk_queue_start_tag(struct request_queue *q, struct request *rq) ...@@ -374,7 +377,7 @@ int blk_queue_start_tag(struct request_queue *q, struct request *rq)
rq->cmd_flags |= REQ_QUEUED; rq->cmd_flags |= REQ_QUEUED;
rq->tag = tag; rq->tag = tag;
bqt->tag_index[tag] = rq; bqt->tag_index[tag] = rq;
blkdev_dequeue_request(rq); blk_start_request(rq);
list_add(&rq->queuelist, &q->tag_busy_list); list_add(&rq->queuelist, &q->tag_busy_list);
return 0; return 0;
} }
......
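The blk-tag.c hunk replaces the old start-offset trick with a depth throttle: an asynchronous request sees the usable tag depth reduced by two (but never below one) and is refused while q->in_flight[0] already exceeds that reduced depth, so flooding async I/O cannot grab every tag and starve sync I/O. A standalone sketch of just the depth computation, with plain integers standing in for the kernel structures:

/*
 * Userspace model of the async tag-depth throttle added in blk-tag.c.
 * The function name and plain integers are stand-ins for the kernel types.
 */
#include <stdbool.h>
#include <stdio.h>

static unsigned int async_tag_depth(unsigned int max_depth, bool is_sync)
{
	if (!is_sync && max_depth > 1) {
		max_depth -= 2;
		if (!max_depth)
			max_depth = 1;
	}
	return max_depth;
}

int main(void)
{
	printf("sync,  depth 32 -> %u\n", async_tag_depth(32, true));
	printf("async, depth 32 -> %u\n", async_tag_depth(32, false));
	printf("async, depth 2  -> %u\n", async_tag_depth(2, false));
	return 0;
}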
...@@ -122,10 +122,8 @@ void blk_rq_timed_out_timer(unsigned long data) ...@@ -122,10 +122,8 @@ void blk_rq_timed_out_timer(unsigned long data)
if (blk_mark_rq_complete(rq)) if (blk_mark_rq_complete(rq))
continue; continue;
blk_rq_timed_out(rq); blk_rq_timed_out(rq);
} else { } else if (!next || time_after(next, rq->deadline))
if (!next || time_after(next, rq->deadline)) next = rq->deadline;
next = rq->deadline;
}
} }
/* /*
...@@ -176,16 +174,14 @@ void blk_add_timer(struct request *req) ...@@ -176,16 +174,14 @@ void blk_add_timer(struct request *req)
BUG_ON(!list_empty(&req->timeout_list)); BUG_ON(!list_empty(&req->timeout_list));
BUG_ON(test_bit(REQ_ATOM_COMPLETE, &req->atomic_flags)); BUG_ON(test_bit(REQ_ATOM_COMPLETE, &req->atomic_flags));
if (req->timeout) /*
req->deadline = jiffies + req->timeout; * Some LLDs, like scsi, peek at the timeout to prevent a
else { * command from being retried forever.
req->deadline = jiffies + q->rq_timeout; */
/* if (!req->timeout)
* Some LLDs, like scsi, peek at the timeout to prevent
* a command from being retried forever.
*/
req->timeout = q->rq_timeout; req->timeout = q->rq_timeout;
}
req->deadline = jiffies + req->timeout;
list_add_tail(&req->timeout_list, &q->timeout_list); list_add_tail(&req->timeout_list, &q->timeout_list);
/* /*
......
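The blk-timeout.c hunk reorders blk_add_timer(): the fallback to the queue-wide rq_timeout is applied first and the deadline is then always computed from req->timeout, which keeps the value visible to low-level drivers (such as SCSI) that peek at it. A small model of the resulting computation, with plain unsigned longs standing in for jiffies:

/*
 * Sketch of the simplified deadline computation in blk_add_timer().
 * Plain unsigned longs stand in for jiffies; the values are arbitrary.
 */
#include <stdio.h>

static unsigned long add_timer_deadline(unsigned long now,
					unsigned long *req_timeout,
					unsigned long queue_default)
{
	if (!*req_timeout)
		*req_timeout = queue_default;	/* visible to peeking drivers */
	return now + *req_timeout;
}

int main(void)
{
	unsigned long timeout = 0;	/* request did not set its own timeout */
	unsigned long deadline = add_timer_deadline(1000, &timeout, 30 * 250);

	printf("deadline = %lu, req->timeout now = %lu\n", deadline, timeout);
	return 0;
}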
...@@ -13,6 +13,9 @@ extern struct kobj_type blk_queue_ktype; ...@@ -13,6 +13,9 @@ extern struct kobj_type blk_queue_ktype;
void init_request_from_bio(struct request *req, struct bio *bio); void init_request_from_bio(struct request *req, struct bio *bio);
void blk_rq_bio_prep(struct request_queue *q, struct request *rq, void blk_rq_bio_prep(struct request_queue *q, struct request *rq,
struct bio *bio); struct bio *bio);
int blk_rq_append_bio(struct request_queue *q, struct request *rq,
struct bio *bio);
void blk_dequeue_request(struct request *rq);
void __blk_queue_free_tags(struct request_queue *q); void __blk_queue_free_tags(struct request_queue *q);
void blk_unplug_work(struct work_struct *work); void blk_unplug_work(struct work_struct *work);
...@@ -43,6 +46,43 @@ static inline void blk_clear_rq_complete(struct request *rq) ...@@ -43,6 +46,43 @@ static inline void blk_clear_rq_complete(struct request *rq)
clear_bit(REQ_ATOM_COMPLETE, &rq->atomic_flags); clear_bit(REQ_ATOM_COMPLETE, &rq->atomic_flags);
} }
/*
* Internal elevator interface
*/
#define ELV_ON_HASH(rq) (!hlist_unhashed(&(rq)->hash))
static inline struct request *__elv_next_request(struct request_queue *q)
{
struct request *rq;
while (1) {
while (!list_empty(&q->queue_head)) {
rq = list_entry_rq(q->queue_head.next);
if (blk_do_ordered(q, &rq))
return rq;
}
if (!q->elevator->ops->elevator_dispatch_fn(q, 0))
return NULL;
}
}
static inline void elv_activate_rq(struct request_queue *q, struct request *rq)
{
struct elevator_queue *e = q->elevator;
if (e->ops->elevator_activate_req_fn)
e->ops->elevator_activate_req_fn(q, rq);
}
static inline void elv_deactivate_rq(struct request_queue *q, struct request *rq)
{
struct elevator_queue *e = q->elevator;
if (e->ops->elevator_deactivate_req_fn)
e->ops->elevator_deactivate_req_fn(q, rq);
}
#ifdef CONFIG_FAIL_IO_TIMEOUT #ifdef CONFIG_FAIL_IO_TIMEOUT
int blk_should_fake_timeout(struct request_queue *); int blk_should_fake_timeout(struct request_queue *);
ssize_t part_timeout_show(struct device *, struct device_attribute *, char *); ssize_t part_timeout_show(struct device *, struct device_attribute *, char *);
...@@ -64,7 +104,6 @@ int ll_front_merge_fn(struct request_queue *q, struct request *req, ...@@ -64,7 +104,6 @@ int ll_front_merge_fn(struct request_queue *q, struct request *req,
int attempt_back_merge(struct request_queue *q, struct request *rq); int attempt_back_merge(struct request_queue *q, struct request *rq);
int attempt_front_merge(struct request_queue *q, struct request *rq); int attempt_front_merge(struct request_queue *q, struct request *rq);
void blk_recalc_rq_segments(struct request *rq); void blk_recalc_rq_segments(struct request *rq);
void blk_recalc_rq_sectors(struct request *rq, int nsect);
void blk_queue_congestion_threshold(struct request_queue *q); void blk_queue_congestion_threshold(struct request_queue *q);
...@@ -112,9 +151,17 @@ static inline int blk_cpu_to_group(int cpu) ...@@ -112,9 +151,17 @@ static inline int blk_cpu_to_group(int cpu)
#endif #endif
} }
/*
* Contribute to IO statistics IFF:
*
* a) it's attached to a gendisk, and
* b) the queue had IO stats enabled when this request was started, and
* c) it's a file system request or a discard request
*/
static inline int blk_do_io_stat(struct request *rq) static inline int blk_do_io_stat(struct request *rq)
{ {
return rq->rq_disk && blk_rq_io_stat(rq); return rq->rq_disk && blk_rq_io_stat(rq) &&
(blk_fs_request(rq) || blk_discard_rq(rq));
} }
#endif #endif
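The blk.h hunk documents and tightens the I/O accounting predicate: a request contributes to disk statistics only if it is attached to a gendisk, the queue had I/O stats enabled when the request was started, and it is a file system or discard request. A userspace model of that predicate; the structure and flags below are stand-ins, not kernel ABI:

/*
 * Userspace model of the blk_do_io_stat() predicate: all three
 * conditions must hold before a request is accounted.
 */
#include <stdbool.h>
#include <stdio.h>

struct fake_rq {
	bool has_disk;		/* attached to a gendisk */
	bool io_stat;		/* queue had iostats on when it started */
	bool is_fs;		/* file system request */
	bool is_discard;	/* discard request */
};

static bool do_io_stat(const struct fake_rq *rq)
{
	return rq->has_disk && rq->io_stat && (rq->is_fs || rq->is_discard);
}

int main(void)
{
	struct fake_rq pc_req = { true, true, false, false };
	struct fake_rq fs_req = { true, true, true, false };

	printf("BLOCK_PC request accounted: %d\n", do_io_stat(&pc_req));
	printf("FS request accounted:       %d\n", do_io_stat(&fs_req));
	return 0;
}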
...@@ -446,15 +446,15 @@ static int blk_complete_sgv4_hdr_rq(struct request *rq, struct sg_io_v4 *hdr, ...@@ -446,15 +446,15 @@ static int blk_complete_sgv4_hdr_rq(struct request *rq, struct sg_io_v4 *hdr,
} }
if (rq->next_rq) { if (rq->next_rq) {
hdr->dout_resid = rq->data_len; hdr->dout_resid = rq->resid_len;
hdr->din_resid = rq->next_rq->data_len; hdr->din_resid = rq->next_rq->resid_len;
blk_rq_unmap_user(bidi_bio); blk_rq_unmap_user(bidi_bio);
rq->next_rq->bio = NULL; rq->next_rq->bio = NULL;
blk_put_request(rq->next_rq); blk_put_request(rq->next_rq);
} else if (rq_data_dir(rq) == READ) } else if (rq_data_dir(rq) == READ)
hdr->din_resid = rq->data_len; hdr->din_resid = rq->resid_len;
else else
hdr->dout_resid = rq->data_len; hdr->dout_resid = rq->resid_len;
/* /*
* If the request generated a negative error number, return it * If the request generated a negative error number, return it
......
...@@ -349,8 +349,8 @@ cfq_choose_req(struct cfq_data *cfqd, struct request *rq1, struct request *rq2) ...@@ -349,8 +349,8 @@ cfq_choose_req(struct cfq_data *cfqd, struct request *rq1, struct request *rq2)
else if (rq_is_meta(rq2) && !rq_is_meta(rq1)) else if (rq_is_meta(rq2) && !rq_is_meta(rq1))
return rq2; return rq2;
s1 = rq1->sector; s1 = blk_rq_pos(rq1);
s2 = rq2->sector; s2 = blk_rq_pos(rq2);
last = cfqd->last_position; last = cfqd->last_position;
...@@ -579,9 +579,9 @@ cfq_prio_tree_lookup(struct cfq_data *cfqd, struct rb_root *root, ...@@ -579,9 +579,9 @@ cfq_prio_tree_lookup(struct cfq_data *cfqd, struct rb_root *root,
* Sort strictly based on sector. Smallest to the left, * Sort strictly based on sector. Smallest to the left,
* largest to the right. * largest to the right.
*/ */
if (sector > cfqq->next_rq->sector) if (sector > blk_rq_pos(cfqq->next_rq))
n = &(*p)->rb_right; n = &(*p)->rb_right;
else if (sector < cfqq->next_rq->sector) else if (sector < blk_rq_pos(cfqq->next_rq))
n = &(*p)->rb_left; n = &(*p)->rb_left;
else else
break; break;
...@@ -611,8 +611,8 @@ static void cfq_prio_tree_add(struct cfq_data *cfqd, struct cfq_queue *cfqq) ...@@ -611,8 +611,8 @@ static void cfq_prio_tree_add(struct cfq_data *cfqd, struct cfq_queue *cfqq)
return; return;
cfqq->p_root = &cfqd->prio_trees[cfqq->org_ioprio]; cfqq->p_root = &cfqd->prio_trees[cfqq->org_ioprio];
__cfqq = cfq_prio_tree_lookup(cfqd, cfqq->p_root, cfqq->next_rq->sector, __cfqq = cfq_prio_tree_lookup(cfqd, cfqq->p_root,
&parent, &p); blk_rq_pos(cfqq->next_rq), &parent, &p);
if (!__cfqq) { if (!__cfqq) {
rb_link_node(&cfqq->p_node, parent, p); rb_link_node(&cfqq->p_node, parent, p);
rb_insert_color(&cfqq->p_node, cfqq->p_root); rb_insert_color(&cfqq->p_node, cfqq->p_root);
...@@ -760,7 +760,7 @@ static void cfq_activate_request(struct request_queue *q, struct request *rq) ...@@ -760,7 +760,7 @@ static void cfq_activate_request(struct request_queue *q, struct request *rq)
cfq_log_cfqq(cfqd, RQ_CFQQ(rq), "activate rq, drv=%d", cfq_log_cfqq(cfqd, RQ_CFQQ(rq), "activate rq, drv=%d",
cfqd->rq_in_driver); cfqd->rq_in_driver);
cfqd->last_position = rq->hard_sector + rq->hard_nr_sectors; cfqd->last_position = blk_rq_pos(rq) + blk_rq_sectors(rq);
} }
static void cfq_deactivate_request(struct request_queue *q, struct request *rq) static void cfq_deactivate_request(struct request_queue *q, struct request *rq)
...@@ -949,10 +949,10 @@ static struct cfq_queue *cfq_set_active_queue(struct cfq_data *cfqd, ...@@ -949,10 +949,10 @@ static struct cfq_queue *cfq_set_active_queue(struct cfq_data *cfqd,
static inline sector_t cfq_dist_from_last(struct cfq_data *cfqd, static inline sector_t cfq_dist_from_last(struct cfq_data *cfqd,
struct request *rq) struct request *rq)
{ {
if (rq->sector >= cfqd->last_position) if (blk_rq_pos(rq) >= cfqd->last_position)
return rq->sector - cfqd->last_position; return blk_rq_pos(rq) - cfqd->last_position;
else else
return cfqd->last_position - rq->sector; return cfqd->last_position - blk_rq_pos(rq);
} }
#define CIC_SEEK_THR 8 * 1024 #define CIC_SEEK_THR 8 * 1024
...@@ -996,7 +996,7 @@ static struct cfq_queue *cfqq_close(struct cfq_data *cfqd, ...@@ -996,7 +996,7 @@ static struct cfq_queue *cfqq_close(struct cfq_data *cfqd,
if (cfq_rq_close(cfqd, __cfqq->next_rq)) if (cfq_rq_close(cfqd, __cfqq->next_rq))
return __cfqq; return __cfqq;
if (__cfqq->next_rq->sector < sector) if (blk_rq_pos(__cfqq->next_rq) < sector)
node = rb_next(&__cfqq->p_node); node = rb_next(&__cfqq->p_node);
else else
node = rb_prev(&__cfqq->p_node); node = rb_prev(&__cfqq->p_node);
...@@ -1282,7 +1282,7 @@ static void cfq_dispatch_request(struct cfq_data *cfqd, struct cfq_queue *cfqq) ...@@ -1282,7 +1282,7 @@ static void cfq_dispatch_request(struct cfq_data *cfqd, struct cfq_queue *cfqq)
if (!cfqd->active_cic) { if (!cfqd->active_cic) {
struct cfq_io_context *cic = RQ_CIC(rq); struct cfq_io_context *cic = RQ_CIC(rq);
atomic_inc(&cic->ioc->refcount); atomic_long_inc(&cic->ioc->refcount);
cfqd->active_cic = cic; cfqd->active_cic = cic;
} }
} }
...@@ -1918,10 +1918,10 @@ cfq_update_io_seektime(struct cfq_data *cfqd, struct cfq_io_context *cic, ...@@ -1918,10 +1918,10 @@ cfq_update_io_seektime(struct cfq_data *cfqd, struct cfq_io_context *cic,
if (!cic->last_request_pos) if (!cic->last_request_pos)
sdist = 0; sdist = 0;
else if (cic->last_request_pos < rq->sector) else if (cic->last_request_pos < blk_rq_pos(rq))
sdist = rq->sector - cic->last_request_pos; sdist = blk_rq_pos(rq) - cic->last_request_pos;
else else
sdist = cic->last_request_pos - rq->sector; sdist = cic->last_request_pos - blk_rq_pos(rq);
/* /*
* Don't allow the seek distance to get too large from the * Don't allow the seek distance to get too large from the
...@@ -2071,7 +2071,7 @@ cfq_rq_enqueued(struct cfq_data *cfqd, struct cfq_queue *cfqq, ...@@ -2071,7 +2071,7 @@ cfq_rq_enqueued(struct cfq_data *cfqd, struct cfq_queue *cfqq,
cfq_update_io_seektime(cfqd, cic, rq); cfq_update_io_seektime(cfqd, cic, rq);
cfq_update_idle_window(cfqd, cfqq, cic); cfq_update_idle_window(cfqd, cfqq, cic);
cic->last_request_pos = rq->sector + rq->nr_sectors; cic->last_request_pos = blk_rq_pos(rq) + blk_rq_sectors(rq);
if (cfqq == cfqd->active_queue) { if (cfqq == cfqd->active_queue) {
/* /*
...@@ -2088,7 +2088,7 @@ cfq_rq_enqueued(struct cfq_data *cfqd, struct cfq_queue *cfqq, ...@@ -2088,7 +2088,7 @@ cfq_rq_enqueued(struct cfq_data *cfqd, struct cfq_queue *cfqq,
if (blk_rq_bytes(rq) > PAGE_CACHE_SIZE || if (blk_rq_bytes(rq) > PAGE_CACHE_SIZE ||
cfqd->busy_queues > 1) { cfqd->busy_queues > 1) {
del_timer(&cfqd->idle_slice_timer); del_timer(&cfqd->idle_slice_timer);
blk_start_queueing(cfqd->queue); __blk_run_queue(cfqd->queue);
} }
cfq_mark_cfqq_must_dispatch(cfqq); cfq_mark_cfqq_must_dispatch(cfqq);
} }
...@@ -2100,7 +2100,7 @@ cfq_rq_enqueued(struct cfq_data *cfqd, struct cfq_queue *cfqq, ...@@ -2100,7 +2100,7 @@ cfq_rq_enqueued(struct cfq_data *cfqd, struct cfq_queue *cfqq,
* this new queue is RT and the current one is BE * this new queue is RT and the current one is BE
*/ */
cfq_preempt_queue(cfqd, cfqq); cfq_preempt_queue(cfqd, cfqq);
blk_start_queueing(cfqd->queue); __blk_run_queue(cfqd->queue);
} }
} }
...@@ -2345,7 +2345,7 @@ static void cfq_kick_queue(struct work_struct *work) ...@@ -2345,7 +2345,7 @@ static void cfq_kick_queue(struct work_struct *work)
struct request_queue *q = cfqd->queue; struct request_queue *q = cfqd->queue;
spin_lock_irq(q->queue_lock); spin_lock_irq(q->queue_lock);
blk_start_queueing(q); __blk_run_queue(cfqd->queue);
spin_unlock_irq(q->queue_lock); spin_unlock_irq(q->queue_lock);
} }
......
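The cfq-iosched.c hunks convert direct rq->sector / rq->nr_sectors accesses to the blk_rq_pos() and blk_rq_sectors() accessors, and positions such as last_position are now advanced by blk_rq_pos(rq) + blk_rq_sectors(rq). The seek-distance helper then reduces to an absolute difference; here is a userspace model of it, with a stand-in struct and accessor:

/*
 * Userspace model of cfq_dist_from_last() after the accessor conversion:
 * the seek distance is the absolute difference between the request's
 * start sector and the last dispatched position.  The struct and the
 * accessor below are stand-ins, not the kernel API.
 */
#include <stdio.h>

typedef unsigned long long sector_t;

struct fake_rq { sector_t pos; };

static sector_t rq_pos(const struct fake_rq *rq)
{
	return rq->pos;
}

static sector_t dist_from_last(sector_t last_position, const struct fake_rq *rq)
{
	if (rq_pos(rq) >= last_position)
		return rq_pos(rq) - last_position;
	return last_position - rq_pos(rq);
}

int main(void)
{
	struct fake_rq rq = { .pos = 2048 };

	printf("distance = %llu\n", dist_from_last(4096, &rq));
	return 0;
}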
...@@ -763,10 +763,10 @@ long compat_blkdev_ioctl(struct file *file, unsigned cmd, unsigned long arg) ...@@ -763,10 +763,10 @@ long compat_blkdev_ioctl(struct file *file, unsigned cmd, unsigned long arg)
case BLKBSZGET_32: /* get the logical block size (cf. BLKSSZGET) */ case BLKBSZGET_32: /* get the logical block size (cf. BLKSSZGET) */
return compat_put_int(arg, block_size(bdev)); return compat_put_int(arg, block_size(bdev));
case BLKSSZGET: /* get block device hardware sector size */ case BLKSSZGET: /* get block device hardware sector size */
return compat_put_int(arg, bdev_hardsect_size(bdev)); return compat_put_int(arg, bdev_logical_block_size(bdev));
case BLKSECTGET: case BLKSECTGET:
return compat_put_ushort(arg, return compat_put_ushort(arg,
bdev_get_queue(bdev)->max_sectors); queue_max_sectors(bdev_get_queue(bdev)));
case BLKRASET: /* compatible, but no compat_ptr (!) */ case BLKRASET: /* compatible, but no compat_ptr (!) */
case BLKFRASET: case BLKFRASET:
if (!capable(CAP_SYS_ADMIN)) if (!capable(CAP_SYS_ADMIN))
......
...@@ -138,7 +138,7 @@ deadline_merge(struct request_queue *q, struct request **req, struct bio *bio) ...@@ -138,7 +138,7 @@ deadline_merge(struct request_queue *q, struct request **req, struct bio *bio)
__rq = elv_rb_find(&dd->sort_list[bio_data_dir(bio)], sector); __rq = elv_rb_find(&dd->sort_list[bio_data_dir(bio)], sector);
if (__rq) { if (__rq) {
BUG_ON(sector != __rq->sector); BUG_ON(sector != blk_rq_pos(__rq));
if (elv_rq_merge_ok(__rq, bio)) { if (elv_rq_merge_ok(__rq, bio)) {
ret = ELEVATOR_FRONT_MERGE; ret = ELEVATOR_FRONT_MERGE;
......
...@@ -51,8 +51,7 @@ static const int elv_hash_shift = 6; ...@@ -51,8 +51,7 @@ static const int elv_hash_shift = 6;
#define ELV_HASH_FN(sec) \ #define ELV_HASH_FN(sec) \
(hash_long(ELV_HASH_BLOCK((sec)), elv_hash_shift)) (hash_long(ELV_HASH_BLOCK((sec)), elv_hash_shift))
#define ELV_HASH_ENTRIES (1 << elv_hash_shift) #define ELV_HASH_ENTRIES (1 << elv_hash_shift)
#define rq_hash_key(rq) ((rq)->sector + (rq)->nr_sectors) #define rq_hash_key(rq) (blk_rq_pos(rq) + blk_rq_sectors(rq))
#define ELV_ON_HASH(rq) (!hlist_unhashed(&(rq)->hash))
/* /*
* Query io scheduler to see if the current process issuing bio may be * Query io scheduler to see if the current process issuing bio may be
...@@ -116,9 +115,9 @@ static inline int elv_try_merge(struct request *__rq, struct bio *bio) ...@@ -116,9 +115,9 @@ static inline int elv_try_merge(struct request *__rq, struct bio *bio)
* we can merge and sequence is ok, check if it's possible * we can merge and sequence is ok, check if it's possible
*/ */
if (elv_rq_merge_ok(__rq, bio)) { if (elv_rq_merge_ok(__rq, bio)) {
if (__rq->sector + __rq->nr_sectors == bio->bi_sector) if (blk_rq_pos(__rq) + blk_rq_sectors(__rq) == bio->bi_sector)
ret = ELEVATOR_BACK_MERGE; ret = ELEVATOR_BACK_MERGE;
else if (__rq->sector - bio_sectors(bio) == bio->bi_sector) else if (blk_rq_pos(__rq) - bio_sectors(bio) == bio->bi_sector)
ret = ELEVATOR_FRONT_MERGE; ret = ELEVATOR_FRONT_MERGE;
} }
...@@ -306,22 +305,6 @@ void elevator_exit(struct elevator_queue *e) ...@@ -306,22 +305,6 @@ void elevator_exit(struct elevator_queue *e)
} }
EXPORT_SYMBOL(elevator_exit); EXPORT_SYMBOL(elevator_exit);
static void elv_activate_rq(struct request_queue *q, struct request *rq)
{
struct elevator_queue *e = q->elevator;
if (e->ops->elevator_activate_req_fn)
e->ops->elevator_activate_req_fn(q, rq);
}
static void elv_deactivate_rq(struct request_queue *q, struct request *rq)
{
struct elevator_queue *e = q->elevator;
if (e->ops->elevator_deactivate_req_fn)
e->ops->elevator_deactivate_req_fn(q, rq);
}
static inline void __elv_rqhash_del(struct request *rq) static inline void __elv_rqhash_del(struct request *rq)
{ {
hlist_del_init(&rq->hash); hlist_del_init(&rq->hash);
...@@ -383,9 +366,9 @@ struct request *elv_rb_add(struct rb_root *root, struct request *rq) ...@@ -383,9 +366,9 @@ struct request *elv_rb_add(struct rb_root *root, struct request *rq)
parent = *p; parent = *p;
__rq = rb_entry(parent, struct request, rb_node); __rq = rb_entry(parent, struct request, rb_node);
if (rq->sector < __rq->sector) if (blk_rq_pos(rq) < blk_rq_pos(__rq))
p = &(*p)->rb_left; p = &(*p)->rb_left;
else if (rq->sector > __rq->sector) else if (blk_rq_pos(rq) > blk_rq_pos(__rq))
p = &(*p)->rb_right; p = &(*p)->rb_right;
else else
return __rq; return __rq;
...@@ -413,9 +396,9 @@ struct request *elv_rb_find(struct rb_root *root, sector_t sector) ...@@ -413,9 +396,9 @@ struct request *elv_rb_find(struct rb_root *root, sector_t sector)
while (n) { while (n) {
rq = rb_entry(n, struct request, rb_node); rq = rb_entry(n, struct request, rb_node);
if (sector < rq->sector) if (sector < blk_rq_pos(rq))
n = n->rb_left; n = n->rb_left;
else if (sector > rq->sector) else if (sector > blk_rq_pos(rq))
n = n->rb_right; n = n->rb_right;
else else
return rq; return rq;
...@@ -454,14 +437,14 @@ void elv_dispatch_sort(struct request_queue *q, struct request *rq) ...@@ -454,14 +437,14 @@ void elv_dispatch_sort(struct request_queue *q, struct request *rq)
break; break;
if (pos->cmd_flags & stop_flags) if (pos->cmd_flags & stop_flags)
break; break;
if (rq->sector >= boundary) { if (blk_rq_pos(rq) >= boundary) {
if (pos->sector < boundary) if (blk_rq_pos(pos) < boundary)
continue; continue;
} else { } else {
if (pos->sector >= boundary) if (blk_rq_pos(pos) >= boundary)
break; break;
} }
if (rq->sector >= pos->sector) if (blk_rq_pos(rq) >= blk_rq_pos(pos))
break; break;
} }
...@@ -559,7 +542,7 @@ void elv_requeue_request(struct request_queue *q, struct request *rq) ...@@ -559,7 +542,7 @@ void elv_requeue_request(struct request_queue *q, struct request *rq)
* in_flight count again * in_flight count again
*/ */
if (blk_account_rq(rq)) { if (blk_account_rq(rq)) {
q->in_flight--; q->in_flight[rq_is_sync(rq)]--;
if (blk_sorted_rq(rq)) if (blk_sorted_rq(rq))
elv_deactivate_rq(q, rq); elv_deactivate_rq(q, rq);
} }
...@@ -588,6 +571,9 @@ void elv_drain_elevator(struct request_queue *q) ...@@ -588,6 +571,9 @@ void elv_drain_elevator(struct request_queue *q)
*/ */
void elv_quiesce_start(struct request_queue *q) void elv_quiesce_start(struct request_queue *q)
{ {
if (!q->elevator)
return;
queue_flag_set(QUEUE_FLAG_ELVSWITCH, q); queue_flag_set(QUEUE_FLAG_ELVSWITCH, q);
/* /*
...@@ -595,7 +581,7 @@ void elv_quiesce_start(struct request_queue *q) ...@@ -595,7 +581,7 @@ void elv_quiesce_start(struct request_queue *q)
*/ */
elv_drain_elevator(q); elv_drain_elevator(q);
while (q->rq.elvpriv) { while (q->rq.elvpriv) {
blk_start_queueing(q); __blk_run_queue(q);
spin_unlock_irq(q->queue_lock); spin_unlock_irq(q->queue_lock);
msleep(10); msleep(10);
spin_lock_irq(q->queue_lock); spin_lock_irq(q->queue_lock);
...@@ -639,8 +625,7 @@ void elv_insert(struct request_queue *q, struct request *rq, int where) ...@@ -639,8 +625,7 @@ void elv_insert(struct request_queue *q, struct request *rq, int where)
* with anything. There's no point in delaying queue * with anything. There's no point in delaying queue
* processing. * processing.
*/ */
blk_remove_plug(q); __blk_run_queue(q);
blk_start_queueing(q);
break; break;
case ELEVATOR_INSERT_SORT: case ELEVATOR_INSERT_SORT:
...@@ -699,7 +684,7 @@ void elv_insert(struct request_queue *q, struct request *rq, int where) ...@@ -699,7 +684,7 @@ void elv_insert(struct request_queue *q, struct request *rq, int where)
if (unplug_it && blk_queue_plugged(q)) { if (unplug_it && blk_queue_plugged(q)) {
int nrq = q->rq.count[BLK_RW_SYNC] + q->rq.count[BLK_RW_ASYNC] int nrq = q->rq.count[BLK_RW_SYNC] + q->rq.count[BLK_RW_ASYNC]
- q->in_flight; - queue_in_flight(q);
if (nrq >= q->unplug_thresh) if (nrq >= q->unplug_thresh)
__generic_unplug_device(q); __generic_unplug_device(q);
...@@ -755,117 +740,6 @@ void elv_add_request(struct request_queue *q, struct request *rq, int where, ...@@ -755,117 +740,6 @@ void elv_add_request(struct request_queue *q, struct request *rq, int where,
} }
EXPORT_SYMBOL(elv_add_request); EXPORT_SYMBOL(elv_add_request);
static inline struct request *__elv_next_request(struct request_queue *q)
{
struct request *rq;
while (1) {
while (!list_empty(&q->queue_head)) {
rq = list_entry_rq(q->queue_head.next);
if (blk_do_ordered(q, &rq))
return rq;
}
if (!q->elevator->ops->elevator_dispatch_fn(q, 0))
return NULL;
}
}
struct request *elv_next_request(struct request_queue *q)
{
struct request *rq;
int ret;
while ((rq = __elv_next_request(q)) != NULL) {
if (!(rq->cmd_flags & REQ_STARTED)) {
/*
* This is the first time the device driver
* sees this request (possibly after
* requeueing). Notify IO scheduler.
*/
if (blk_sorted_rq(rq))
elv_activate_rq(q, rq);
/*
* just mark as started even if we don't start
* it, a request that has been delayed should
* not be passed by new incoming requests
*/
rq->cmd_flags |= REQ_STARTED;
trace_block_rq_issue(q, rq);
}
if (!q->boundary_rq || q->boundary_rq == rq) {
q->end_sector = rq_end_sector(rq);
q->boundary_rq = NULL;
}
if (rq->cmd_flags & REQ_DONTPREP)
break;
if (q->dma_drain_size && rq->data_len) {
/*
* make sure space for the drain appears we
* know we can do this because max_hw_segments
* has been adjusted to be one fewer than the
* device can handle
*/
rq->nr_phys_segments++;
}
if (!q->prep_rq_fn)
break;
ret = q->prep_rq_fn(q, rq);
if (ret == BLKPREP_OK) {
break;
} else if (ret == BLKPREP_DEFER) {
/*
* the request may have been (partially) prepped.
* we need to keep this request in the front to
* avoid resource deadlock. REQ_STARTED will
* prevent other fs requests from passing this one.
*/
if (q->dma_drain_size && rq->data_len &&
!(rq->cmd_flags & REQ_DONTPREP)) {
/*
* remove the space for the drain we added
* so that we don't add it again
*/
--rq->nr_phys_segments;
}
rq = NULL;
break;
} else if (ret == BLKPREP_KILL) {
rq->cmd_flags |= REQ_QUIET;
__blk_end_request(rq, -EIO, blk_rq_bytes(rq));
} else {
printk(KERN_ERR "%s: bad return=%d\n", __func__, ret);
break;
}
}
return rq;
}
EXPORT_SYMBOL(elv_next_request);
void elv_dequeue_request(struct request_queue *q, struct request *rq)
{
BUG_ON(list_empty(&rq->queuelist));
BUG_ON(ELV_ON_HASH(rq));
list_del_init(&rq->queuelist);
/*
* the time frame between a request being removed from the lists
* and to it is freed is accounted as io that is in progress at
* the driver side.
*/
if (blk_account_rq(rq))
q->in_flight++;
}
int elv_queue_empty(struct request_queue *q) int elv_queue_empty(struct request_queue *q)
{ {
struct elevator_queue *e = q->elevator; struct elevator_queue *e = q->elevator;
...@@ -935,7 +809,12 @@ void elv_abort_queue(struct request_queue *q) ...@@ -935,7 +809,12 @@ void elv_abort_queue(struct request_queue *q)
rq = list_entry_rq(q->queue_head.next); rq = list_entry_rq(q->queue_head.next);
rq->cmd_flags |= REQ_QUIET; rq->cmd_flags |= REQ_QUIET;
trace_block_rq_abort(q, rq); trace_block_rq_abort(q, rq);
__blk_end_request(rq, -EIO, blk_rq_bytes(rq)); /*
* Mark this request as started so we don't trigger
* any debug logic in the end I/O path.
*/
blk_start_request(rq);
__blk_end_request_all(rq, -EIO);
} }
} }
EXPORT_SYMBOL(elv_abort_queue); EXPORT_SYMBOL(elv_abort_queue);
...@@ -948,7 +827,7 @@ void elv_completed_request(struct request_queue *q, struct request *rq) ...@@ -948,7 +827,7 @@ void elv_completed_request(struct request_queue *q, struct request *rq)
* request is released from the driver, io must be done * request is released from the driver, io must be done
*/ */
if (blk_account_rq(rq)) { if (blk_account_rq(rq)) {
q->in_flight--; q->in_flight[rq_is_sync(rq)]--;
if (blk_sorted_rq(rq) && e->ops->elevator_completed_req_fn) if (blk_sorted_rq(rq) && e->ops->elevator_completed_req_fn)
e->ops->elevator_completed_req_fn(q, rq); e->ops->elevator_completed_req_fn(q, rq);
} }
...@@ -963,11 +842,11 @@ void elv_completed_request(struct request_queue *q, struct request *rq) ...@@ -963,11 +842,11 @@ void elv_completed_request(struct request_queue *q, struct request *rq)
if (!list_empty(&q->queue_head)) if (!list_empty(&q->queue_head))
next = list_entry_rq(q->queue_head.next); next = list_entry_rq(q->queue_head.next);
if (!q->in_flight && if (!queue_in_flight(q) &&
blk_ordered_cur_seq(q) == QUEUE_ORDSEQ_DRAIN && blk_ordered_cur_seq(q) == QUEUE_ORDSEQ_DRAIN &&
(!next || blk_ordered_req_seq(next) > QUEUE_ORDSEQ_DRAIN)) { (!next || blk_ordered_req_seq(next) > QUEUE_ORDSEQ_DRAIN)) {
blk_ordered_complete_seq(q, QUEUE_ORDSEQ_DRAIN, 0); blk_ordered_complete_seq(q, QUEUE_ORDSEQ_DRAIN, 0);
blk_start_queueing(q); __blk_run_queue(q);
} }
} }
} }
...@@ -1175,6 +1054,9 @@ ssize_t elv_iosched_store(struct request_queue *q, const char *name, ...@@ -1175,6 +1054,9 @@ ssize_t elv_iosched_store(struct request_queue *q, const char *name,
char elevator_name[ELV_NAME_MAX]; char elevator_name[ELV_NAME_MAX];
struct elevator_type *e; struct elevator_type *e;
if (!q->elevator)
return count;
strlcpy(elevator_name, name, sizeof(elevator_name)); strlcpy(elevator_name, name, sizeof(elevator_name));
strstrip(elevator_name); strstrip(elevator_name);
...@@ -1198,10 +1080,15 @@ ssize_t elv_iosched_store(struct request_queue *q, const char *name, ...@@ -1198,10 +1080,15 @@ ssize_t elv_iosched_store(struct request_queue *q, const char *name,
ssize_t elv_iosched_show(struct request_queue *q, char *name) ssize_t elv_iosched_show(struct request_queue *q, char *name)
{ {
struct elevator_queue *e = q->elevator; struct elevator_queue *e = q->elevator;
struct elevator_type *elv = e->elevator_type; struct elevator_type *elv;
struct elevator_type *__e; struct elevator_type *__e;
int len = 0; int len = 0;
if (!q->elevator)
return sprintf(name, "none\n");
elv = e->elevator_type;
spin_lock(&elv_list_lock); spin_lock(&elv_list_lock);
list_for_each_entry(__e, &elv_list, list) { list_for_each_entry(__e, &elv_list, list) {
if (!strcmp(elv->elevator_name, __e->elevator_name)) if (!strcmp(elv->elevator_name, __e->elevator_name))
......
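Throughout elevator.c the single q->in_flight counter becomes a two-element array indexed by rq_is_sync(rq), and places that still need a total (the unplug threshold, the barrier drain check) use queue_in_flight() to sum both classes. A tiny userspace sketch of that accounting; the structure and helpers are stand-ins:

/*
 * Sketch of the split in-flight accounting: one counter per sync/async
 * class, plus a helper that sums both the way queue_in_flight() does.
 */
#include <stdbool.h>
#include <stdio.h>

struct fake_queue { unsigned int in_flight[2]; };

static void account_start(struct fake_queue *q, bool is_sync)
{
	q->in_flight[is_sync]++;
}

static void account_done(struct fake_queue *q, bool is_sync)
{
	q->in_flight[is_sync]--;
}

static unsigned int total_in_flight(const struct fake_queue *q)
{
	return q->in_flight[0] + q->in_flight[1];
}

int main(void)
{
	struct fake_queue q = { { 0, 0 } };

	account_start(&q, true);
	account_start(&q, false);
	account_done(&q, false);
	printf("in flight: %u\n", total_in_flight(&q));
	return 0;
}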
...@@ -852,11 +852,21 @@ static ssize_t disk_capability_show(struct device *dev, ...@@ -852,11 +852,21 @@ static ssize_t disk_capability_show(struct device *dev,
return sprintf(buf, "%x\n", disk->flags); return sprintf(buf, "%x\n", disk->flags);
} }
static ssize_t disk_alignment_offset_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
struct gendisk *disk = dev_to_disk(dev);
return sprintf(buf, "%d\n", queue_alignment_offset(disk->queue));
}
static DEVICE_ATTR(range, S_IRUGO, disk_range_show, NULL); static DEVICE_ATTR(range, S_IRUGO, disk_range_show, NULL);
static DEVICE_ATTR(ext_range, S_IRUGO, disk_ext_range_show, NULL); static DEVICE_ATTR(ext_range, S_IRUGO, disk_ext_range_show, NULL);
static DEVICE_ATTR(removable, S_IRUGO, disk_removable_show, NULL); static DEVICE_ATTR(removable, S_IRUGO, disk_removable_show, NULL);
static DEVICE_ATTR(ro, S_IRUGO, disk_ro_show, NULL); static DEVICE_ATTR(ro, S_IRUGO, disk_ro_show, NULL);
static DEVICE_ATTR(size, S_IRUGO, part_size_show, NULL); static DEVICE_ATTR(size, S_IRUGO, part_size_show, NULL);
static DEVICE_ATTR(alignment_offset, S_IRUGO, disk_alignment_offset_show, NULL);
static DEVICE_ATTR(capability, S_IRUGO, disk_capability_show, NULL); static DEVICE_ATTR(capability, S_IRUGO, disk_capability_show, NULL);
static DEVICE_ATTR(stat, S_IRUGO, part_stat_show, NULL); static DEVICE_ATTR(stat, S_IRUGO, part_stat_show, NULL);
#ifdef CONFIG_FAIL_MAKE_REQUEST #ifdef CONFIG_FAIL_MAKE_REQUEST
...@@ -875,6 +885,7 @@ static struct attribute *disk_attrs[] = { ...@@ -875,6 +885,7 @@ static struct attribute *disk_attrs[] = {
&dev_attr_removable.attr, &dev_attr_removable.attr,
&dev_attr_ro.attr, &dev_attr_ro.attr,
&dev_attr_size.attr, &dev_attr_size.attr,
&dev_attr_alignment_offset.attr,
&dev_attr_capability.attr, &dev_attr_capability.attr,
&dev_attr_stat.attr, &dev_attr_stat.attr,
#ifdef CONFIG_FAIL_MAKE_REQUEST #ifdef CONFIG_FAIL_MAKE_REQUEST
......
...@@ -152,10 +152,10 @@ static int blk_ioctl_discard(struct block_device *bdev, uint64_t start, ...@@ -152,10 +152,10 @@ static int blk_ioctl_discard(struct block_device *bdev, uint64_t start,
bio->bi_private = &wait; bio->bi_private = &wait;
bio->bi_sector = start; bio->bi_sector = start;
if (len > q->max_hw_sectors) { if (len > queue_max_hw_sectors(q)) {
bio->bi_size = q->max_hw_sectors << 9; bio->bi_size = queue_max_hw_sectors(q) << 9;
len -= q->max_hw_sectors; len -= queue_max_hw_sectors(q);
start += q->max_hw_sectors; start += queue_max_hw_sectors(q);
} else { } else {
bio->bi_size = len << 9; bio->bi_size = len << 9;
len = 0; len = 0;
...@@ -311,9 +311,9 @@ int blkdev_ioctl(struct block_device *bdev, fmode_t mode, unsigned cmd, ...@@ -311,9 +311,9 @@ int blkdev_ioctl(struct block_device *bdev, fmode_t mode, unsigned cmd,
case BLKBSZGET: /* get the logical block size (cf. BLKSSZGET) */ case BLKBSZGET: /* get the logical block size (cf. BLKSSZGET) */
return put_int(arg, block_size(bdev)); return put_int(arg, block_size(bdev));
case BLKSSZGET: /* get block device hardware sector size */ case BLKSSZGET: /* get block device hardware sector size */
return put_int(arg, bdev_hardsect_size(bdev)); return put_int(arg, bdev_logical_block_size(bdev));
case BLKSECTGET: case BLKSECTGET:
return put_ushort(arg, bdev_get_queue(bdev)->max_sectors); return put_ushort(arg, queue_max_sectors(bdev_get_queue(bdev)));
case BLKRASET: case BLKRASET:
case BLKFRASET: case BLKFRASET:
if(!capable(CAP_SYS_ADMIN)) if(!capable(CAP_SYS_ADMIN))
......
...@@ -75,7 +75,7 @@ static int sg_set_timeout(struct request_queue *q, int __user *p) ...@@ -75,7 +75,7 @@ static int sg_set_timeout(struct request_queue *q, int __user *p)
static int sg_get_reserved_size(struct request_queue *q, int __user *p) static int sg_get_reserved_size(struct request_queue *q, int __user *p)
{ {
unsigned val = min(q->sg_reserved_size, q->max_sectors << 9); unsigned val = min(q->sg_reserved_size, queue_max_sectors(q) << 9);
return put_user(val, p); return put_user(val, p);
} }
...@@ -89,8 +89,8 @@ static int sg_set_reserved_size(struct request_queue *q, int __user *p) ...@@ -89,8 +89,8 @@ static int sg_set_reserved_size(struct request_queue *q, int __user *p)
if (size < 0) if (size < 0)
return -EINVAL; return -EINVAL;
if (size > (q->max_sectors << 9)) if (size > (queue_max_sectors(q) << 9))
size = q->max_sectors << 9; size = queue_max_sectors(q) << 9;
q->sg_reserved_size = size; q->sg_reserved_size = size;
return 0; return 0;
...@@ -230,7 +230,7 @@ static int blk_complete_sghdr_rq(struct request *rq, struct sg_io_hdr *hdr, ...@@ -230,7 +230,7 @@ static int blk_complete_sghdr_rq(struct request *rq, struct sg_io_hdr *hdr,
hdr->info = 0; hdr->info = 0;
if (hdr->masked_status || hdr->host_status || hdr->driver_status) if (hdr->masked_status || hdr->host_status || hdr->driver_status)
hdr->info |= SG_INFO_CHECK; hdr->info |= SG_INFO_CHECK;
hdr->resid = rq->data_len; hdr->resid = rq->resid_len;
hdr->sb_len_wr = 0; hdr->sb_len_wr = 0;
if (rq->sense_len && hdr->sbp) { if (rq->sense_len && hdr->sbp) {
...@@ -264,7 +264,7 @@ static int sg_io(struct request_queue *q, struct gendisk *bd_disk, ...@@ -264,7 +264,7 @@ static int sg_io(struct request_queue *q, struct gendisk *bd_disk,
if (hdr->cmd_len > BLK_MAX_CDB) if (hdr->cmd_len > BLK_MAX_CDB)
return -EINVAL; return -EINVAL;
if (hdr->dxfer_len > (q->max_hw_sectors << 9)) if (hdr->dxfer_len > (queue_max_hw_sectors(q) << 9))
return -EIO; return -EIO;
if (hdr->dxfer_len) if (hdr->dxfer_len)
...@@ -500,9 +500,6 @@ static int __blk_send_generic(struct request_queue *q, struct gendisk *bd_disk, ...@@ -500,9 +500,6 @@ static int __blk_send_generic(struct request_queue *q, struct gendisk *bd_disk,
rq = blk_get_request(q, WRITE, __GFP_WAIT); rq = blk_get_request(q, WRITE, __GFP_WAIT);
rq->cmd_type = REQ_TYPE_BLOCK_PC; rq->cmd_type = REQ_TYPE_BLOCK_PC;
rq->data = NULL;
rq->data_len = 0;
rq->extra_len = 0;
rq->timeout = BLK_DEFAULT_SG_TIMEOUT; rq->timeout = BLK_DEFAULT_SG_TIMEOUT;
rq->cmd[0] = cmd; rq->cmd[0] = cmd;
rq->cmd[4] = data; rq->cmd[4] = data;
......
...@@ -1084,7 +1084,7 @@ static int atapi_drain_needed(struct request *rq) ...@@ -1084,7 +1084,7 @@ static int atapi_drain_needed(struct request *rq)
if (likely(!blk_pc_request(rq))) if (likely(!blk_pc_request(rq)))
return 0; return 0;
if (!rq->data_len || (rq->cmd_flags & REQ_RW)) if (!blk_rq_bytes(rq) || (rq->cmd_flags & REQ_RW))
return 0; return 0;
return atapi_cmd_type(rq->cmd[0]) == ATAPI_MISC; return atapi_cmd_type(rq->cmd[0]) == ATAPI_MISC;
......
...@@ -3321,7 +3321,7 @@ static int DAC960_process_queue(DAC960_Controller_T *Controller, struct request_ ...@@ -3321,7 +3321,7 @@ static int DAC960_process_queue(DAC960_Controller_T *Controller, struct request_
DAC960_Command_T *Command; DAC960_Command_T *Command;
while(1) { while(1) {
Request = elv_next_request(req_q); Request = blk_peek_request(req_q);
if (!Request) if (!Request)
return 1; return 1;
...@@ -3338,10 +3338,10 @@ static int DAC960_process_queue(DAC960_Controller_T *Controller, struct request_ ...@@ -3338,10 +3338,10 @@ static int DAC960_process_queue(DAC960_Controller_T *Controller, struct request_
} }
Command->Completion = Request->end_io_data; Command->Completion = Request->end_io_data;
Command->LogicalDriveNumber = (long)Request->rq_disk->private_data; Command->LogicalDriveNumber = (long)Request->rq_disk->private_data;
Command->BlockNumber = Request->sector; Command->BlockNumber = blk_rq_pos(Request);
Command->BlockCount = Request->nr_sectors; Command->BlockCount = blk_rq_sectors(Request);
Command->Request = Request; Command->Request = Request;
blkdev_dequeue_request(Request); blk_start_request(Request);
Command->SegmentCount = blk_rq_map_sg(req_q, Command->SegmentCount = blk_rq_map_sg(req_q,
Command->Request, Command->cmd_sglist); Command->Request, Command->cmd_sglist);
/* pci_map_sg MAY change the value of SegCount */ /* pci_map_sg MAY change the value of SegCount */
...@@ -3431,7 +3431,7 @@ static void DAC960_queue_partial_rw(DAC960_Command_T *Command) ...@@ -3431,7 +3431,7 @@ static void DAC960_queue_partial_rw(DAC960_Command_T *Command)
* successfully as possible. * successfully as possible.
*/ */
Command->SegmentCount = 1; Command->SegmentCount = 1;
Command->BlockNumber = Request->sector; Command->BlockNumber = blk_rq_pos(Request);
Command->BlockCount = 1; Command->BlockCount = 1;
DAC960_QueueReadWriteCommand(Command); DAC960_QueueReadWriteCommand(Command);
return; return;
......
...@@ -412,7 +412,7 @@ config ATA_OVER_ETH ...@@ -412,7 +412,7 @@ config ATA_OVER_ETH
config MG_DISK config MG_DISK
tristate "mGine mflash, gflash support" tristate "mGine mflash, gflash support"
depends on ARM && ATA && GPIOLIB depends on ARM && GPIOLIB
help help
mGine mFlash(gFlash) block device driver mGine mFlash(gFlash) block device driver
......
...@@ -112,8 +112,6 @@ module_param(fd_def_df0, ulong, 0); ...@@ -112,8 +112,6 @@ module_param(fd_def_df0, ulong, 0);
MODULE_LICENSE("GPL"); MODULE_LICENSE("GPL");
static struct request_queue *floppy_queue; static struct request_queue *floppy_queue;
#define QUEUE (floppy_queue)
#define CURRENT elv_next_request(floppy_queue)
/* /*
* Macros * Macros
...@@ -1335,64 +1333,60 @@ static int get_track(int drive, int track) ...@@ -1335,64 +1333,60 @@ static int get_track(int drive, int track)
static void redo_fd_request(void) static void redo_fd_request(void)
{ {
struct request *rq;
unsigned int cnt, block, track, sector; unsigned int cnt, block, track, sector;
int drive; int drive;
struct amiga_floppy_struct *floppy; struct amiga_floppy_struct *floppy;
char *data; char *data;
unsigned long flags; unsigned long flags;
int err;
repeat: next_req:
if (!CURRENT) { rq = blk_fetch_request(floppy_queue);
if (!rq) {
/* Nothing left to do */ /* Nothing left to do */
return; return;
} }
floppy = CURRENT->rq_disk->private_data; floppy = rq->rq_disk->private_data;
drive = floppy - unit; drive = floppy - unit;
next_segment:
/* Here someone could investigate to be more efficient */ /* Here someone could investigate to be more efficient */
for (cnt = 0; cnt < CURRENT->current_nr_sectors; cnt++) { for (cnt = 0, err = 0; cnt < blk_rq_cur_sectors(rq); cnt++) {
#ifdef DEBUG #ifdef DEBUG
printk("fd: sector %ld + %d requested for %s\n", printk("fd: sector %ld + %d requested for %s\n",
CURRENT->sector,cnt, blk_rq_pos(rq), cnt,
(rq_data_dir(CURRENT) == READ) ? "read" : "write"); (rq_data_dir(rq) == READ) ? "read" : "write");
#endif #endif
block = CURRENT->sector + cnt; block = blk_rq_pos(rq) + cnt;
if ((int)block > floppy->blocks) { if ((int)block > floppy->blocks) {
end_request(CURRENT, 0); err = -EIO;
goto repeat; break;
} }
track = block / (floppy->dtype->sects * floppy->type->sect_mult); track = block / (floppy->dtype->sects * floppy->type->sect_mult);
sector = block % (floppy->dtype->sects * floppy->type->sect_mult); sector = block % (floppy->dtype->sects * floppy->type->sect_mult);
data = CURRENT->buffer + 512 * cnt; data = rq->buffer + 512 * cnt;
#ifdef DEBUG #ifdef DEBUG
printk("access to track %d, sector %d, with buffer at " printk("access to track %d, sector %d, with buffer at "
"0x%08lx\n", track, sector, data); "0x%08lx\n", track, sector, data);
#endif #endif
if ((rq_data_dir(CURRENT) != READ) && (rq_data_dir(CURRENT) != WRITE)) {
printk(KERN_WARNING "do_fd_request: unknown command\n");
end_request(CURRENT, 0);
goto repeat;
}
if (get_track(drive, track) == -1) { if (get_track(drive, track) == -1) {
end_request(CURRENT, 0); err = -EIO;
goto repeat; break;
} }
switch (rq_data_dir(CURRENT)) { if (rq_data_dir(rq) == READ) {
case READ:
memcpy(data, floppy->trackbuf + sector * 512, 512); memcpy(data, floppy->trackbuf + sector * 512, 512);
break; } else {
case WRITE:
memcpy(floppy->trackbuf + sector * 512, data, 512); memcpy(floppy->trackbuf + sector * 512, data, 512);
/* keep the drive spinning while writes are scheduled */ /* keep the drive spinning while writes are scheduled */
if (!fd_motor_on(drive)) { if (!fd_motor_on(drive)) {
end_request(CURRENT, 0); err = -EIO;
goto repeat; break;
} }
/* /*
* setup a callback to write the track buffer * setup a callback to write the track buffer
...@@ -1404,14 +1398,12 @@ static void redo_fd_request(void) ...@@ -1404,14 +1398,12 @@ static void redo_fd_request(void)
/* reset the timer */ /* reset the timer */
mod_timer (flush_track_timer + drive, jiffies + 1); mod_timer (flush_track_timer + drive, jiffies + 1);
local_irq_restore(flags); local_irq_restore(flags);
break;
} }
} }
CURRENT->nr_sectors -= CURRENT->current_nr_sectors;
CURRENT->sector += CURRENT->current_nr_sectors;
end_request(CURRENT, 1); if (__blk_end_request_cur(rq, err))
goto repeat; goto next_segment;
goto next_req;
} }
static void do_fd_request(struct request_queue * q) static void do_fd_request(struct request_queue * q)
......
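The amiflop.c conversion drops the QUEUE/CURRENT macros in favour of blk_fetch_request() plus __blk_end_request_cur(), which completes the current chunk and reports whether the request still has work left. A userspace model of that loop contract; the structure and helper below only mimic the kernel interfaces:

/*
 * Model of the per-segment request loop used in the converted floppy
 * drivers: service one segment at a time and let the "end current
 * chunk" helper say whether more segments remain.
 */
#include <stdbool.h>
#include <stdio.h>

struct fake_rq { unsigned int segments_left; };

/* Mirrors the __blk_end_request_cur() contract used above: complete the
 * current segment and return true while the request is not finished. */
static bool end_request_cur(struct fake_rq *rq, int error)
{
	if (error)
		rq->segments_left = 0;	/* whole request fails */
	else if (rq->segments_left)
		rq->segments_left--;
	return rq->segments_left != 0;
}

int main(void)
{
	struct fake_rq rq = { .segments_left = 3 };

	do {
		printf("servicing one segment (%u remaining)\n",
		       rq.segments_left);
	} while (end_request_cur(&rq, 0));

	return 0;
}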
...@@ -79,9 +79,7 @@ ...@@ -79,9 +79,7 @@
#undef DEBUG #undef DEBUG
static struct request_queue *floppy_queue; static struct request_queue *floppy_queue;
static struct request *fd_request;
#define QUEUE (floppy_queue)
#define CURRENT elv_next_request(floppy_queue)
/* Disk types: DD, HD, ED */ /* Disk types: DD, HD, ED */
static struct atari_disk_type { static struct atari_disk_type {
...@@ -376,6 +374,12 @@ static DEFINE_TIMER(readtrack_timer, fd_readtrack_check, 0, 0); ...@@ -376,6 +374,12 @@ static DEFINE_TIMER(readtrack_timer, fd_readtrack_check, 0, 0);
static DEFINE_TIMER(timeout_timer, fd_times_out, 0, 0); static DEFINE_TIMER(timeout_timer, fd_times_out, 0, 0);
static DEFINE_TIMER(fd_timer, check_change, 0, 0); static DEFINE_TIMER(fd_timer, check_change, 0, 0);
static void fd_end_request_cur(int err)
{
if (!__blk_end_request_cur(fd_request, err))
fd_request = NULL;
}
static inline void start_motor_off_timer(void) static inline void start_motor_off_timer(void)
{ {
mod_timer(&motor_off_timer, jiffies + FD_MOTOR_OFF_DELAY); mod_timer(&motor_off_timer, jiffies + FD_MOTOR_OFF_DELAY);
...@@ -606,15 +610,15 @@ static void fd_error( void ) ...@@ -606,15 +610,15 @@ static void fd_error( void )
return; return;
} }
if (!CURRENT) if (!fd_request)
return; return;
CURRENT->errors++; fd_request->errors++;
if (CURRENT->errors >= MAX_ERRORS) { if (fd_request->errors >= MAX_ERRORS) {
printk(KERN_ERR "fd%d: too many errors.\n", SelectedDrive ); printk(KERN_ERR "fd%d: too many errors.\n", SelectedDrive );
end_request(CURRENT, 0); fd_end_request_cur(-EIO);
} }
else if (CURRENT->errors == RECALIBRATE_ERRORS) { else if (fd_request->errors == RECALIBRATE_ERRORS) {
printk(KERN_WARNING "fd%d: recalibrating\n", SelectedDrive ); printk(KERN_WARNING "fd%d: recalibrating\n", SelectedDrive );
if (SelectedDrive != -1) if (SelectedDrive != -1)
SUD.track = -1; SUD.track = -1;
...@@ -725,16 +729,14 @@ static void do_fd_action( int drive ) ...@@ -725,16 +729,14 @@ static void do_fd_action( int drive )
if (IS_BUFFERED( drive, ReqSide, ReqTrack )) { if (IS_BUFFERED( drive, ReqSide, ReqTrack )) {
if (ReqCmd == READ) { if (ReqCmd == READ) {
copy_buffer( SECTOR_BUFFER(ReqSector), ReqData ); copy_buffer( SECTOR_BUFFER(ReqSector), ReqData );
if (++ReqCnt < CURRENT->current_nr_sectors) { if (++ReqCnt < blk_rq_cur_sectors(fd_request)) {
/* read next sector */ /* read next sector */
setup_req_params( drive ); setup_req_params( drive );
goto repeat; goto repeat;
} }
else { else {
/* all sectors finished */ /* all sectors finished */
CURRENT->nr_sectors -= CURRENT->current_nr_sectors; fd_end_request_cur(0);
CURRENT->sector += CURRENT->current_nr_sectors;
end_request(CURRENT, 1);
redo_fd_request(); redo_fd_request();
return; return;
} }
...@@ -1132,16 +1134,14 @@ static void fd_rwsec_done1(int status) ...@@ -1132,16 +1134,14 @@ static void fd_rwsec_done1(int status)
} }
} }
if (++ReqCnt < CURRENT->current_nr_sectors) { if (++ReqCnt < blk_rq_cur_sectors(fd_request)) {
/* read next sector */ /* read next sector */
setup_req_params( SelectedDrive ); setup_req_params( SelectedDrive );
do_fd_action( SelectedDrive ); do_fd_action( SelectedDrive );
} }
else { else {
/* all sectors finished */ /* all sectors finished */
CURRENT->nr_sectors -= CURRENT->current_nr_sectors; fd_end_request_cur(0);
CURRENT->sector += CURRENT->current_nr_sectors;
end_request(CURRENT, 1);
redo_fd_request(); redo_fd_request();
} }
return; return;
...@@ -1382,7 +1382,7 @@ static void setup_req_params( int drive ) ...@@ -1382,7 +1382,7 @@ static void setup_req_params( int drive )
ReqData = ReqBuffer + 512 * ReqCnt; ReqData = ReqBuffer + 512 * ReqCnt;
if (UseTrackbuffer) if (UseTrackbuffer)
read_track = (ReqCmd == READ && CURRENT->errors == 0); read_track = (ReqCmd == READ && fd_request->errors == 0);
else else
read_track = 0; read_track = 0;
...@@ -1396,25 +1396,27 @@ static void redo_fd_request(void) ...@@ -1396,25 +1396,27 @@ static void redo_fd_request(void)
int drive, type; int drive, type;
struct atari_floppy_struct *floppy; struct atari_floppy_struct *floppy;
DPRINT(("redo_fd_request: CURRENT=%p dev=%s CURRENT->sector=%ld\n", DPRINT(("redo_fd_request: fd_request=%p dev=%s fd_request->sector=%ld\n",
CURRENT, CURRENT ? CURRENT->rq_disk->disk_name : "", fd_request, fd_request ? fd_request->rq_disk->disk_name : "",
CURRENT ? CURRENT->sector : 0 )); fd_request ? blk_rq_pos(fd_request) : 0 ));
IsFormatting = 0; IsFormatting = 0;
repeat: repeat:
if (!fd_request) {
fd_request = blk_fetch_request(floppy_queue);
if (!fd_request)
goto the_end;
}
if (!CURRENT) floppy = fd_request->rq_disk->private_data;
goto the_end;
floppy = CURRENT->rq_disk->private_data;
drive = floppy - unit; drive = floppy - unit;
type = floppy->type; type = floppy->type;
if (!UD.connected) { if (!UD.connected) {
/* drive not connected */ /* drive not connected */
printk(KERN_ERR "Unknown Device: fd%d\n", drive ); printk(KERN_ERR "Unknown Device: fd%d\n", drive );
end_request(CURRENT, 0); fd_end_request_cur(-EIO);
goto repeat; goto repeat;
} }
...@@ -1430,12 +1432,12 @@ static void redo_fd_request(void) ...@@ -1430,12 +1432,12 @@ static void redo_fd_request(void)
/* user supplied disk type */ /* user supplied disk type */
if (--type >= NUM_DISK_MINORS) { if (--type >= NUM_DISK_MINORS) {
printk(KERN_WARNING "fd%d: invalid disk format", drive ); printk(KERN_WARNING "fd%d: invalid disk format", drive );
end_request(CURRENT, 0); fd_end_request_cur(-EIO);
goto repeat; goto repeat;
} }
if (minor2disktype[type].drive_types > DriveType) { if (minor2disktype[type].drive_types > DriveType) {
printk(KERN_WARNING "fd%d: unsupported disk format", drive ); printk(KERN_WARNING "fd%d: unsupported disk format", drive );
end_request(CURRENT, 0); fd_end_request_cur(-EIO);
goto repeat; goto repeat;
} }
type = minor2disktype[type].index; type = minor2disktype[type].index;
...@@ -1444,8 +1446,8 @@ static void redo_fd_request(void) ...@@ -1444,8 +1446,8 @@ static void redo_fd_request(void)
UD.autoprobe = 0; UD.autoprobe = 0;
} }
if (CURRENT->sector + 1 > UDT->blocks) { if (blk_rq_pos(fd_request) + 1 > UDT->blocks) {
end_request(CURRENT, 0); fd_end_request_cur(-EIO);
goto repeat; goto repeat;
} }
...@@ -1453,9 +1455,9 @@ static void redo_fd_request(void) ...@@ -1453,9 +1455,9 @@ static void redo_fd_request(void)
del_timer( &motor_off_timer ); del_timer( &motor_off_timer );
ReqCnt = 0; ReqCnt = 0;
ReqCmd = rq_data_dir(CURRENT); ReqCmd = rq_data_dir(fd_request);
ReqBlock = CURRENT->sector; ReqBlock = blk_rq_pos(fd_request);
ReqBuffer = CURRENT->buffer; ReqBuffer = fd_request->buffer;
setup_req_params( drive ); setup_req_params( drive );
do_fd_action( drive ); do_fd_action( drive );
......
...@@ -407,12 +407,7 @@ static int __init ramdisk_size(char *str) ...@@ -407,12 +407,7 @@ static int __init ramdisk_size(char *str)
rd_size = simple_strtol(str, NULL, 0); rd_size = simple_strtol(str, NULL, 0);
return 1; return 1;
} }
static int __init ramdisk_size2(char *str) __setup("ramdisk_size=", ramdisk_size);
{
return ramdisk_size(str);
}
__setup("ramdisk=", ramdisk_size);
__setup("ramdisk_size=", ramdisk_size2);
#endif #endif
/* /*
......
This diff has been collapsed.
...@@ -11,6 +11,11 @@ ...@@ -11,6 +11,11 @@
#define IO_OK 0 #define IO_OK 0
#define IO_ERROR 1 #define IO_ERROR 1
#define IO_NEEDS_RETRY 3
#define VENDOR_LEN 8
#define MODEL_LEN 16
#define REV_LEN 4
struct ctlr_info; struct ctlr_info;
typedef struct ctlr_info ctlr_info_t; typedef struct ctlr_info ctlr_info_t;
...@@ -34,23 +39,20 @@ typedef struct _drive_info_struct ...@@ -34,23 +39,20 @@ typedef struct _drive_info_struct
int cylinders; int cylinders;
int raid_level; /* set to -1 to indicate that int raid_level; /* set to -1 to indicate that
* the drive is not in use/configured * the drive is not in use/configured
*/ */
int busy_configuring; /*This is set when the drive is being removed int busy_configuring; /* This is set when a drive is being removed
*to prevent it from being opened or it's queue * to prevent it from being opened or it's
*from being started. * queue from being started.
*/ */
__u8 serial_no[16]; /* from inquiry page 0x83, */ struct device dev;
/* not necc. null terminated. */ __u8 serial_no[16]; /* from inquiry page 0x83,
* not necc. null terminated.
*/
char vendor[VENDOR_LEN + 1]; /* SCSI vendor string */
char model[MODEL_LEN + 1]; /* SCSI model string */
char rev[REV_LEN + 1]; /* SCSI revision string */
} drive_info_struct; } drive_info_struct;
#ifdef CONFIG_CISS_SCSI_TAPE
struct sendcmd_reject_list {
int ncompletions;
unsigned long *complete; /* array of NR_CMDS tags */
};
#endif
struct ctlr_info struct ctlr_info
{ {
int ctlr; int ctlr;
...@@ -118,11 +120,11 @@ struct ctlr_info ...@@ -118,11 +120,11 @@ struct ctlr_info
void *scsi_ctlr; /* ptr to structure containing scsi related stuff */ void *scsi_ctlr; /* ptr to structure containing scsi related stuff */
/* list of block side commands the scsi error handling sucked up */ /* list of block side commands the scsi error handling sucked up */
/* and saved for later processing */ /* and saved for later processing */
struct sendcmd_reject_list scsi_rejects;
#endif #endif
unsigned char alive; unsigned char alive;
struct completion *rescan_wait; struct completion *rescan_wait;
struct task_struct *cciss_scan_thread; struct task_struct *cciss_scan_thread;
struct device dev;
}; };
/* Defining the diffent access_menthods */ /* Defining the diffent access_menthods */
......
...@@ -217,6 +217,8 @@ typedef union _LUNAddr_struct { ...@@ -217,6 +217,8 @@ typedef union _LUNAddr_struct {
LogDevAddr_struct LogDev; LogDevAddr_struct LogDev;
} LUNAddr_struct; } LUNAddr_struct;
#define CTLR_LUNID "\0\0\0\0\0\0\0\0"
typedef struct _CommandListHeader_struct { typedef struct _CommandListHeader_struct {
BYTE ReplyQueue; BYTE ReplyQueue;
BYTE SGList; BYTE SGList;
......
...@@ -44,20 +44,13 @@ ...@@ -44,20 +44,13 @@
#define CCISS_ABORT_MSG 0x00 #define CCISS_ABORT_MSG 0x00
#define CCISS_RESET_MSG 0x01 #define CCISS_RESET_MSG 0x01
/* some prototypes... */ static int fill_cmd(CommandList_struct *c, __u8 cmd, int ctlr, void *buff,
static int sendcmd( size_t size,
__u8 cmd, __u8 page_code, unsigned char *scsi3addr,
int ctlr,
void *buff,
size_t size,
unsigned int use_unit_num, /* 0: address the controller,
1: address logical volume log_unit,
2: address is in scsi3addr */
unsigned int log_unit,
__u8 page_code,
unsigned char *scsi3addr,
int cmd_type); int cmd_type);
static CommandList_struct *cmd_alloc(ctlr_info_t *h, int get_from_pool);
static void cmd_free(ctlr_info_t *h, CommandList_struct *c, int got_from_pool);
static int cciss_scsi_proc_info( static int cciss_scsi_proc_info(
struct Scsi_Host *sh, struct Scsi_Host *sh,
...@@ -1575,6 +1568,75 @@ cciss_seq_tape_report(struct seq_file *seq, int ctlr) ...@@ -1575,6 +1568,75 @@ cciss_seq_tape_report(struct seq_file *seq, int ctlr)
CPQ_TAPE_UNLOCK(ctlr, flags); CPQ_TAPE_UNLOCK(ctlr, flags);
} }
static int wait_for_device_to_become_ready(ctlr_info_t *h,
unsigned char lunaddr[])
{
int rc;
int count = 0;
int waittime = HZ;
CommandList_struct *c;
c = cmd_alloc(h, 1);
if (!c) {
printk(KERN_WARNING "cciss%d: out of memory in "
"wait_for_device_to_become_ready.\n", h->ctlr);
return IO_ERROR;
}
/* Send test unit ready until device ready, or give up. */
while (count < 20) {
/* Wait for a bit. do this first, because if we send
* the TUR right away, the reset will just abort it.
*/
schedule_timeout_uninterruptible(waittime);
count++;
/* Increase wait time with each try, up to a point. */
if (waittime < (HZ * 30))
waittime = waittime * 2;
/* Send the Test Unit Ready */
rc = fill_cmd(c, TEST_UNIT_READY, h->ctlr, NULL, 0, 0,
lunaddr, TYPE_CMD);
if (rc == 0)
rc = sendcmd_withirq_core(h, c, 0);
(void) process_sendcmd_error(h, c);
if (rc != 0)
goto retry_tur;
if (c->err_info->CommandStatus == CMD_SUCCESS)
break;
if (c->err_info->CommandStatus == CMD_TARGET_STATUS &&
c->err_info->ScsiStatus == SAM_STAT_CHECK_CONDITION) {
if (c->err_info->SenseInfo[2] == NO_SENSE)
break;
if (c->err_info->SenseInfo[2] == UNIT_ATTENTION) {
unsigned char asc;
asc = c->err_info->SenseInfo[12];
check_for_unit_attention(h, c);
if (asc == POWER_OR_RESET)
break;
}
}
retry_tur:
printk(KERN_WARNING "cciss%d: Waiting %d secs "
"for device to become ready.\n",
h->ctlr, waittime / HZ);
rc = 1; /* device not ready. */
}
if (rc)
printk("cciss%d: giving up on device.\n", h->ctlr);
else
printk(KERN_WARNING "cciss%d: device is ready.\n", h->ctlr);
cmd_free(h, c, 1);
return rc;
}
 /* Need at least one of these error handlers to keep ../scsi/hosts.c from
  * complaining.  Doing a host- or bus-reset can't do anything good here.
@@ -1591,6 +1653,7 @@ static int cciss_eh_device_reset_handler(struct scsi_cmnd *scsicmd)
 {
 	int rc;
 	CommandList_struct *cmd_in_trouble;
+	unsigned char lunaddr[8];
 	ctlr_info_t **c;
 	int ctlr;
@@ -1600,19 +1663,15 @@ static int cciss_eh_device_reset_handler(struct scsi_cmnd *scsicmd)
 		return FAILED;
 	ctlr = (*c)->ctlr;
 	printk(KERN_WARNING "cciss%d: resetting tape drive or medium changer.\n", ctlr);
 	/* find the command that's giving us trouble */
 	cmd_in_trouble = (CommandList_struct *) scsicmd->host_scribble;
-	if (cmd_in_trouble == NULL) { /* paranoia */
+	if (cmd_in_trouble == NULL) /* paranoia */
 		return FAILED;
-	}
+	memcpy(lunaddr, &cmd_in_trouble->Header.LUN.LunAddrBytes[0], 8);
 	/* send a reset to the SCSI LUN which the command was sent to */
-	rc = sendcmd(CCISS_RESET_MSG, ctlr, NULL, 0, 2, 0, 0,
-		(unsigned char *) &cmd_in_trouble->Header.LUN.LunAddrBytes[0],
-		TYPE_MSG);
-	/* sendcmd turned off interrupts on the board, turn 'em back on. */
-	(*c)->access.set_intr_mask(*c, CCISS_INTR_ON);
-	if (rc == 0)
+	rc = sendcmd_withirq(CCISS_RESET_MSG, ctlr, NULL, 0, 0, lunaddr,
+		TYPE_MSG);
+	if (rc == 0 && wait_for_device_to_become_ready(*c, lunaddr) == 0)
 		return SUCCESS;
 	printk(KERN_WARNING "cciss%d: resetting device failed.\n", ctlr);
 	return FAILED;
@@ -1622,6 +1681,7 @@ static int cciss_eh_abort_handler(struct scsi_cmnd *scsicmd)
 {
 	int rc;
 	CommandList_struct *cmd_to_abort;
+	unsigned char lunaddr[8];
 	ctlr_info_t **c;
 	int ctlr;
@@ -1636,12 +1696,9 @@ static int cciss_eh_abort_handler(struct scsi_cmnd *scsicmd)
 	cmd_to_abort = (CommandList_struct *) scsicmd->host_scribble;
 	if (cmd_to_abort == NULL) /* paranoia */
 		return FAILED;
-	rc = sendcmd(CCISS_ABORT_MSG, ctlr, &cmd_to_abort->Header.Tag,
-		0, 2, 0, 0,
-		(unsigned char *) &cmd_to_abort->Header.LUN.LunAddrBytes[0],
-		TYPE_MSG);
-	/* sendcmd turned off interrupts on the board, turn 'em back on. */
-	(*c)->access.set_intr_mask(*c, CCISS_INTR_ON);
+	memcpy(lunaddr, &cmd_to_abort->Header.LUN.LunAddrBytes[0], 8);
+	rc = sendcmd_withirq(CCISS_ABORT_MSG, ctlr, &cmd_to_abort->Header.Tag,
+		0, 0, lunaddr, TYPE_MSG);
 	if (rc == 0)
 		return SUCCESS;
 	return FAILED;
...
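
Note: the two reworked cciss handlers above share one pattern: copy the stuck command's 8-byte LUN address into a local buffer, submit the reset or abort message with sendcmd_withirq() so it completes through the normal interrupt path (no more manually re-enabling the controller's interrupt mask), and, for a reset, only report SUCCESS once wait_for_device_to_become_ready() sees the target respond again. The sketch below just restates that flow in one place; example_reset_flow() is a made-up wrapper name, and the cciss types and helpers are assumed to have the signatures visible in this diff.

/* Illustrative sketch only -- not the driver's literal code. */
static int example_reset_flow(ctlr_info_t **c, struct scsi_cmnd *scsicmd)
{
	unsigned char lunaddr[8];
	CommandList_struct *cmd_in_trouble;
	int rc, ctlr = (*c)->ctlr;

	cmd_in_trouble = (CommandList_struct *) scsicmd->host_scribble;
	if (cmd_in_trouble == NULL)	/* paranoia */
		return FAILED;

	/* Snapshot the LUN address before the command is reused or freed. */
	memcpy(lunaddr, &cmd_in_trouble->Header.LUN.LunAddrBytes[0], 8);

	/* Interrupt-driven submission; replaces polled sendcmd() and the
	 * follow-up set_intr_mask(..., CCISS_INTR_ON) dance. */
	rc = sendcmd_withirq(CCISS_RESET_MSG, ctlr, NULL, 0, 0, lunaddr,
			     TYPE_MSG);

	/* The reset only counts once the target answers again. */
	if (rc == 0 && wait_for_device_to_become_ready(*c, lunaddr) == 0)
		return SUCCESS;
	return FAILED;
}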
@@ -110,7 +110,7 @@ static void nbd_end_request(struct request *req)
 			req, error ? "failed" : "done");
 	spin_lock_irqsave(q->queue_lock, flags);
-	__blk_end_request(req, error, req->nr_sectors << 9);
+	__blk_end_request_all(req, error);
 	spin_unlock_irqrestore(q->queue_lock, flags);
 }
@@ -231,19 +231,19 @@ static int nbd_send_req(struct nbd_device *lo, struct request *req)
 {
 	int result, flags;
 	struct nbd_request request;
-	unsigned long size = req->nr_sectors << 9;
+	unsigned long size = blk_rq_bytes(req);
 	request.magic = htonl(NBD_REQUEST_MAGIC);
 	request.type = htonl(nbd_cmd(req));
-	request.from = cpu_to_be64((u64) req->sector << 9);
+	request.from = cpu_to_be64((u64)blk_rq_pos(req) << 9);
 	request.len = htonl(size);
 	memcpy(request.handle, &req, sizeof(req));
-	dprintk(DBG_TX, "%s: request %p: sending control (%s@%llu,%luB)\n",
+	dprintk(DBG_TX, "%s: request %p: sending control (%s@%llu,%uB)\n",
 			lo->disk->disk_name, req,
 			nbdcmd_to_ascii(nbd_cmd(req)),
-			(unsigned long long)req->sector << 9,
-			req->nr_sectors << 9);
+			(unsigned long long)blk_rq_pos(req) << 9,
+			blk_rq_bytes(req));
 	result = sock_xmit(lo, 1, &request, sizeof(request),
 			(nbd_cmd(req) == NBD_CMD_WRITE) ? MSG_MORE : 0);
 	if (result <= 0) {
@@ -533,11 +533,9 @@ static void do_nbd_request(struct request_queue *q)
 {
 	struct request *req;
-	while ((req = elv_next_request(q)) != NULL) {
+	while ((req = blk_fetch_request(q)) != NULL) {
 		struct nbd_device *lo;
-		blkdev_dequeue_request(req);
 		spin_unlock_irq(q->queue_lock);
 		dprintk(DBG_BLKDEV, "%s: request %p: dequeued (flags=%x)\n",
@@ -580,13 +578,6 @@ static int __nbd_ioctl(struct block_device *bdev, struct nbd_device *lo,
 		blk_rq_init(NULL, &sreq);
 		sreq.cmd_type = REQ_TYPE_SPECIAL;
 		nbd_cmd(&sreq) = NBD_CMD_DISC;
-		/*
-		 * Set these to sane values in case server implementation
-		 * fails to check the request type first and also to keep
-		 * debugging output cleaner.
-		 */
-		sreq.sector = 0;
-		sreq.nr_sectors = 0;
 		if (!lo->sock)
 			return -EINVAL;
 		nbd_send_req(lo, &sreq);
...
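
Note: the nbd hunks above are part of the 2.6.31 block-layer API conversion: drivers stop reading struct request fields directly (req->sector, req->nr_sectors) and instead use the accessors blk_rq_pos() and blk_rq_bytes(), fetch work with blk_fetch_request() (which combines elv_next_request() and blkdev_dequeue_request()), and complete whole requests with __blk_end_request_all(). A minimal sketch of a request function written against that API follows; mydev_request() and mydev_xfer() are hypothetical names, not part of nbd.

/* Sketch of the post-conversion request-function pattern (the assumed helper
 * mydev_xfer() does the actual I/O).  Called with q->queue_lock held. */
static void mydev_request(struct request_queue *q)
{
	struct request *req;

	/* blk_fetch_request() = elv_next_request() + blkdev_dequeue_request() */
	while ((req = blk_fetch_request(q)) != NULL) {
		sector_t pos = blk_rq_pos(req);		/* start sector    */
		unsigned int len = blk_rq_bytes(req);	/* length in bytes */
		int err = -EIO;

		if (blk_fs_request(req)) {
			spin_unlock_irq(q->queue_lock);
			err = mydev_xfer(req, pos, len);	/* hypothetical */
			spin_lock_irq(q->queue_lock);
		}

		/* Complete the whole request in one go; replaces
		 * __blk_end_request(req, err, blk_rq_bytes(req)). */
		__blk_end_request_all(req, err);
	}
}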