- 20 March 2013, 3 commits
-
-
Committed by Paolo Bonzini
(This is a respin of Paolo Bonzini's patch, but it calls virtqueue_add_sgs() instead of his multi-part API). This is similar to the previous patch, but a bit more radical because the bio and req paths now share the buffer construction code. Because the req path doesn't use vbr->sg, however, we need to add a couple of arguments to __virtblk_add_req. We also need to teach __virtblk_add_req how to build SCSI command requests. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Reviewed-by: Asias He <asias@redhat.com>
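For reference, the three-part submission this series converges on can be sketched as below; the struct virtblk_req fields (out_hdr, status) and the simplified signature are assumptions for illustration, and the SCSI sense/inhdr elements mentioned above are omitted:

/* Hedged sketch of the shared buffer construction: header (out),
 * optional data, status byte (in), handed to virtqueue_add_sgs(). */
static int __virtblk_add_req(struct virtqueue *vq, struct virtblk_req *vbr,
                             struct scatterlist *data_sg, bool have_data)
{
        struct scatterlist hdr, status, *sgs[3];
        unsigned int num_out = 0, num_in = 0;

        sg_init_one(&hdr, &vbr->out_hdr, sizeof(vbr->out_hdr));
        sgs[num_out++] = &hdr;

        if (have_data) {
                if (vbr->out_hdr.type & VIRTIO_BLK_T_OUT)
                        sgs[num_out++] = data_sg;          /* write: device reads the data */
                else
                        sgs[num_out + num_in++] = data_sg; /* read: device writes the data */
        }

        sg_init_one(&status, &vbr->status, sizeof(vbr->status));
        sgs[num_out + num_in++] = &status;

        return virtqueue_add_sgs(vq, sgs, num_out, num_in, vbr, GFP_ATOMIC);
}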
-
Committed by Paolo Bonzini
(This is a respin of Paolo Bonzini's patch, but it calls virtqueue_add_sgs() instead of his multi-part API). Move the creation of the request header and response footer to __virtblk_add_req. vbr->sg only contains the data scatterlist; the header/footer are added separately using virtqueue_add_sgs(). With this change, virtio-blk (with use_bio) no longer relies on the virtio functions ignoring the end markers in a scatterlist. The next patch will do the same for the other path. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Reviewed-by: Asias He <asias@redhat.com>
-
Committed by Paolo Bonzini
Right now, both virtblk_add_req and virtblk_add_req_wait call virtqueue_add_buf. To prepare for the next patches, abstract the call to virtqueue_add_buf into a new function __virtblk_add_req, and include the waiting logic directly in virtblk_add_req. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Asias He <asias@redhat.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
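The waiting logic folded into virtblk_add_req is essentially "retry when the host frees descriptors". A hedged sketch of that loop is below; vblk->queue_wait is an assumed wait queue woken from the interrupt handler, and the two-argument __virtblk_add_req() signature is simplified for illustration:

/* Sketch only: sleep until __virtblk_add_req() finds room in the vq. */
static void virtblk_add_req(struct virtio_blk *vblk, struct virtblk_req *vbr)
{
        DEFINE_WAIT(wait);

        spin_lock_irq(vblk->disk->queue->queue_lock);
        while (__virtblk_add_req(vblk->vq, vbr) < 0) {
                prepare_to_wait_exclusive(&vblk->queue_wait, &wait,
                                          TASK_UNINTERRUPTIBLE);
                spin_unlock_irq(vblk->disk->queue->queue_lock);
                io_schedule();                  /* woken when buffers are reclaimed */
                spin_lock_irq(vblk->disk->queue->queue_lock);
                finish_wait(&vblk->queue_wait, &wait);
        }
        virtqueue_kick(vblk->vq);
        spin_unlock_irq(vblk->disk->queue->queue_lock);
}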
-
- 12 March 2013, 1 commit
-
-
Committed by Milos Vyletel
When a virtio-blk device is resized on the host (using block_resize from QEMU), emit a KOBJ_CHANGE uevent to notify the guest of the change. This allows users to write custom udev rules that take whatever action is appropriate when such an event occurs. As a proof of concept I've created a simple udev rule that automatically resizes the filesystem on the virtio-blk device:

ACTION=="change", KERNEL=="vd*", \
  ENV{RESIZE}=="1", \
  ENV{ID_FS_TYPE}=="ext[3-4]", \
  RUN+="/sbin/resize2fs /dev/%k"
ACTION=="change", KERNEL=="vd*", \
  ENV{RESIZE}=="1", \
  ENV{ID_FS_TYPE}=="LVM2_member", \
  RUN+="/sbin/pvresize /dev/%k"

Signed-off-by: Milos Vyletel <milos.vyletel@sde.cz> Tested-by: Asias He <asias@redhat.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> (minor simplification)
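On the driver side the change amounts to raising a KOBJ_CHANGE uevent with a RESIZE=1 environment variable once the new capacity has been applied; the udev rules above key off that variable. A small sketch (the helper name is hypothetical; upstream does this inline in the config-change work handler):

/* Sketch: announce the new size to udev after a host-side resize. */
static void virtblk_announce_resize(struct virtio_blk *vblk)
{
        char *envp[] = { "RESIZE=1", NULL };

        kobject_uevent_env(&disk_to_dev(vblk->disk)->kobj, KOBJ_CHANGE, envp);
}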
-
- 04 January 2013, 1 commit
-
-
Committed by Greg Kroah-Hartman
CONFIG_HOTPLUG is going away as an option. As a result, the __dev* markings need to be removed. This change removes the use of __devinit, __devexit_p, __devinitdata, __devinitconst, and __devexit from these drivers. Based on patches originally written by Bill Pemberton, but redone by me in order to handle some of the coding style issues better, by hand. Cc: Bill Pemberton <wfp5p@virginia.edu> Cc: Mike Miller <mike.miller@hp.com> Cc: Chirag Kantharia <chirag.kantharia@hp.com> Cc: Geoff Levand <geoff@infradead.org> Cc: Jim Paris <jim@jtan.com> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: "Michael S. Tsirkin" <mst@redhat.com> Cc: Grant Likely <grant.likely@secretlab.ca> Cc: Matthew Wilcox <matthew.r.wilcox@intel.com> Cc: Keith Busch <keith.busch@intel.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: NeilBrown <neilb@suse.de> Cc: Jens Axboe <axboe@kernel.dk> Cc: Tao Guo <Tao.Guo@emc.com> Signed-off-by: NGreg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 02 January 2013, 1 commit
-
-
Committed by Alexander Graf
When a file system is mounted on a virtio-blk disk and we then remove the disk and reattach it, the reattached disk gets the same disk name and ids as the hot-removed one. This leads to very nasty effects - mostly rendering the newly attached device completely unusable. Trying the same thing with a USB device, I saw that the sd node simply doesn't get freed when a device is forcefully removed. Imitate the same behavior for vd devices. This way broken vd devices simply are never freed and newly attached ones keep working just fine. Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Cc: stable@kernel.org
-
- 28 September 2012, 4 commits
-
-
Committed by Asias He
This reduces unnecessary interrupts that host could send to guest while guest is in the progress of irq handling. If one vcpu is handling the irq, while another interrupt comes, in handle_edge_irq(), the guest will mask the interrupt via mask_msi_irq() which is a very heavy operation that goes all the way down to host. Here are some performance numbers on qemu: Before: ------------------------------------- seq-read : io=0 B, bw=269730KB/s, iops=67432 , runt= 62200msec seq-write : io=0 B, bw=339716KB/s, iops=84929 , runt= 49386msec rand-read : io=0 B, bw=270435KB/s, iops=67608 , runt= 62038msec rand-write: io=0 B, bw=354436KB/s, iops=88608 , runt= 47335msec clat (usec): min=101 , max=138052 , avg=14822.09, stdev=11771.01 clat (usec): min=96 , max=81543 , avg=11798.94, stdev=7735.60 clat (usec): min=128 , max=140043 , avg=14835.85, stdev=11765.33 clat (usec): min=109 , max=147207 , avg=11337.09, stdev=5990.35 cpu : usr=15.93%, sys=60.37%, ctx=7764972, majf=0, minf=54 cpu : usr=32.73%, sys=120.49%, ctx=7372945, majf=0, minf=1 cpu : usr=18.84%, sys=58.18%, ctx=7775420, majf=0, minf=1 cpu : usr=24.20%, sys=59.85%, ctx=8307886, majf=0, minf=0 vdb: ios=8389107/8368136, merge=0/0, ticks=19457874/14616506, in_queue=34206098, util=99.68% 43: interrupt in total: 887320 fio --exec_prerun="echo 3 > /proc/sys/vm/drop_caches" --group_reporting --ioscheduler=noop --thread --bs=4k --size=512MB --direct=1 --numjobs=16 --ioengine=libaio --iodepth=64 --loops=3 --ramp_time=0 --filename=/dev/vdb --name=seq-read --stonewall --rw=read --name=seq-write --stonewall --rw=write --name=rnd-read --stonewall --rw=randread --name=rnd-write --stonewall --rw=randwrite After: ------------------------------------- seq-read : io=0 B, bw=309503KB/s, iops=77375 , runt= 54207msec seq-write : io=0 B, bw=448205KB/s, iops=112051 , runt= 37432msec rand-read : io=0 B, bw=311254KB/s, iops=77813 , runt= 53902msec rand-write: io=0 B, bw=377152KB/s, iops=94287 , runt= 44484msec clat (usec): min=81 , max=90588 , avg=12946.06, stdev=9085.94 clat (usec): min=57 , max=72264 , avg=8967.97, stdev=5951.04 clat (usec): min=29 , max=101046 , avg=12889.95, stdev=9067.91 clat (usec): min=52 , max=106152 , avg=10660.56, stdev=4778.19 cpu : usr=15.05%, sys=57.92%, ctx=77109411, majf=0, minf=54 cpu : usr=26.78%, sys=101.40%, ctx=7387891, majf=0, minf=2 cpu : usr=19.03%, sys=58.17%, ctx=7681976, majf=0, minf=8 cpu : usr=24.65%, sys=58.34%, ctx=8442632, majf=0, minf=4 vdb: ios=8389086/8361888, merge=0/0, ticks=17243780/12742010, in_queue=30078377, util=99.59% 43: interrupt in total: 1259639 fio --exec_prerun="echo 3 > /proc/sys/vm/drop_caches" --group_reporting --ioscheduler=noop --thread --bs=4k --size=512MB --direct=1 --numjobs=16 --ioengine=libaio --iodepth=64 --loops=3 --ramp_time=0 --filename=/dev/vdb --name=seq-read --stonewall --rw=read --name=seq-write --stonewall --rw=write --name=rnd-read --stonewall --rw=randread --name=rnd-write --stonewall --rw=randwrite Signed-off-by: NAsias He <asias@redhat.com> Signed-off-by: NRusty Russell <rusty@rustcorp.com.au>
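The mechanism behind those numbers is the usual disable/re-enable callback loop in the completion handler: keep virtqueue interrupts off while draining used buffers and only re-arm them once the queue is seen empty. A trimmed sketch (request completion and vblk->lock handling elided):

static void virtblk_done(struct virtqueue *vq)
{
        struct virtblk_req *vbr;
        unsigned int len;

        do {
                virtqueue_disable_cb(vq);       /* no interrupts while we drain */
                while ((vbr = virtqueue_get_buf(vq, &len)) != NULL) {
                        /* complete the request/bio carried by vbr here */
                }
        } while (!virtqueue_enable_cb(vq));     /* loop if buffers raced in while re-arming */
}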
-
Committed by Dan Carpenter
Smatch complains about the inconsistent NULL checking here. Fix it to return NULL on failure. Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> (fixed accidental deletion)
-
Committed by Asias He
We need to support both REQ_FLUSH and REQ_FUA for the bio-based path, since it does not get the sequencing of REQ_FUA into REQ_FLUSH that request-based drivers can request.

REQ_FLUSH is emulated by:
A) If the bio has no data to write:
1. Send VIRTIO_BLK_T_FLUSH to device,
2. In the flush I/O completion handler, finish the bio
B) If the bio has data to write:
1. Send VIRTIO_BLK_T_FLUSH to device
2. In the flush I/O completion handler, send the actual write data to device
3. In the write I/O completion handler, finish the bio

REQ_FUA is emulated by:
1. Send the actual write data to device
2. In the write I/O completion handler, send VIRTIO_BLK_T_FLUSH to device
3. In the flush I/O completion handler, finish the bio

Changes in v7: - Using vbr->flags to trace request type - Dropped unnecessary struct virtio_blk *vblk parameter - Reuse struct virtblk_req in bio done function
Changes in v6: - Reworked REQ_FLUSH and REQ_FUA emulation order

Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Jens Axboe <axboe@kernel.dk> Cc: Christoph Hellwig <hch@lst.de> Cc: Tejun Heo <tj@kernel.org> Cc: Shaohua Li <shli@kernel.org> Cc: "Michael S. Tsirkin" <mst@redhat.com> Cc: kvm@vger.kernel.org Cc: linux-kernel@vger.kernel.org Cc: virtualization@lists.linux-foundation.org Signed-off-by: Asias He <asias@redhat.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Committed by Asias He
This patch introduces bio-based IO path for virtio-blk. Compared to request-based IO path, bio-based IO path uses driver provided ->make_request_fn() method to bypasses the IO scheduler. It handles the bio to device directly without allocating a request in block layer. This reduces the IO path in guest kernel to achieve high IOPS and lower latency. The downside is that guest can not use the IO scheduler to merge and sort requests. However, this is not a big problem if the backend disk in host side uses faster disk device. When the bio-based IO path is not enabled, virtio-blk still uses the original request-based IO path, no performance difference is observed. Using a slow device e.g. normal SATA disk, the bio-based IO path for sequential read and write are slower than req-based IO path due to lack of merge in guest kernel. So we make the bio-based path optional. Performance evaluation: ----------------------------- 1) Fio test is performed in a 8 vcpu guest with ramdisk based guest using kvm tool. Short version: With bio-based IO path, sequential read/write, random read/write IOPS boost : 28%, 24%, 21%, 16% Latency improvement: 32%, 17%, 21%, 16% Long version: With bio-based IO path: seq-read : io=2048.0MB, bw=116996KB/s, iops=233991 , runt= 17925msec seq-write : io=2048.0MB, bw=100829KB/s, iops=201658 , runt= 20799msec rand-read : io=3095.7MB, bw=112134KB/s, iops=224268 , runt= 28269msec rand-write: io=3095.7MB, bw=96198KB/s, iops=192396 , runt= 32952msec clat (usec): min=0 , max=2631.6K, avg=58716.99, stdev=191377.30 clat (usec): min=0 , max=1753.2K, avg=66423.25, stdev=81774.35 clat (usec): min=0 , max=2915.5K, avg=61685.70, stdev=120598.39 clat (usec): min=0 , max=1933.4K, avg=76935.12, stdev=96603.45 cpu : usr=74.08%, sys=703.84%, ctx=29661403, majf=21354, minf=22460954 cpu : usr=70.92%, sys=702.81%, ctx=77219828, majf=13980, minf=27713137 cpu : usr=72.23%, sys=695.37%, ctx=88081059, majf=18475, minf=28177648 cpu : usr=69.69%, sys=654.13%, ctx=145476035, majf=15867, minf=26176375 With request-based IO path: seq-read : io=2048.0MB, bw=91074KB/s, iops=182147 , runt= 23027msec seq-write : io=2048.0MB, bw=80725KB/s, iops=161449 , runt= 25979msec rand-read : io=3095.7MB, bw=92106KB/s, iops=184211 , runt= 34416msec rand-write: io=3095.7MB, bw=82815KB/s, iops=165630 , runt= 38277msec clat (usec): min=0 , max=1932.4K, avg=77824.17, stdev=170339.49 clat (usec): min=0 , max=2510.2K, avg=78023.96, stdev=146949.15 clat (usec): min=0 , max=3037.2K, avg=74746.53, stdev=128498.27 clat (usec): min=0 , max=1363.4K, avg=89830.75, stdev=114279.68 cpu : usr=53.28%, sys=724.19%, ctx=37988895, majf=17531, minf=23577622 cpu : usr=49.03%, sys=633.20%, ctx=205935380, majf=18197, minf=27288959 cpu : usr=55.78%, sys=722.40%, ctx=101525058, majf=19273, minf=28067082 cpu : usr=56.55%, sys=690.83%, ctx=228205022, majf=18039, minf=26551985 2) Fio test is performed in a 8 vcpu guest with Fusion-IO based guest using kvm tool. 
Short version: With bio-based IO path, sequential read/write, random read/write IOPS boost : 11%, 11%, 13%, 10% Latency improvement: 10%, 10%, 12%, 10% Long Version: With bio-based IO path: read : io=2048.0MB, bw=58920KB/s, iops=117840 , runt= 35593msec write: io=2048.0MB, bw=64308KB/s, iops=128616 , runt= 32611msec read : io=3095.7MB, bw=59633KB/s, iops=119266 , runt= 53157msec write: io=3095.7MB, bw=62993KB/s, iops=125985 , runt= 50322msec clat (usec): min=0 , max=1284.3K, avg=128109.01, stdev=71513.29 clat (usec): min=94 , max=962339 , avg=116832.95, stdev=65836.80 clat (usec): min=0 , max=1846.6K, avg=128509.99, stdev=89575.07 clat (usec): min=0 , max=2256.4K, avg=121361.84, stdev=82747.25 cpu : usr=56.79%, sys=421.70%, ctx=147335118, majf=21080, minf=19852517 cpu : usr=61.81%, sys=455.53%, ctx=143269950, majf=16027, minf=24800604 cpu : usr=63.10%, sys=455.38%, ctx=178373538, majf=16958, minf=24822612 cpu : usr=62.04%, sys=453.58%, ctx=226902362, majf=16089, minf=23278105 With request-based IO path: read : io=2048.0MB, bw=52896KB/s, iops=105791 , runt= 39647msec write: io=2048.0MB, bw=57856KB/s, iops=115711 , runt= 36248msec read : io=3095.7MB, bw=52387KB/s, iops=104773 , runt= 60510msec write: io=3095.7MB, bw=57310KB/s, iops=114619 , runt= 55312msec clat (usec): min=0 , max=1532.6K, avg=142085.62, stdev=109196.84 clat (usec): min=0 , max=1487.4K, avg=129110.71, stdev=114973.64 clat (usec): min=0 , max=1388.6K, avg=145049.22, stdev=107232.55 clat (usec): min=0 , max=1465.9K, avg=133585.67, stdev=110322.95 cpu : usr=44.08%, sys=590.71%, ctx=451812322, majf=14841, minf=17648641 cpu : usr=48.73%, sys=610.78%, ctx=418953997, majf=22164, minf=26850689 cpu : usr=45.58%, sys=581.16%, ctx=714079216, majf=21497, minf=22558223 cpu : usr=48.40%, sys=599.65%, ctx=656089423, majf=16393, minf=23824409 3) Fio test is performed in a 8 vcpu guest with normal SATA based guest using kvm tool. 
Short version: With bio-based IO path, sequential read/write, random read/write IOPS boost : -10%, -10%, 4.4%, 0.5% Latency improvement: -12%, -15%, 2.5%, 0.8% Long Version: With bio-based IO path: read : io=124812KB, bw=36537KB/s, iops=9060 , runt= 3416msec write: io=169180KB, bw=24406KB/s, iops=6065 , runt= 6932msec read : io=256200KB, bw=2089.3KB/s, iops=520 , runt=122630msec write: io=257988KB, bw=1545.7KB/s, iops=384 , runt=166910msec clat (msec): min=1 , max=1527 , avg=28.06, stdev=89.54 clat (msec): min=2 , max=344 , avg=41.12, stdev=38.70 clat (msec): min=8 , max=1984 , avg=490.63, stdev=207.28 clat (msec): min=33 , max=4131 , avg=659.19, stdev=304.71 cpu : usr=4.85%, sys=17.15%, ctx=31593, majf=0, minf=7 cpu : usr=3.04%, sys=11.45%, ctx=39377, majf=0, minf=0 cpu : usr=0.47%, sys=1.59%, ctx=262986, majf=0, minf=16 cpu : usr=0.47%, sys=1.46%, ctx=337410, majf=0, minf=0 With request-based IO path: read : io=150120KB, bw=40420KB/s, iops=10037 , runt= 3714msec write: io=194932KB, bw=27029KB/s, iops=6722 , runt= 7212msec read : io=257136KB, bw=2001.1KB/s, iops=498 , runt=128443msec write: io=258276KB, bw=1537.2KB/s, iops=382 , runt=168028msec clat (msec): min=1 , max=1542 , avg=24.84, stdev=32.45 clat (msec): min=3 , max=628 , avg=35.62, stdev=39.71 clat (msec): min=8 , max=2540 , avg=503.28, stdev=236.97 clat (msec): min=41 , max=4398 , avg=653.88, stdev=302.61 cpu : usr=3.91%, sys=15.75%, ctx=26968, majf=0, minf=23 cpu : usr=2.50%, sys=10.56%, ctx=19090, majf=0, minf=0 cpu : usr=0.16%, sys=0.43%, ctx=20159, majf=0, minf=16 cpu : usr=0.18%, sys=0.53%, ctx=81364, majf=0, minf=0 How to use: ----------------------------- Add 'virtio_blk.use_bio=1' to kernel cmdline or 'modprobe virtio_blk use_bio=1' to enable ->make_request_fn() based I/O path. Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Jens Axboe <axboe@kernel.dk> Cc: Christoph Hellwig <hch@lst.de> Cc: Tejun Heo <tj@kernel.org> Cc: Shaohua Li <shli@kernel.org> Cc: "Michael S. Tsirkin" <mst@redhat.com> Cc: kvm@vger.kernel.org Cc: linux-kernel@vger.kernel.org Cc: virtualization@lists.linux-foundation.org Signed-off-by: NChristoph Hellwig <hch@lst.de> Signed-off-by: NMinchan Kim <minchan.kim@gmail.com> Signed-off-by: NAsias He <asias@redhat.com> Acked-by: NRusty Russell <rusty@rustcorp.com.au> Signed-off-by: NRusty Russell <rusty@rustcorp.com.au>
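One way to wire up the use_bio switch is sketched below: a module parameter selects between a request_fn queue and a make_request queue at probe time. The helper names (virtblk_request, virtblk_make_request, virtblk_init_queue) are illustrative; the upstream probe code differs in detail:

static bool use_bio;
module_param(use_bio, bool, S_IRUGO);

static int virtblk_init_queue(struct virtio_blk *vblk)
{
        struct request_queue *q;

        if (use_bio) {
                q = blk_alloc_queue(GFP_KERNEL);        /* bio path: no request allocation */
                if (q)
                        blk_queue_make_request(q, virtblk_make_request);
        } else {
                q = blk_init_queue(virtblk_request, NULL); /* request path via the I/O scheduler */
        }
        if (!q)
                return -ENOMEM;

        q->queuedata = vblk;
        vblk->disk->queue = q;
        return 0;
}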
-
- 30 July 2012, 4 commits
-
-
Committed by Paolo Bonzini
This patch adds support for the new VIRTIO_BLK_F_CONFIG_WCE feature, which exposes the cache mode in the configuration space and lets the driver modify it. The cache mode is exposed via sysfs. Even if the host does not support the new feature, the cache mode is visible (thanks to the existing VIRTIO_BLK_F_WCE), but not modifiable. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Committed by Asias He
The block layer will allocate a spinlock for the queue if the driver does not provide one in blk_init_queue(). The reason to use the internal spinlock is that blk_cleanup_queue() will switch to the internal spinlock in the cleanup code path:

if (q->queue_lock != &q->__queue_lock)
        q->queue_lock = &q->__queue_lock;

However, processes which are in D state might have taken the driver-provided spinlock; when those processes wake up, they would release the block-layer-provided spinlock:

=====================================
[ BUG: bad unlock balance detected! ]
3.4.0-rc7+ #238 Not tainted
-------------------------------------
fio/3587 is trying to release lock (&(&q->__queue_lock)->rlock) at: [<ffffffff813274d2>] blk_queue_bio+0x2a2/0x380
but there are no more locks to release!
other info that might help us debug this:
1 lock held by fio/3587:
#0: (&(&vblk->lock)->rlock){......}, at: [<ffffffff8132661a>] get_request_wait+0x19a/0x250

Other drivers use the block layer provided spinlock as well, e.g. SCSI. Switching to the block layer provided spinlock saves a bit of memory and does not increase lock contention. Performance testing shows no real difference before and after this patch. Changes in v2: Improve commit log as Michael suggested. Cc: virtualization@lists.linux-foundation.org Cc: kvm@vger.kernel.org Cc: stable@kernel.org Signed-off-by: Asias He <asias@redhat.com> Acked-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
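In code the fix is essentially a one-liner: pass NULL so blk_init_queue() allocates and owns the queue lock. Roughly, assuming the driver's request function is do_virtblk_request():

/* Before: driver-provided lock, later swapped out by blk_cleanup_queue(). */
q = vblk->disk->queue = blk_init_queue(do_virtblk_request, &vblk->lock);

/* After: let the block layer supply the queue spinlock. */
q = vblk->disk->queue = blk_init_queue(do_virtblk_request, NULL);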
-
Committed by Asias He
blk_cleanup_queue() will call blk_drain_queue() to drain all the requests before the queue DEAD marking. If we reset the device before blk_cleanup_queue(), the drain would fail.

1) If the queue is stopped in do_virtblk_request() because the device is full, q->request_fn() will not be called:

blk_drain_queue() {
        while(true) {
                ...
                if (!list_empty(&q->queue_head))
                        __blk_run_queue(q) {
                                if (queue is not stopped)
                                        q->request_fn()
                        }
                ...
        }
}

Not resetting the device before blk_cleanup_queue() gives the interrupt handler blk_done() the chance to start the queue.

2) In commit b79d866c, we abort requests dispatched to the driver before blk_cleanup_queue(). There is a race if requests are dispatched to the driver after the abort and before the queue DEAD mark. To fix this, instead of aborting the requests explicitly, we can just reset the device after blk_cleanup_queue() so that the device can complete all the requests before the queue DEAD marking in the drain process.

Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: virtualization@lists.linux-foundation.org Cc: kvm@vger.kernel.org Cc: stable@kernel.org Signed-off-by: Asias He <asias@redhat.com> Acked-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Committed by Asias He
del_gendisk() might not return due to failing to remove the /sys/block/vda/serial sysfs entry when another thread (udev) is trying to read it.

virtblk_remove()
  vdev->config->reset() : guest will not kick us through interrupt
  del_gendisk()
    device_del()
      kobject_del(): got stuck, sysfs entry ref count non zero

sysfs_open_file(): user space process reads /sys/block/vda/serial
  sysfs_get_active() : got sysfs entry ref count
  dev_attr_show()
    virtblk_serial_show()
      blk_execute_rq() : got stuck, interrupt is disabled, request cannot be finished

This patch fixes it by calling del_gendisk() before we disable the guest's interrupt, so that the request sent in virtblk_serial_show() will be finished and del_gendisk() will succeed. This fixes another race in the hot-unplug process. It is safe to call del_gendisk(vblk->disk) before flush_work(&vblk->config_work), which might access vblk->disk, because vblk->disk is not freed until put_disk(vblk->disk).

Cc: virtualization@lists.linux-foundation.org Cc: kvm@vger.kernel.org Cc: stable@kernel.org Signed-off-by: Asias He <asias@redhat.com> Acked-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
- 22 May 2012, 2 commits
-
-
Committed by Asias He
Benchmarks show a small performance improvement on a Fusion-io device.

Before:
seq-read : io=1,024MB, bw=19,982KB/s, iops=39,964, runt= 52475msec
seq-write: io=1,024MB, bw=20,321KB/s, iops=40,641, runt= 51601msec
rnd-read : io=1,024MB, bw=15,404KB/s, iops=30,808, runt= 68070msec
rnd-write: io=1,024MB, bw=14,776KB/s, iops=29,552, runt= 70963msec

After:
seq-read : io=1,024MB, bw=20,343KB/s, iops=40,685, runt= 51546msec
seq-write: io=1,024MB, bw=20,803KB/s, iops=41,606, runt= 50404msec
rnd-read : io=1,024MB, bw=16,221KB/s, iops=32,442, runt= 64642msec
rnd-write: io=1,024MB, bw=15,199KB/s, iops=30,397, runt= 68991msec

Signed-off-by: Asias He <asias@redhat.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Committed by Asias He
If we reset the virtio-blk device before the requests already dispatched to the virtio-blk driver from the block layer are finished, we will get stuck in blk_cleanup_queue() and the remove will fail. blk_cleanup_queue() calls blk_drain_queue() to drain all requests queued before DEAD marking. However, it will never succeed if the device is already stopped. We'll have q->in_flight[] > 0, so the drain will not finish.

How to reproduce the race:
1. hot-plug a virtio-blk device
2. keep reading/writing the device in the guest
3. hot-unplug while the device is busy serving I/O

Test: ~1000 rounds of hot-plug/hot-unplug test passed with this patch.

Changes in v3: - Drop blk_abort_queue and blk_abort_request - Use __blk_end_request_all to complete requests dispatched to the driver
Changes in v2: - Drop req_in_flight - Use virtqueue_detach_unused_buf to get requests dispatched to the driver

Signed-off-by: Asias He <asias@redhat.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
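The v3 approach of completing whatever the device never consumed relies on virtqueue_detach_unused_buf(), which returns the still-unprocessed buffers one by one. A hedged sketch of that remove-path loop (vblk->pool is the driver's request mempool; the helper name is illustrative):

static void virtblk_abort_unused(struct virtio_blk *vblk)
{
        struct virtblk_req *vbr;
        unsigned long flags;

        spin_lock_irqsave(&vblk->lock, flags);
        while ((vbr = virtqueue_detach_unused_buf(vblk->vq)) != NULL) {
                __blk_end_request_all(vbr->req, -EIO);  /* fail it back to the block layer */
                mempool_free(vbr, vblk->pool);
        }
        spin_unlock_irqrestore(&vblk->lock, flags);
}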
-
- 12 April 2012, 1 commit
-
-
Committed by Ren Mingxin
The current virtio block naming algorithm only supports 18278 (26^3 + 26^2 + 26) disks. If there are more virtio blocks, there will be disks with the same name. Based on commit 3e1a7ff8, add a function "virtblk_name_format()" for virtio block to support naming a large number of disks. Notes: - Our naming scheme is ugly. We are stuck with it for virtio but don't use it for any new driver: new drivers should name their devices PREFIX%d where the sequence number can be allocated by ida - sd_format_disk_name has exactly the same logic. Moving it to a central place was deferred over worries that this will make people keep using the legacy naming in new drivers. We kept the code identical in case someone wants to deduplicate later. Signed-off-by: Ren Mingxin <renmx@cn.fujitsu.com> Acked-by: Asias He <asias@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
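The helper is a small base-26 conversion built from the least significant letter, mirroring sd_format_disk_name; a sketch of the algorithm described above (index 0 maps to "vda", 25 to "vdz", 26 to "vdaa"):

static int virtblk_name_format(char *prefix, int index, char *buf, int buflen)
{
        const int base = 'z' - 'a' + 1;         /* 26 letters */
        char *begin = buf + strlen(prefix);
        char *end = buf + buflen;
        char *p = end - 1;                      /* build digits right to left */

        *p = '\0';
        do {
                if (p == begin)
                        return -EINVAL;         /* buffer too small */
                *--p = 'a' + (index % base);
                index = (index / base) - 1;     /* the -1 makes "aa" follow "z" */
        } while (index >= 0);

        memmove(begin, p, end - p);             /* slide digits up against the prefix */
        memcpy(buf, prefix, strlen(prefix));
        return 0;
}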
-
- 29 March 2012, 1 commit
-
-
Committed by Vivek Goyal
If a virtio disk is open in the guest and a disk resize operation is done (virsh blockresize), the new size is not visible to tools like "fdisk -l". This seems to be happening because we update only part->nr_sects and not the bdev->bd_inode size. Call revalidate_disk(), which should take care of it. I tested growing the disk size of an already open disk, and it works for me. Signed-off-by: Vivek Goyal <vgoyal@redhat.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
- 15 January 2012, 1 commit
-
-
Committed by Paolo Bonzini
Introduce a wrapper around scsi_cmd_ioctl that takes a block device. The function will then be enhanced to detect partition block devices and, in that case, subject the ioctls to whitelisting. Cc: linux-scsi@vger.kernel.org Cc: Jens Axboe <axboe@kernel.dk> Cc: James Bottomley <JBottomley@parallels.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
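The wrapper itself is thin: resolve the block device to its gendisk and queue and forward to scsi_cmd_ioctl(); the partition whitelisting mentioned above is layered on afterwards. A hedged sketch of its initial shape:

int scsi_cmd_blk_ioctl(struct block_device *bd, fmode_t mode,
                       unsigned int cmd, void __user *arg)
{
        return scsi_cmd_ioctl(bd->bd_disk->queue, bd->bd_disk, mode, cmd, arg);
}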
-
- 12 January 2012, 4 commits
-
-
Committed by Amit Shah
Delete the vq and flush any pending requests from the block queue on the freeze callback to prepare for hibernation. Re-create the vq in the restore callback to resume normal function. Signed-off-by: Amit Shah <amit.shah@redhat.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
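The shape of the two hooks follows from that description: quiesce, stop the block queue and delete the vq on freeze; re-create the vq and restart the queue on restore. A rough sketch, assuming the request-based path and the shared init_vq() helper from the next entry in this log; not the verbatim upstream code:

static int virtblk_freeze(struct virtio_device *vdev)
{
        struct virtio_blk *vblk = vdev->priv;

        vdev->config->reset(vdev);              /* no more interrupts from the device */

        spin_lock_irq(vblk->disk->queue->queue_lock);
        blk_stop_queue(vblk->disk->queue);      /* stop dispatching new requests */
        spin_unlock_irq(vblk->disk->queue->queue_lock);
        blk_sync_queue(vblk->disk->queue);

        vdev->config->del_vqs(vdev);
        return 0;
}

static int virtblk_restore(struct virtio_device *vdev)
{
        struct virtio_blk *vblk = vdev->priv;
        int ret;

        ret = init_vq(vblk);                    /* re-create the virtqueue */
        if (ret)
                return ret;

        spin_lock_irq(vblk->disk->queue->queue_lock);
        blk_start_queue(vblk->disk->queue);
        spin_unlock_irq(vblk->disk->queue->queue_lock);
        return 0;
}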
-
Committed by Amit Shah
The probe and PM restore functions will share this code. Signed-off-by: Amit Shah <amit.shah@redhat.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Committed by Michael S. Tsirkin
Fix a theoretical race related to the config work handler: a config interrupt might happen after we flush the config work but before we reset the device. It will then cause the config work to run during or after reset. Two problems with this:
- if this runs after the device is gone we will get a use-after-free
- access of config while reset is in progress is racy (as the layout is changing).
As a solution:
1. flush after reset when we know there will be no more interrupts
2. add a flag to disable config access before reset
Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Committed by Rusty Russell
Remove wrapper functions. This makes the allocation type explicit in all callers; I used GFP_KERNEL where it seemed obvious, and left it at GFP_ATOMIC otherwise. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Reviewed-by: Christoph Hellwig <hch@lst.de>
-
- 02 November 2011, 1 commit
-
-
Committed by Michael S. Tsirkin
Based on a patch by Mark Wu <dwu@redhat.com>. Current index allocation in virtio-blk is based on a monotonically increasing variable "index". This means we'll run out of numbers after a while. It also could cause confusion about the disk name in the case of hot-plugging disks. Change virtio-blk to use ida to allocate the index instead. Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
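With ida the index handling becomes allocate-on-probe, free-on-release, so numbers are recycled across hot-plug cycles. A minimal sketch of the pattern (helper names are illustrative):

static DEFINE_IDA(vd_index_ida);

static int virtblk_alloc_index(void)
{
        /* returns 0, 1, 2, ... reusing any index freed by a removed disk */
        return ida_simple_get(&vd_index_ida, 0, 0, GFP_KERNEL);
}

static void virtblk_free_index(int index)
{
        ida_simple_remove(&vd_index_ida, index);
}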
-
- 01 November 2011, 1 commit
-
-
Committed by Paul Gortmaker
We want to remove the implicit presence of module.h everywhere, so fix up those relying on that implicit presence in advance. Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
-
- 31 October 2011, 1 commit
-
-
Committed by Michael S. Tsirkin
Based on a patch by Mark Wu <dwu@redhat.com>. Current index allocation in virtio-blk is based on a monotonically increasing variable "index". This means we'll run out of numbers after a while. It also could cause confusion about the disk name in the case of hot-plugging disks. Change virtio-blk to use ida to allocate the index instead. Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
- 30 May 2011, 2 commits
-
-
Committed by Liu Yuan
It is easier to figure out the context by reading SCSI_SENSE_BUFFERSIZE instead of plain '96'. Signed-off-by: Liu Yuan <tailai.ly@taobao.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Committed by Christoph Hellwig
Wire up the virtio_driver config_changed method to get notified about config changes raised by the host. For now we just re-read the device size to support online resizing of devices, but once we add more attributes that might be changeable they could be added as well. Note that the config_changed method is called from irq context, so we'll have to use the workqueue infrastructure to provide us a proper user context for our changes. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
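Because the hook fires in irq context, the handler only schedules a work item and the capacity re-read happens from the work function. A trimmed sketch of that split; virtblk_wq, config_work and the update helper are assumed names for illustration, not the exact upstream ones:

static void virtblk_config_changed(struct virtio_device *vdev)
{
        struct virtio_blk *vblk = vdev->priv;

        queue_work(virtblk_wq, &vblk->config_work);     /* defer to process context */
}

static void virtblk_config_changed_work(struct work_struct *work)
{
        struct virtio_blk *vblk =
                container_of(work, struct virtio_blk, config_work);

        virtblk_update_capacity(vblk);  /* illustrative: re-read size, resize the disk */
}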
-
- 21 October 2010, 1 commit
-
-
Committed by Christoph Hellwig
Remove the BKL usage added in "block: push down BKL into .locked_ioctl". Virtio-blk doesn't use the BKL for anything, and doesn't implement any ioctl command by itself, but only uses the generic scsi_cmd_ioctl which is fine without the BKL. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
- 10 October 2010, 1 commit
-
-
Committed by Mike Snitzer
Must drop reference taken by blk_make_request(). Signed-off-by: Mike Snitzer <snitzer@redhat.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Cc: stable@kernel.org # .35.x Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 10 September 2010, 3 commits
-
-
Committed by Tejun Heo
Remove now unused REQ_HARDBARRIER support. virtio_blk already supports REQ_FLUSH and the usefulness of REQ_FUA for virtio_blk is questionable at this point, so there's nothing else to do to support the new REQ_FLUSH/FUA interface. Signed-off-by: Tejun Heo <tj@kernel.org> Cc: Michael S. Tsirkin <mst@redhat.com> Cc: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
-
Committed by Tejun Heo
Barrier is deemed too heavy and will soon be replaced by FLUSH/FUA requests. Deprecate barrier. All REQ_HARDBARRIERs are failed with -EOPNOTSUPP and blk_queue_ordered() is replaced with simpler blk_queue_flush(). blk_queue_flush() takes combinations of REQ_FLUSH and FUA. If a device has write cache and can flush it, it should set REQ_FLUSH. If the device can handle FUA writes, it should also set REQ_FUA. All blk_queue_ordered() users are converted. * ORDERED_DRAIN is mapped to 0 which is the default value. * ORDERED_DRAIN_FLUSH is mapped to REQ_FLUSH. * ORDERED_DRAIN_FLUSH_FUA is mapped to REQ_FLUSH | REQ_FUA. Signed-off-by: NTejun Heo <tj@kernel.org> Acked-by: NBoaz Harrosh <bharrosh@panasas.com> Cc: Christoph Hellwig <hch@infradead.org> Cc: Nick Piggin <npiggin@kernel.dk> Cc: Michael S. Tsirkin <mst@redhat.com> Cc: Jeremy Fitzhardinge <jeremy@xensource.com> Cc: Chris Wright <chrisw@sous-sol.org> Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp> Cc: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com> Cc: David S. Miller <davem@davemloft.net> Cc: Alasdair G Kergon <agk@redhat.com> Cc: Pierre Ossman <drzeus@drzeus.cx> Cc: Stefan Weinhuber <wein@de.ibm.com> Signed-off-by: NJens Axboe <jaxboe@fusionio.com>
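For virtio-blk the conversion reduces to advertising a flushable write cache on the queue instead of selecting an ordered mode, roughly as below (q is the request queue set up at probe time):

if (virtio_has_feature(vdev, VIRTIO_BLK_F_FLUSH))
        blk_queue_flush(q, REQ_FLUSH);          /* device has a flushable write cache */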
-
Committed by Tejun Heo
Nobody is making meaningful use of ORDERED_BY_TAG now and queue draining for barrier requests will be removed soon, which will render the advantage of tag ordering moot. Kill ORDERED_BY_TAG. The following users are affected:
* brd: converted to ORDERED_DRAIN.
* virtio_blk: ORDERED_TAG path was already marked deprecated. Removed.
* xen-blkfront: ORDERED_TAG case dropped.
Signed-off-by: Tejun Heo <tj@kernel.org> Cc: Christoph Hellwig <hch@infradead.org> Cc: Nick Piggin <npiggin@kernel.dk> Cc: Michael S. Tsirkin <mst@redhat.com> Cc: Jeremy Fitzhardinge <jeremy@xensource.com> Cc: Chris Wright <chrisw@sous-sol.org> Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
-
- 08 August 2010, 5 commits
-
-
Committed by Arnd Bergmann
As a preparation for the removal of the big kernel lock in the block layer, this removes the BKL from the common ioctl handling code, moving it into every single driver still using it. Signed-off-by: Arnd Bergmann <arnd@arndb.de> Acked-by: Christoph Hellwig <hch@infradead.org> Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
-
Committed by FUJITA Tomonori
This removes q->prepare_flush_fn completely (changes the blk_queue_ordered API). Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
-
Committed by FUJITA Tomonori
Use the REQ_FLUSH flag instead. Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp> Cc: Rusty Russell <rusty@rustcorp.com.au> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
-
Committed by Jens Axboe
On compilation, gcc correctly detects that we do not handle all types:
In function ‘blk_done’:
warning: enumeration value ‘REQ_TYPE_FS’ not handled in switch
warning: enumeration value ‘REQ_TYPE_SENSE’ not handled in switch
warning: enumeration value ‘REQ_TYPE_PM_SUSPEND’ not handled in switch
warning: enumeration value ‘REQ_TYPE_PM_RESUME’ not handled in switch
warning: enumeration value ‘REQ_TYPE_PM_SHUTDOWN’ not handled in switch
warning: enumeration value ‘REQ_TYPE_LINUX_BLOCK’ not handled in switch
warning: enumeration value ‘REQ_TYPE_ATA_TASKFILE’ not handled in switch
warning: enumeration value ‘REQ_TYPE_ATA_PC’ not handled in switch
which is a bit pointless since this is at the end of the request processing. Add a default case that just breaks out. Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
-
Committed by Christoph Hellwig
Remove all the trivial wrappers for the cmd_type and cmd_flags fields in struct request. This allows much easier grepping for different request types instead of unwinding through macros. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
-
- 05 August 2010, 2 commits
-
-
Committed by Ryan Harper
With the availability of a sysfs device attribute for examining disk serial numbers, the ioctl is no longer needed. The user-space changes for this aren't upstream yet so we don't have any users to worry about. Signed-off-by: Ryan Harper <ryanh@us.ibm.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Committed by Ryan Harper
Create a new attribute for virtio-blk devices that will fetch the serial number of the block device. This attribute can be used by udev to create disk/by-id symlinks for devices that don't have a UUID (filesystem) associated with them. ATA_IDENTIFY strings are special in that they can be up to 20 chars long and aren't required to be nul-terminated. The buffer is also zero-padded, meaning that if the serial is 19 chars or less, we get a nul-terminated string. When copying this value into a string buffer, we must be careful to copy up to the nul (if it is present) and only 20 if it is longer, and not to attempt to nul terminate; this isn't needed. Changes since v1: - Added BUILD_BUG_ON() for PAGE_SIZE check - Removed min() since BUILD_BUG_ON() handles the check - Replaced serial_sysfs() by copying id directly to buffer Signed-off-by: Ryan Harper <ryanh@us.ibm.com> Signed-off-by: john cooper <john.cooper@redhat.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
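The attribute boils down to a show routine that issues the ID request and returns at most VIRTIO_BLK_ID_BYTES (20) bytes without forcing NUL termination, as described above. A hedged sketch; virtblk_get_id() is the assumed helper that sends VIRTIO_BLK_T_GET_ID to the device:

static ssize_t virtblk_serial_show(struct device *dev,
                                   struct device_attribute *attr, char *buf)
{
        struct gendisk *disk = dev_to_disk(dev);
        int err;

        /* sysfs hands us a PAGE_SIZE buffer */
        BUILD_BUG_ON(PAGE_SIZE < VIRTIO_BLK_ID_BYTES);

        buf[VIRTIO_BLK_ID_BYTES] = '\0';        /* terminate in case the id uses all 20 bytes */
        err = virtblk_get_id(disk, buf);
        if (!err)
                return strlen(buf);
        return err;
}
static DEVICE_ATTR(serial, S_IRUGO, virtblk_serial_show, NULL);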
-