- 21 April 2017, 16 commits
-
-
Committed by Jan Kara
Allocate struct backing_dev_info separately instead of embedding it inside the superblock. This unifies handling of bdi among users.

CC: Miklos Szeredi <miklos@szeredi.hu>
CC: linux-fsdevel@vger.kernel.org
Acked-by: Miklos Szeredi <mszeredi@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Jens Axboe <axboe@fb.com>
-
Committed by Jan Kara
Allocate struct backing_dev_info separately instead of embedding it inside the superblock. This unifies handling of bdi among users.

CC: Boaz Harrosh <ooo@electrozaur.com>
CC: Benny Halevy <bhalevy@primarydata.com>
Acked-by: Boaz Harrosh <ooo@electrozaur.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Jens Axboe <axboe@fb.com>
-
Committed by Jan Kara
Allocate struct backing_dev_info separately instead of embedding it inside the superblock. This unifies handling of bdi among users.

CC: Jan Harkes <jaharkes@cs.cmu.edu>
CC: coda@cs.cmu.edu
CC: codalist@coda.cs.cmu.edu
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Jens Axboe <axboe@fb.com>
-
Committed by Jan Kara
MTD already allocates backing_dev_info dynamically. Convert it to use the generic infrastructure for this, including proper refcounting. We drop mtd->backing_dev_info, as its only use was to pass the mtd_bdi pointer from one file into another; if we wanted to keep that in a clean way, we'd have to make mtd hold and drop a bdi reference as needed, which seems pointless for passing one global pointer...

CC: David Woodhouse <dwmw2@infradead.org>
CC: Brian Norris <computersforpeace@gmail.com>
CC: linux-mtd@lists.infradead.org
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Jens Axboe <axboe@fb.com>
-
Committed by Jan Kara
Allocate struct backing_dev_info separately instead of embedding it inside the superblock. This unifies handling of bdi among users.

CC: David Howells <dhowells@redhat.com>
CC: linux-afs@lists.infradead.org
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Jens Axboe <axboe@fb.com>
-
Committed by Jan Kara
Allocate struct backing_dev_info separately instead of embedding it inside the superblock. This unifies handling of bdi among users.

CC: Tyler Hicks <tyhicks@canonical.com>
CC: ecryptfs@vger.kernel.org
Acked-by: Tyler Hicks <tyhicks@canonical.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Jens Axboe <axboe@fb.com>
-
Committed by Jan Kara
Allocate struct backing_dev_info separately instead of embedding it inside the superblock. This unifies handling of bdi among users.

CC: Steve French <sfrench@samba.org>
CC: linux-cifs@vger.kernel.org
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Jens Axboe <axboe@fb.com>
-
Committed by Jan Kara
Allocate struct backing_dev_info separately instead of embedding it inside the client structure. This unifies handling of bdi among users.

CC: Ilya Dryomov <idryomov@gmail.com>
CC: "Yan, Zheng" <zyan@redhat.com>
CC: Sage Weil <sage@redhat.com>
CC: ceph-devel@vger.kernel.org
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Jens Axboe <axboe@fb.com>
-
Committed by Jan Kara
Allocate struct backing_dev_info separately instead of embedding it inside the superblock. This unifies handling of bdi among users.

CC: Chris Mason <clm@fb.com>
CC: Josef Bacik <jbacik@fb.com>
CC: David Sterba <dsterba@suse.com>
CC: linux-btrfs@vger.kernel.org
Reviewed-by: Liu Bo <bo.li.liu@oracle.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Jens Axboe <axboe@fb.com>
-
Committed by Jan Kara
Allocate struct backing_dev_info separately instead of embedding it inside the session structure. This unifies handling of bdi among users.

CC: Eric Van Hensbergen <ericvh@gmail.com>
CC: Ron Minnich <rminnich@sandia.gov>
CC: Latchesar Ionkov <lucho@ionkov.net>
CC: v9fs-developer@lists.sourceforge.net
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Jens Axboe <axboe@fb.com>
-
Committed by Jan Kara
Allocate struct backing_dev_info separately instead of embedding it inside the superblock. This unifies handling of bdi among users.

CC: Oleg Drokin <oleg.drokin@intel.com>
CC: Andreas Dilger <andreas.dilger@intel.com>
CC: James Simmons <jsimmons@infradead.org>
CC: lustre-devel@lists.lustre.org
Reviewed-by: Andreas Dilger <andreas.dilger@intel.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Jens Axboe <axboe@fb.com>
-
Committed by Jan Kara
So far we just relied on the block device to hold a bdi reference for us while the filesystem is mounted. While that works perfectly fine, it is a bit awkward that we have a pointer to a refcounted structure in the superblock without a proper reference. So make s_bdi hold a proper reference to the block device's BDI. No filesystem using mount_bdev() actually changes s_bdi, so this is safe and will make bdev filesystems work the same way as filesystems that need to set up their private bdi.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Jens Axboe <axboe@fb.com>
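A minimal sketch of the idea described above, assuming the mount_bdev() setup path and the bd_bdi pointer carried by the block device; treat the exact field names and placement as assumptions, not a quote from the patch:

    /* sketch: take a real reference when publishing the bdev's BDI in s_bdi */
    sb->s_bdi = bdi_get(sb->s_bdev->bd_bdi);

    /* ...and drop it again when the superblock is destroyed */
    if (sb->s_bdi != &noop_backing_dev_info)
        bdi_put(sb->s_bdi);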
-
Committed by Jan Kara
Provide helper functions for setting up dynamically allocated backing_dev_info structures for filesystems and for cleaning them up on superblock destruction.

CC: linux-mtd@lists.infradead.org
CC: linux-nfs@vger.kernel.org
CC: Petr Vandrovec <petr@vandrovec.name>
CC: linux-nilfs@vger.kernel.org
CC: cluster-devel@redhat.com
CC: osd-dev@open-osd.org
CC: codalist@coda.cs.cmu.edu
CC: linux-afs@lists.infradead.org
CC: ecryptfs@vger.kernel.org
CC: linux-cifs@vger.kernel.org
CC: ceph-devel@vger.kernel.org
CC: linux-btrfs@vger.kernel.org
CC: v9fs-developer@lists.sourceforge.net
CC: lustre-devel@lists.lustre.org
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Jens Axboe <axboe@fb.com>
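A hedged sketch of how a filesystem's fill_super might use this infrastructure. super_setup_bdi_name() and its signature are assumptions based on the description, and the examplefs names are hypothetical:

    static atomic_t examplefs_seq; /* hypothetical per-mount counter */

    static int examplefs_fill_super(struct super_block *sb, void *data, int silent)
    {
        int err;

        /* Allocate and register a private bdi for this superblock;
         * generic teardown code then releases it on sb destruction. */
        err = super_setup_bdi_name(sb, "examplefs-%d",
                                   atomic_inc_return(&examplefs_seq));
        if (err)
            return err;

        return 0;
    }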
-
Committed by Jan Kara
MTD will want to call bdi_alloc_node() and bdi_put() directly. Export these functions.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Jens Axboe <axboe@fb.com>
-
Committed by Jan Kara
Most users will want to unregister the bdi when dropping the last reference to it. Only a few users (like block devices) want to play more complex tricks with bdi registration and unregistration. So unregister the bdi when the last reference to it is dropped, and just make sure we don't unregister the bdi a second time if it is already unregistered.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Jens Axboe <axboe@fb.com>
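The shape of that change, as a sketch; the refcnt field, the WB_registered test, and the teardown details are assumptions, not verbatim from the patch:

    static void release_bdi(struct kref *ref)
    {
        struct backing_dev_info *bdi =
            container_of(ref, struct backing_dev_info, refcnt);

        /* unregister on the last put, unless someone already did */
        if (test_bit(WB_registered, &bdi->wb.state))
            bdi_unregister(bdi);
        kfree(bdi); /* remaining teardown elided in this sketch */
    }

    void bdi_put(struct backing_dev_info *bdi)
    {
        kref_put(&bdi->refcnt, release_bdi);
    }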
-
Committed by Jan Kara
Add a function that registers a bdi and takes a va_list instead of a variable number of arguments. Also add bdi_alloc() as a simple wrapper for NUMA-unaware users allocating a BDI.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Jens Axboe <axboe@fb.com>
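A sketch of the two additions; the exact signatures are assumptions based on the description (bdi_register() in this era also took a parent device argument, which the sketch carries through):

    struct backing_dev_info *bdi_alloc(gfp_t gfp_mask)
    {
        /* NUMA-unaware wrapper around the node-aware allocator */
        return bdi_alloc_node(gfp_mask, NUMA_NO_NODE);
    }

    int bdi_register(struct backing_dev_info *bdi, struct device *parent,
                     const char *fmt, ...)
    {
        va_list args;
        int ret;

        va_start(args, fmt);
        ret = bdi_register_va(bdi, parent, fmt, args);
        va_end(args);
        return ret;
    }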
-
- 20 April 2017, 13 commits
-
-
Committed by Jens Axboe
We trigger this warning:

    block/blk-throttle.c: In function ‘blk_throtl_bio’:
    block/blk-throttle.c:2042:6: warning: variable ‘ret’ set but not used [-Wunused-but-set-variable]
      int ret;
          ^~~

Since we only assign 'ret' when BLK_DEV_THROTTLING_LOW is off, we never check it.

Reported-by: Bart Van Assche <bart.vanassche@sandisk.com>
Reviewed-by: Bart Van Assche <bart.vanassche@sandisk.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
-
Committed by Jens Axboe
If we don't have CGROUPS enabled, the compile ends in the following misery:

    In file included from ../block/bfq-iosched.c:105:0:
    ../block/bfq-iosched.h:819:22: error: array type has incomplete element type
     extern struct cftype bfq_blkcg_legacy_files[];
                          ^
    ../block/bfq-iosched.h:820:22: error: array type has incomplete element type
     extern struct cftype bfq_blkg_files[];
                          ^

Move the declarations under the right ifdef.

Reported-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Jens Axboe <axboe@fb.com>
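The fix as described, sketched below; CONFIG_BFQ_GROUP_IOSCHED is assumed to be the guard in question:

    /* struct cftype is only a complete type when cgroup support is built in,
     * so keep the declarations under the same guard as their definitions */
    #ifdef CONFIG_BFQ_GROUP_IOSCHED
    extern struct cftype bfq_blkcg_legacy_files[];
    extern struct cftype bfq_blkg_files[];
    #endif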
-
Committed by Colin Ian King
The call to bfq_check_ioprio_change() dereferences bic; however, the null check on bic comes after this call. Move the null check on bic to before the call to avoid any potential null pointer dereference issues.

Detected by CoverityScan, CID#1430138 ("Dereference before null check")

Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
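The pattern, as a before/after sketch; the surrounding control flow here is hypothetical:

    /* before: bic is dereferenced by the call, then checked */
    bfq_check_ioprio_change(bic, bio);
    if (!bic)
        goto fail;

    /* after: check bic first, then dereference it */
    if (!bic)
        goto fail;
    bfq_check_ioprio_change(bic, bio);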
-
Committed by Rakesh Pandit
On an error path in the NVM_DEV_CREATE ioctl, blk_put_queue is called twice: once via blk_cleanup_queue and again via put_disk. The straightforward fix is to clear the queue pointer so that disk_release never ends up calling blk_put_queue again.

    [  391.808827] WARNING: CPU: 1 PID: 1250 at lib/refcount.c:128 refcount_sub_and_test+0x70/0x80
    [  391.808830] refcount_t: underflow; use-after-free.
    [  391.808832] Modules linked in: nf_conntrack_netbios_ns............
    [  391.809052] CPU: 1 PID: 1250 Comm: nvme Not tainted.........
    [  391.809057] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.7.5-0-ge51488c-20140602_164612-nilsson.home.kraxel.org 04/01/2014
    [  391.809060] Call Trace:
    [  391.809079]  dump_stack+0x63/0x86
    [  391.809094]  __warn+0xcb/0xf0
    [  391.809103]  warn_slowpath_fmt+0x5f/0x80
    [  391.809118]  refcount_sub_and_test+0x70/0x80
    [  391.809125]  refcount_dec_and_test+0x11/0x20
    [  391.809136]  kobject_put+0x1f/0x60
    [  391.809149]  blk_put_queue+0x15/0x20
    [  391.809159]  disk_release+0xae/0xf0
    [  391.809172]  device_release+0x32/0x90
    [  391.809184]  kobject_release+0x6a/0x170
    [  391.809196]  kobject_put+0x2f/0x60
    [  391.809206]  put_disk+0x17/0x20
    [  391.809219]  nvm_ioctl_dev_create.isra.16+0x897/0xa30
    [  391.809236]  nvm_ctl_ioctl+0x23c/0x4c0
    [  391.809248]  do_vfs_ioctl+0xa3/0x5f0
    [  391.809258]  SyS_ioctl+0x79/0x90
    [  391.809271]  entry_SYSCALL_64_fastpath+0x1a/0xa9
    [  391.809280] RIP: 0033:0x7f5d3ef363c7
    [  391.809286] RSP: 002b:00007ffc72ed8d78 EFLAGS: 00000206 ORIG_RAX: 0000000000000010
    [  391.809296] RAX: ffffffffffffffda RBX: 00007ffc72edb552 RCX: 00007f5d3ef363c7
    [  391.809301] RDX: 00007ffc72ed8d90 RSI: 0000000040804c22 RDI: 0000000000000003
    [  391.809306] RBP: 0000000000000001 R08: 0000000000000020 R09: 0000000000000001
    [  391.809311] R10: 000000000000053f R11: 0000000000000206 R12: 0000000000000000
    [  391.809316] R13: 0000000000000000 R14: 00007ffc72edb58d R15: 00007ffc72edb581

Signed-off-by: Rakesh Pandit <rakesh@tuxera.com>
Reviewed-by: Matias Bjørling <matias@cnexlabs.com>
Fixes: 7d1ef2f4 ("lightnvm: fix cleanup order of disk on init error")
Signed-off-by: Jens Axboe <axboe@fb.com>
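A sketch of the fix on the error path; tdisk and tqueue are hypothetical names for the created disk and queue:

    /* blk_cleanup_queue() already drops the queue reference, so clear the
     * gendisk's pointer; disk_release() then skips blk_put_queue() */
    blk_cleanup_queue(tqueue);
    tdisk->queue = NULL;
    put_disk(tdisk);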
-
Committed by Bart Van Assche
Since ioprio_best() translates IOPRIO_CLASS_NONE into IOPRIO_CLASS_BE, and since lower numerical priority values represent a higher priority, a simple numerical comparison is sufficient.

Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Reviewed-by: Adam Manzanares <adam.manzanares@wdc.com>
Tested-by: Adam Manzanares <adam.manzanares@wdc.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Matias Bjørling <m@bjorling.me>
Signed-off-by: Jens Axboe <axboe@fb.com>
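A sketch of the resulting helper, assuming the standard ioprio macros:

    #include <linux/ioprio.h>

    int ioprio_best(unsigned short aprio, unsigned short bprio)
    {
        /* IOPRIO_CLASS_NONE counts as the default best-effort priority */
        if (!ioprio_valid(aprio))
            aprio = IOPRIO_PRIO_VALUE(IOPRIO_CLASS_BE, IOPRIO_NORM);
        if (!ioprio_valid(bprio))
            bprio = IOPRIO_PRIO_VALUE(IOPRIO_CLASS_BE, IOPRIO_NORM);

        /* lower value means higher priority, so the minimum wins */
        return min(aprio, bprio);
    }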
-
Committed by Bart Van Assche
Since only a single caller remains, inline blk_rq_set_prio(). Initialize req->ioprio even if no I/O priority has been set in the bio or in the I/O context.

Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Reviewed-by: Adam Manzanares <adam.manzanares@wdc.com>
Tested-by: Adam Manzanares <adam.manzanares@wdc.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Matias Bjørling <m@bjorling.me>
Signed-off-by: Jens Axboe <axboe@fb.com>
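Roughly what the inlined initialization looks like, as a sketch; the fallback chain is taken from the description, while the exact call site is an assumption:

    /* priority from the bio if set, else from the io context, else a default */
    if (ioprio_valid(bio_prio(bio)))
        req->ioprio = bio_prio(bio);
    else if (ioc)
        req->ioprio = ioc->ioprio;
    else
        req->ioprio = IOPRIO_PRIO_VALUE(IOPRIO_CLASS_NONE, 0);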
-
Committed by Bart Van Assche
This patch changes the behavior of the lightnvm driver as follows:

* REQ_FAILFAST_MASK is set for read-ahead requests.
* If no I/O priority has been set in the bio, the I/O priority is copied from the I/O context.
* The rq_disk member is initialized if bio->bi_bdev != NULL.
* The bio sector offset is copied into req->__sector instead of retaining the value -1 set by blk_mq_alloc_request().
* req->errors is initialized to zero.

Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Matias Bjørling <m@bjorling.me>
Cc: Adam Manzanares <adam.manzanares@wdc.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
-
Committed by Bart Van Assche
This patch changes the behavior of the null_blk driver for the LightNVM mode as follows:

* REQ_FAILFAST_MASK is set for read-ahead requests.
* If no I/O priority has been set in the bio, the I/O priority is copied from the I/O context.
* The rq_disk member is initialized if bio->bi_bdev != NULL.
* req->errors is initialized to zero.

Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Matias Bjørling <m@bjorling.me>
Cc: Adam Manzanares <adam.manzanares@wdc.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
-
Committed by Bart Van Assche
Export this function so that it becomes available to block drivers.

Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Matias Bjørling <m@bjorling.me>
Cc: Adam Manzanares <adam.manzanares@wdc.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
-
Committed by Arnd Bergmann
The driver uses both u64 and sector_t to refer to offsets, and assigns between the two. This causes one harmless warning when sector_t is 32-bit:

    drivers/lightnvm/pblk-rb.c: In function 'pblk_rb_write_entry_gc':
    include/linux/lightnvm.h:215:20: error: large integer implicitly truncated to unsigned type [-Werror=overflow]
    drivers/lightnvm/pblk-rb.c:324:22: note: in expansion of macro 'ADDR_EMPTY'

As the driver is already doing this inconsistently, changing the type won't make it worse and is an easy way to avoid the warning.

Fixes: a4bd217b ("lightnvm: physical block device (pblk) target")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
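Why the warning fires, as a standalone illustration; the ADDR_EMPTY definition below is an assumption made for the example:

    #include <linux/types.h>

    #define ADDR_EMPTY (~0ULL) /* assumed: a 64-bit all-ones sentinel */

    static void example(void)
    {
        u64 w_addr = ADDR_EMPTY;      /* fine: 64 bits on both sides */
        sector_t s_addr = ADDR_EMPTY; /* truncates, and warns, on configs
                                       * where sector_t is only 32-bit */
        (void)w_addr;
        (void)s_addr;
    }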
-
Committed by Christoph Hellwig
blk_insert_flush should be using __blk_end_request to start with.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
-
Committed by Christoph Hellwig
This function is not used anywhere in the kernel.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
-
Committed by Christoph Hellwig
Both functions are entirely unused.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
-
- 19 April 2017, 11 commits
-
-
Committed by Christoph Hellwig
This was just a proof-of-concept user for the SCSI OSD library, and never had any real users.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Boaz Harrosh <ooo@electrozaur.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
-
Committed by Jan Kara
When CFQ is used as an elevator, it disables writeback throttling because the two don't play well together. Later, when a different elevator is chosen for the device, writeback throttling doesn't get enabled again as it should. Make sure CFQ enables writeback throttling (if it should be enabled by default) when we switch from it to another IO scheduler.

Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Jens Axboe <axboe@fb.com>
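One plausible shape for the fix, sketched below. wbt_enable_default() is the existing helper that re-enables throttling where the defaults call for it, but its exact call site in this patch is an assumption:

    static void cfq_exit_queue(struct elevator_queue *e)
    {
        struct cfq_data *cfqd = e->elevator_data;
        struct request_queue *q = cfqd->queue;

        /* ... existing CFQ teardown ... */

        /* re-enable writeback throttling if defaults say it should be on */
        wbt_enable_default(q);

        kfree(cfqd);
    }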
-
Committed by Paolo Valente
The BFQ I/O scheduler features an optimal fair-queuing (proportional-share) scheduling algorithm, enriched with several mechanisms to boost throughput and reduce latency for interactive and real-time applications. This makes BFQ a large and complex piece of code. This commit addresses this issue by splitting BFQ into three main, independent components, and by moving each component into a separate source file:

1. Main algorithm: handles the interaction with the kernel and decides which requests to dispatch; it uses the following two further components to achieve its goals.
2. Scheduling engine (Hierarchical B-WF2Q+ scheduling algorithm): computes the schedule, using weights and budgets provided by the above component.
3. cgroups support: handles group operations (creation, destruction, move, ...).

Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
Signed-off-by: Jens Axboe <axboe@fb.com>
-
Committed by Paolo Valente
When a bfq queue is set in service and when it is merged, a reference to the I/O context associated with the queue is taken. This reference is then released when the queue is deselected from service or split. More precisely, the release of the reference is postponed to when the scheduler lock is released, to avoid nesting between the scheduler and the I/O-context lock. In fact, such nesting would lead to deadlocks, because of other code paths that take the same locks in the opposite order. This postponing of I/O-context releases does complicate the code.

This commit addresses these issues by modifying the involved operations so that they no longer need to take the above I/O-context references. It then also removes all gets and releases of these references.

Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
Signed-off-by: Jens Axboe <axboe@fb.com>
-
Committed by Arianna Avanzini
Many popular I/O-intensive services or applications spawn or reactivate many parallel threads/processes during short time intervals. Examples are systemd during boot, or git grep. These services or applications benefit mostly from a high throughput: the quicker the I/O generated by their processes is cumulatively served, the sooner the target job of these services or applications gets completed. As a consequence, it is almost always counterproductive to weight-raise any of the queues associated with the processes of these services or applications: in most cases it would just lower the throughput, mainly because weight-raising also implies device idling.

To address this issue, an I/O scheduler needs, first, to detect which queues are associated with these services or applications. In this respect, we have that, from the I/O-scheduler standpoint, these services or applications cause bursts of activations, i.e., activations of different queues occurring shortly after each other. However, a shorter burst of activations may be caused also by the start of an application that does not consist in a lot of parallel I/O-bound threads (see the comments on the function bfq_handle_burst for details).

In view of these facts, this commit introduces:
1) a heuristic to detect (only) bursts of queue activations caused by services or applications consisting in many parallel I/O-bound threads;
2) the prevention of device idling and weight-raising for the queues belonging to these bursts.

Signed-off-by: Arianna Avanzini <avanzini.arianna@gmail.com>
Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
Signed-off-by: Jens Axboe <axboe@fb.com>
-
Committed by Paolo Valente
This patch is basically the counterpart, for NCQ-capable rotational devices, of the previous patch. Exactly as the previous patch does on flash-based devices and for any workload, this patch disables device idling on rotational devices, but only for random I/O. In fact, only for these queues does disabling idling boost the throughput on NCQ-capable rotational devices. To avoid breaking service guarantees, idling is disabled for NCQ-enabled rotational devices only when the same symmetry conditions considered in the previous patches hold.

Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
Signed-off-by: Arianna Avanzini <avanzini.arianna@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
-
Committed by Paolo Valente
This patch boosts the throughput on NCQ-capable flash-based devices, while still preserving latency guarantees for interactive and soft real-time applications. The throughput is boosted by just not idling the device when the in-service queue remains empty, even if the queue is sync and has a non-null idle window. This helps to keep the drive's internal queue full, which is necessary to achieve maximum performance. This solution to boost the throughput is a port of commits a68bbddb and f7d7b7a7 for CFQ.

As already highlighted in a previous patch, allowing the device to prefetch and internally reorder requests trivially causes loss of control over the request service order, and hence over service guarantees. Fortunately, as discussed in detail in the comments on the function bfq_bfqq_may_idle(), if every process has to receive the same fraction of the throughput, then the service order enforced by the internal scheduler of a flash-based device is relatively close to that enforced by BFQ. In particular, it is close enough to let service guarantees be substantially preserved.

Things change in an asymmetric scenario, i.e., if not every process has to receive the same fraction of the throughput. In this case, to guarantee the desired throughput distribution, the device must be prevented from prefetching requests. This is exactly what this patch does in asymmetric scenarios.

Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
Signed-off-by: Arianna Avanzini <avanzini.arianna@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
-
Committed by Arianna Avanzini
A seeky queue (i.e., a queue containing random requests) is assigned a very small device-idling slice, for throughput reasons. Unfortunately, given the process associated with a seeky queue, this behavior causes the following problem: if the process, say P, performs sync I/O and has a higher weight than some other processes doing I/O and associated with non-seeky queues, then BFQ may fail to guarantee to P its reserved share of the throughput. The reason is that idling is key to providing service guarantees to processes doing sync I/O [1].

This commit addresses this issue by allowing the device-idling slice to be reduced for a seeky queue only if the scenario happens to be symmetric, i.e., if all the queues are to receive the same share of the throughput.

[1] P. Valente, A. Avanzini, "Evolution of the BFQ Storage I/O Scheduler", Proceedings of the First Workshop on Mobile System Technologies (MST-2015), May 2015.
http://algogroup.unimore.it/people/paolo/disk_sched/mst-2015.pdf

Signed-off-by: Arianna Avanzini <avanzini.arianna@gmail.com>
Signed-off-by: Riccardo Pizzetti <riccardo.pizzetti@gmail.com>
Signed-off-by: Samuele Zecchini <samuele.zecchini92@gmail.com>
Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
Signed-off-by: Jens Axboe <axboe@fb.com>
-
Committed by Arianna Avanzini
A set of processes may happen to perform interleaved reads, i.e., read requests whose union would give rise to a sequential read pattern. There are two typical cases: first, processes reading fixed-size chunks of data at a fixed distance from each other; second, processes reading variable-size chunks at variable distances. The latter case occurs, for example, with QEMU, which splits the I/O generated by a guest into multiple chunks and lets these chunks be served by a pool of I/O threads, iteratively assigning the next chunk of I/O to the first available thread.

CFQ denotes as 'cooperating' a set of processes that are doing interleaved I/O, and when it detects cooperating processes, it merges their queues to obtain a sequential I/O pattern from the union of their I/O requests, and hence boost the throughput. Unfortunately, in the following frequent case, the mechanism implemented in CFQ for detecting cooperating processes and merging their queues is not responsive enough to handle also the fluctuating I/O pattern of the second type of processes. Suppose that one process of the second type issues a request close to the next request to serve of another process of the same type. At that time the two processes would be considered as cooperating. But, if the request issued by the first process is to be merged with some other already-queued request, then, from the moment at which this request arrives, to the moment when CFQ checks whether the two processes are cooperating, the two processes are likely to be already doing I/O in distant zones of the disk surface or device memory. However, CFQ uses preemption to get a sequential read pattern out of the read requests performed by the second type of processes too. As a consequence, CFQ uses two different mechanisms to achieve the same goal: boosting the throughput with interleaved I/O.

This patch introduces Early Queue Merge (EQM), a unified mechanism to get a sequential read pattern with both types of processes. The main idea is to immediately check whether a newly-arrived request lets some pair of processes become cooperating, both in the case of actual request insertion and, to be responsive with the second type of processes, in the case of request merge. Both types of processes are then handled by just merging their queues.

Signed-off-by: Arianna Avanzini <avanzini.arianna@gmail.com>
Signed-off-by: Mauro Andreolini <mauro.andreolini@unimore.it>
Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
Signed-off-by: Jens Axboe <axboe@fb.com>
-
Committed by Paolo Valente
This patch introduces a heuristic that reduces latency when the I/O-request pool is saturated. This goal is achieved by disabling device idling, for non-weight-raised queues, when there are weight-raised queues with pending or in-flight requests. In fact, as explained in more detail in the comment on the function bfq_bfqq_may_idle(), this reduces the rate at which processes associated with non-weight-raised queues grab requests from the pool, thereby increasing the probability that processes associated with weight-raised queues get a request immediately (or at least soon) when they need one. Along the same line, if there are weight-raised queues, this patch also halves the service rate of async (write) requests for non-weight-raised queues.

Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
Signed-off-by: Arianna Avanzini <avanzini.arianna@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
-
Committed by Paolo Valente
I/O schedulers typically allow NCQ-capable drives to prefetch I/O requests, as NCQ boosts the throughput exactly by prefetching and internally reordering requests. Unfortunately, as discussed in detail and shown experimentally in [1], this may cause fairness and latency guarantees to be violated. The main problem is that the internal scheduler of an NCQ-capable drive may postpone the service of some unlucky (prefetched) requests as long as it deems serving other requests more appropriate to boost the throughput.

This patch addresses this issue by not disabling device idling for weight-raised queues, even if the device supports NCQ. This allows BFQ to start serving a new queue, and therefore allows the drive to prefetch new requests, only after the idling timeout expires. At that time, all the outstanding requests of the expired queue have been most certainly served.

[1] P. Valente and M. Andreolini, "Improving Application Responsiveness with the BFQ Disk I/O Scheduler", Proceedings of the 5th Annual International Systems and Storage Conference (SYSTOR '12), June 2012. Slightly extended version:
http://algogroup.unimore.it/people/paolo/disk_sched/bfq-v1-suite-results.pdf

Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
Signed-off-by: Arianna Avanzini <avanzini.arianna@gmail.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
-