- 08 Nov 2010, 2 commits
-
By Shaohua Li
A queue is idle at cfq_dispatch_requests() time, but it becomes noidle later. Unless another task explicitly does an unplug, or all requests are drained, we will not deliver requests to the disk even if cfq_arm_slice_timer doesn't make the queue idle. For example, cfq_should_idle() returns true because service_tree->count == 1, and then other queues are added. Note, I didn't see obvious performance impacts so far with the patch, but just thought this could be a problem.
Signed-off-by: Shaohua Li <shaohua.li@intel.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
-
By Shaohua Li
Some functions should return boolean.
Signed-off-by: Shaohua Li <shaohua.li@intel.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
-
- 15 Oct 2010, 1 commit
-
By FUJITA Tomonori
bsg incorrectly returns sg's masked_status value for device_status.
[jejb: fix up expression logic]
Reported-by: Douglas Gilbert <dgilbert@interlog.com>
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: Stable Tree <stable@kernel.org>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
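A minimal sketch of the kind of change involved, assuming the fix sits in bsg.c's request-completion path (exact fields and masks may differ in the final patch): device_status should be the raw SCSI status byte, not the sg-style masked value.

    /* hedged sketch: report the raw status byte, not masked_status;
     * rq->errors holds the SCSI result word here */
    hdr->device_status = rq->errors & 0xff;    /* was: status_byte(rq->errors) */
    hdr->transport_status = host_byte(rq->errors);
    hdr->driver_status = driver_byte(rq->errors);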
-
- 07 Oct 2010, 1 commit
-
By Jens Axboe
2.6.36 introduces an API for drivers to switch the IO scheduler instead of manually calling the elevator exit and init functions. This API was added since q->elevator must be cleared in between those two calls. And since we already had this functionality for the sysfs interface that switches schedulers online, it was prudent to reuse it internally too. But this API needs the queue to be in a fully initialized state before it is called, or it will attempt to unregister elevator kobjects before they have been added. This results in an oops like this:
BUG: unable to handle kernel NULL pointer dereference at 0000000000000051
IP: [<ffffffff8116f15e>] sysfs_create_dir+0x2e/0xc0
PGD 47ddfc067 PUD 47c6a1067 PMD 0
Oops: 0000 [#1] PREEMPT SMP
last sysfs file: /sys/devices/pci0000:00/0000:00:02.0/0000:04:00.1/irq
CPU 2
Modules linked in: t(+) loop hid_apple usbhid ahci ehci_hcd uhci_hcd libahci usbcore nls_base igb
Pid: 7319, comm: modprobe Not tainted 2.6.36-rc6+ #132 QSSC-S4R/QSSC-S4R
RIP: 0010:[<ffffffff8116f15e>] [<ffffffff8116f15e>] sysfs_create_dir+0x2e/0xc0
RSP: 0018:ffff88027da25d08 EFLAGS: 00010246
RAX: ffff88047c68c528 RBX: 00000000fffffffe RCX: 0000000000000000
RDX: 000000000000002f RSI: 000000000000002f RDI: ffff88047e196c88
RBP: ffff88027da25d38 R08: 0000000000000000 R09: d84156c5635688c0
R10: d84156c5635688c0 R11: 0000000000000000 R12: ffff88047e196c88
R13: 0000000000000000 R14: 0000000000000000 R15: ffff88047c68c528
FS: 00007fcb0b26f6e0(0000) GS:ffff880287400000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: 0000000000000051 CR3: 000000047e76e000 CR4: 00000000000006e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process modprobe (pid: 7319, threadinfo ffff88027da24000, task ffff88027d377090)
Stack:
 ffff88027da25d58 ffff88047c68c528 00000000fffffffe ffff88047e196c88
<0> ffff88047c68c528 ffff88047e05bd90 ffff88027da25d78 ffffffff8123fb77
<0> ffff88047e05bd90 0000000000000000 ffff88047e196c88 ffff88047c68c528
Call Trace:
 [<ffffffff8123fb77>] kobject_add_internal+0xe7/0x1f0
 [<ffffffff8123fd98>] kobject_add_varg+0x38/0x60
 [<ffffffff8123feb9>] kobject_add+0x69/0x90
 [<ffffffff8116efe0>] ? sysfs_remove_dir+0x20/0xa0
 [<ffffffff8103d48d>] ? sub_preempt_count+0x9d/0xe0
 [<ffffffff8143de20>] ? _raw_spin_unlock+0x30/0x50
 [<ffffffff8116efe0>] ? sysfs_remove_dir+0x20/0xa0
 [<ffffffff8116eff4>] ? sysfs_remove_dir+0x34/0xa0
 [<ffffffff81224204>] elv_register_queue+0x34/0xa0
 [<ffffffff81224aad>] elevator_change+0xfd/0x250
 [<ffffffffa007e000>] ? t_init+0x0/0x361 [t]
 [<ffffffffa007e000>] ? t_init+0x0/0x361 [t]
 [<ffffffffa007e0a8>] t_init+0xa8/0x361 [t]
 [<ffffffff810001de>] do_one_initcall+0x3e/0x170
 [<ffffffff8108c3fd>] sys_init_module+0xbd/0x220
 [<ffffffff81002f2b>] system_call_fastpath+0x16/0x1b
Code: e5 41 56 41 55 41 54 49 89 fc 53 48 83 ec 10 48 85 ff 74 52 48 8b 47 18 49 c7 c5 00 46 61 81 48 85 c0 74 04 4c 8b 68 30 45 31 f6 <41> 80 7d 51 00 74 0e 49 8b 44 24 28 4c 89 e7 ff 50 20 49 89 c6
RIP [<ffffffff8116f15e>] sysfs_create_dir+0x2e/0xc0
 RSP <ffff88027da25d08>
CR2: 0000000000000051
---[ end trace a6541d3bf07945df ]---
Fix this by adding a registered bit to the elevator queue, which is set when the sysfs kobjects have been registered.
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
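A minimal sketch of the registered-bit idea, assuming simplified versions of elv_register_queue() and elevator_switch(); the real functions also handle per-elevator sysfs attributes and error unwinding:

    /* elevator.c, simplified: mark the elevator once its kobjects exist */
    int elv_register_queue(struct request_queue *q)
    {
        struct elevator_queue *e = q->elevator;
        int error;

        error = kobject_add(&e->kobj, &q->kobj, "%s", "iosched");
        if (!error) {
            kobject_uevent(&e->kobj, KOBJ_ADD);
            e->registered = 1;          /* sysfs objects now exist */
        }
        return error;
    }

    /* in elevator_switch(), only unregister what was actually registered: */
    if (q->elevator->registered)
        elv_unregister_queue(q);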
-
- 25 Sep 2010, 1 commit
-
By Adrian Hunter
Add logic to prevent two I/O requests being merged if only one of them is a discard. Ditto for secure discard. Without this fix, it is possible for write requests to transform into discard requests. For example:
Submit bio 1 to discard 8 sectors from sector n
Submit bio 2 to write 8 sectors from sector n + 16
Submit bio 3 to write 8 sectors from sector n + 8
Bio 1 becomes request 1. Bio 2 becomes request 2. Bio 3 is merged with request 2, and then request 2 is subsequently merged with request 1, resulting in just one I/O request which discards all 24 sectors.
Signed-off-by: Adrian Hunter <adrian.hunter@nokia.com>
(Moved the checks above the position checks /Jens)
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
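A hedged sketch of the merge guard, assuming it sits in the elevator's merge-eligibility check; flag names follow the unified REQ_* scheme from this series:

    /* sketch: never merge a discard with a non-discard, and never
     * merge a secure discard with a plain one */
    if ((bio->bi_rw & REQ_DISCARD) != (rq->bio->bi_rw & REQ_DISCARD))
        return 0;
    if ((bio->bi_rw & REQ_SECURE) != (rq->bio->bi_rw & REQ_SECURE))
        return 0;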
-
- 21 Sep 2010, 2 commits
-
By Vivek Goyal
Mike reported a kernel crash when a usb key hotplug is performed while all kernel threads are running in one of the child cgroups of the blkio controller rather than in the root cgroup.
BUG: unable to handle kernel NULL pointer dereference at 0000002c
IP: [<c11c7b08>] cfq_get_queue+0x232/0x412
*pde = 00000000
Oops: 0000 [#1] PREEMPT
last sysfs file: /sys/devices/pci0000:00/0000:00:1d.7/usb2/2-1/2-1:1.0/host3/scsi_host/host3/uevent
[..]
Pid: 30039, comm: scsi_scan_3 Not tainted 2.6.35.2-fg.roam #1 Volvi2 /Aspire 4315
EIP: 0060:[<c11c7b08>] EFLAGS: 00010086 CPU: 0
EIP is at cfq_get_queue+0x232/0x412
EAX: f705f9c0 EBX: e977abac ECX: 00000000 EDX: 00000000
ESI: f00da400 EDI: f00da4ec EBP: e977a800 ESP: dff8fd00
DS: 007b ES: 007b FS: 0000 GS: 0000 SS: 0068
Process scsi_scan_3 (pid: 30039, ti=dff8e000 task=f6b6c9a0 task.ti=dff8e000)
Stack:
 00000000 00000000 00000001 01ff0000 f00da508 00000000 f00da524 f00da540
<0> e7994940 dd631750 f705f9c0 e977a820 e977ac44 f00da4d0 00000001 f6b6c9a0
<0> 00000010 00008010 0000000b 00000000 00000001 e977a800 dd76fac0 00000246
Call Trace:
 [<c11c7f10>] ? cfq_set_request+0x228/0x34c
 [<c11c7ce8>] ? cfq_set_request+0x0/0x34c
 [<c11bb3b9>] ? elv_set_request+0xf/0x1c
 [<c11bdd51>] ? get_request+0x1ad/0x22f
 [<c11bddf2>] ? get_request_wait+0x1f/0x11a
 [<c11d013b>] ? kvasprintf+0x33/0x3b
 [<c127b537>] ? scsi_execute+0x1d/0x103
 [<c127b675>] ? scsi_execute_req+0x58/0x83
 [<c127c391>] ? scsi_probe_and_add_lun+0x188/0x7c2
 [<c12718c6>] ? attribute_container_add_device+0x15/0xfa
 [<c11c95d1>] ? kobject_get+0xf/0x13
 [<c126d1db>] ? get_device+0x10/0x14
 [<c127be93>] ? scsi_alloc_target+0x217/0x24d
 [<c127cbd8>] ? __scsi_scan_target+0x95/0x480
 [<c10204eb>] ? dequeue_entity+0x14/0x1fe
 [<c1020491>] ? update_curr+0x165/0x1ab
 [<c1020491>] ? update_curr+0x165/0x1ab
 [<c127d00d>] ? scsi_scan_channel+0x4a/0x76
 [<c127d0b0>] ? scsi_scan_host_selected+0x77/0xad
 [<c127d13c>] ? do_scan_async+0x0/0x11a
 [<c127d137>] ? do_scsi_scan_host+0x51/0x56
 [<c127d13c>] ? do_scan_async+0x0/0x11a
 [<c127d14a>] ? do_scan_async+0xe/0x11a
 [<c127d13c>] ? do_scan_async+0x0/0x11a
 [<c10354c5>] ? kthread+0x5e/0x63
 [<c1035467>] ? kthread+0x0/0x63
 [<c1002af6>] ? kernel_thread_helper+0x6/0x10
Code: 44 24 1c 54 83 44 24 18 54 83 fa 03 75 94 8b 06 c7 86 64 02 00 00 01 00 00 00 83 e0 03 09 f0 89 06 8b 44 24 28 8b 90 58 01 00 00 <8b> 42 2c 85 c0 75 03 8b 42 08 8d 54 24 48 52 8d 4c 24 50 51 68
EIP: [<c11c7b08>] cfq_get_queue+0x232/0x412 SS:ESP 0068:dff8fd00
CR2: 000000000000002c
---[ end trace 9a88306573f69b12 ]---
The problem here is that we don't have bdi->dev information available when the thread does some IO. Hence when dev_name() tries to access bdi->dev, it crashes. This problem does not happen if kernel threads are in the root group, as the root group is statically allocated at device initialization time and we don't hit this piece of code.
Fix it by delaying the filling in of the major and minor numbers of the device in blk_group. Initially a blk_group is created with 0 as the device information, and this information is filled in later once some more IO comes in from the same group.
Reported-by: Mike Kazantsev <mk.fraggod@gmail.com>
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
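A sketch of the deferred fill-in, assuming it happens where cfq looks up the blkio group for incoming IO; bdi->dev may legitimately still be NULL here, in which case the group keeps a 0 device number until later IO arrives:

    /* cfq-iosched.c sketch: derive major:minor only once bdi->dev exists */
    unsigned int major, minor;

    if (bdi->dev) {
        sscanf(dev_name(bdi->dev), "%u:%u", &major, &minor);
        cfq_blkiocg_add_blkio_group(blkcg, &cfqg->blkg,
                                    (void *)cfqd, MKDEV(major, minor));
    } else
        cfq_blkiocg_add_blkio_group(blkcg, &cfqg->blkg,
                                    (void *)cfqd, 0);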
-
By Benny Halevy
This bug was introduced in 7b6d91da "block: unify flags for struct bio and struct request".
Cc: Boaz Harrosh <bharrosh@panasas.com>
Signed-off-by: Benny Halevy <bhalevy@panasas.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
-
- 10 Sep 2010, 1 commit
-
By Brian King
While testing CPU DLPAR, the following problem was discovered. We were DLPAR removing the first CPU, which in this case was logical CPUs 0-3. CPUs 0-2 were already marked offline and we were in the process of offlining CPU 3. After marking the CPU inactive and offline in cpu_disable, but before the cpu was completely idle (cpu_die), we ended up in __make_request on CPU 3. There we looked at the topology map to see which CPU to complete the I/O on and found no CPUs in the cpu_sibling_map. This resulted in the block layer setting the completion cpu to NR_CPUS, which then caused an oops when we tried to complete the I/O.
Fix this by sanity checking the value we return from blk_cpu_to_group to be a valid cpu value.
Signed-off-by: Brian King <brking@linux.vnet.ibm.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
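A sketch of the range check, close to what blk_cpu_to_group() would look like with the fix applied (fall back to the passed-in cpu when the sibling map yields nothing valid):

    static inline int blk_cpu_to_group(int cpu)
    {
        int group = NR_CPUS;
    #ifdef CONFIG_SCHED_MC
        group = cpumask_first(cpu_coregroup_mask(cpu));
    #elif defined(CONFIG_SCHED_SMT)
        group = cpumask_first(topology_thread_cpumask(cpu));
    #else
        return cpu;
    #endif
        /* the map can be empty while a CPU is being offlined */
        if (likely(group < NR_CPUS))
            return group;
        return cpu;
    }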
-
- 23 Aug 2010, 8 commits
-
By Jens Axboe
Currently drivers must do an elevator_exit() + elevator_init() to switch IO schedulers. There are a few problems with this:
- Since commit 1abec4fd, elevator_init() requires a zeroed out q->elevator pointer. The two existing in-kernel users don't do that.
- It will only work at initialization time, since using the above two-staged construct does not properly quiesce the queue.
So add elevator_change(), which takes care of this, and convert the elv_iosched_store() sysfs interface to use this helper as well.
Reported-by: Peter Oberparleiter <oberpar@linux.vnet.ibm.com>
Reported-by: Kevin Vigor <kevin@vigor.nu>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
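Roughly how an in-kernel user would call the new helper; the driver name and scheduler choice below are illustrative:

    /* in a driver's init path, with a fully set-up queue */
    if (elevator_change(q, "deadline"))
        printk(KERN_WARNING "myblk: failed to switch elevator\n");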
-
By Jiri Slaby
The bi_rw flag tests no longer return bool after commit 74450be1, but the results of such tests are still stored in bools. The results don't fit in there for some compilers (gcc 4.5 here), so either use the !! idiom to get real bools, or use ulong where the result is assigned somewhere.
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Cc: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jeff Moyer <jmoyer@redhat.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
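A small illustration of the two options; the struct is invented for the example:

    struct bio_hint {                       /* illustrative */
        unsigned int is_sync:1;             /* too narrow for a raw flag test */
    } h;

    h.is_sync = bio->bi_rw & REQ_SYNC;      /* bad: high flag bit truncated to 0 */
    h.is_sync = !!(bio->bi_rw & REQ_SYNC);  /* good: collapse to 0/1 first */

    unsigned long sync = bio->bi_rw & REQ_SYNC;  /* alternative: keep full width */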
-
By Xiaotian Feng
The kernel needs to kobject_put() dev->kobj if elv_register_queue() fails.
Signed-off-by: Xiaotian Feng <dfeng@redhat.com>
Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
Cc: Stephen Hemminger <shemminger@vyatta.com>
Cc: Nikanth Karthikesan <knikanth@suse.de>
Cc: David Teigland <teigland@redhat.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
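A sketch of the error path in blk_register_queue(), assuming the surrounding unwind code already exists; the added line is the kobject_put():

    ret = elv_register_queue(q);
    if (ret) {
        kobject_uevent(&q->kobj, KOBJ_REMOVE);
        kobject_del(&q->kobj);
        blk_trace_remove_sysfs(dev);
        kobject_put(&dev->kobj);    /* drop the reference taken earlier */
        return ret;
    }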
-
By Vivek Goyal
Divyesh had gotten rid of this code in the past. I want to reintroduce it, as it helps me a lot during debugging.
Reviewed-by: Jeff Moyer <jmoyer@redhat.com>
Reviewed-by: Divyesh Shah <dpshah@google.com>
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
-
By Vivek Goyal
Implement a new tunable, group_idle, which allows idling on the group instead of on a cfq queue. Hence one can set slice_idle = 0 and not idle on individual queues, but still idle on the group. This way, on fast storage we can get fairness between groups while overall throughput improves.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Acked-by: Jeff Moyer <jmoyer@redhat.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
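A rough sketch of how the tunable could plug into cfq_arm_slice_timer(); the decision of when group idling applies is simplified here:

    /* cfq-iosched.c sketch: idle on the group when per-queue idling is off */
    unsigned long sl = cfqd->cfq_slice_idle;

    if (!sl && cfqd->cfq_group_idle)
        sl = cfqd->cfq_group_idle;      /* group-level idle window */

    mod_timer(&cfqd->idle_slice_timer, jiffies + sl);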
-
By Vivek Goyal
o Implement another CFQ mode where we charge a group in terms of number of requests dispatched instead of measuring the time. Measuring in terms of time is not possible when we are driving deeper queue depths and there are requests from multiple cfq queues in the request queue.
o This mode currently gets activated if one sets slice_idle=0 and the associated disk supports NCQ. Again, the idea is that on an NCQ disk with idling disabled, most of the queues will dispatch one or more requests and then cfq queue expiry happens, and we don't have a way to measure time. So start providing fairness in terms of IOPS.
o Currently IOPS mode works only with cfq group scheduling. CFQ follows different scheduling algorithms for queue and group scheduling. These IOPS stats are used only for group scheduling, hence in non-cgroup mode nothing should change.
o For CFQ group scheduling one can disable slice idling so that we don't idle on the queue and drive deeper request queue depths (achieving better throughput); at the same time group idle is enabled, so one should get service differentiation among groups.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Acked-by: Jeff Moyer <jmoyer@redhat.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
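A sketch of the charging decision, along the lines of cfq_group_served() plus an iops_mode() predicate; the hw_tag test is one plausible way to express the "disk supports NCQ" condition the text describes:

    static inline bool iops_mode(struct cfq_data *cfqd)
    {
        /* no queue idling + NCQ disk: time accounting is meaningless,
         * so charge groups by requests dispatched instead */
        return !cfqd->cfq_slice_idle && cfqd->hw_tag;
    }

    /* in cfq_group_served(), roughly: */
    if (iops_mode(cfqd))
        charge = cfqq->slice_dispatch;  /* number of requests */
    else
        charge = used_sl;               /* time actually used */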
-
By Vivek Goyal
Do not idle on either the cfq queue or the service tree if slice_idle=0. The user does not want any queue or service-tree idling. Currently, even if slice_idle=0, we were waiting for a request to finish before expiring the queue, and that can lead to lower queue depths.
Acked-by: Jeff Moyer <jmoyer@redhat.com>
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
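A sketch of the short-circuit, assuming it lands at the top of cfq_should_idle():

    /* user disabled all idling; never wait for a request to finish */
    if (!cfqd->cfq_slice_idle)
        return false;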
-
By Ciju Rajan K
If the cgroup hierarchy for blkio control groups is deeper than two levels, the kernel should not allow the creation of further levels. The mkdir system call does not expect EINVAL as a return value, so this patch replaces EINVAL with the more appropriate EPERM.
Signed-off-by: Ciju Rajan K <ciju@linux.vnet.ibm.com>
Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
-
- 12 Aug 2010, 1 commit
-
By Adrian Hunter
Secure discard is the same as discard except that all copies of the discarded sectors (perhaps created by garbage collection) must also be erased.
Signed-off-by: Adrian Hunter <adrian.hunter@nokia.com>
Acked-by: Jens Axboe <axboe@kernel.dk>
Cc: Kyungmin Park <kmpark@infradead.org>
Cc: Madhusudhan Chikkature <madhu.cr@ti.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Ben Gardiner <bengardiner@nanometrics.ca>
Cc: <linux-mmc@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
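From user space, secure discard is reachable through a new ioctl alongside BLKDISCARD; a hedged sketch, with the device path purely illustrative and BLKSECDISCARD being the ioctl this series introduces:

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <linux/fs.h>

    int main(void)
    {
        uint64_t range[2] = { 0, 1 << 20 };       /* offset, length in bytes */
        int fd = open("/dev/mmcblk0", O_WRONLY);  /* illustrative device */

        if (fd < 0 || ioctl(fd, BLKSECDISCARD, range) < 0)
            perror("BLKSECDISCARD");
        return 0;
    }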
-
- 09 Aug 2010, 2 commits
-
By Dmitry Monakhov
If the function is called without the barrier option, the return value is incorrect.
Signed-off-by: Dmitry Monakhov <dmonakhov@openvz.org>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
-
By Mike Snitzer
Propagate REQ_DISCARD in cmd_flags when cloning a discard request. Skip blk_rq_check_limits's existing checks for discard requests, because discard limits will have already been checked in blkdev_issue_discard.
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
-
- 08 Aug 2010, 19 commits
-
By FUJITA Tomonori
q->bar_rq.rq_disk is NULL. Use the rq_disk of the original request instead.
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
-
By FUJITA Tomonori
The block layer doesn't set rq->cmd_type on flush requests. By definition, it should be REQ_TYPE_FS (the lower layers build a command and interpret the result of it; that is, the block layer doesn't know the details).
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
-
By Jens Axboe
If the queue doesn't have a limit set, or it is just set to UINT_MAX as we default to, we could be sending down a discard request that isn't of the correct granularity if the block size is > 512b. Fix this by adjusting max_discard_sectors down to the proper alignment.
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
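The adjustment is a simple round-down; a sketch assuming it lives in blkdev_issue_discard() and that discard_granularity (in bytes) is a power of two:

    unsigned int max_discard_sectors =
        min(q->limits.max_discard_sectors, UINT_MAX >> 9);

    if (q->limits.discard_granularity) {
        unsigned int disc_sects = q->limits.discard_granularity >> 9;

        /* round down to a whole number of granules */
        max_discard_sectors &= ~(disc_sects - 1);
    }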
-
By Dave Chinner
Issuing a blkdev_issue_flush() on an unconfigured loop device causes a panic, as q->make_request_fn is not configured. This can occur when trying to mount the unconfigured loop device as an XFS filesystem. There are no guards that catch the bio before the request function is called, because we don't add a payload to the bio. Instead, manually check this case as soon as we have a pointer to the queue to flush.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
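A sketch of the guard at the top of blkdev_issue_flush():

    q = bdev_get_queue(bdev);
    if (!q)
        return -ENXIO;

    /* an unconfigured loop device has no request function yet;
     * sending it a flush bio would panic, so bail out here */
    if (!q->make_request_fn)
        return -ENXIO;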
-
By Arnd Bergmann
The blkpg_ioctl and blkdev_reread_part access fields of the bdev and gendisk structures, yet they always do so under the protection of bdev->bd_mutex, which seems sufficient.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
-
By Arnd Bergmann
We only call the functions set_device_ro(), invalidate_bdev(), sync_filesystem() and sync_blockdev() while holding the BKL in these commands. All of these are also done in other code paths without the BKL, which leads me to the conclusion that the BKL is not needed here either. The reason we hold it here is that it was originally pushed down into the ioctl function from vfs_ioctl.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
-
By Arnd Bergmann
The blktrace driver currently needs the BKL, but we should not need to take that in the block layer, so just push it down into the driver itself. It is quite likely that the BKL is not actually required in blktrace code and could be removed in a follow-on patch.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
-
By Arnd Bergmann
As a preparation for the removal of the big kernel lock in the block layer, this removes the BKL from the common ioctl handling code, moving it into every single driver still using it.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
-
By FUJITA Tomonori
This removes q->prepare_flush_fn completely (and changes the blk_queue_ordered API).
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
-
By FUJITA Tomonori
This is preparation for removing q->prepare_flush_fn. Temporarily, blk_queue_ordered() permits QUEUE_ORDERED_DO_PREFLUSH and QUEUE_ORDERED_DO_POSTFLUSH without a prepare_flush_fn.
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
-
By FUJITA Tomonori
SCSI-ml needs a way to mark a request as a flush request in q->prepare_flush_fn, because it needs to identify such requests later (e.g. in q->request_fn or prep_rq_fn). queue_flush sets REQ_HARDBARRIER in rq->cmd_flags; however, the block layer also sends normal REQ_TYPE_FS requests with REQ_HARDBARRIER, so SCSI-ml can't use REQ_HARDBARRIER to identify flush requests. We could change the block layer to clear the REQ_HARDBARRIER bit before sending non-flush requests to the lower layers. However, introducing the new flag looks cleaner (and is surely easier).
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: James Bottomley <James.Bottomley@suse.de>
Cc: David S. Miller <davem@davemloft.net>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Alasdair G Kergon <agk@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
-
By James Bottomley
Reviewed-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
-
By Jens Axboe
This didn't cause a merge conflict, so it was fixed up manually post merge.
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
-
By Andi Kleen
Just some dead code.
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
-
By Christoph Hellwig
Allocating a fixed payload for discard requests always was a horrible hack, and it's now coming to bite us when adding support for discard in DM/MD. So change the code to leave the allocation of a payload to the low-level driver. Unfortunately that means we'll need another hack, which allows us to update the various block layer length fields indicating that we have a payload. Instead of hiding this in sd.c, which we already partially do for UNMAP support, add a documented helper in the core block layer for it.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
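Usage would look roughly like this from a low-level driver such as sd, with the helper being the one the patch describes; the protocol-specific payload setup is elided:

    /* sketch: driver supplies its own payload page for a discard */
    struct page *page = alloc_page(GFP_ATOMIC | __GFP_ZERO);

    if (!page)
        return BLKPREP_DEFER;
    /* ...fill the page with the UNMAP parameter list... */
    blk_add_request_payload(rq, page, len);  /* fixes up bio/request lengths */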
-
By Christoph Hellwig
Remove the current bio flags and reuse the request flags for the bio, too. This makes it much easier to trace the type of I/O from the filesystem down to the block driver. There were two flags in the bio that were missing in the requests: BIO_RW_UNPLUG and BIO_RW_AHEAD. I've also renamed two request flags that had a superfluous RW in them. Note that the flags are in bio.h despite having the REQ_ name - as blkdev.h includes bio.h, that is the only way to go for now.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
-
By Christoph Hellwig
Remove all the trivial wrappers for the cmd_type and cmd_flags fields in struct request. This allows much easier grepping for the different request types instead of unwinding through macros.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
-
By Jens Axboe
The code for the nonrot, random, and io stats flags is completely identical.
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
-
By Jens Axboe
There are two reasons for doing this:
- On SSD disks, the completion times aren't as random as they are for rotational drives, so it's questionable whether they should contribute to the random pool in the first place.
- Calling add_disk_randomness() has a lot of overhead.
This adds /sys/block/<dev>/queue/add_random, which will allow you to switch it off on a per-device basis. The default setting is on, so there should be no functional changes from this patch.
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
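On the completion side the change boils down to a flag test; a sketch, with the predicate name assumed to follow the usual queue-flag pattern:

    /* only contribute to the entropy pool if add_random is still on */
    if (blk_queue_add_random(q))
        add_disk_randomness(req->rq_disk);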
-
- 24 Jun 2010, 1 commit
-
By Tao Ma
In submit_bio, we count vm events by checking READ/WRITE. But DISCARD_NOBARRIER actually also has the WRITE flag set. And it looks as if in blkdev_issue_discard we also add a page as the payload, so the bio_has_data check isn't enough. So add another check for discard bios.
Signed-off-by: Tao Ma <tao.ma@oracle.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
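A sketch of the extra test in submit_bio(), written with the unified request flags; before the flag unification the discard test would have been the BIO_RW_DISCARD bit:

    /* skip PGPGOUT/PGPGIN accounting for discard bios, whose payload
     * page is bookkeeping rather than written data */
    if (bio_has_data(bio) && !(rw & REQ_DISCARD)) {
        if (rw & WRITE)
            count_vm_events(PGPGOUT, count);
        else
            count_vm_events(PGPGIN, count);
    }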
-
- 21 Jun 2010, 1 commit
-
By Jens Axboe
Commit e98ef89b has a typo, causing cfq_blkiocg_update_completion_stats() to call itself instead of blkiocg_update_completion_stats().
Reported-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
-