- 06 June 2018, 1 commit
-
-
Submitted by Ulf Hansson
To support devices being partitioned across multiple PM domains, let's begin by extending genpd to cope with these kinds of configurations.

Therefore, add a new exported function genpd_dev_pm_attach_by_id(), which is similar to the existing genpd_dev_pm_attach(), but with the difference that it allows its callers to provide an index identifying the PM domain they want to attach to. Note that genpd_dev_pm_attach_by_id() shall only be called by the driver core / PM core, similar to how the existing dev_pm_domain_attach() makes use of genpd_dev_pm_attach(). However, that wiring is implemented by subsequent changes on top of this one.

Because only one PM domain can be attached per device, genpd needs to create a virtual device that it can attach/detach instead. More precisely, let the new function genpd_dev_pm_attach_by_id() register a virtual struct device via device_register(), and then attach that virtual device to the corresponding PM domain, rather than the device provided by the caller. The actual attaching is done by re-using the existing genpd OF functions.

On successful attachment, genpd_dev_pm_attach_by_id() returns the created virtual device, which allows the caller to operate on it to deal with power management. Subsequent changes provide more details in this regard.

To deal with detaching a PM domain in the multiple-PM-domains case, also extend the existing genpd_dev_pm_detach() function to cover the cleanup of the created virtual device, by making it call device_unregister() on it. In this way, there is no need to introduce a new function to deal with detach for the multiple PM domain case; the existing one is re-used.

Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Acked-by: Jon Hunter <jonathanh@nvidia.com>
Tested-by: Jon Hunter <jonathanh@nvidia.com>
Reviewed-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
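As a rough illustration of the intended call pattern (a hedged sketch, not code from the patch: the wrapper function below and its name are hypothetical, only genpd_dev_pm_attach_by_id() itself comes from this change, and the index is assumed to map to the device's list of PM domain specifiers), a driver-core style caller could attach the second PM domain and then drive power management through the returned virtual device:

    #include <linux/err.h>
    #include <linux/pm_domain.h>
    #include <linux/pm_runtime.h>

    static struct device *example_attach_second_domain(struct device *dev)
    {
            struct device *virt_dev;

            /* Index 1 selects the second PM domain described for this device. */
            virt_dev = genpd_dev_pm_attach_by_id(dev, 1);
            if (IS_ERR_OR_NULL(virt_dev))
                    return virt_dev;

            /* Power management is now driven through the virtual device. */
            pm_runtime_enable(virt_dev);
            pm_runtime_get_sync(virt_dev);

            return virt_dev;
    }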
-
- 05 June 2018, 1 commit
-
-
Submitted by Linus Torvalds
We already earlier discouraged people from using this interface in commit 88796e7e ("sched/swait: Document it clearly that the swait facilities are special and shouldn't be used"), but I just got a pull request with a new broken user. So make the comment *really* clear.

The swait interfaces are bad, and should not be used unless you have some *very* strong reasons that include tons of hard performance numbers on just why you want to use them, and you show that you actually understand that they aren't at all like the normal wait/wakeup interfaces.

So far, every single user has been suspect. The main user is KVM, which is completely pointless (there is only ever one waiter, which avoids the interface subtleties, but also means that having a queue instead of a pointer is counter-productive and certainly not an "optimization").

So make the comments much stronger.

Not that anybody likely reads them anyway, but there's always some slight hope that it will cause somebody to think twice. I'd like to remove this interface entirely, but there is the theoretical possibility that it's actually the right thing to use in some situation, most likely some deep RT use.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 03 June 2018, 1 commit
-
-
Submitted by Jens Axboe
If we end up splitting a bio and the queue goes away between the initial submission and the later split submission, then we can block forever in blk_queue_enter() waiting for the reference to drop to zero. This will never happen, since we already hold a reference.

Mark a split bio as already having entered the queue, so we can just use the live non-blocking queue enter variant.

Thanks to Tetsuo Handa for the analysis.

Reported-by: syzbot+c4f9cebf9d651f6e54de@syzkaller.appspotmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
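A minimal sketch of the idea, assuming a per-bio flag along the lines of BIO_QUEUE_ENTERED (the flag name and the helper below are assumptions based on the changelog, not quotes from the patch):

    /* Split path: the parent bio already holds a reference on the queue,
     * so mark the recursively resubmitted bio as having entered it. */
    static void example_submit_split(struct bio *split_bio)
    {
            bio_set_flag(split_bio, BIO_QUEUE_ENTERED);
            generic_make_request(split_bio);
    }

The submission path can then check the flag with bio_flagged() and take the live, non-blocking queue-enter variant instead of blocking in blk_queue_enter().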
-
- 01 June 2018, 5 commits
-
-
Submitted by Javier González
If the namespace is unregistered before the LightNVM target is removed (e.g., on hot unplug), it is too late for the target to store any metadata on the device - any attempt to write to the device will fail. In this case, pass a "graceful teardown" flag to the target to let it know when this happens.

In the case of pblk, we pad the open line (close all open chunks) to improve data retention. In the event of an ungraceful shutdown, avoid this part and just clean up.

Signed-off-by: Javier González <javier@cnexlabs.com>
Signed-off-by: Matias Bjørling <mb@lightnvm.io>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Submitted by Christoph Hellwig
These are only used by the block core. Also move the declarations to block/blk.h.

Reported-by: Damien Le Moal <Damien.LeMoal@wdc.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Damien Le Moal <damien.lemoal@wdc.com>
Tested-by: Damien Le Moal <damien.lemoal@wdc.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Submitted by Hannes Reinecke
Signed-off-by: Hannes Reinecke <hare@suse.com>
[hch: split from a larger patch]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
-
Submitted by Christoph Hellwig
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
-
Submitted by Christoph Hellwig
Stop including the event type in the definitions for the notice type.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
-
- 31 May 2018, 6 commits
-
-
Submitted by Kent Overstreet
All users have been converted to bioset_init(), kill off the old API.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Submitted by Kent Overstreet
Convert pktcdvd to embedded bio sets.

Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Submitted by Kent Overstreet
Convert the core block functionality to embedded bio sets.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Submitted by Rafael J. Wysocki
There is some code duplication related to the PM QoS handling between the existing cpuidle governors, so move that code to a common helper function and call that from the governors.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Submitted by Gwendal Grignou
Move the to_cros_ec_dev macro to cros_ec.h and use it wherever the private EC object is needed from a device object.

Signed-off-by: Gwendal Grignou <gwendal@chromium.org>
Reviewed-by: Enric Balletbo i Serra <enric.balletbo@collabora.com>
Signed-off-by: Benson Leung <bleung@chromium.org>
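A hedged sketch of the kind of helper being moved (the embedded struct device field name, class_dev, is an assumption, and the caller is purely illustrative):

    #define to_cros_ec_dev(dev)  container_of(dev, struct cros_ec_dev, class_dev)

    static void example_use(struct device *dev)
    {
            struct cros_ec_dev *ec = to_cros_ec_dev(dev);

            dev_info(&ec->class_dev, "resolved private EC object\n");
    }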
-
Submitted by Jens Axboe
No functional changes in this patch, just a prep patch for utilizing this in an IO scheduler.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
Reviewed-by: Omar Sandoval <osandov@fb.com>
-
- 30 May 2018, 5 commits
-
-
Submitted by Ulf Hansson
There is no need to pass a genpd struct to pm_genpd_remove_device(), as we already have the information about the PM domain (genpd) through the device structure. Additionally, we don't allow removing a PM domain from a device other than the one that may have been assigned to it, so it really does not make sense to have a separate in-parameter for it.

For these reasons, drop the parameter and update the currently only caller of pm_genpd_remove_device(), amdgpu_acp.

Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
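The signature change boils down to the following (a sketch of the prototypes, with the old one reconstructed from the description):

    /* Before: the caller had to name the PM domain explicitly. */
    int pm_genpd_remove_device(struct generic_pm_domain *genpd, struct device *dev);

    /* After: the PM domain is derived from the device itself. */
    int pm_genpd_remove_device(struct device *dev);

The only current caller, amdgpu_acp, now simply passes the device.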
-
Submitted by Ulf Hansson
There are still a few existing non-DT users of genpd, but none of them uses __pm_genpd_add_device(), so let's drop it.

Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Submitted by Ulf Hansson
Using "extern" to declare a function in a public header file is somewhat pointless, but it also doesn't hurt. However, to make all the function declarations in pm_domain.h consistent, let's drop the use of "extern".

Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Submitted by Sebastian Andrzej Siewior
There are static-initializer macros for three of the four possible notifier types, namely:

    ATOMIC_NOTIFIER_HEAD()
    BLOCKING_NOTIFIER_HEAD()
    RAW_NOTIFIER_HEAD()

This patch provides a static initializer for the fourth type (the SRCU notifier) to make the set complete.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Tested-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
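For context, a declaration of all four chain types could then look roughly like this (the name of the new macro, assumed here to be SRCU_NOTIFIER_HEAD_STATIC, is inferred from the convention of the existing ones):

    #include <linux/notifier.h>

    static ATOMIC_NOTIFIER_HEAD(example_atomic_chain);
    static BLOCKING_NOTIFIER_HEAD(example_blocking_chain);
    static RAW_NOTIFIER_HEAD(example_raw_chain);

    /* Previously the SRCU variant had to be set up at runtime with
     * srcu_init_notifier_head(); a static definition is now possible. */
    SRCU_NOTIFIER_HEAD_STATIC(example_srcu_chain);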
-
Submitted by Christoph Hellwig
Bsg holding a reference to the parent device may result in a crash if a bsg file handle is closed after the parent device driver has unloaded.

Holding a reference is not really needed: the parent device must exist between bsg_register_queue and bsg_unregister_queue. Before the device goes away, the caller does blk_cleanup_queue so that all in-flight requests to the device are gone and all new requests cannot pass beyond the queue. The queue itself is a refcounted object and it will stay alive with the bsg file.

Based on the analysis, a previous patch and the changelog from Anatoliy Glagolev.

Reported-by: Anatoliy Glagolev <glagolig@gmail.com>
Reviewed-by: James E.J. Bottomley <jejb@linux.vnet.ibm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
- 29 May 2018, 8 commits
-
-
Submitted by Christoph Hellwig
The information about a size change in this case just creates confusion.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Submitted by Jens Axboe
After the recent timeout handling changes, we have two holes in the struct. Move the timeout field near the deadline, killing both holes and moving related members closer together. On my config on x86-64, this shrinks struct request from 312 to 304 bytes.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Submitted by Christoph Hellwig
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Submitted by Christoph Hellwig
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Submitted by Christoph Hellwig
The name BLK_EH_NOT_HANDLED implies nothing happened, but very often that is not what is happening - instead, the driver has already completed the command. Fix the symbolic name to reflect that a little better.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
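In mainline the replacement name ended up being BLK_EH_DONE; treating that as context rather than something stated in this changelog, a hedged sketch of a blk-mq timeout handler using the two return values:

    /* Hypothetical helper standing in for a real hardware-status check. */
    static bool example_hw_still_busy(struct request *rq)
    {
            return false;
    }

    static enum blk_eh_timer_return example_timeout(struct request *rq, bool reserved)
    {
            if (example_hw_still_busy(rq))
                    return BLK_EH_RESET_TIMER;      /* re-arm the timer and keep waiting */

            /* The driver completes the request itself... */
            blk_mq_complete_request(rq);
            /* ...and tells the core there is nothing more to handle. */
            return BLK_EH_DONE;
    }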
-
Submitted by Keith Busch
This patch simplifies the timeout handling by relying on the request reference counting to ensure the iterator is operating on an inflight and truly timed out request. Since the reference counting prevents the tag from being reallocated, the block layer no longer needs to prevent drivers from completing their requests while the timeout handler is operating on them: a driver completing a request is allowed to proceed to the next state without additional synchronization with the block layer.

This also removes any need for generation sequence numbers, since the request cannot be reallocated as a new request while timeout handling is operating on it.

To enable this, a refcount is added to struct request so that request users can be sure they're operating on the same request without it changing while they're processing it. The request's tag won't be released for reuse until both the timeout handler and the completion are done with it.

Signed-off-by: Keith Busch <keith.busch@intel.com>
[hch: slight cleanups, added back submission side hctx lock, use cmpxchg for completions]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
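The pattern described above amounts to pinning the request with a reference before the timeout code inspects it; an illustrative sketch (the field and helper names are assumptions, not the exact patch):

    static void example_check_expired(struct request *rq)
    {
            /* Skip requests that are already being freed or recycled. */
            if (!refcount_inc_not_zero(&rq->ref))
                    return;

            if (example_deadline_passed(rq))        /* hypothetical check */
                    example_handle_timeout(rq);     /* hypothetical handler */

            /* The tag is only released once the last reference is dropped. */
            if (refcount_dec_and_test(&rq->ref))
                    example_free_request(rq);       /* hypothetical release */
    }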
-
Submitted by Christoph Hellwig
As far as I can tell this function can't even be called any more, given that ATA implements its own eh_strategy_handler with ata_scsi_error, which never calls ->eh_timed_out.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Submitted by Michal Hocko
Although the API is documented in the source code, Ted has pointed out that there is no mention of it in the core-api Documentation, and there are people looking there to find answers on how to use a specific API.

Requested-by: "Theodore Y. Ts'o" <tytso@mit.edu>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Jonathan Corbet <corbet@lwn.net>
-
- 28 May 2018, 2 commits
-
-
Submitted by Christoph Hellwig
Instead of globally disabling >32-bit DMA using the arch_dma_supported hook, walk the PCI bus under the actually affected bridge and mark every device with the dma_32bit_limit flag. This also gets rid of the arch_dma_supported hook entirely.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
-
Submitted by Christoph Hellwig
Various PCI bridges (VIA PCI, Xilinx PCIe) limit DMA to only 32 bits even if the device itself supports more. Add a single-bit flag to struct device (to be moved into the DMA extension once we get to it) to flag such devices and reject larger DMA to them.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
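A sketch of how such a flag is meant to be consumed when validating a DMA mask (the surrounding function is illustrative; only the dma_32bit_limit field name comes from the changelog):

    static int example_dma_supported(struct device *dev, u64 mask)
    {
            /* Reject masks wider than 32 bits for devices behind a limited bridge. */
            if (dev->dma_32bit_limit && mask > DMA_BIT_MASK(32))
                    return 0;

            return 1;
    }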
-
- 27 May 2018, 2 commits
-
-
Submitted by Ulf Hansson
In the driver core, before it invokes really_probe() it runtime-resumes the suppliers for the device by calling pm_runtime_get_suppliers(), which also increases the runtime PM usage count for each of the available suppliers. This makes sense, as it allows the consumer device to be probed by its driver.

However, if the driver decides to add a new supplier link during ->probe(), hence updating the list of suppliers, the following call to pm_runtime_put_suppliers(), invoked after really_probe() in the driver core, gets us into trouble. More precisely, pm_runtime_put() gets called also for the new supplier(s), which is wrong as the driver core didn't trigger pm_runtime_get_sync() to be called for it in the first place. In other words, the new supplier may end up runtime suspended even in cases when it shouldn't.

Fix this behaviour by runtime-resuming suppliers according to the same conditions as managed by the runtime PM core when runtime resume callbacks are being invoked. Additionally, don't try to runtime-suspend any of the suppliers after really_probe(), but instead rely on that to happen via the consumer device, when it becomes runtime suspended.

Fixes: 21d5c57b (PM / runtime: Use device links)
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Submitted by Thomas Gleixner
timekeeping suspend/resume calls read_persistent_clock(), which takes rtc_lock. That results in might-sleep warnings because at that point we run with interrupts disabled. We cannot convert rtc_lock to a raw spinlock as that would trigger other might-sleep warnings.

As a workaround, disable the might-sleep warnings by setting system_state to SYSTEM_SUSPEND before calling sysdev_suspend() and restoring it to SYSTEM_RUNNING after sysdev_resume(). There is no lock contention because hibernate / suspend to RAM is single-CPU at this point.

In s2idle's case the system_state is set to SYSTEM_SUSPEND before timekeeping_suspend(), which is invoked by the last CPU. In the resume case it is set back to SYSTEM_RUNNING after timekeeping_resume(), which is invoked by the first CPU. The other CPUs will block on tick_freeze_lock.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
[bigeasy: cover s2idle in tick_freeze() / tick_unfreeze()]
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 26 May 2018, 9 commits
-
-
Submitted by Christoph Hellwig
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Christoph Hellwig
The socket file operations still implement ->poll until all protocols are switched over.

Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Submitted by Christoph Hellwig
No need to pass the key field to lookup_iocb to compare it with KIOCB_KEY, as we can do that right after retrieving it from userspace. Also move the KIOCB_KEY definition to aio.c as it is an internal value not used by any other place in the kernel.

Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Submitted by Christoph Hellwig
->get_poll_head returns the waitqueue that the poll operation is going to sleep on. Note that this means we can only use a single waitqueue for the poll, unlike some current drivers that use two waitqueues for different events. But now that we have keyed wakeups and heavily use those for poll, there aren't that many good reasons left to keep the multiple waitqueues, and if there are any, ->poll is still around; the driver just won't support aio poll.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
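A hedged sketch of how a driver might wire up the new hook pair (the ->poll_mask companion and both signatures are assumptions based on this changelog and its sibling patches):

    static DECLARE_WAIT_QUEUE_HEAD(example_wq);

    static struct wait_queue_head *example_get_poll_head(struct file *file,
                                                         __poll_t events)
    {
            return &example_wq;     /* one waitqueue for all events */
    }

    static __poll_t example_poll_mask(struct file *file, __poll_t events)
    {
            /* example_data_ready() is a hypothetical driver-specific check. */
            return example_data_ready(file) ? EPOLLIN | EPOLLRDNORM : 0;
    }

    static const struct file_operations example_fops = {
            .get_poll_head  = example_get_poll_head,
            .poll_mask      = example_poll_mask,
    };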
-
Submitted by Christoph Hellwig
These abstract out calls to the poll method in preparation for changes in how we poll.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
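Such a wrapper is roughly of this shape (a hedged reconstruction, not a verbatim copy of the patch):

    static inline __poll_t vfs_poll(struct file *file, struct poll_table_struct *pt)
    {
            /* Files without a ->poll method are treated as always ready. */
            if (unlikely(!file->f_op->poll))
                    return DEFAULT_POLLMASK;
            return file->f_op->poll(file, pt);
    }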
-
Submitted by Christoph Hellwig
No users outside of select.c.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
-
Submitted by Jonathan Cameron
The case of a new NUMA node got missed in the work to avoid using the node info from struct page during hotplug. In this path we have a call to register_mem_sect_under_node (which allows us to specify that it is hotplug so don't change the node), via link_mem_sections, which unfortunately does not. The fix is to pass check_nid through link_mem_sections as well and disable it in the new NUMA node path.

Note the bug only 'sometimes' manifests, depending on what happens to be in the struct page structures - there are lots of them and it only needs to match one of them.

The result of the bug is that (with a new memory-only node) we never successfully call register_mem_sect_under_node, so we don't get the memory associated with the node in sysfs, and meminfo for the node doesn't report it.

It came up whilst testing some arm64 hotplug patches, but appears to be universal. Whilst I'm triggering it by removing then reinserting memory to a node with no other elements (thus making the node disappear then appear again), it appears it would happen on hotplugging memory where there was none before, and it doesn't seem to be related to the arm64 patches. These patches call __add_pages (where most of the issue was fixed by Pavel's patch). If there is a node at the time of the __add_pages call then all is well, as it calls register_mem_sect_under_node from there with check_nid set to false. Without a node, that function returns having not done the sysfs-related stuff as there is no node to use. This is expected, but it is the resulting path that fails...

Exact path to the problem is as follows:

mm/memory_hotplug.c: add_memory_resource() - the node is not online, so we enter the 'if (new_node)' block twice; in the second such block there is a call to link_mem_sections, which calls into
drivers/node.c: link_mem_sections(), which calls
drivers/node.c: register_mem_sect_under_node(), which calls get_nid_for_pfn and keeps trying until the output of that matches the expected node (passed all the way down from add_memory_resource).

It is effectively the same fix as the one referred to in the fixes tag, just in the code path for a new node, where the comments point out we have to rerun the link creation because it will have failed in register_new_memory (as there was no node at the time). (Actually that comment is wrong now, as we don't have register_new_memory any more; it got renamed to hotplug_memory_register in Pavel's patch.)

Link: http://lkml.kernel.org/r/20180504085311.1240-1-Jonathan.Cameron@huawei.com
Fixes: fc44f7f9 ("mm/memory_hotplug: don't read nid from struct page during hotplug")
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Pavel Tatashin <pasha.tatashin@oracle.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by Michal Hocko
Oscar has noticed that we splat

    WARNING: CPU: 0 PID: 64 at ./include/linux/gfp.h:467 vmemmap_alloc_block+0x4e/0xc9
    [...]
    CPU: 0 PID: 64 Comm: kworker/u4:1 Tainted: G W E 4.17.0-rc5-next-20180517-1-default+ #66
    Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.0.0-prebuilt.qemu-project.org 04/01/2014
    Workqueue: kacpi_hotplug acpi_hotplug_work_fn
    Call Trace:
     vmemmap_populate+0xf2/0x2ae
     sparse_mem_map_populate+0x28/0x35
     sparse_add_one_section+0x4c/0x187
     __add_pages+0xe7/0x1a0
     add_pages+0x16/0x70
     add_memory_resource+0xa3/0x1d0
     add_memory+0xe4/0x110
     acpi_memory_device_add+0x134/0x2e0
     acpi_bus_attach+0xd9/0x190
     acpi_bus_scan+0x37/0x70
     acpi_device_hotplug+0x389/0x4e0
     acpi_hotplug_work_fn+0x1a/0x30
     process_one_work+0x146/0x340
     worker_thread+0x47/0x3e0
     kthread+0xf5/0x130
     ret_from_fork+0x35/0x40

when adding memory to a node that is currently offline.

The VM_WARN_ON is just too loud without a good reason. In this particular case we are doing

    alloc_pages_node(node, GFP_KERNEL|__GFP_RETRY_MAYFAIL|__GFP_NOWARN, order)

so we do not insist on allocating from the given node (it is more a hint), so we can fall back to any other populated node, and moreover we explicitly ask to not warn for the allocation failure.

Soften the warning only to cases when somebody asks for the given node explicitly by __GFP_THISNODE.

Link: http://lkml.kernel.org/r/20180523125555.30039-3-mhocko@kernel.org
Signed-off-by: Michal Hocko <mhocko@suse.com>
Reported-by: Oscar Salvador <osalvador@techadventures.net>
Tested-by: Oscar Salvador <osalvador@techadventures.net>
Reviewed-by: Pavel Tatashin <pasha.tatashin@oracle.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Reza Arbab <arbab@linux.vnet.ibm.com>
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
Cc: Anshuman Khandual <khandual@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
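The softened check amounts to warning only when the caller pinned the allocation to the node, roughly (a before/after sketch based on the changelog, not the exact hunk):

    /* Old: warn whenever the requested node is not online. */
    VM_WARN_ON(!node_online(nid));

    /* New: warn only if the caller insisted on this node via __GFP_THISNODE. */
    VM_WARN_ON((gfp_mask & __GFP_THISNODE) && !node_online(nid));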
-
Submitted by Srinivas Kandagatla
For some reason the devm variant of the slimbus regmap init was not added to the header, even though __devm_regmap_init_slimbus() is an exported function. This patch adds it.

This also fixes the following warnings in regmap-slimbus.c:

    regmap-slimbus.c:65:15: warning: symbol '__devm_regmap_init_slimbus' was not declared. Should it be static?
    regmap-slimbus.c:65:16: warning: no previous prototype for '__devm_regmap_init_slimbus' [-Wmissing-prototypes]

Fixes: 7d6f7fb0 ("regmap: add SLIMbus support")
Signed-off-by: Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
Signed-off-by: Mark Brown <broonie@kernel.org>
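With the declaration in place, a driver can use the devm variant in the usual regmap fashion; a hedged usage sketch (the driver function and config values are illustrative):

    static int example_slim_probe(struct slim_device *sdev)
    {
            static const struct regmap_config example_cfg = {
                    .reg_bits = 16,
                    .val_bits = 8,
            };
            struct regmap *map;

            map = devm_regmap_init_slimbus(sdev, &example_cfg);
            if (IS_ERR(map))
                    return PTR_ERR(map);

            return 0;
    }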
-