- 09 July 2014, 4 commits
-
-
Submitted by Dmitry Kasatkin
There is no need to read the attributes, because the inode structure already contains the size of the file. Use i_size_read() instead.
Signed-off-by: Dmitry Kasatkin <d.kasatkin@samsung.com>
Acked-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
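A minimal sketch of the pattern described above, assuming a helper that already holds an open struct file; the function name is illustrative, not the driver's actual code:

    #include <linux/fs.h>

    /* Illustrative: use the size cached in the inode instead of a
     * separate attribute lookup.
     */
    static loff_t fw_file_size(struct file *filp)
    {
            return i_size_read(file_inode(filp));
    }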
-
Submitted by Takashi Iwai
[The patch was originally proposed by Tom Gundersen and rewritten afterwards by me; most of the changelog below is borrowed from Tom's original patch -- tiwai]

Currently (at least) the dell-rbu driver selects FW_LOADER_USER_HELPER, which means that distros can't really stop loading firmware through udev without breaking other users (though some have). Ideally we would remove/disable the udev firmware helper in both the kernel and in udev, but if we were to disable it in udev and not the kernel, the result would be (seemingly) hung kernels, as no one would be around to cancel firmware requests.

This patch allows udev firmware loading to be disabled while still allowing non-udev firmware loading, as done by the dell-rbu driver, to continue working. This is achieved by only using the fallback mechanism when the uevent is suppressed.

The patch renames the user-selectable Kconfig option from FW_LOADER_USER_HELPER to FW_LOADER_USER_HELPER_FALLBACK, and the former is reverse-selected by the latter or by the drivers that need the user helper, like dell-rbu. Also, the "default y" is removed together with this change, since the mechanism has been deprecated in udev upstream, so it is better to disable it nowadays.

Tested with FW_LOADER_USER_HELPER=n, LATTICE_ECP3_CONFIG=y, DELL_RBU=y and udev without the firmware loading support, but I don't have the hardware to test the lattice/dell drivers, so additional testing would be appreciated.

Reviewed-by: Tom Gundersen <teg@jklm.no>
Cc: Ming Lei <ming.lei@canonical.com>
Cc: Abhay Salunke <Abhay_Salunke@dell.com>
Cc: Stefan Roese <sr@denx.de>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Kay Sievers <kay@vrfy.org>
Tested-by: Balaji Singh <B_B_Singh@DELL.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Maarten Lankhorst
A fence can be attached to a buffer which is being filled or consumed by hw, to allow userspace to pass the buffer without waiting to another device. For example, userspace can call the page_flip ioctl to display the next frame of graphics after kicking the GPU but while the GPU is still rendering. The display device sharing the buffer with the GPU would attach a callback to get notified when the GPU's rendering-complete IRQ fires, to update the scan-out address of the display, without having to wake up userspace.

A driver must allocate a fence context for each execution ring that can run in parallel. The function for this takes an argument with how many contexts to allocate:
+ fence_context_alloc()

A fence is a transient, one-shot deal. It is allocated and attached to one or more dma-bufs. When the one that attached it is done with the pending operation, it can signal the fence:
+ fence_signal()

To get a rough approximation of whether a fence has fired, call:
+ fence_is_signaled()

The dma-buf-mgr handles tracking, and waiting on, the fences associated with a dma-buf. The one pending on the fence can add an async callback:
+ fence_add_callback()

The callback can optionally be cancelled with:
+ fence_remove_callback()

To wait synchronously, optionally with a timeout:
+ fence_wait()
+ fence_wait_timeout()

When emitting a fence, call:
+ trace_fence_emit()

To annotate that a fence is blocking on another fence, call:
+ trace_fence_annotate_wait_on(fence, on_fence)

A default software-only implementation is provided, which can be used by drivers attaching a fence to a buffer when they have no other means for hw sync. But a memory-backed fence is also envisioned, because it is common that GPUs can write to, or poll on, some memory location for synchronization. For example:

    fence = custom_get_fence(...);
    if ((seqno_fence = to_seqno_fence(fence)) != NULL) {
        dma_buf *fence_buf = seqno_fence->sync_buf;
        get_dma_buf(fence_buf);

        ... tell the hw the memory location to wait ...
        custom_wait_on(fence_buf, seqno_fence->seqno_ofs, fence->seqno);
    } else {
        /* fall back to sw sync */
        fence_add_callback(fence, my_cb);
    }

On SoC platforms, if some other hw mechanism is provided for synchronizing between IP blocks, it could be supported as an alternate implementation with its own fence ops in a similar way.

The enable_signaling callback is used to provide sw signaling in case a cpu waiter is requested or no compatible hardware signaling could be used.

The intention is to provide a userspace interface (presumably via eventfd) later, to be used in conjunction with dma-buf's mmap support for sw access to buffers (or for userspace apps that would prefer to do their own synchronization).

v1: Original.
v2: After discussion w/ danvet and mlankhorst on #dri-devel, we decided that dma-fence didn't need to care about the sw->hw signaling path (it can be handled the same as the sw->sw case), and therefore the fence->ops can be simplified and more handled in the core. So remove the signal, add_callback, cancel_callback, and wait ops, and replace them with a simple enable_signaling() op which can be used to inform a fence supporting hw->hw signaling that one or more devices which do not support hw signaling are waiting (and therefore it should enable an irq or do whatever is necessary so that the CPU is notified when the fence is passed).
v3: Fix locking fail in attach_fence() and get_fence().
v4: Remove tie-in w/ dma-buf. After discussion w/ danvet and mlankhorst we decided that we need to be able to attach one fence to N dma-bufs, so using the list_head in the dma-fence struct would be problematic.
v5: [ Maarten Lankhorst ] Updated for dma-bikeshed-fence and dma-buf-manager.
v6: [ Maarten Lankhorst ] I removed dma_fence_cancel_callback and some comments about checking if the fence fired or not. This is broken by design. waitqueue_active during destruction is now fatal, since the signaller should be holding a reference in enable_signalling until it signalled the fence. Pass the original dma_fence_cb along, and call __remove_wait in the dma_fence_callback handler, so that no cleanup needs to be performed.
v7: [ Maarten Lankhorst ] Set cb->func and only enable sw signaling if the fence wasn't signaled yet, for example for hardware fences that may choose to signal blindly.
v8: [ Maarten Lankhorst ] Tons of tiny fixes, moved __dma_fence_init to the header and fixed the include mess. dma-fence.h now includes dma-buf.h. All members are now initialized, so kmalloc can be used for allocating a dma-fence. More documentation added.
v9: Change compiler bitfields to flags, change return type of enable_signaling to bool. Rework dma_fence_wait. Added dma_fence_is_signaled and dma_fence_wait_timeout. s/dma// and change exports to non-GPL. Added fence_is_signaled and fence_enable_sw_signaling calls, add ability to override the default wait operation.
v10: Remove event_queue, use a custom list, export try_to_wake_up from the scheduler. Remove the fence lock and use a global spinlock instead; this should hopefully remove all the locking headaches I was having when trying to implement this. enable_signaling is called with this lock held.
v11: Use atomic ops for flags, lifting the need for some spin_lock_irqsaves. However I kept the guarantee that after fence_signal returns, it is guaranteed that enable_signaling has either been called to completion, or will not be called any more. Add contexts and seqno to the base fence implementation. This allows you to wait for fewer fences, by testing for seqno + signaled, and then only waiting on the later fence. Add FENCE_TRACE, FENCE_WARN, and FENCE_ERR. This makes debugging easier. A CONFIG_DEBUG_FENCE will be added to turn off the FENCE_TRACE spam, and another runtime option can turn it off at runtime.
v12: Add CONFIG_FENCE_TRACE. Add missing documentation for the fence->context and fence->seqno members.
v13: Fixup CONFIG_FENCE_TRACE kconfig description. Move fence_context_alloc to fence. Simplify fence_later. Kill the priv member of fence_cb.
v14: Remove the priv argument from fence_add_callback, oops!
v15: Remove priv from the documentation. Explicitly include linux/atomic.h.
v16: Add trace events. Import changes required by android syncpoints.
v17: Use wake_up_state instead of try_to_wake_up. (Colin Cross) Fix up the commit description for seqno_fence. (Rob Clark)
v18: Rename release_fence to fence_release. Move to drivers/dma-buf/. Rename __fence_is_signaled and __fence_signal to *_locked. Rename __fence_init to fence_init. Make fence_default_wait return a signed long, and fix the wait ops too.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Signed-off-by: Thierry Reding <thierry.reding@gmail.com> #use smp_mb__before_atomic()
Acked-by: Sumit Semwal <sumit.semwal@linaro.org>
Acked-by: Daniel Vetter <daniel@ffwll.ch>
Reviewed-by: Rob Clark <robdclark@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
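A condensed sketch of the consumer-side software-signaling path listed above, using the post-v18 naming (struct fence, fence_add_callback, fence_is_signaled); the callback and the surrounding function are illustrative, not code from the patch:

    #include <linux/fence.h>

    static void buffer_ready_cb(struct fence *f, struct fence_cb *cb)
    {
            /* runs once, when the producer calls fence_signal() */
    }

    static struct fence_cb ready_cb;

    static void watch_fence(struct fence *fence)
    {
            if (fence_is_signaled(fence))
                    return;                         /* already fired */

            /* async notification; a non-zero return (-ENOENT) means the
             * fence signaled in the meantime.
             */
            if (fence_add_callback(fence, &ready_cb, buffer_ready_cb))
                    return;

            /* a synchronous alternative with a timeout:
             * fence_wait_timeout(fence, true, msecs_to_jiffies(100));
             */
    }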
-
Submitted by Maarten Lankhorst
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Acked-by: Sumit Semwal <sumit.semwal@linaro.org>
Acked-by: Daniel Vetter <daniel@ffwll.ch>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 05 July 2014, 1 commit
-
-
Submitted by Russell King
Sachin Kamat reports that "component: add support for component match array" broke Exynos DRM due to a NULL pointer dereference. Fix this.
Reported-by: Sachin Kamat <sachin.kamat@samsung.com>
Tested-by: Sachin Kamat <sachin.kamat@samsung.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 03 July 2014, 3 commits
-
-
Submitted by Russell King
Add support for generating a set of component matches at master probe time, and submitting them to the component layer. This allows the component layer to perform the matches internally without needing to call into the master driver, and allows for further restructuring of the component helper.
Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
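A hedged sketch of how a master driver might use the match-array interface of that era; the compare rule, the child device_node, and the master ops are placeholders invented for illustration:

    #include <linux/component.h>
    #include <linux/of.h>
    #include <linux/platform_device.h>

    extern struct device_node *child_np;                      /* placeholder */
    extern const struct component_master_ops my_master_ops;   /* placeholder */

    static int compare_of(struct device *dev, void *data)
    {
            return dev->of_node == data;    /* placeholder match rule */
    }

    static int my_master_probe(struct platform_device *pdev)
    {
            struct component_match *match = NULL;

            /* one component_match_add() call per expected child */
            component_match_add(&pdev->dev, &match, compare_of, child_np);

            return component_master_add_with_match(&pdev->dev,
                                                   &my_master_ops, match);
    }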
-
Submitted by Russell King
Permit masters to call component_master_add_child() and match the same child multiple times. This may happen if there are multiple connections to a single component device from other devices. In such scenarios, we should not return a failure, but instead ignore the attempt.
Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Submitted by Russell King
In try_to_bring_up_master(), we tear down the master's component list for each error case, except for devres group failure. Fix this oversight by making the code less prone to such mistakes.
Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 24 June 2014, 1 commit
-
-
Submitted by Joonsoo Kim
We should free the memory for the bitmap when we find a zone mismatch, otherwise this memory will leak. Additionally, I copy a code comment from PPC KVM's CMA code to explain why we need to check for a zone mismatch.
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Acked-by: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
Reviewed-by: Michal Nazarewicz <mina86@mina86.com>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Michal Nazarewicz <mina86@mina86.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Gleb Natapov <gleb@kernel.org>
Cc: Alexander Graf <agraf@suse.de>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 11 June 2014, 1 commit
-
-
Submitted by Todd E Brandt
Adds two trace events which supply the same info that initcall_debug provides, but via ftrace instead of dmesg. The existing initcall_debug calls require the pm_print_times_enabled variable to be set (either via sysfs or via the kernel command line). The new trace events provide all the same info as the initcall_debug prints but with less overhead, and also with coverage of the device prepare and complete callbacks. These events replace the device_pm_report_time event (which has been removed). device_pm_callback_start is called first and provides the device and callback info. device_pm_callback_end is called afterwards with the device name and error info. The time and pid are gathered from the trace data headers.
Signed-off-by: Todd Brandt <todd.e.brandt@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 07 June 2014, 1 commit
-
-
Submitted by Todd E Brandt
Adds trace events that give finer resolution into suspend/resume. These events are graphed in the timelines generated by the analyze_suspend.py script. They represent large areas of time consumed that are typical of suspend and resume. The event is triggered by calling the function "trace_suspend_resume" with three arguments: a string (the name of the event to be displayed in the timeline), an integer (a case-specific number, such as the power state or cpu number), and a boolean (where true denotes the start of the timeline event, and false denotes the end). The suspend_resume trace event reproduces the data that the machine_suspend trace event did, so the latter has been removed.
Signed-off-by: Todd Brandt <todd.e.brandt@intel.com>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
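A hedged sketch of how a caller would emit the event with the three arguments described above; the action string and the surrounding function are illustrative, not the kernel's actual call sites:

    #include <linux/suspend.h>
    #include <trace/events/power.h>

    static int my_suspend_enter(suspend_state_t state)
    {
            trace_suspend_resume("machine_suspend", state, true);   /* start marker */
            /* ... enter the low-power state ... */
            trace_suspend_resume("machine_suspend", state, false);  /* end marker */
            return 0;
    }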
-
- 05 June 2014, 3 commits
-
-
Submitted by Marc Carino
Some systems require a larger maximum PAGE_SIZE order for CMA allocations. To accommodate such systems, increase the upper bound of the CMA_ALIGNMENT range to 12 (which ends up being 16MB on systems with 4K pages).
Signed-off-by: Marc Carino <marc.ceeeee@gmail.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by Li Zhong
It seems we all agree that information about SECTIONs, e.g. section size or sections per memory block, should be kept as kernel internals and not exposed to userspace. This patch updates Documentation/memory-hotplug.txt to refer to memory blocks instead of memory sections where appropriate, and adds a paragraph to explain that memory blocks are made of memory sections. The documentation update is mostly provided by Nathan. Also, end_phys_index in the code is actually not the end section id but the end memory block id, which should always be the same as phys_index, so it is removed here.
Signed-off-by: Li Zhong <zhong@linux.vnet.ibm.com>
Reviewed-by: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by Akinobu Mita
Currently, "cma=" kernel parameter is used to specify the size of CMA, but we can't specify where it is located. We want to locate CMA below 4GB for devices only supporting 32-bit addressing on 64-bit systems without iommu. This enables to specify the placement of CMA by extending "cma=" kernel parameter. Examples: 1. locate 64MB CMA below 4GB by "cma=64M@0-4G" 2. locate 64MB CMA exact at 512MB by "cma=64M@512M" Note that the DMA contiguous memory allocator on x86 assumes that page_address() works for the pages to allocate. So this change requires to limit end address of contiguous memory area upto max_pfn_mapped to prevent from locating it on highmem area by the argument of dma_contiguous_reserve(). Signed-off-by: NAkinobu Mita <akinobu.mita@gmail.com> Cc: Marek Szyprowski <m.szyprowski@samsung.com> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Cc: David Woodhouse <dwmw2@infradead.org> Cc: Don Dutile <ddutile@redhat.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Andi Kleen <andi@firstfloor.org> Cc: Yinghai Lu <yinghai@kernel.org> Signed-off-by: NAndrew Morton <akpm@linux-foundation.org> Signed-off-by: NLinus Torvalds <torvalds@linux-foundation.org>
-
- 30 May 2014, 1 commit
-
-
Submitted by Zhang Rui
When enabling a device's wakeup capability, a wakeup source is created for the device automatically. But the wakeup source is not unregistered when disabling the device's wakeup capability. This results in zombie wakeup sources after devices/drivers are unregistered.
Signed-off-by: Zhang Rui <rui.zhang@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 29 May 2014, 1 commit
-
-
Submitted by Joonsoo Kim
'cma: Remove potential deadlock situation' introduced a per-CMA-area mutex for bitmap management. That is good, but there is one mistake: when we can't find an appropriate area in the bitmap, we release the global cma_mutex rather than cma->lock, and this is a bug. So fix it.
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
-
- 28 May 2014, 4 commits
-
-
Submitted by Jean Delvare
dev_set_drvdata and dev_get_drvdata are now simple enough again that we can inline them as they used to be before commit b4028437.
Signed-off-by: Jean Delvare <jdelvare@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
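The accessors are small enough to show in full; roughly the shape of the inlined helpers after this series, assuming driver_data lives in struct device again (as the last patch in this group restores):

    /* roughly the inline form in include/linux/device.h after the series */
    static inline void *dev_get_drvdata(const struct device *dev)
    {
            return dev->driver_data;
    }

    static inline void dev_set_drvdata(struct device *dev, void *data)
    {
            dev->driver_data = data;
    }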
-
Submitted by Jean Delvare
There is no point in calling dev_get_drvdata without a valid device. So checking for dev == NULL is pointless. If such a check is ever needed - which I doubt - the driver should do it before calling dev_get_drvdata. We were returning NULL if dev was NULL, which the caller certainly did not expect anyway, so that was only delaying the crash if the caller is not paying attention.
Signed-off-by: Jean Delvare <jdelvare@suse.de>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Jean Delvare
dev_set_drvdata can no longer fail, so it could return void. All callers have hopefully been updated to no longer check for the return value.
Signed-off-by: Jean Delvare <jdelvare@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Jean Delvare
Having to allocate memory as part of dev_set_drvdata() is a problem because that memory may never get freed if the device itself is not created. So move driver_data back to struct device. This is a partial revert of commit b4028437.
Signed-off-by: Jean Delvare <jdelvare@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 27 May 2014, 1 commit
-
-
Submitted by Chander Kashyap
We don't have any protection against the addition of duplicate OPPs currently, and if some code tries to add them, it will end up corrupting the OPP tables. We need to handle some duplication cases separately, as returning an error might not always be the right thing. The new list of return values for dev_pm_opp_add() is:
0: on success, or for duplicate OPPs (both freq and volt are the same) when opp->available
-EEXIST: when freq is the same but volt differs, or for duplicate OPPs (both freq and volt are the same) when !opp->available
-ENOMEM: memory allocation failure
Acked-by: Nishanth Menon <nm@ti.com>
Signed-off-by: Chander Kashyap <k.chander@samsung.com>
Signed-off-by: Inderpal Singh <inderpal.s@samsung.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
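A hedged sketch of a caller distinguishing those return values; the frequency and voltage figures are made up:

    #include <linux/device.h>
    #include <linux/pm_opp.h>

    static int register_one_opp(struct device *dev)
    {
            int ret;

            ret = dev_pm_opp_add(dev, 800000000, 1100000);  /* 800 MHz, 1.1 V */
            if (ret == -EEXIST) {
                    /* same freq with different volt, or an unavailable duplicate */
                    dev_warn(dev, "conflicting duplicate OPP\n");
                    return 0;       /* or propagate, depending on policy */
            }
            return ret;             /* 0 on success, -ENOMEM on allocation failure */
    }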
-
- 26 May 2014, 2 commits
-
-
Submitted by Philipp Zabel
Commit 93258040 "regmap: mmio: Add support for 1/2/8 bytes wide register address." broke regmap_mmio_write for uneven counts, for example 32-bit register addresses with no padding and 8-byte values (count = 5). Fix this by allowing all counts large enough to include some value. This check was BUG_ON(count < 4) before the last change. Signed-off-by: NPhilipp Zabel <p.zabel@pengutronix.de> Signed-off-by: NMark Brown <broonie@linaro.org>
-
Submitted by Xiubo Li
Since we cannot make sure that 'chip->num_regs' will always be nonzero from the users, if 'chip->num_regs' equals zero by mistake or for other reasons, kzalloc() will return ZERO_SIZE_PTR, which equals ((void *)16). So this patch fixes this by just checking 'chip->num_regs' before calling kzalloc(). This also sorts the header files in alphabetical order at the same time.
Signed-off-by: Xiubo Li <Li.Xiubo@freescale.com>
Signed-off-by: Mark Brown <broonie@linaro.org>
-
- 24 May 2014, 1 commit
-
-
Submitted by Eli Billauer
devm_get_free_pages() and devm_free_pages() are the managed counterparts for __get_free_pages() and free_pages().
Signed-off-by: Eli Billauer <eli.billauer@gmail.com>
Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
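A minimal usage sketch inside a probe path, assuming 'dev' is the probing struct device; the order and gfp flags are arbitrary:

    #include <linux/device.h>
    #include <linux/gfp.h>

    static int my_alloc(struct device *dev)
    {
            /* two pages (order 1), released automatically on driver detach */
            unsigned long addr = devm_get_free_pages(dev, GFP_KERNEL, 1);

            if (!addr)
                    return -ENOMEM;

            /* an explicit early release is still possible:
             * devm_free_pages(dev, addr);
             */
            return 0;
    }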
-
- 23 May 2014, 1 commit
-
-
Submitted by Grygorii Strashko
Commit 9ec36caf "of/irq: do irq resolution in platform_get_irq" from Rob Herring moves interrupt resource resolution into platform_get_irq(). But that solution isn't complete, because platform_get_irq_byname() needs to be modified in the same way. Hence, fix it by adding the interrupt resolution code to platform_get_irq_byname() too.
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Rob Herring <robh@kernel.org>
Cc: Tony Lindgren <tony@atomide.com>
Cc: Grant Likely <grant.likely@linaro.org>
Cc: Thierry Reding <thierry.reding@gmail.com>
Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Signed-off-by: Grant Likely <grant.likely@linaro.org>
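A sketch of the driver-side call that benefits from this; the interrupt name "tx" and the wrapper function are illustrative:

    #include <linux/platform_device.h>

    static int my_get_tx_irq(struct platform_device *pdev)
    {
            /* works the same for static resources and DT interrupt-names */
            int irq = platform_get_irq_byname(pdev, "tx");

            /* a negative value may now be -EPROBE_DEFER when the parent
             * irqchip has not probed yet.
             */
            return irq;
    }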
-
- 22 May 2014, 1 commit
-
-
Submitted by Gioh Kim
Fix an erratum: get_dev_cma_area should be dev_get_cma_area.
Signed-off-by: Gioh Kim <gioh.kim@lge.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
-
- 21 May 2014, 1 commit
-
-
Submitted by Bjorn Helgaas
dma_declare_coherent_memory() takes two addresses for a region of memory: a "bus_addr" and a "device_addr". I think the intent is that "bus_addr" is the physical address a *CPU* would use to access the region, and "device_addr" is the bus address the *device* would use to address the region.

Rename "bus_addr" to "phys_addr" and change its type to phys_addr_t. Most callers already supply a phys_addr_t for this argument. The others supply a 32-bit integer (a constant, unsigned int, or __u32) and need no change. Use "unsigned long", not phys_addr_t, to hold PFNs.

No functional change (this could theoretically fix a truncation in a config with 32-bit dma_addr_t and 64-bit phys_addr_t, but I don't think there are any such cases involving this code).

Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Acked-by: James Bottomley <jbottomley@Parallels.com>
Acked-by: Randy Dunlap <rdunlap@infradead.org>
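A hedged sketch of a call with the renamed parameter, using the flag and return conventions of that kernel generation (zero meant failure then); the addresses, the size, and the wrapper function are made up:

    #include <linux/dma-mapping.h>
    #include <linux/sizes.h>

    static int my_probe_reserve(struct device *dev)
    {
            /* CPU-visible physical address vs. device-visible bus address;
             * identical here, which is common but not required.
             */
            phys_addr_t phys_addr   = 0x90000000;
            dma_addr_t  device_addr = 0x90000000;

            if (!dma_declare_coherent_memory(dev, phys_addr, device_addr,
                                             SZ_1M, DMA_MEMORY_MAP))
                    return -ENOMEM;
            return 0;
    }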
-
- 20 May 2014, 1 commit
-
-
Submitted by Chander Kashyap
In the of_init_opp_table() function, if a failure to add an OPP is detected, the count of OPPs yet to be added is not updated. Fix this by decrementing this count on failure as well.
Signed-off-by: Chander Kashyap <k.chander@samsung.com>
Signed-off-by: Inderpal Singh <inderpal.s@samsung.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Cc: 3.7+ <stable@vger.kernel.org> # 3.7+
Acked-by: Nishanth Menon <nm@ti.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 17 May 2014, 1 commit
-
-
Submitted by Rafael J. Wysocki
Currently, some subsystems (e.g. PCI and the ACPI PM domain) have to resume all runtime-suspended devices during system suspend, mostly because those devices may need to be reprogrammed due to different wakeup settings for system sleep and for runtime PM.

For some devices, though, it's OK to remain in runtime suspend throughout a complete system suspend/resume cycle (if the device was in runtime suspend at the start of the cycle). We would like to do this whenever possible, to avoid the overhead of extra power-up and power-down events. However, problems may arise because the device's descendants may require it to be at full power at various points during the cycle. Therefore the most straightforward way to do this safely is if the device and all its descendants can remain runtime suspended until the complete stage of system resume.

To this end, introduce a new device PM flag, power.direct_complete, and modify the PM core to use that flag as follows.

If the ->prepare() callback of a device returns a positive number, the PM core will regard that as an indication that it may leave the device runtime-suspended. It will then check if the system power transition in progress is a suspend (and not hibernation in particular) and if the device is, indeed, runtime-suspended. In that case, the PM core will set the device's power.direct_complete flag. Otherwise it will clear power.direct_complete for the device, and it also will later clear it for the device's parent (if there's one).

Next, the PM core will not invoke the ->suspend(), ->suspend_late(), ->suspend_noirq(), ->resume_noirq(), ->resume_early(), or ->resume() callbacks for all devices having power.direct_complete set. It will invoke their ->complete() callbacks, however, and those callbacks are then responsible for resuming the devices as appropriate, if necessary. For example, in some cases they may need to queue up runtime resume requests for the devices using pm_request_resume().

Changelog partly based on Alan Stern's description of the idea (http://marc.info/?l=linux-pm&m=139940466625569&w=2).

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Alan Stern <stern@rowland.harvard.edu>
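A hedged sketch of a driver opting in through its ->prepare() callback; whether returning a positive value is actually safe is device-specific, and the ops structure here is illustrative:

    #include <linux/pm.h>
    #include <linux/pm_runtime.h>

    static int my_prepare(struct device *dev)
    {
            /* Positive return: the PM core may leave this device (and, if
             * they all agree, its descendants) runtime-suspended across
             * the system suspend/resume cycle.
             */
            return 1;
    }

    static void my_complete(struct device *dev)
    {
            /* ->complete() still runs; queue a runtime resume if the
             * device actually needs to be powered up at this point.
             */
            pm_request_resume(dev);
    }

    static const struct dev_pm_ops my_pm_ops = {
            .prepare  = my_prepare,
            .complete = my_complete,
    };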
-
- 15 May 2014, 1 commit
-
-
Submitted by Laura Abbott
CMA locking is currently very coarse. The cma_mutex protects both the bitmap and against concurrency with alloc_contig_range. There are several situations which may currently result in a deadlock on the CMA mutex, mostly involving AB/BA situations with alloc and free. Fix this issue by protecting the bitmap with a mutex per CMA region and using the existing mutex for protecting against concurrency with alloc_contig_range.
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
-
- 07 May 2014, 2 commits
-
-
Submitted by Nishanth Menon
The CPUFreq-specific helper functions for OPP (Operating Performance Points) now use generic OPP functions, which allows them to be moved back into the CPUFreq framework. This allows for independent modifications or future enhancements as needed, isolated to the CPUFreq framework alone. Here, we just move the relevant code and documentation to make this part of the CPUFreq infrastructure.
Cc: Kevin Hilman <khilman@deeprootsystems.com>
Signed-off-by: Nishanth Menon <nm@ti.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Submitted by Nishanth Menon
The CPUFreq-custom functions for OPP (Operating Performance Points) currently live inside the OPP library. These custom functions depend on internal data structures to pick up OPP information to create the cpufreq table. For example, the cpufreq table is created in precisely the same order in which OPP entries are stored inside the list implementation. This kind of tight interdependency is purely artificial, since the same functionality can be achieved using the generic OPP functions meant to do the same. This interdependency also limits the independent modification of the cpufreq and OPP libraries.

So use the generic dev_pm_opp_find_freq_ceil function, which achieves the same table organization as we currently use. As a result, we don't need to use the internal device_opp structure anymore, and hence we can switch over to the RCU lock instead of the mutex holding the internal list lock. This breaking of the dependency on internal data structures imposes no change on usage.

NOTE: This change is a precursor to moving this cpufreq-specific logic out of the generic library into cpufreq.
Cc: Kevin Hilman <khilman@deeprootsystems.com>
Signed-off-by: Nishanth Menon <nm@ti.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
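A sketch of the generic iteration pattern referred to above, walking OPPs in ascending frequency under the RCU protection the OPP library of that era required; the wrapper function is illustrative and the table-filling step is left as a comment:

    #include <linux/err.h>
    #include <linux/pm_opp.h>
    #include <linux/rcupdate.h>

    static void walk_opps(struct device *dev)
    {
            unsigned long freq = 0;
            struct dev_pm_opp *opp;

            rcu_read_lock();
            while (!IS_ERR(opp = dev_pm_opp_find_freq_ceil(dev, &freq))) {
                    /* 'freq' now holds the next available frequency;
                     * append it to the cpufreq table here.
                     */
                    freq++;         /* step past this OPP for the next lookup */
            }
            rcu_read_unlock();
    }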
-
- 05 May 2014, 1 commit
-
-
Submitted by Javier Martinez Canillas
Signed-off-by: Javier Martinez Canillas <javier.martinez@collabora.co.uk>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
-
- 01 May 2014, 2 commits
-
-
Submitted by Geert Uytterhoeven
drivers/base/regmap/regmap.c: In function ‘_regmap_range_multi_paged_reg_write’:
drivers/base/regmap/regmap.c:1665: warning: ‘this_page’ may be used uninitialized in this function
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Mark Brown <broonie@linaro.org>
-
Submitted by Xiubo Li
Since we cannot make sure that 'len = pair_size * num_regs' will always be nonzero from the users, if 'num_regs' equals zero by mistake or for other reasons, kzalloc() will return ZERO_SIZE_PTR, which equals ((void *)16). So this patch fixes this by just doing the 'len' zero check before calling kzalloc().
Signed-off-by: Xiubo Li <Li.Xiubo@freescale.com>
Signed-off-by: Mark Brown <broonie@linaro.org>
-
- 30 April 2014, 1 commit
-
-
Submitted by Srinivas Pandruvada
Introduce devm_kmemdup, which uses resource-managed kmalloc. There have been several requests from maintainers to add this instead of using kmemdup.
Signed-off-by: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
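A minimal sketch, assuming 'dev', a source buffer, and its length already exist in the probe path; the wrapper function is illustrative:

    #include <linux/device.h>

    static int my_copy_config(struct device *dev, const void *src_buf, size_t len)
    {
            void *copy = devm_kmemdup(dev, src_buf, len, GFP_KERNEL);

            if (!copy)
                    return -ENOMEM;

            /* no explicit kfree(): the copy is released when 'dev' detaches */
            return 0;
    }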
-
- 29 April 2014, 1 commit
-
-
Submitted by Grant Likely
When the kernel is built with CONFIG_PREEMPT it is possible to reach a state where all modules are loaded but some drivers are still stuck in the deferred list, and an external event is needed to kick the deferred queue to probe these drivers.

The issue has been observed on embedded systems with CONFIG_PREEMPT enabled, audio support built as modules, and nfsroot used for the root filesystem.

The following log fragment shows such a sequence, where all audio modules were loaded but the sound card is not present, since the machine driver failed to probe due to a missing dependency during its probe. The board is am335x-evmsk (McASP<->tlv320aic3106 codec) with the davinci-evm machine driver:

...
[ 12.615118] davinci-mcasp 4803c000.mcasp: davinci_mcasp_probe: ENTER
[ 12.719969] davinci_evm sound.3: davinci_evm_probe: ENTER
[ 12.725753] davinci_evm sound.3: davinci_evm_probe: snd_soc_register_card
[ 12.753846] davinci-mcasp 4803c000.mcasp: davinci_mcasp_probe: snd_soc_register_component
[ 12.922051] davinci-mcasp 4803c000.mcasp: davinci_mcasp_probe: snd_soc_register_component DONE
[ 12.950839] davinci_evm sound.3: ASoC: platform (null) not registered
[ 12.957898] davinci_evm sound.3: davinci_evm_probe: snd_soc_register_card DONE (-517)
[ 13.099026] davinci-mcasp 4803c000.mcasp: Kicking the deferred list
[ 13.177838] davinci-mcasp 4803c000.mcasp: really_probe: probe_count = 2
[ 13.194130] davinci_evm sound.3: snd_soc_register_card failed (-517)
[ 13.346755] davinci_mcasp_driver_init: LEAVE
[ 13.377446] platform sound.3: Driver davinci_evm requests probe deferral
[ 13.592527] platform sound.3: really_probe: probe_count = 0

In the log the machine driver enters its probe at 12.719969 (at this point it has been removed from the deferred lists). The McASP driver is already executing its probe (since 12.615118). The machine driver tries to construct the sound card (12.950839) but does not find one of the components, so it fails. After this the McASP driver registers all the ASoC components (the machine driver is still in its probe function after it failed to construct the card), and the deferred work is prepared at 13.099026 (note that at this time the machine driver is not in the lists, so it is not going to be handled when the work executes). Lastly, the machine driver exits from its probe and the core places it on the deferred list, but no other driver is going to load and the deferred queue is not going to be kicked again - until we have an external event like connecting a USB stick, etc.

The proposed solution is to try the deferred queue once more when the last driver is asking for deferring and we had drivers loaded while this last driver was probing. This way we can avoid drivers getting stuck in the deferred queue.

Signed-off-by: Grant Likely <grant.likely@linaro.org>
Reviewed-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Tested-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Stable <stable@vger.kernel.org> # v3.4+
-
- 26 April 2014, 1 commit
-
-
Submitted by Michael Marineau
Support for uevent_helper, aka hotplug, is not required on many systems these days but it can still be enabled via sysfs or sysctl.
Reported-by: Darren Shepherd <darren.s.shepherd@gmail.com>
Signed-off-by: Michael Marineau <mike@marineau.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 25 April 2014, 1 commit
-
-
Submitted by Rob Herring
Currently we get the following kind of errors if we try to use interrupt phandles to irqchips that have not yet been initialized:

irq: no irq domain found for /ocp/pinmux@48002030 !
------------[ cut here ]------------
WARNING: CPU: 0 PID: 1 at drivers/of/platform.c:171 of_device_alloc+0x144/0x184()
Modules linked in:
CPU: 0 PID: 1 Comm: swapper/0 Not tainted 3.12.0-00038-g42a9708 #1012
(show_stack+0x14/0x1c)
(dump_stack+0x6c/0xa0)
(warn_slowpath_common+0x64/0x84)
(warn_slowpath_null+0x1c/0x24)
(of_device_alloc+0x144/0x184)
(of_platform_device_create_pdata+0x44/0x9c)
(of_platform_bus_create+0xd0/0x170)
(of_platform_bus_create+0x12c/0x170)
(of_platform_populate+0x60/0x98)

This is because we're wrongly trying to populate resources that are not yet available. It's perfectly valid to create irqchips dynamically, so let's fix up the issue by resolving the interrupt resources when platform_get_irq is called.

And then we also need to accept the fact that some irqdomains do not exist that early on, and only get initialized later. So we can turn the current WARN_ON into just a pr_debug().

We still attempt to populate irq resources when we create the devices. This allows current drivers which don't use platform_get_irq to continue to function. Once all drivers are fixed, this code can be removed.

Suggested-by: Russell King <linux@arm.linux.org.uk>
Signed-off-by: Rob Herring <robh@kernel.org>
Signed-off-by: Tony Lindgren <tony@atomide.com>
Tested-by: Tony Lindgren <tony@atomide.com>
Cc: stable@vger.kernel.org # v3.10+
Signed-off-by: Grant Likely <grant.likely@linaro.org>
-
- 22 April 2014, 1 commit
-
-
Submitted by Boris BREZILLON
Some I2C adapters are only compatible with the SMBus protocol and do not support standard I2C transfers. Fall back to SMBus transfers if we encounter such adapters. The transfer type is chosen according to the val_bits field in the regmap config.
Signed-off-by: Boris BREZILLON <boris.brezillon@free-electrons.com>
Signed-off-by: Mark Brown <broonie@linaro.org>
-