- 23 October 2015, 18 commits
-
-
Submitted by Julien Grall

The PV block protocol uses 4KB page granularity. The goal of this patch is to allow Linux, built with 64KB page granularity, to use block devices on an unmodified Xen. The block API uses segments which must be at least the size of a Linux page, so the driver has to break each page into 4KB chunks before handing it to the backend. When breaking a 64KB segment into 4KB chunks, some chunks may be empty. As the PV protocol always requires data in each chunk, we have to count the number of Xen pages that will actually be in use and avoid sending empty chunks.

Note that a pre-defined number of grants is reserved before preparing the request, based on the number and the maximum size of the segments. If each segment contains only a small amount of data, the driver may reserve too many grants (16 grants are reserved per segment with 64KB page granularity).

Furthermore, in the case of persistent grants we allocate one Linux page per grant although only the first 4KB of the page is effectively in use. This could be improved by sharing the page between multiple grants.

Signed-off-by: Julien Grall <julien.grall@citrix.com>
Acked-by: Roger Pau Monné <roger.pau@citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
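The chunk counting described above can be illustrated with a small standalone sketch; the names XEN_PAGE_SIZE and count_used_grants are illustrative only, not the driver's actual identifiers. Only the 4KB pieces of a segment that actually contain data are counted, so no empty chunks are sent to the backend.

```c
/*
 * Standalone sketch of the chunk counting described above; the names
 * XEN_PAGE_SIZE and count_used_grants are illustrative, not the
 * driver's actual identifiers.
 */
#include <stdio.h>

#define XEN_PAGE_SIZE 4096u

/* Number of non-empty 4KB chunks needed for 'len' bytes at 'offset'. */
static unsigned int count_used_grants(unsigned int offset, unsigned int len)
{
    unsigned int first, last;

    if (len == 0)
        return 0;

    first = offset / XEN_PAGE_SIZE;
    last = (offset + len - 1) / XEN_PAGE_SIZE;
    return last - first + 1;
}

int main(void)
{
    /* A 512-byte request at offset 1024 fits in one 4KB grant... */
    printf("%u\n", count_used_grants(1024, 512));              /* 1  */
    /* ...while a full 64KB segment needs 16 grants. */
    printf("%u\n", count_used_grants(0, 16 * XEN_PAGE_SIZE));  /* 16 */
    return 0;
}
```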
-
Submitted by Julien Grall

The Xen interface uses 4KB page granularity, which means that each grant is 4KB. The current implementation allocates one Linux page per grant. On Linux using 64KB page granularity, only the first 4KB of the page is used. We could reduce the wasted memory by sharing the page between multiple grants; that would require some care with the {Set,Clear}ForeignPage macros.

Note that no changes have been made to the x86 code, because there both Linux and Xen only use 4KB page granularity.

Signed-off-by: Julien Grall <julien.grall@citrix.com>
Reviewed-by: David Vrabel <david.vrabel@citrix.com>
Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
-
Submitted by Julien Grall

Only use the first 4KB of the page to store the event channel info. This means we waste 60KB every time we allocate a page for:
* the control block: a page is allocated per CPU
* the event array: a page is allocated every time we need to expand it

I think we can reduce the memory waste in both areas by:
* control block: sharing a page between multiple vCPUs, although this will require some bookkeeping so the page is not freed when a CPU goes offline while other CPUs sharing the page are still online
* event array: always extending the event array by 64KB (i.e. 16 4KB chunks), which would require more care when expanding the event channel fails

Signed-off-by: Julien Grall <julien.grall@citrix.com>
Reviewed-by: David Vrabel <david.vrabel@citrix.com>
Reviewed-by: Stefano Stabellini <stefano.stabellini@citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
-
Submitted by Julien Grall

For ARM64 guests, Linux can support either 64KB or 4KB page granularity. However, the hypercall interface is always based on 4KB page granularity. With 64KB page granularity, a single Linux page is spread over multiple Xen frames. To avoid splitting the page into 4KB frames, take advantage of the extent_order field to directly allocate/free chunks of the Linux page size.

Note that the PVMMU is only used for PV guests (which are x86-only), where the page granularity is always 4KB. Some BUILD_BUG_ONs have been added to ensure this, because that code has not been modified.

Signed-off-by: Julien Grall <julien.grall@citrix.com>
Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
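A minimal sketch of the extent_order arithmetic the commit relies on (the constant and variable names are illustrative): one extent of order PAGE_SHIFT minus XEN_PAGE_SHIFT covers a whole Linux page, so a 64KB page maps to a single order-4 extent of sixteen 4KB Xen frames rather than sixteen separate requests.

```c
/*
 * Illustrative arithmetic only: how extent_order lets a 64KB Linux page
 * be handled as a single extent instead of sixteen 4KB frames.
 */
#include <stdio.h>

#define XEN_PAGE_SHIFT 12   /* Xen frames are always 4KB */

int main(void)
{
    unsigned int linux_page_shift = 16;  /* 64KB Linux pages on ARM64 */
    unsigned int extent_order = linux_page_shift - XEN_PAGE_SHIFT;

    printf("extent_order=%u covers %u Xen frames per Linux page\n",
           extent_order, 1u << extent_order);  /* extent_order=4, 16 frames */
    return 0;
}
```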
-
Submitted by Julien Grall

The console ring is always based on the page granularity of Xen.

Signed-off-by: Julien Grall <julien.grall@citrix.com>
Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
-
Submitted by Julien Grall

All the rings (xenstore and PV rings) are always based on the page granularity of Xen.

Signed-off-by: Julien Grall <julien.grall@citrix.com>
Reviewed-by: David Vrabel <david.vrabel@citrix.com>
Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
-
Submitted by Julien Grall

On ARM, not all DMA-capable devices on a platform are necessarily protected by an IOMMU, so DMA requests have to use the BFN (i.e. the MFN on ARM) to drive the device correctly. While DOM0 memory is allocated in a 1:1 fashion (PFN == MFN), grant mappings break this contiguous mapping.

When Linux uses 64KB page granularity, a page may be split across multiple non-contiguous MFNs (Xen uses 4KB page granularity), so a DMA request would likely fail. Checking that a 64KB page is backed by contiguous MFNs is tedious, so for now always report that biovecs are not mergeable.

Signed-off-by: Julien Grall <julien.grall@citrix.com>
Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
-
Submitted by Julien Grall

Prepare the code to support 64KB page granularity. The first implementation will use a full Linux page per indirect and persistent grant. When non-persistent grants are used, each page of a bio request may be split into multiple grants.

Furthermore, the page field of the grant structure is only used to copy data from persistent or indirect grants. Avoid setting it for other use cases, as it has no meaning once the page is split into multiple grants.

Provide two functions: one to set up indirect grants, the other for bio pages.

Signed-off-by: Julien Grall <julien.grall@citrix.com>
Acked-by: Roger Pau Monné <roger.pau@citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
-
Submitted by Julien Grall

All uses of the pfn field follow the same idiom, pfn_to_page(grant->pfn), which always returns the same page. Store the page directly in the grant to clean up the code.

Signed-off-by: Julien Grall <julien.grall@citrix.com>
Acked-by: Roger Pau Monné <roger.pau@citrix.com>
Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
-
Submitted by Julien Grall

Currently, blkif_queue_request has two distinct execution paths:
- sending a discard request
- sending a read/write request

The function also allocates grants to use for generating the request, although these are only needed for read/write requests. Rather than having one function with two distinct execution paths, split it in two. This also removes one level of indentation.

Signed-off-by: Julien Grall <julien.grall@citrix.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
-
Submitted by Julien Grall

Currently, a grant is always based on the Xen page granularity (i.e. 4KB). When Linux uses a different page granularity, a single page is split between multiple grants. The new helpers are in charge of splitting a Linux page into grants and calling a function given by the caller on each grant. Also provide a helper to count the number of grants within a given contiguous region.

Note that x86/include/asm/xen/page.h now includes xen/interface/grant_table.h rather than xen/grant_table.h. This is necessary because xen/grant_table.h depends on asm/xen/page.h and would break the compilation; furthermore, only the definitions in interface/grant_table.h are required.

Signed-off-by: Julien Grall <julien.grall@citrix.com>
Reviewed-by: David Vrabel <david.vrabel@citrix.com>
Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
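The helper pattern described above, splitting a Linux page into Xen-grant-sized pieces and invoking a caller-supplied callback on each, can be sketched in plain C. The names xen_for_each_grant and grant_fn are hypothetical, not the kernel's actual helper names.

```c
/*
 * Userspace sketch of the helper pattern: walk a contiguous region of a
 * (possibly 64KB) Linux page in 4KB grant-sized pieces and invoke a
 * caller-supplied callback on each piece.  xen_for_each_grant and
 * grant_fn are hypothetical names.
 */
#include <stdio.h>

#define XEN_PAGE_SIZE 4096u

typedef void (*grant_fn)(unsigned int offset, unsigned int len, void *data);

static void xen_for_each_grant(unsigned int offset, unsigned int len,
                               grant_fn fn, void *data)
{
    while (len) {
        /* Clip the current chunk to the next 4KB boundary. */
        unsigned int in_page = XEN_PAGE_SIZE - (offset % XEN_PAGE_SIZE);
        unsigned int chunk = len < in_page ? len : in_page;

        fn(offset, chunk, data);
        offset += chunk;
        len -= chunk;
    }
}

static void print_grant(unsigned int offset, unsigned int len, void *data)
{
    (void)data;
    printf("grant: offset=%u len=%u\n", offset, len);
}

int main(void)
{
    /* A 10000-byte region starting at offset 3000 spans four 4KB grants. */
    xen_for_each_grant(3000, 10000, print_grant, NULL);
    return 0;
}
```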
-
Submitted by Julien Grall

The skb doesn't change within the function, so it is only necessary to check whether we need GSO once at the beginning.

Signed-off-by: Julien Grall <julien.grall@citrix.com>
Acked-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
-
Submitted by David Vrabel

Pages returned by alloc_xenballooned_pages() will be used for grant mappings, which call set_phys_to_machine() (in PV guests). Ballooned pages are set to INVALID_P2M_ENTRY in the p2m and may thus be using the (shared) missing tables, so a subsequent set_phys_to_machine() would need to allocate new tables. Since the grant mapping may be done from a context that cannot sleep, the p2m entries must already be allocated.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
-
Submitted by David Vrabel

alloc_xenballooned_pages() is used to get ballooned pages to back foreign mappings etc. Instead of having to balloon out real pages, use (if supported) hotplugged memory. This makes more memory available to the guest and reduces fragmentation in the p2m.

This is only enabled if the xen.balloon.hotplug_unpopulated sysctl is set to 1. The sysctl defaults to 0 in case the udev rules that automatically online hotplugged memory do not exist.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
---
v3:
- Add xen.balloon.hotplug_unpopulated sysctl to enable use of hotplug for unpopulated pages.
-
Submitted by David Vrabel

All users of alloc_xenballoon_pages() wanted low-memory pages, so remove the option for high memory.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
-
Submitted by David Vrabel

Now that we track the total number of pages (including hotplugged regions), it is easy to determine whether more memory needs to be hotplugged. Add a new BP_WAIT state to signal that the balloon process needs to wait until kicked by the memory-add notifier (when the new section is onlined by userspace).

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
---
v3:
- Return BP_WAIT if enough sections are already hotplugged.
v2:
- New BP_WAIT status after adding new memory sections.
-
Submitted by David Vrabel

The stats used for memory hotplug make no sense and are fiddled with in odd ways. Remove them and introduce total_pages to track the total number of pages (both populated and unpopulated), including those within hotplugged regions (note that this includes pages not yet onlined). This will be used in a subsequent commit ("xen/balloon: only hotplug additional memory if required") when deciding whether additional memory needs to be hotplugged.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
-
Submitted by David Vrabel

Instead of placing hotplugged memory at the end of RAM (which may conflict with PCI devices or reserved regions), use allocate_resource() to get a new, suitably aligned resource that does not conflict.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
---
v3:
- Remove stale comment.
-
- 18 October 2015, 1 commit
-
-
Submitted by Mika Westerberg

ACPI SSCN/FMCN methods were originally added so that the platform can provide the most accurate HCNT/LCNT values to the driver. However, this turns out not to be true for the Dell Inspiron 7348, where using them causes the touchpad to fail at boot:

  i2c_hid i2c-DLL0675:00: failed to retrieve report from device.
  i2c_designware INT3433:00: i2c_dw_handle_tx_abort: lost arbitration
  i2c_hid i2c-DLL0675:00: failed to retrieve report from device.
  i2c_designware INT3433:00: controller timed out

The values received from ACPI are (in fast mode):

  HCNT: 72
  LCNT: 160

which translates to the following timings (the input clock is 100MHz on Broadwell):

  tHIGH: 720 ns (spec min 600 ns)
  tLOW: 1600 ns (spec min 1300 ns)
  Bus period: 2920 ns (assuming 300 ns tf and tr)
  Bus speed: 342.5 kHz

Both tHIGH and tLOW are within the I2C specification. The values calculated when the ACPI parameters are not used are (in fast mode):

  HCNT: 87
  LCNT: 159

which translates to:

  tHIGH: 870 ns (spec min 600 ns)
  tLOW: 1590 ns (spec min 1300 ns)
  Bus period: 3060 ns (assuming 300 ns tf and tr)
  Bus speed: 326.8 kHz

These values are also within the I2C specification. Since both the ACPI and the calculated values meet the I2C timing requirements, it is hard to say why the touchpad does not function properly with the ACPI values, except that the bus speed is higher in that case (but still well below the 400kHz maximum).

Solve this by adding a DMI quirk to the driver that disables using ACPI parameters on this particular machine.

Reported-by: Pavel Roskin <plroskin@gmail.com>
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Tested-by: Pavel Roskin <plroskin@gmail.com>
Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
Cc: stable@kernel.org
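The timing figures quoted above follow from simple arithmetic on the counter values, assuming tHIGH = HCNT times the input clock period and tLOW = LCNT times the clock period, plus the assumed 300 ns rise and fall times. The real i2c-designware driver applies additional correction factors; this standalone sketch only reproduces the numbers in the commit message.

```c
/*
 * Worked example reproducing the figures quoted above, assuming the
 * simple model tHIGH = HCNT * clock period and tLOW = LCNT * clock
 * period plus 300 ns rise and fall times.
 */
#include <stdio.h>

int main(void)
{
    const double ic_clk_hz = 100e6;        /* 100 MHz input clock (Broadwell) */
    const double tf_tr_ns = 300.0 + 300.0; /* assumed fall + rise time */
    const int cnt[2][2] = { { 72, 160 },   /* ACPI-provided HCNT/LCNT */
                            { 87, 159 } }; /* driver-calculated HCNT/LCNT */

    for (int i = 0; i < 2; i++) {
        double thigh = cnt[i][0] * 1e9 / ic_clk_hz;  /* ns */
        double tlow  = cnt[i][1] * 1e9 / ic_clk_hz;  /* ns */
        double period = thigh + tlow + tf_tr_ns;     /* ns */

        printf("HCNT=%d LCNT=%d: tHIGH=%.0f ns, tLOW=%.0f ns, "
               "period=%.0f ns, speed=%.1f kHz\n",
               cnt[i][0], cnt[i][1], thigh, tlow, period, 1e6 / period);
    }
    return 0;
}
```

Running this prints 720/1600/2920 ns and 342.5 kHz for the ACPI values, and 870/1590/3060 ns and 326.8 kHz for the calculated ones, matching the commit message.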
-
- 17 October 2015, 1 commit
-
-
Submitted by Michal Hocko

Commit 6afdb859 ("mm: do not ignore mapping_gfp_mask in page cache allocation paths") caught some users of hardcoded GFP_KERNEL in the page cache allocation paths. It wasn't complete, however, and others went unnoticed. Dave Chinner has reported the following deadlock for xfs on a loop device:

: With the recent merge of the loop device changes, I'm now seeing
: XFS deadlock on my single CPU, 1GB RAM VM running xfs/073.
:
: The deadlock is as follows:
:
: kloopd1: loop_queue_read_work
: xfs_file_iter_read
: lock XFS inode XFS_IOLOCK_SHARED (on image file)
: page cache read (GFP_KERNEL)
: radix tree alloc
: memory reclaim
: reclaim XFS inodes
: log force to unpin inodes
: <wait for log IO completion>
:
: xfs-cil/loop1: <does log force IO work>
: xlog_cil_push
: xlog_write
: <loop issuing log writes>
: xlog_state_get_iclog_space()
: <blocks due to all log buffers under write io>
: <waits for IO completion>
:
: kloopd1: loop_queue_write_work
: xfs_file_write_iter
: lock XFS inode XFS_IOLOCK_EXCL (on image file)
: <wait for inode to be unlocked>
:
: i.e. the kloopd, with its split read and write work queues, has
: introduced a dependency through memory reclaim. i.e. that writes
: need to be able to progress for reads to make progress.
:
: The problem, fundamentally, is that mpage_readpages() does a
: GFP_KERNEL allocation, rather than paying attention to the inode's
: mapping gfp mask, which is set to GFP_NOFS.
:
: This didn't use to happen, because the loop device used to issue
: reads through the splice path and that does:
:
: error = add_to_page_cache_lru(page, mapping, index,
: GFP_KERNEL & mapping_gfp_mask(mapping));

This changed with commit aa4d8616 ("block: loop: switch to VFS ITER_BVEC").

This patch changes mpage_readpage{s} to follow the gfp mask set for the mapping. There are, however, other places doing basically the same:

- lustre:ll_dir_filler is doing GFP_KERNEL from a function which apparently uses GFP_NOFS for other allocations, so make this consistent.
- cifs:readpages_get_pages is called from cifs_readpages, and __cifs_readpages_from_fscache, called from the same path, obeys the mapping gfp.
- ramfs_nommu_expand_for_mapping is hardcoding GFP_KERNEL as well, even though it uses mapping_gfp_mask for the page allocation.
- ext4_mpage_readpages is called from the page cache allocation path, same as read_pages and read_cache_pages.

As I've noted in my previous post, I cannot say I would be happy about sprinkling mapping_gfp_mask all over the place, and it sounds like we should drop the gfp_mask argument altogether and use it internally in __add_to_page_cache_locked. That would require all the filesystems to use the mapping gfp consistently, which I am not sure is the case here. From a quick glance it seems that some filesystems use it all the time while others are selective.

Signed-off-by: Michal Hocko <mhocko@suse.com>
Reported-by: Dave Chinner <david@fromorbit.com>
Cc: "Theodore Ts'o" <tytso@mit.edu>
Cc: Ming Lei <ming.lei@canonical.com>
Cc: Andreas Dilger <andreas.dilger@intel.com>
Cc: Oleg Drokin <oleg.drokin@intel.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Christoph Hellwig <hch@lst.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 16 October 2015, 5 commits
-
-
Submitted by Ilya Dryomov

This covers only the simplest case, an object-size-sized write, but it's still useful in tiering setups where EC is used for the base tier, as the writefull op can be proxied, saving an object promotion.

Even though updating ceph_osdc_new_request() to allow writefull should just be a matter of fixing an assert, I didn't do it because its only user is cephfs. All other sites were updated.

Reflects ceph.git commit 7bfb7f9025a8ee0d2305f49bf0336d2424da5b5b.

Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Reviewed-by: Alex Elder <elder@linaro.org>
-
Submitted by Ilya Dryomov

Commit 30e2bc08 ("Revert "block: remove artifical max_hw_sectors cap"") restored a clamp on max_sectors. It's now 2560 sectors instead of 1024, but that's still not good enough: we set max_hw_sectors to the rbd object size because we don't want object-sized I/Os to be split, and the default object size is 4M. So, set max_sectors to max_hw_sectors in rbd at queue init time.

Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Reviewed-by: Alex Elder <elder@linaro.org>
-
Submitted by Marc Zyngier

When we create a generic MSI domain with MSI_FLAG_USE_DEF_CHIP_OPS set and either of .mask or .unmask NULL in the irq_chip structure, we set them to pci_msi_[un]mask_irq. This is a bad idea for at least two reasons:
- PCI_MSI might not be selected and the kernel fails to build (yes, this is legitimate, at least on arm64!)
- this may not be a PCI/MSI domain at all (platform MSI, for example)

Either way, this looks wrong. Move the overriding of mask/unmask to the PCI counterpart, and panic if either of these two methods is not set in the core code (they really should be present).

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Cc: Jiang Liu <jiang.liu@linux.intel.com>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Link: http://lkml.kernel.org/r/1444760085-27857-1-git-send-email-marc.zyngier@arm.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Submitted by Arnd Bergmann

The virtgpu driver prints the last_seq variable using the %ld or %lu format string, which does not work correctly on all architectures and causes this compiler warning on ARM:

  drivers/gpu/drm/virtio/virtgpu_fence.c: In function 'virtio_timeline_value_str':
  drivers/gpu/drm/virtio/virtgpu_fence.c:64:22: warning: format '%lu' expects argument of type 'long unsigned int', but argument 4 has type 'long long int' [-Wformat=]
    snprintf(str, size, "%lu", atomic64_read(&fence->drv->last_seq));
                        ^
  drivers/gpu/drm/virtio/virtgpu_debugfs.c: In function 'virtio_gpu_debugfs_irq_info':
  drivers/gpu/drm/virtio/virtgpu_debugfs.c:37:16: warning: format '%ld' expects argument of type 'long int', but argument 3 has type 'long long int' [-Wformat=]
    seq_printf(m, "fence %ld %lld\n",
               ^

In order to avoid the warnings, this changes the format strings to %llu and adds a cast to u64, which makes it work the same way everywhere.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Dave Airlie <airlied@redhat.com>
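A standalone illustration of the format-string fix (not the virtgpu code itself): printing a 64-bit counter with %lu breaks on 32-bit targets where long is 32 bits, while casting to a known 64-bit type and using %llu works everywhere. int64_t stands in for the kernel's atomic64_read() return value.

```c
/*
 * Standalone illustration of the warning and the fix; int64_t stands in
 * for the kernel's atomic64_read() return value.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int64_t last_seq = 123456789012345LL;

    /* Broken on 32-bit ARM: "%lu" expects unsigned long (32 bits there). */
    /* printf("fence %lu\n", last_seq); */

    /* Portable: cast to a 64-bit type and use the matching format. */
    printf("fence %llu\n", (unsigned long long)last_seq);

    /* Equivalent using the <inttypes.h> macros. */
    printf("fence %" PRId64 "\n", last_seq);
    return 0;
}
```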
-
Submitted by Srinivas Pandruvada

This is a workaround for the KNL platform, where in some cases the MPERF counter will not have an updated value before the next read of MSR_IA32_MPERF; in that case a divide by zero will occur. This change ignores the current sample for the busy calculation in this case.

Fixes: b34ef932 (intel_pstate: Knights Landing support)
Signed-off-by: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
Acked-by: Kristen Carlson Accardi <kristen@linux.intel.com>
Cc: 4.1+ <stable@vger.kernel.org> # 4.1+
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
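A userspace sketch of the guard this change adds, under the assumption that core busy is derived from the APERF/MPERF delta ratio (the real logic lives inside intel_pstate and is more involved): if MPERF did not advance between two reads, the sample is discarded instead of dividing by zero.

```c
/*
 * Userspace sketch of the guard, assuming busy is derived from the
 * APERF/MPERF delta ratio; the real intel_pstate logic is more involved.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static bool sample_valid(uint64_t aperf_delta, uint64_t mperf_delta,
                         double *busy_pct)
{
    if (mperf_delta == 0)
        return false;   /* MPERF did not advance: ignore this sample */

    *busy_pct = 100.0 * (double)aperf_delta / (double)mperf_delta;
    return true;
}

int main(void)
{
    double busy;

    if (sample_valid(12000, 0, &busy))
        printf("busy=%.1f%%\n", busy);
    else
        printf("sample ignored (MPERF did not advance)\n");
    return 0;
}
```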
-
- 15 October 2015, 10 commits
-
-
Submitted by Michel Dänzer

This fixes flickering issues caused by prematurely firing pflip interrupts.

v2 (chk): add commit message, fix DCE V10/V11 and DM as well
v3: re-enable the pflip interrupt wherever we re-enable a CRTC
v4: enable the pflip interrupt in DAL as well
v5: drop the DAL changes for upstream
v6: (agd) only enable interrupts on CRTCs that exist
v7: (agd) integrate suggestions from Michel

Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
-
Submitted by Alex Deucher

Set the default to 600MHz if it's not set in the BIOS, and bump the default to 600MHz if it's lower than that. Port of radeon commit 9368931d.

v2: clean up the code a bit

bug: https://bugs.freedesktop.org/show_bug.cgi?id=91896
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Submitted by Wolfram Sang

The core may register clients attached to this master which may use functionality from the master. So, runtime PM must be enabled beforehand, otherwise this will fail.

Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
Acked-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Cc: stable@kernel.org
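A hedged sketch of the probe ordering this commit (and the two similar ones below) enforces, not the actual driver hunk: runtime PM is enabled on the adapter's device before i2c_add_adapter(), because adding the adapter may immediately register and probe clients that exercise the master. The probe function and its parameters are illustrative.

```c
/*
 * Hedged sketch of the ordering only, not the actual driver change:
 * enable runtime PM on the adapter's device before exposing the adapter
 * to the I2C core, because i2c_add_adapter() may immediately register
 * and probe clients that exercise the master.
 */
#include <linux/i2c.h>
#include <linux/platform_device.h>
#include <linux/pm_runtime.h>

static int example_i2c_probe(struct platform_device *pdev,
                             struct i2c_adapter *adap)
{
    int ret;

    platform_set_drvdata(pdev, adap);

    /* Enable runtime PM first... */
    pm_runtime_set_active(&pdev->dev);
    pm_runtime_enable(&pdev->dev);

    /* ...and only then register the adapter with the core. */
    ret = i2c_add_adapter(adap);
    if (ret)
        pm_runtime_disable(&pdev->dev);

    return ret;
}
```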
-
Submitted by Wolfram Sang

The core may register clients attached to this master which may use functionality from the master. So, runtime PM must be enabled beforehand, otherwise this will fail. While here, move the drvdata setup, too.

Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Tested-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Acked-by: Kukjin Kim <kgene@kernel.org>
Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
Cc: stable@kernel.org
-
Submitted by Wolfram Sang

The core may register clients attached to this master which may use functionality from the master. So, runtime PM must be enabled beforehand, otherwise this will fail. While here, move the drvdata setup, too.

Reported-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
Cc: stable@kernel.org
-
Submitted by Kieran Bingham

A change of return status was introduced in commit 3fffd128 ("i2c: allow specifying separate wakeup interrupt in device tree"). The commit prevents the defer status from being passed up the call stack appropriately when dev_pm_domain_attach returns -EPROBE_DEFER. Catch the -EPROBE_DEFER and clean up the IRQ wakeup status.

Signed-off-by: Kieran Bingham <kieranbingham@gmail.com>
Fixes: 3fffd128 ("i2c: allow specifying separate wakeup interrupt in device tree")
Reviewed-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
-
Submitted by Dave Airlie

This zeroes the msg so no random stack data ends up getting sent; it also limits the function to accepting no more than 4 i2c msgs.

Cc: stable@vger.kernel.org
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
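A hedged sketch of the two hardening steps described above; the struct and function names are hypothetical, not the DRM DP-MST code. The message buffer is zero-initialised so no stack garbage is transmitted, and transfers of more than 4 i2c messages are rejected.

```c
/*
 * Illustrative only; the struct and function names are hypothetical and
 * not the DRM DP-MST code.
 */
#include <errno.h>
#include <string.h>

#define EXAMPLE_MAX_I2C_MSGS 4

struct example_sideband_msg {
    unsigned int num_transactions;
    unsigned char payload[256];
};

static int example_build_i2c_sideband(struct example_sideband_msg *msg,
                                      unsigned int num_i2c_msgs)
{
    if (num_i2c_msgs > EXAMPLE_MAX_I2C_MSGS)
        return -EINVAL;            /* too many msgs for one transaction */

    memset(msg, 0, sizeof(*msg));  /* no uninitialised stack data is sent */
    msg->num_transactions = num_i2c_msgs;
    /* ...fill in the individual transactions here... */
    return 0;
}
```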
-
Submitted by Dave Airlie

This allows tiled monitors to work with radeon once MST is enabled.

Cc: stable@vger.kernel.org
Signed-off-by: Dave Airlie <airlied@redhat.com>
-
Submitted by Lv Zheng

Some logic that actually relies on the existence of the FADT currently relies on the number of loaded tables instead. This false dependency can easily trigger regressions; one of them was introduced by commit 8ec3f459 (ACPICA: Tables: Fix global table list issues by removing fixed table). That commit changed the fixed table indexes and thus the FADT table index: originally it was 3 (thus the installed table count should be greater than 4), while currently it is 0 (and the installed table count may be 3). This patch fixes the regression by cleaning up the code. Lv Zheng.

Link: https://bugzilla.kernel.org/show_bug.cgi?id=105351
Fixes: 8ec3f459 (ACPICA: Tables: Fix global table list issues by removing fixed table)
Reported-and-tested-by: Meelis Roos <mroos@linux.ee>
Signed-off-by: Lv Zheng <lv.zheng@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Submitted by Stephen Boyd

This partially reverts commit eca61c9f. Thomas reports that it causes regressions on Armada XP devices. This is because of_clk_get_parent_name() relies on the 'clock-output-names' property to resolve the name of a clock's parent, without trying to get the clock from the framework and call __clk_get_name(). Given that Armada XP devices don't have the 'clock-output-names' property, of_clk_get_parent_name() returns the name of the node, which doesn't match the actual parent clock's name at all, causing CPU clocks to never link up with their parents.

Reported-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
-
- 14 October 2015, 5 commits
-
-
Submitted by Dudley Du

Fix the copy-paste error in the code that sets the electrodes_rx value, which caused electrodes_rx to always be set to the value of electrodes_y.

Reported-by: Sudip Mukherjee <sudipm.mukherjee@gmail.com>
Signed-off-by: Dudley Du <dudl@cypress.com>
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
-
Submitted by Ulf Hansson

Commit ba2bbfbf (PM / Domains: Remove intermediate states from the power off sequence) changed the power off sequence in genpd. That also required some updates to the validation of latency constraints in the genpd governor. Unfortunately that wasn't covered, so let's fix it.

From a runtime PM and latency point of view, we need to consider the worst-case scenario while validating latency constraints. That's typically when a call to pm_runtime_get_sync() needs to wait for an ongoing runtime suspend operation to be carried out, as it then also needs to wait for the device to be runtime resumed again.

The above mentioned commit made the genpd governor's ->stop_ok() callback responsible for validating the genpd's devices' runtime suspend/resume latency. In other words, the constraint needs to be validated against the relevant latencies present in genpd's ->runtime_suspend|resume() callbacks. Earlier that included latencies from the ->stop|start() callbacks, but as ->save|restore_state() are now also invoked from genpd's ->runtime_suspend|resume(), and to comply with the worst-case scenario, let's take those latencies into account as well.

Fixes: ba2bbfbf (PM / Domains: Remove intermediate states from the power off sequence)
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Submitted by Christoph Lameter

When we leave the multicast group on expiration of a neighbor, we do not free the mcast structure. This results in a memory leak that causes ib_dealloc_pd to fail and print a WARN_ON message and backtrace.

Fixes: bd99b2e0 (IB/ipoib: Expire sendonly multicast joins)
Signed-off-by: Christoph Lameter <cl@linux.com>
Tested-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Submitted by Mike Snitzer

Fixes: ac8c3f3d ("dm thin: generate event when metadata threshold passed")
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Cc: stable@vger.kernel.org # 3.10+
-
Submitted by Sudip Mukherjee

If an unsupported option is given, the early return from persistent_ctr() leaked the memory allocated for the 'pstore' and never destroyed the 'metadata_wq'.

Fixes: b0d3cc01 ("dm snapshot: add new persistent store option to support overflow")
Signed-off-by: Sudip Mukherjee <sudip@vectorindia.org>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
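A hedged sketch of the unwind pattern such a fix typically uses; the names and the option strings are illustrative, not the dm-snapshot code. The point is that once the store and its workqueue have been allocated, the unsupported-option path must free both on the way out rather than returning early.

```c
/*
 * Hedged sketch; the names are illustrative, not the dm-snapshot code.
 * The point is the unwind order on the unsupported-option path.
 */
#include <linux/errno.h>
#include <linux/slab.h>
#include <linux/string.h>
#include <linux/workqueue.h>

struct example_pstore {
    struct workqueue_struct *metadata_wq;
};

static int example_ctr(const char *option, struct example_pstore **out)
{
    struct example_pstore *ps;
    int ret;

    ps = kzalloc(sizeof(*ps), GFP_KERNEL);
    if (!ps)
        return -ENOMEM;

    ps->metadata_wq = alloc_workqueue("example_metadata", 0, 0);
    if (!ps->metadata_wq) {
        ret = -ENOMEM;
        goto err_free_ps;
    }

    if (strcmp(option, "P") && strcmp(option, "PO")) {
        ret = -EINVAL;           /* unsupported option... */
        goto err_destroy_wq;     /* ...but do not leak what was built */
    }

    *out = ps;
    return 0;

err_destroy_wq:
    destroy_workqueue(ps->metadata_wq);
err_free_ps:
    kfree(ps);
    return ret;
}
```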
-