- 23 January 2017, 3 commits
-
Committed by Eric Auger
We introduce a new field to differentiate the reserved region types and specialize the apply_resv_region implementation. Legacy direct mapped regions have the IOMMU_RESV_DIRECT type. We introduce 2 new reserved memory types:
- IOMMU_RESV_MSI will characterize MSI regions that are mapped
- IOMMU_RESV_RESERVED will characterize regions that cannot be mapped.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Tomasz Nowicki <tomasz.nowicki@caviumnetworks.com>
Tested-by: Bharat Bhushan <bharat.bhushan@nxp.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
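As an illustration of how a type-aware handler might branch (a hedged sketch: the struct layout, field names, and handler below are assumptions, not the kernel's actual definitions):

```c
/* Hypothetical sketch only: the type names mirror the commit text, but the
 * struct layout and handler are illustrative. */
enum resv_type { RESV_DIRECT, RESV_RESERVED, RESV_MSI };

struct resv_region {
	unsigned long	start;
	unsigned long	length;
	int		prot;
	enum resv_type	type;
};

static int apply_resv_region(struct resv_region *r)
{
	switch (r->type) {
	case RESV_DIRECT:
		/* legacy behaviour: identity-map the range for the device */
		break;
	case RESV_RESERVED:
		/* never map: only exclude the range from IOVA allocation */
		break;
	case RESV_MSI:
		/* MSI window: mapped/handled specially by the MSI layer */
		break;
	}
	return 0;
}
```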
-
Committed by Eric Auger
We want to extend the callbacks used for dm regions and use them for reserved regions. Reserved regions can be:
- directly mapped regions
- regions that cannot be iommu mapped (PCI host bridge windows, ...)
- MSI regions (because they belong to another address space or because they are not translated by the IOMMU and need special handling)

So let's rename the struct and also the callbacks.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Acked-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Tomasz Nowicki <tomasz.nowicki@caviumnetworks.com>
Tested-by: Tomasz Nowicki <tomasz.nowicki@caviumnetworks.com>
Tested-by: Bharat Bhushan <bharat.bhushan@nxp.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Robin Murphy
IOMMU domain users such as VFIO face a similar problem to DMA API ops with regard to mapping MSI messages in systems where the MSI write is subject to IOMMU translation. With the relevant infrastructure now in place for managed DMA domains, it's actually really simple for other users to piggyback off that and reap the benefits without giving up their own IOVA management, and without having to reinvent their own wheel in the MSI layer.

Allow such users to opt into automatic MSI remapping by dedicating a region of their IOVA space to a managed cookie, and extend the mapping routine to implement a trivial linear allocator in such cases, to avoid the needless overhead of a full-blown IOVA domain.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Tomasz Nowicki <tomasz.nowicki@caviumnetworks.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Tomasz Nowicki <tomasz.nowicki@caviumnetworks.com>
Tested-by: Bharat Bhushan <bharat.bhushan@nxp.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
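The "trivial linear allocator" can be pictured as a simple bump allocator over the IOVA window that the caller dedicates to MSIs. The sketch below is a standalone illustration under that assumption, not the kernel's iommu-dma code:

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative bump allocator over a caller-reserved MSI IOVA window;
 * the names and the granule handling are assumptions. */
struct msi_cookie {
	uint64_t base;      /* start of the IOVA region donated for MSI */
	uint64_t size;      /* size of that region */
	uint64_t next_off;  /* linear allocation cursor */
};

/* Carve the next granule-sized slot out of the window; 0 on exhaustion. */
static uint64_t msi_cookie_alloc(struct msi_cookie *c, size_t granule)
{
	uint64_t iova;

	if (c->next_off + granule > c->size)
		return 0;		/* window exhausted */

	iova = c->base + c->next_off;
	c->next_off += granule;
	return iova;
}
```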
-
- 05 January 2017, 1 commit
-
Committed by Rafael J. Wysocki
Linus reported that commit 174cc718 "ACPICA: Tables: Back port acpi_get_table_with_size() and early_acpi_os_unmap_memory() from Linux kernel" added a new warning on his desktop system:

  ACPI Warning: Table ffffffff9fe6c0a0, Validation count is zero before decrement

which turns out to come from the acpi_put_table() in detect_intel_iommu(). This happens if the DMAR table is not present, in which case NULL is passed to acpi_put_table(), which doesn't check against that and attempts to handle it regardless. For this reason, check the pointer passed to acpi_put_table() before invoking it.

Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Tested-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Fixes: 6b11d1d6 ("ACPI / osl: Remove acpi_get_table_with_size()/early_acpi_os_unmap_memory() users")
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
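A minimal sketch of the guard described above, using the real acpi_put_table() API (the surrounding function and variable names are assumptions):

```c
#include <linux/acpi.h>

/* Sketch: 'dmar_tbl' stands in for whatever pointer detect_intel_iommu()
 * ends up holding when no DMAR table exists. */
static void example_put_dmar_table(struct acpi_table_header *dmar_tbl)
{
	if (dmar_tbl)			/* the table may simply be absent */
		acpi_put_table(dmar_tbl);
}
```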
-
- 04 January 2017, 3 commits
-
Committed by Huang Rui
A generic command buffer entry is 128 bits (16 bytes), so the tail and head pointer offsets must be 16-byte aligned and advance by 0x10 per command. When the command buffer is full, head = (tail + 0x10) % CMD_BUFFER_SIZE. So when the space left in the command buffer can hold only two commands, we should issue one additional COMPLETION_WAIT and wait for all older commands to complete; the free space then grows as the IOMMU fetches from the command buffer. The check on the remaining space should therefore be left <= 0x20 (two commands).

Signed-off-by: Huang Rui <ray.huang@amd.com>
Fixes: ac0ea6e9 ('x86/amd-iommu: Improve handling of full command buffer')
Signed-off-by: Joerg Roedel <jroedel@suse.de>
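The arithmetic can be checked with a small standalone model of the ring: entries are 16 bytes, the head pointer advances only as the IOMMU fetches commands, and the driver keeps two slots (0x20 bytes) spare so it can always inject a COMPLETION_WAIT. This is an illustration, not the driver's code:

```c
#include <stdio.h>
#include <stdint.h>

#define CMD_BUFFER_SIZE  8192u  /* illustrative power-of-two ring size */
#define CMD_ENTRY_SIZE   0x10u  /* each command is 128 bits */

/* Bytes still free between the next tail and the IOMMU's head pointer. */
static uint32_t cmd_buf_left(uint32_t head, uint32_t tail)
{
	uint32_t next_tail = (tail + CMD_ENTRY_SIZE) % CMD_BUFFER_SIZE;

	return (head - next_tail) % CMD_BUFFER_SIZE;
}

int main(void)
{
	uint32_t head = 0x40, tail = 0x20;

	/* With two or fewer slots left, queue a COMPLETION_WAIT and let the
	 * IOMMU drain older commands before adding anything else. */
	if (cmd_buf_left(head, tail) <= 2 * CMD_ENTRY_SIZE)
		printf("ring nearly full: issue COMPLETION_WAIT first\n");
	else
		printf("room for more commands\n");
	return 0;
}
```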
-
Committed by Jacob Pan
Different encodings are used to represent the supported PASID bits and the number of PASID table entries. The current code assigns ecap_pss directly to the extended context table entry PTS field, which is wrong and could result in writing non-zero bits to the reserved fields. IOMMU fault reason 11 will be reported when reserved bits are nonzero.

This patch converts ecap_pss to the extended context entry pts encoding based on VT-d spec Chapter 9.4, as follows:
- number of PASID bits = ecap_pss + 1
- number of PASID table entries = 2^(pts + 5)

The software-assigned limit of the pasid_max value is also respected, to match the allocation limitation of the PASID table.

cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
cc: Ashok Raj <ashok.raj@intel.com>
Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Tested-by: Mika Kuoppala <mika.kuoppala@intel.com>
Fixes: 2f26e0a9 ('iommu/vt-d: Add basic SVM PASID support')
Signed-off-by: Joerg Roedel <jroedel@suse.de>
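The conversion follows directly from those two formulas; the standalone helper below (an illustration, not the driver's code) derives pts from ecap_pss while honouring pasid_max:

```c
#include <stdio.h>

/* From the commit text (VT-d spec ch. 9.4):
 *   PASID bits          = ecap_pss + 1
 *   PASID table entries = 2^(pts + 5)
 * so the table needs min(2^(ecap_pss + 1), pasid_max) entries, and pts is
 * the exponent of that entry count minus 5. */
static unsigned int pss_to_pts(unsigned int ecap_pss, unsigned long pasid_max)
{
	unsigned long entries = 1UL << (ecap_pss + 1);
	unsigned int order = 0;

	if (entries > pasid_max)
		entries = pasid_max;
	while ((1UL << (order + 1)) <= entries)
		order++;			/* order = floor(log2(entries)) */

	return order >= 5 ? order - 5 : 0;	/* pts cannot go below zero */
}

int main(void)
{
	/* e.g. ecap_pss = 19 -> 20 PASID bits -> 2^20 entries -> pts = 15 */
	printf("pts = %u\n", pss_to_pts(19, 1UL << 20));
	return 0;
}
```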
-
Committed by Xunlei Pang
We met the DMAR fault on both hpsa P420i and P421 SmartArray controllers under kdump; it can be steadily reproduced on several different machines. The dmesg log looks like:

  HP HPSA Driver (v 3.4.16-0)
  hpsa 0000:02:00.0: using doorbell to reset controller
  hpsa 0000:02:00.0: board ready after hard reset.
  hpsa 0000:02:00.0: Waiting for controller to respond to no-op
  DMAR: Setting identity map for device 0000:02:00.0 [0xe8000 - 0xe8fff]
  DMAR: Setting identity map for device 0000:02:00.0 [0xf4000 - 0xf4fff]
  DMAR: Setting identity map for device 0000:02:00.0 [0xbdf6e000 - 0xbdf6efff]
  DMAR: Setting identity map for device 0000:02:00.0 [0xbdf6f000 - 0xbdf7efff]
  DMAR: Setting identity map for device 0000:02:00.0 [0xbdf7f000 - 0xbdf82fff]
  DMAR: Setting identity map for device 0000:02:00.0 [0xbdf83000 - 0xbdf84fff]
  DMAR: DRHD: handling fault status reg 2
  DMAR: [DMA Read] Request device [02:00.0] fault addr fffff000 [fault reason 06] PTE Read access is not set
  hpsa 0000:02:00.0: controller message 03:00 timed out
  hpsa 0000:02:00.0: no-op failed; re-trying

After some debugging, we found that the fault addr comes from DMA initiated at the driver probe stage after reset (not in-flight DMA), and the corresponding pte entry value is correct; the fault is likely due to the old iommu caches of the in-flight DMA before it. Thus we need to flush the old cache after context mapping is set up for the device, at which point the device is supposed to have finished reset at its driver probe stage and no in-flight DMA exists thereafter.

I'm not sure if the hardware is responsible for invalidating all the related caches allocated in the iommu hardware before, but that seems not to be the case for hpsa; actually many device drivers have problems in properly resetting the hardware. Anyway, flushing (again) by software in the kdump kernel when the device gets context mapped, which is a quite infrequent operation, does little harm.

With this patch, the problematic machine can survive the kdump tests.

CC: Myron Stowe <myron.stowe@gmail.com>
CC: Joseph Szczypek <jszczype@redhat.com>
CC: Don Brace <don.brace@microsemi.com>
CC: Baoquan He <bhe@redhat.com>
CC: Dave Young <dyoung@redhat.com>
Fixes: 091d42e4 ("iommu/vt-d: Copy translation tables from old kernel")
Fixes: dbcd861f ("iommu/vt-d: Do not re-use domain-ids from the old kernel")
Fixes: cf484d0e ("iommu/vt-d: Mark copied context entries")
Signed-off-by: Xunlei Pang <xlpang@redhat.com>
Tested-by: Don Brace <don.brace@microsemi.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 21 December 2016, 1 commit
-
Committed by Lv Zheng
This patch removes the users of the deprecated APIs:
  acpi_get_table_with_size()
  early_acpi_os_unmap_memory()
The following APIs should be used instead:
  acpi_get_table()
  acpi_put_table()

The deprecated APIs were invented as a replacement for acpi_get_table() during the early stage, so that the early-mapped pointer would not be stored in the ACPICA core and thus the late-stage acpi_get_table() wouldn't return a wrong pointer. The mapping size is returned only because it is required by early_acpi_os_unmap_memory() to unmap the pointer during the early stage. But as the mapping size equals acpi_table_header.length (see acpi_tb_init_table_descriptor() and acpi_tb_validate_table()), when such a convenient result is returned, driver code starts to use it instead of accessing acpi_table_header to obtain the length. Thus this patch cleans up the drivers by replacing the returned table size with acpi_table_header.length, and should be a no-op.

Reported-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Lv Zheng <lv.zheng@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
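A hedged before/after sketch of the driver-side pattern (not any specific driver; the surrounding function is illustrative, while the ACPI calls are the real API):

```c
#include <linux/acpi.h>
#include <linux/errno.h>
#include <linux/printk.h>

static int example_parse_dmar(void)
{
	struct acpi_table_header *tbl;
	acpi_status status;

	/* old: acpi_get_table_with_size(ACPI_SIG_DMAR, 0, &tbl, &size); */
	status = acpi_get_table(ACPI_SIG_DMAR, 0, &tbl);
	if (ACPI_FAILURE(status))
		return -ENODEV;

	/* the former 'size' out-parameter equals the header length anyway */
	pr_info("DMAR table is %u bytes\n", tbl->length);

	acpi_put_table(tbl);
	return 0;
}
```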
-
- 02 December 2016, 1 commit
-
Committed by Anna-Maria Gleixner
Install the callbacks via the state machine.

Signed-off-by: Anna-Maria Gleixner <anna-maria@linutronix.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: iommu@lists.linux-foundation.org
Cc: rt@linutronix.de
Cc: David Woodhouse <dwmw2@infradead.org>
Link: http://lkml.kernel.org/r/20161126231350.10321-14-bigeasy@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
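"Install the callbacks via the state machine" refers to the cpuhp_setup_state() interface. A generic, hedged example of the pattern follows; the state, callback names, and string are placeholders, not what this particular driver registers:

```c
#include <linux/cpuhotplug.h>

static int example_cpu_online(unsigned int cpu)
{
	/* per-CPU bring-up work formerly done from a CPU notifier */
	return 0;
}

static int example_cpu_dead(unsigned int cpu)
{
	/* per-CPU teardown work */
	return 0;
}

static int __init example_init(void)
{
	int ret;

	/* A dynamic state is shown for illustration; drivers may instead
	 * claim a fixed CPUHP_* state of their own. */
	ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "iommu/example:online",
				example_cpu_online, example_cpu_dead);
	return ret < 0 ? ret : 0;
}
```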
-
- 30 November 2016, 2 commits
-
Committed by Dan Carpenter
We should set "ret" to -EINVAL if iommu_group_get() fails.

Fixes: 55c99a4d ("iommu/amd: Use iommu_attach_group()")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
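A minimal sketch of the error path being fixed (the surrounding function is an assumption; the iommu group calls are the real API):

```c
#include <linux/iommu.h>
#include <linux/errno.h>

static int example_attach(struct device *dev, struct iommu_domain *domain)
{
	struct iommu_group *group;
	int ret;

	group = iommu_group_get(dev);
	if (!group) {
		ret = -EINVAL;		/* the fix: report the lookup failure */
		goto out;
	}

	ret = iommu_attach_group(domain, group);
	iommu_group_put(group);
out:
	return ret;
}
```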
-
Committed by Geliang Tang
Drop duplicate header pci.h from s390-iommu.c.

Signed-off-by: Geliang Tang <geliangtang@gmail.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 29 November 2016, 12 commits
-
Committed by Lorenzo Pieralisi
In ACPI based systems, in order to be able to create platform devices and initialize them for ARM SMMU components, the IORT kernel implementation requires a set of static functions to be used by the IORT kernel layer to configure platform devices for ARM SMMU components.

Add static configuration functions to the IORT kernel layer for the ARM SMMU components, so that the ARM SMMU driver can initialize its respective platform device by relying on the IORT kernel infrastructure and by adding a corresponding ACPI device early probe section entry.

Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Reviewed-by: Tomasz Nowicki <tn@semihalf.com>
Tested-by: Hanjun Guo <hanjun.guo@linaro.org>
Tested-by: Tomasz Nowicki <tn@semihalf.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Joerg Roedel <joro@8bytes.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Lorenzo Pieralisi
Current ARM SMMU probe functions intermingle HW and DT probing in the initialization functions to detect and programme the ARM SMMU driver features. In order to allow probing the ARM SMMU with firmware other than DT, this patch splits the ARM SMMU init functions into DT and HW specific portions so that other FW interfaces (ie ACPI) can reuse the HW probing functions and skip the DT portion accordingly.

This patch implements no functional change, only code reshuffling.

Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Tomasz Nowicki <tn@semihalf.com>
Tested-by: Hanjun Guo <hanjun.guo@linaro.org>
Tested-by: Tomasz Nowicki <tn@semihalf.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Hanjun Guo <hanjun.guo@linaro.org>
Cc: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Lorenzo Pieralisi
In ACPI based systems, in order to be able to create platform devices and initialize them for ARM SMMU v3 components, the IORT kernel implementation requires a set of static functions to be used by the IORT kernel layer to configure platform devices for ARM SMMU v3 components.

Add static configuration functions to the IORT kernel layer for the ARM SMMU v3 components, so that the ARM SMMU v3 driver can initialize its respective platform device by relying on the IORT kernel infrastructure and by adding a corresponding ACPI device early probe section entry.

Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Reviewed-by: Tomasz Nowicki <tn@semihalf.com>
Tested-by: Hanjun Guo <hanjun.guo@linaro.org>
Tested-by: Tomasz Nowicki <tn@semihalf.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Joerg Roedel <joro@8bytes.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Lorenzo Pieralisi
Current ARM SMMUv3 probe functions intermingle HW and DT probing in the initialization functions to detect and programme the ARM SMMU v3 driver features. In order to allow probing the ARM SMMUv3 with firmware other than DT, this patch splits the ARM SMMUv3 init functions into DT and HW specific portions so that other FW interfaces (ie ACPI) can reuse the HW probing functions and skip the DT portion accordingly.

This patch implements no functional change, only code reshuffling.

Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Reviewed-by: Tomasz Nowicki <tn@semihalf.com>
Tested-by: Hanjun Guo <hanjun.guo@linaro.org>
Tested-by: Tomasz Nowicki <tn@semihalf.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Hanjun Guo <hanjun.guo@linaro.org>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Joerg Roedel <joro@8bytes.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Lorenzo Pieralisi
The current ARM SMMU v3 driver relies on the struct device.of_node pointer for device look-up and iommu_ops retrieval. In preparation for ACPI probing enablement, convert the driver to use the struct device.fwnode member for device and iommu_ops look-up, so that the driver infrastructure can also be used on systems that do not associate an of_node pointer with a struct device (eg ACPI), making the device look-up and iommu_ops retrieval firmware agnostic.

Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Tomasz Nowicki <tn@semihalf.com>
Tested-by: Hanjun Guo <hanjun.guo@linaro.org>
Tested-by: Tomasz Nowicki <tn@semihalf.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Hanjun Guo <hanjun.guo@linaro.org>
Cc: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Lorenzo Pieralisi
The current ARM SMMU driver relies on the struct device.of_node pointer for device look-up and iommu_ops retrieval. In preparation for ACPI probing enablement, convert the driver to use the struct device.fwnode member for device and iommu_ops look-up, so that the driver infrastructure can also be used on systems that do not associate an of_node pointer with a struct device (eg ACPI), making the device look-up and iommu_ops retrieval firmware agnostic.

Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Tomasz Nowicki <tn@semihalf.com>
Tested-by: Hanjun Guo <hanjun.guo@linaro.org>
Tested-by: Tomasz Nowicki <tn@semihalf.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Hanjun Guo <hanjun.guo@linaro.org>
Cc: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Lorenzo Pieralisi
The of_iommu_{set/get}_ops() API is used to associate a device tree node with a specific set of IOMMU operations. The same kernel interface is required on systems booting with ACPI, where devices are not associated with a device tree node, therefore the interface requires generalization.

The struct device fwnode member represents the fwnode token associated with the device; the struct it points at is firmware specific, but it is initialized on both ACPI and DT systems, which makes it an ideal candidate for associating a set of IOMMU operations with a given device through its struct device.fwnode member pointer, paving the way for representing per-device iommu_ops (ie an iommu instance associated with a device).

Convert the DT specific of_iommu_{set/get}_ops() interface to use struct device.fwnode as a look-up token, making the interface usable on ACPI systems, and rename the data structures and the registration API so that they represent their usage more clearly.

Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Tomasz Nowicki <tn@semihalf.com>
Tested-by: Hanjun Guo <hanjun.guo@linaro.org>
Tested-by: Tomasz Nowicki <tn@semihalf.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Hanjun Guo <hanjun.guo@linaro.org>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Joerg Roedel <joro@8bytes.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
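Conceptually, the generalized interface keys a set of IOMMU operations on the device's fwnode token instead of an of_node. The self-contained toy registry below illustrates that idea only; it is not the kernel API, whose structures and function names differ:

```c
#include <stddef.h>

struct fwnode_token;				/* opaque firmware node handle */
struct example_iommu_ops { int (*add_device)(void *dev); };

struct fwnode_ops_entry {
	const struct fwnode_token	*fwnode;
	const struct example_iommu_ops	*ops;
};

static struct fwnode_ops_entry registry[32];
static size_t registry_used;

/* Register ops against a token; works the same for DT and ACPI because the
 * token is opaque to the registry. */
static int example_register_ops(const struct fwnode_token *fwnode,
				const struct example_iommu_ops *ops)
{
	if (registry_used == sizeof(registry) / sizeof(registry[0]))
		return -1;
	registry[registry_used].fwnode = fwnode;
	registry[registry_used].ops = ops;
	registry_used++;
	return 0;
}

static const struct example_iommu_ops *
example_lookup_ops(const struct fwnode_token *fwnode)
{
	for (size_t i = 0; i < registry_used; i++)
		if (registry[i].fwnode == fwnode)	/* token comparison */
			return registry[i].ops;
	return NULL;
}
```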
-
Committed by Nipun Gupta
The SMTNMB_TLBEN bit in the Auxiliary Configuration Register (ACR) provides an option to enable TLB updates for bypass transactions, i.e. transactions with no stream match in the stream match table. This reduces the latency of subsequent transactions with the same stream-id which bypass the SMMU, and provides a significant performance benefit for certain networking workloads.

With this change a substantial performance improvement of ~9% is observed with the DPDK l3fwd application (http://dpdk.org/doc/guides/sample_app_ug/l3_forward.html) on NXP's LS2088a platform.

Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Bhumika Goyal
Check for iommu_gather_ops structures that are only stored in the tlb field of an io_pgtable_cfg structure. The tlb field is of type const struct iommu_gather_ops *, so iommu_gather_ops structures having this property can be declared as const. Also, replace __initdata with __initconst.

Acked-by: Julia Lawall <julia.lawall@lip6.fr>
Signed-off-by: Bhumika Goyal <bhumirks@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
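The change amounts to constifying ops tables that are only ever referenced through the (already const) tlb pointer. A hedged sketch of the pattern, with the callback bodies omitted:

```c
#include "io-pgtable.h"	/* in-tree driver header providing the types below */

static void example_tlb_flush_all(void *cookie) { }
static void example_tlb_add_flush(unsigned long iova, size_t size,
				  size_t granule, bool leaf, void *cookie) { }
static void example_tlb_sync(void *cookie) { }

/* 'const' (and __initconst instead of __initdata, where applicable) lets
 * the table live in read-only memory. */
static const struct iommu_gather_ops example_gather_ops = {
	.tlb_flush_all	= example_tlb_flush_all,
	.tlb_add_flush	= example_tlb_add_flush,
	.tlb_sync	= example_tlb_sync,
};

static void example_setup(struct io_pgtable_cfg *cfg)
{
	cfg->tlb = &example_gather_ops;	/* cfg->tlb is already a const pointer */
}
```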
-
Committed by Bhumika Goyal
Check for iommu_gather_ops structures that are only stored in the tlb field of an io_pgtable_cfg structure. The tlb field is of type const struct iommu_gather_ops *, so iommu_gather_ops structures having this property can be declared as const.

Acked-by: Julia Lawall <julia.lawall@lip6.fr>
Signed-off-by: Bhumika Goyal <bhumirks@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Bhumika Goyal
Check for iommu_gather_ops structures that are only stored in the tlb field of an io_pgtable_cfg structure. The tlb field is of type const struct iommu_gather_ops *, so iommu_gather_ops structures having this property can be declared as const.

Acked-by: Julia Lawall <julia.lawall@lip6.fr>
Signed-off-by: Bhumika Goyal <bhumirks@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Kefeng Wang
We can use for_each_set_bit() to simplify the code slightly in the ARM io-pgtable self tests.

Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
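A hedged before/after illustration of that kind of cleanup (the bitmap and message are made up, not the self-test's actual data):

```c
#include <linux/bitops.h>
#include <linux/printk.h>

static void example_dump_enabled_formats(unsigned long fmt_bitmap)
{
	unsigned int i;

	/* before: open-coded scan
	 *   for (i = 0; i < BITS_PER_LONG; i++)
	 *           if (fmt_bitmap & (1UL << i))
	 *                   pr_info("format %u\n", i);
	 */
	for_each_set_bit(i, &fmt_bitmap, BITS_PER_LONG)
		pr_info("format %u\n", i);
}
```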
-
- 20 November 2016, 1 commit
-
Committed by David Woodhouse
Somehow I ended up with an off-by-three error in calculating the size of the PASID and PASID State tables, which triggers allocation failures as those tables unfortunately have to be physically contiguous. In fact, even the *correct* maximum size of 8MiB is problematic and is wont to lead to allocation failures. Since I have extracted a promise that this *will* be fixed in hardware, I'm happy to limit it on the current hardware to a maximum of 0x20000 PASIDs, which gives us 1MiB tables, still not ideal, but better than before.

Reported by Mika Kuoppala <mika.kuoppala@linux.intel.com> and also by Xunlei Pang <xlpang@redhat.com>, who submitted a simpler patch to fix only the allocation (and not the free) to the "correct" limit... which was still problematic.

Signed-off-by: David Woodhouse <dwmw2@infradead.org>
Cc: stable@vger.kernel.org
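For reference, the quoted sizes are mutually consistent if each table entry is 8 bytes, which is an inference from the figures in the message rather than something stated there: a full 20-bit PASID space is 2^20 x 8 B = 8 MiB, and the 0x20000 cap gives 0x20000 x 8 B = 1 MiB.

```c
#include <stdio.h>

int main(void)
{
	unsigned long entry_bytes = 8;	/* inferred from the figures above */
	unsigned long full = (1UL << 20) * entry_bytes;	/* 8 MiB */
	unsigned long capped = 0x20000UL * entry_bytes;	/* 1 MiB */

	printf("full: %lu MiB, capped: %lu MiB\n", full >> 20, capped >> 20);
	return 0;
}
```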
-
- 15 November 2016, 13 commits
-
Committed by Robin Murphy
When searching for a free IOVA range, we optimise the tree traversal by starting from the cached32_node, instead of the last node, when limit_pfn is equal to dma_32bit_pfn. However, if limit_pfn happens to be smaller, then we'll go ahead and start from the top even though dma_32bit_pfn is still a more suitable upper bound. Since this is clearly a silly thing to do, adjust the lookup condition appropriately.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
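A hedged sketch of the adjusted lookup (illustrative, not a verbatim copy of the iova code): the cached 32-bit node is now used whenever limit_pfn is at or below dma_32bit_pfn, rather than only when the two are equal.

```c
#include <linux/rbtree.h>

/* Field names follow the commit text; the structure is trimmed down. */
struct example_iova_domain {
	struct rb_root	rbroot;
	struct rb_node	*cached32_node;
	unsigned long	dma_32bit_pfn;
};

static struct rb_node *
example_cached_rbnode(struct example_iova_domain *iovad, unsigned long limit_pfn)
{
	/* before: any limit_pfn != dma_32bit_pfn fell back to rb_last() */
	if (limit_pfn > iovad->dma_32bit_pfn || !iovad->cached32_node)
		return rb_last(&iovad->rbroot);

	return iovad->cached32_node;
}
```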
-
Committed by Robin Murphy
For each subsequent device assigned to the m4u_group after its initial allocation, we need to take an additional reference. Otherwise, the caller of iommu_group_get_for_dev() will inadvertently remove the reference taken by iommu_group_add_device(), and the group will be freed prematurely if any device is removed.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Robin Murphy
For each subsequent device assigned to the m4u_group after its initial allocation, we need to take an additional reference. Otherwise, the caller of iommu_group_get_for_dev() will inadvertently remove the reference taken by iommu_group_add_device(), and the group will be freed prematurely if any device is removed.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Robin Murphy
If acpihid_device_group() finds an existing group for the relevant devid, it should be taking an additional reference on that group. Otherwise, the caller of iommu_group_get_for_dev() will inadvertently remove the reference taken by iommu_group_add_device(), and the group will be freed prematurely if any device is removed.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Robin Murphy
When arm_smmu_device_group() finds an existing group due to Stream ID aliasing, it should be taking an additional reference on that group. Otherwise, the caller of iommu_group_get_for_dev() will inadvertently remove the reference taken by iommu_group_add_device(), and the group will be freed prematurely if any device is removed.

Reported-by: Sricharan R <sricharan@codeaurora.org>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Robin Murphy
iommu_group_get_for_dev() expects that the IOMMU driver's device_group callback return a group with a reference held for the given device. Whilst allocating a new group is fine, and pci_device_group() correctly handles reusing an existing group, there is no general means for IOMMU drivers doing their own group lookup to take additional references on an existing group pointer without having to also store device pointers or resort to elaborate trickery.

Add an IOMMU-driver-specific function to fill the hole.

Acked-by: Sricharan R <sricharan@codeaurora.org>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
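With the new helper, a driver device_group callback that reuses a stored group can be sketched like this (the stored-group variable and the omitted error handling are simplifying assumptions):

```c
#include <linux/iommu.h>

static struct iommu_group *example_group;	/* illustrative driver state */

/* Hand back an extra reference when reusing an existing group, so that the
 * reference dropped by the iommu_group_get_for_dev() caller balances out. */
static struct iommu_group *example_device_group(struct device *dev)
{
	if (example_group)
		return iommu_group_ref_get(example_group);

	example_group = iommu_group_alloc();
	return example_group;
}
```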
-
Committed by Marek Szyprowski
This patch uses the recently introduced device dependency links to track the runtime pm state of the master's device. The goal is to let the SYSMMU controller device's runtime PM follow the runtime PM state of the respective master's device. This way each SYSMMU controller is active only when its master's device is active, and it can properly save or restore its state on runtime PM transitions of the master's device. This approach replaces the old behavior, where the SYSMMU controller was set runtime active once after attaching to the master device. In the new approach SYSMMU controllers no longer prevent their respective power domains from being turned off when the master's device is not being used.

This patch reduces the total power consumption of an idle system, because most power domains can finally be turned off. For example, on the Exynos 4412 based Odroid U3 this patch reduces power consumption from 136mA to 130mA at 5V (by 4.4%).

The dependency links also enforce the proper order of suspending/restoring devices during system sleep transitions, so there is no more need for the LATE_SYSTEM_SLEEP_PM_OPS-based workaround to ensure that SYSMMUs are suspended after their master devices.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
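The "device dependency links" mentioned here are created with device_link_add(); a hedged sketch of tying a SYSMMU (supplier) to its master device (consumer) for runtime PM purposes, with illustrative names and trimmed error handling:

```c
#include <linux/device.h>
#include <linux/errno.h>

static int example_link_sysmmu(struct device *master, struct device *sysmmu)
{
	struct device_link *link;

	/* make the SYSMMU follow the master through runtime PM transitions */
	link = device_link_add(master, sysmmu, DL_FLAG_PM_RUNTIME);
	if (!link)
		return -ENODEV;

	/* from now on, runtime-resuming 'master' also resumes 'sysmmu' */
	return 0;
}
```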
-
Committed by Marek Szyprowski
This patch adds a runtime pm implementation, which is based on the previous suspend/resume code. The SYSMMU controller is now enabled/disabled mainly from the runtime pm callbacks. System sleep callbacks rely on the generic pm_runtime_force_suspend/pm_runtime_force_resume helpers. To ensure internal state consistency, an additional lock for runtime pm transitions was introduced.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Marek Szyprowski
This patch reworks locking in the exynos_iommu_attach/detach_device functions to ensure that all entries of the sysmmu_drvdata and exynos_iommu_owner structure are updated under the respective spinlocks, while runtime pm functions are called without any spinlocks held.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Marek Szyprowski
To avoid possible races, set the master device pointer in each SYSMMU controller once at boot. Suspend/resume callbacks now properly rely on the configured iommu domain to enable or disable the SYSMMU controller. While changing the code, also update the sleep debug messages and make them conditional.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Marek Szyprowski
Remove the remaining leftovers of the ref-count related code in the __sysmmu_enable/disable functions and inline __sysmmu_enable/disable_nocount into them. Suspend/resume callbacks now check whether a master device is set for the given SYSMMU controller instead of relying on the activation count.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Marek Szyprowski
The __sysmmu_enable/disable functions were designed for ref-count based operation, but the current code always calls them only once, so the code checking those conditions and the invalid states can simply be removed without any influence on the driver operation.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Marek Szyprowski
Remove excessive, useless debug messages about skipping TLB invalidation, which is a normal situation when more aggressive power management is enabled.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 14 November 2016, 2 commits
-
Committed by Robin Murphy
With the new dma_{map,unmap}_resource() functions added to the DMA API for the benefit of cases like slave DMA, add suitable implementations to the arsenal of our generic layer. Since cache maintenance should not be a concern, these can both be standalone callback implementations without the need for arch code wrappers.

CC: Joerg Roedel <joro@8bytes.org>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Committed by Marek Szyprowski
This patch adds support for page access protection bits. Till now this feature was disabled and Exynos SYSMMU always mapped pages as read/write. Now page access bits are set according to the protection bits provided in iommu_map(), so Exynos SYSMMU is able to detect incorrect access to mapped pages. Exynos SYSMMU earlier than v5 doesn't support write-only mappings, so pages with such protection bits are mapped as read/write.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 10 November 2016, 1 commit
-
Committed by Lucas Stach
This will get rid of a lot of false positives caused by kmemleak being unaware of the irq_remap_table. Based on a suggestion from Catalin Marinas.

Signed-off-by: Lucas Stach <dev@lynxeye.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
-