- 10 February 2017, 3 commits
-
-
Submitted by Christopher Covington
The Qualcomm Datacenter Technologies Falkor v1 CPU may allocate TLB entries using an incorrect ASID when TTBRx_EL1 is being updated. When the erratum is triggered, page table entries using the new translation table base address (BADDR) will be allocated into the TLB using the old ASID. All circumstances leading to the incorrect ASID being cached in the TLB arise when software writes TTBRx_EL1[ASID] and TTBRx_EL1[BADDR], a memory operation is in the process of performing a translation using the specific TTBRx_EL1 being written, and the memory operation uses a translation table descriptor designated as non-global. EL2 and EL3 code changing the EL1&0 ASID is not subject to this erratum because hardware is prohibited from performing translations from an out-of-context translation regime.

Consider the following pseudo code:

  write new BADDR and ASID values to TTBRx_EL1

Replacing the above sequence with the one below will ensure that no TLB entries with an incorrect ASID are used by software:

  write reserved value to TTBRx_EL1[ASID]
  ISB
  write new value to TTBRx_EL1[BADDR]
  ISB
  write new value to TTBRx_EL1[ASID]
  ISB

When the above sequence is used, page table entries using the new BADDR value may still be incorrectly allocated into the TLB using the reserved ASID. Yet this will not reduce functionality, since TLB entries incorrectly tagged with the reserved ASID will never be hit by a later instruction.

Based on work by Shanker Donthineni <shankerd@codeaurora.org>

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Christopher Covington <cov@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
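A minimal C sketch of the safe update sequence described above, assuming a TTBR layout with the ASID in bits [63:48] and an available reserved ASID; the helper names (read_ttbr0(), write_ttbr0(), RESERVED_ASID, falkor_update_ttbr0()) are illustrative and this is not the kernel's actual workaround, which lives in the low-level context-switch code:

```c
#include <linux/types.h>

#define TTBR_ASID_SHIFT		48
#define TTBR_BADDR_MASK		((1ULL << TTBR_ASID_SHIFT) - 1)	/* BADDR/CnP bits */
#define RESERVED_ASID		0ULL				/* assumed value  */

static inline u64 read_ttbr0(void)
{
	u64 val;

	asm volatile("mrs %0, ttbr0_el1" : "=r" (val));
	return val;
}

static inline void write_ttbr0(u64 val)
{
	/* Each write is followed by an ISB, as in the erratum sequence. */
	asm volatile("msr ttbr0_el1, %0\n\tisb" : : "r" (val) : "memory");
}

static void falkor_update_ttbr0(u64 new_baddr, u64 new_asid)
{
	u64 old_baddr = read_ttbr0() & TTBR_BADDR_MASK;

	/* 1. Switch to the reserved ASID, keeping the old BADDR. */
	write_ttbr0(old_baddr | (RESERVED_ASID << TTBR_ASID_SHIFT));
	/* 2. Install the new BADDR while the reserved ASID is live. */
	write_ttbr0(new_baddr | (RESERVED_ASID << TTBR_ASID_SHIFT));
	/* 3. Finally install the new ASID alongside the new BADDR. */
	write_ttbr0(new_baddr | (new_asid << TTBR_ASID_SHIFT));
}
```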
-
Submitted by Will Deacon
The SPE architecture requires each exception level to enable access to the SPE controls for the exception level below it, since additional context-switch logic may be required to handle the buffer safely. This patch allows EL1 (host) access to the SPE controls when entered at EL2.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Ding Tianhong
Now that we have a workaround for Hisilicon erratum 161010101, note it in the arm64 silicon-errata document. The new config option is too long to fit in the existing kconfig column, so this is widened to accommodate it. At the same time, an existing whitespace error is corrected, and the existing pattern of a blank line between vendors is enforced for recent additions.

Signed-off-by: Ding Tianhong <dingtianhong@huawei.com>
[Mark: split patch, reword commit message, rework table]
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 09 February 2017, 4 commits
-
-
Submitted by Miles Chen
Use is_vmalloc_addr() to check whether an address is a vmalloc address, instead of checking against VMALLOC_START and VMALLOC_END manually.

Signed-off-by: Miles Chen <miles.chen@mediatek.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
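A minimal before/after sketch of the change; the wrapper function names are illustrative, not the patched driver code:

```c
#include <linux/mm.h>

/* Before: open-coded range check. */
static bool in_vmalloc_open_coded(unsigned long addr)
{
	return addr >= VMALLOC_START && addr < VMALLOC_END;
}

/* After: rely on the generic helper from <linux/mm.h>. */
static bool in_vmalloc(unsigned long addr)
{
	return is_vmalloc_addr((const void *)addr);
}
```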
-
Submitted by Miles Chen
Use linux/size.h to improve code readability.

Signed-off-by: Miles Chen <miles.chen@mediatek.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Mark Rutland
Currently in arm64's copy_{to,from}_user, we only check the source/destination object size if access_ok() tells us the user access is permissible. However, in copy_from_user() we'll subsequently zero any remainder on the destination object. If we failed the access_ok() check, that applies to the whole object size, which we didn't check. To ensure that we catch that case, this patch hoists check_object_size() to the start of copy_from_user(), matching __copy_from_user() and __copy_to_user(). To make all of our uaccess copy primitives consistent, the same is done to copy_to_user().

Cc: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
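A condensed sketch of the resulting shape of copy_from_user(), approximating the arm64 uaccess code of this era; __arch_copy_from_user() and the zeroing behaviour follow the description above rather than being quoted verbatim:

```c
#include <linux/string.h>
#include <linux/uaccess.h>

static inline unsigned long
copy_from_user_sketch(void *to, const void __user *from, unsigned long n)
{
	unsigned long res = n;

	/* Hoisted: the whole destination object is checked unconditionally. */
	check_object_size(to, n, false);

	if (access_ok(VERIFY_READ, from, n))
		res = __arch_copy_from_user(to, from, n);

	/* Zero whatever was not copied - the whole object if access_ok() failed. */
	if (unlikely(res))
		memset(to + (n - res), 0, res);

	return res;
}
```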
-
Submitted by Neil Leeder
Adds perf events support for the L2 cache PMU. The L2 cache PMU driver is named 'l2cache_0' and can be used with perf events to profile L2 events such as cache hits and misses on Qualcomm Technologies processors.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Neil Leeder <nleeder@codeaurora.org>
[will: minimise nesting in l2_cache_associate_cpu_with_cluster]
[will: use kstrtoul for unsigned long, remove redundant .owner setting]
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 08 February 2017, 2 commits
-
-
Submitted by Juri Lelli
The sysfs cpu_capacity entry for each CPU has nothing to do with PROC_FS, nor is it in the /proc/sys path. Remove such an ifdef.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Reported-and-suggested-by: Sudeep Holla <sudeep.holla@arm.com>
Fixes: be8f185d ('arm64: add sysfs cpu_capacity attribute')
Signed-off-by: Juri Lelli <juri.lelli@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Will Deacon
Commit 680a0873 ("arm: kernel: Add SMC structure parameter") added a new "quirk" parameter to the SMC and HVC SMCCC backends, but only updated the comment for the SMC version. This patch adds the new parameter to the comment describing the HVC version too.

Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 07 February 2017, 4 commits
-
-
Submitted by Pratyush Anand
Atomic operation function symbols are exported when CONFIG_ARM64_LSE_ATOMICS is defined. Prefix them with notrace, so that a user cannot trace these functions; tracing these functions causes a kernel crash.

Signed-off-by: Pratyush Anand <panand@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Dan Carpenter
The function iort_add_smmu_platform_device() accidentally returns 0 (ie PTR_ERR(pdev) where pdev == NULL) if platform_device_alloc() fails; fix the bug by returning a proper error value.

Fixes: 846f0e9e ("ACPI/IORT: Add support for ARM SMMU platform devices creation")
Acked-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
[lorenzo.pieralisi@arm.com: improved commit log]
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Lorenzo Pieralisi
Commit 618f535a ("ACPI/IORT: Add single mapping function") introduced a function (iort_node_get_id()) to retrieve ids for IORT named components. iort_node_get_id() takes an index as input to refer to a specific mapping entry in the named component IORT node mapping array. For a mapping entry at a given index, iort_node_get_id() should return the id value (through the id_out function parameter) and the IORT node output_reference (through the function return value) that the given mapping entry refers to. Technically, output_reference values may differ for different map entries (see the diagram below - mapped id values may refer to different, eg IORT SMMU, nodes; the kernel may not be able to handle different output_reference values for a given named component, but the IORT kernel layer should still report the IORT mappings as reported by firmware), but the current code in iort_node_get_id() fails to use the index function parameter to return the correct output_reference value (ie it always returns the output_reference value of the first entry in the mapping array, whilst using the index correctly to retrieve the id value from the respective entry).

  |----------------------|
  |   named component    |
  |----------------------|
  |     map entry[0]     |
  |----------------------|
  |       id value       |
  |   output_reference ----------------> eg SMMU 1
  |----------------------|
  |     map entry[1]     |
  |----------------------|
  |       id value       |
  |   output_reference ----------------> eg SMMU 2
  |----------------------|
             .
             .
             .
  |----------------------|
  |     map entry[N]     |
  |----------------------|
  |       id value       |
  |   output_reference ----------------> eg SMMU 1
  |----------------------|

Consequently the iort_node_get_id() function always returns the IORT node pointed at by the output_reference value of the first named component mapping array entry, irrespective of the index parameter, which is a bug. Update the map array entry pointer computation in iort_node_get_id() to take the index value into account, fixing the issue.

Fixes: 618f535a ("ACPI/IORT: Add single mapping function")
Reported-by: Hanjun Guo <hanjun.guo@linaro.org>
Reviewed-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Hanjun Guo <hanjun.guo@linaro.org>
Cc: Sinan Kaya <okaya@codeaurora.org>
Cc: Tomasz Nowicki <tn@semihalf.com>
Cc: Nate Watterson <nwatters@codeaurora.org>
Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Signed-off-by: Will Deacon <will.deacon@arm.com>
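A hedged sketch of the indexed lookup being fixed; the ACPICA structure and macro names (struct acpi_iort_node, struct acpi_iort_id_mapping, ACPI_ADD_PTR) are real, but the function below is a simplification for illustration, not the verbatim iort_node_get_id():

```c
#include <linux/acpi.h>

static struct acpi_iort_node *
iort_mapping_at_index(struct acpi_table_header *iort_table,
		      struct acpi_iort_node *node, u32 index, u32 *id_out)
{
	struct acpi_iort_id_mapping *map;

	if (index >= node->mapping_count)
		return NULL;

	/* The fix: advance by 'index' entries instead of always using entry 0. */
	map = ACPI_ADD_PTR(struct acpi_iort_id_mapping, node,
			   node->mapping_offset + index * sizeof(*map));

	*id_out = map->output_base;

	/* output_reference is an offset from the start of the IORT table. */
	return ACPI_ADD_PTR(struct acpi_iort_node, iort_table,
			    map->output_reference);
}
```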
-
Submitted by Ard Biesheuvel
The NUMA code may get confused by the presence of NOMAP regions within zones, resulting in spurious BUG() checks where the node id deviates from the containing zone's node id. Since the kernel has no business reasoning about node ids of pages it does not own in the first place, enable CONFIG_HOLES_IN_ZONE to ensure that such pages are disregarded.

Acked-by: Robert Richter <rrichter@cavium.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 04 February 2017, 4 commits
-
-
Submitted by Stephen Boyd
I ran into a build error when I disabled CONFIG_ACPI and tried to compile this driver:

  drivers/perf/xgene_pmu.c:1242:1: warning: data definition has no type or storage class
   MODULE_DEVICE_TABLE(of, xgene_pmu_of_match);
   ^
  drivers/perf/xgene_pmu.c:1242:1: error: type defaults to 'int' in declaration of 'MODULE_DEVICE_TABLE' [-Werror=implicit-int]

Include module.h for the MODULE_DEVICE_TABLE macro that's implicitly included through ACPI.

Tested-by: Tai Nguyen <ttnguyen@apm.com>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Geliang Tang
Use builtin_platform_driver() helper to simplify the code.

Signed-off-by: Geliang Tang <geliangtang@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Andy Gross
This patch adds a Qualcomm specific quirk to the arm_smccc_smc call. On Qualcomm ARM64 platforms, the SMC call can return before it has completed. If this occurs, the call can be restarted, but it requires using the returned session ID value from the interrupted SMC call. The quirk stores off the session ID from the interrupted call in the quirk structure so that it can be used by the caller.

This patch folds in a fix given by Sricharan R:
https://lkml.org/lkml/2016/9/28/272

Signed-off-by: Andy Gross <andy.gross@linaro.org>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Andy Gross
This patch adds a quirk parameter to the arm_smccc_(smc/hvc) calls. The quirk structure allows for specialized SMC operations due to SoC specific requirements. The current arm_smccc_(smc/hvc) is renamed and macros are used instead to specify the standard arm_smccc_(smc/hvc) or the arm_smccc_(smc/hvc)_quirk function.

This patch and partial implementation was suggested by Will Deacon.

Signed-off-by: Andy Gross <andy.gross@linaro.org>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
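A hedged usage sketch combining the two Andy Gross patches above; the macro and field names (arm_smccc_smc_quirk(), struct arm_smccc_quirk, ARM_SMCCC_QUIRK_QCOM_A6, quirk.state.a6) reflect my reading of include/linux/arm-smccc.h after this change, and the function ID is invented for illustration:

```c
#include <linux/arm-smccc.h>
#include <linux/printk.h>

#define EXAMPLE_FN_ID	0x82000001UL	/* illustrative function ID only */

static void example_smccc_calls(void)
{
	struct arm_smccc_res res;
	struct arm_smccc_quirk quirk = { .id = ARM_SMCCC_QUIRK_QCOM_A6 };

	/* Standard call: the wrapper macro passes a NULL quirk underneath. */
	arm_smccc_smc(EXAMPLE_FN_ID, 0, 0, 0, 0, 0, 0, 0, &res);

	/*
	 * Quirk-aware call: on Qualcomm platforms the session ID of an
	 * interrupted SMC comes back through quirk.state.a6 so the caller
	 * can restart the call.
	 */
	arm_smccc_smc_quirk(EXAMPLE_FN_ID, 0, 0, 0, 0, 0, 0, 0, &res, &quirk);

	if (quirk.state.a6)
		pr_info("SMC interrupted, session id 0x%lx\n", quirk.state.a6);
}
```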
-
- 03 February 2017, 5 commits
-
-
Submitted by Ard Biesheuvel
When building with debugging symbols, take the absolute path to the vmlinux binary and add it to the special PE/COFF debug table entry. This allows a debug EFI build to find the vmlinux binary, which is very helpful in debugging, given that the offset where the Image is first loaded by EFI is highly unpredictable. On implementations of UEFI that choose to implement it, this information is exposed via the EFI debug support table, which is a UEFI configuration table that is accessible both by the firmware at boot time and by the OS at runtime, and lists all PE/COFF images loaded by the system. The format of the NB10 Codeview entry is based on the definition used by EDK2, which is our primary reference when it comes to the use of PE/COFF in the context of UEFI firmware.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
[will: use realpath instead of shell invocation, as discussed on list]
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Mark Rutland
We recently discovered that __raw_read_system_reg() erroneously mapped sysreg IDs to the wrong registers. To ensure that we don't get hit by a similar issue in future, this patch makes __raw_read_system_reg() use a macro for each case statement, ensuring that each case reads the correct register. To ensure that this patch hasn't introduced an issue, I've binary-diffed the object files before and after this patch. No code or data sections differ (though some debug sections differ due to line numbering changing).

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Mark Rutland
Since it was introduced in commit da8d02d1 ("arm64/capabilities: Make use of system wide safe value"), __raw_read_system_reg() has erroneously mapped some sysreg IDs to other registers. For the fields in ID_ISAR5_EL1, our local feature detection will be erroneous. We may spuriously detect that a feature is uniformly supported, or may fail to detect when it actually is, meaning some compat hwcaps may be erroneous (or not enforced upon hotplug). This patch corrects the erroneous entries.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Fixes: da8d02d1 ("arm64/capabilities: Make use of system wide safe value")
Reported-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: stable@vger.kernel.org
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Will Deacon
The SPE buffer is virtually addressed, using the page tables of the CPU MMU. Unusually, this means that the EL0/1 page table may be live whilst we're executing at EL2 on non-VHE configurations. When VHE is in use, we can use the same property to profile the guest behind its back. This patch adds the relevant disabling and flushing code to KVM so that the host can make use of SPE without corrupting guest memory, and any attempts by a guest to use SPE will result in a trap.

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Cc: Alex Bennée <alex.bennee@linaro.org>
Cc: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Dmitry Torokhov
Instead of open-coding the loop, let's use the canned macro.

Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 01 February 2017, 2 commits
-
-
Submitted by Christopher Covington
During a TLB invalidate sequence targeting the inner shareable domain, Falkor may prematurely complete the DSB before all loads and stores using the old translation are observed. Instruction fetches are not subject to the conditions of this erratum. If the original code sequence includes multiple TLB invalidate instructions followed by a single DSB, only one of the TLB instructions needs to be repeated to work around this erratum. While the erratum only applies to cases in which the TLBI specifies the inner-shareable domain (*IS form of TLBI) and the DSB is ISH form or stronger (OSH, SYS), this change applies the workaround overabundantly -- to local TLBI, DSB NSH sequences as well -- for simplicity.

Based on work by Shanker Donthineni <shankerd@codeaurora.org>

Signed-off-by: Christopher Covington <cov@codeaurora.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
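A conceptual inline-assembly sketch of the repeated-TLBI idea, assuming the ASID is already shifted into TLBI operand position; the real workaround is applied through the kernel's alternatives framework inside the __tlbi() macros rather than in a standalone helper like this:

```c
#include <linux/types.h>

static inline void flush_tlb_asid_with_workaround(u64 asid_field)
{
	asm volatile(
	"	dsb	ishst\n"
	"	tlbi	aside1is, %0\n"
	"	dsb	ish\n"			/* erratum: may complete early */
	"	tlbi	aside1is, %0\n"		/* workaround: repeat one TLBI */
	"	dsb	ish\n"			/* ... then the final DSB      */
	"	isb"
	: : "r" (asid_field) : "memory");
}
```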
-
Submitted by Catalin Marinas
Commit cab15ce6 ("arm64: Introduce execute-only page access permissions") allowed a valid user PTE to have the PTE_USER bit clear. As a consequence, the pte_valid_not_user() macro in set_pte() was replaced with pte_valid_global() under the assumption that only user pages have the nG bit set. EFI mappings, however, also have the nG bit set and set_pte() wrongly ignores issuing the DSB+ISB. This patch reinstates the pte_valid_not_user() macro and adds the PTE_UXN bit check since all kernel mappings have this bit set. For clarity, pte_exec() is renamed to pte_user_exec() as it only checks for the absence of PTE_UXN. Consequently, the user executable check in set_pte_at() drops the pte_ng() test since pte_user_exec() is sufficient.

Fixes: cab15ce6 ("arm64: Introduce execute-only page access permissions")
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
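A sketch of what the reinstated check plausibly looks like, derived from the description above (valid, no user permission, user-non-executable); the exact macro in the arm64 pgtable headers may differ:

```c
/* PTE_VALID, PTE_USER, PTE_UXN and pte_val() come from the arm64 headers. */
#define pte_valid_not_user(pte) \
	((pte_val(pte) & (PTE_VALID | PTE_USER | PTE_UXN)) == \
	 (PTE_VALID | PTE_UXN))

/*
 * set_pte() then issues the dsb(ishst)/isb() pair whenever this evaluates
 * true, which now covers kernel and EFI runtime mappings alike.
 */
```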
-
- 28 January 2017, 1 commit
-
-
Submitted by Mark Rutland
If an EL0 instruction in the SYS class triggers an exception, do_sysinstr looks for a sys64_hook matching the instruction, and if none is found, injects a SIGILL. This mirrors what we do for undefined instruction encodings in do_undefinstr, where we look for an undef_hook matching the instruction, and if none is found, inject a SIGILL. Over time, new SYS instruction encodings may be allocated. Prior to allocation, exceptions resulting from these would be handled by do_undefinstr, whereas after allocation these may be handled by do_sysinstr. To ensure that we have consistent behaviour if and when this happens, it would be beneficial to have do_sysinstr fall back to do_undefinstr.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Suzuki Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
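A simplified sketch of the fallback shape described above; the hook table contents, section attributes and exact prototypes from arch/arm64/kernel/traps.c are omitted or approximated here:

```c
#include <linux/ptrace.h>

struct sys64_hook {
	unsigned int esr_mask;
	unsigned int esr_val;
	void (*handler)(unsigned int esr, struct pt_regs *regs);
};

/* The real table lives in arch/arm64/kernel/traps.c. */
static const struct sys64_hook sys64_hooks[] = {
	/* ... entries elided ... */
	{ /* sentinel */ },
};

void do_undefinstr(struct pt_regs *regs);	/* existing handler */

void do_sysinstr(unsigned int esr, struct pt_regs *regs)
{
	const struct sys64_hook *hook;

	for (hook = sys64_hooks; hook->handler; hook++) {
		if ((hook->esr_mask & esr) == hook->esr_val) {
			hook->handler(esr, regs);
			return;
		}
	}

	/* New behaviour: fall back to the undefined-instruction path. */
	do_undefinstr(regs);
}
```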
-
- 27 January 2017, 2 commits
-
-
Submitted by Christopher Covington
Refactor the KVM code to use the __tlbi macros, which will allow an errata workaround that repeats TLBI/DSB sequences to only change one location. This is not intended to change the generated assembly, and comparing the vmlinux objdump output before and after shows no functional changes.

Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Christopher Covington <cov@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Shanker Donthineni
Define the MIDR implementer and part number field values for the Qualcomm Datacenter Technologies Falkor processor version 1 in the usual manner.

Signed-off-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Christopher Covington <cov@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 26 January 2017, 5 commits
-
-
Submitted by Robin Murphy
When bypassing SWIOTLB on small-memory systems, we need to avoid calling into swiotlb_dma_mapping_error() in exactly the same way as we avoid swiotlb_dma_supported(), because the former also relies on SWIOTLB state being initialised. Under the assumptions for which we skip SWIOTLB, dma_map_{single,page}() will only ever return the DMA-offset-adjusted physical address of the page passed in, thus we can report success unconditionally.

Fixes: b67a8b29 ("arm64: mm: only initialize swiotlb when necessary")
CC: stable@vger.kernel.org
CC: Jisheng Zhang <jszhang@marvell.com>
Reported-by: Aaro Koskinen <aaro.koskinen@iki.fi>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
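A hedged sketch of the shape of the fix; the 'swiotlb' flag stands in for arm64's "was SWIOTLB initialised?" state and is declared here only to keep the example self-contained:

```c
#include <linux/device.h>
#include <linux/swiotlb.h>

static int swiotlb;	/* assumed: set only if SWIOTLB was actually initialised */

static int __swiotlb_dma_mapping_error(struct device *hwdev, dma_addr_t addr)
{
	if (swiotlb)
		return swiotlb_dma_mapping_error(hwdev, addr);

	/* SWIOTLB skipped: mappings are plain offset-adjusted addresses. */
	return 0;
}
```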
-
Submitted by Kefeng Wang
Fix warning:

  "(COMPAT) selects COMPAT_BINFMT_ELF which has unmet direct dependencies (COMPAT && BINFMT_ELF)"

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Ard Biesheuvel
Memory regions marked as NOMAP should not be used for general allocation by the kernel, and should not even be covered by the linear mapping (hence the name). However, drivers or other subsystems (such as ACPI) that access the firmware directly may legally access them, which means it is also reasonable for such drivers to claim them by invoking request_resource(). Currently, this is prevented by the fact that arm64's request_standard_resources() marks reserved regions as IORESOURCE_BUSY. So drop the IORESOURCE_BUSY flag from these requests.

Reported-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Geert Uytterhoeven
If CONFIG_DEBUG_VIRTUAL=y, during s2ram:

  virt_to_phys used for non-linear address: ffffff80085db280 (cpu_resume+0x0/0x20)
  ------------[ cut here ]------------
  WARNING: CPU: 0 PID: 1628 at arch/arm64/mm/physaddr.c:14 __virt_to_phys+0x28/0x60
  ...
  [<ffffff800809abb4>] __virt_to_phys+0x28/0x60
  [<ffffff80084a0c38>] psci_system_suspend+0x20/0x44
  [<ffffff8008095b28>] cpu_suspend+0x3c/0x68
  [<ffffff80084a0b48>] psci_system_suspend_enter+0x18/0x20
  [<ffffff80080ea3e0>] suspend_devices_and_enter+0x3f8/0x7e8
  [<ffffff80080ead14>] pm_suspend+0x544/0x5f4

Fixes: 1a08e3d9 ("drivers: firmware: psci: Use __pa_symbol for kernel symbol")
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Will Deacon <will.deacon@arm.com>
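The warning comes down to using the right translation helper for the right kind of address; a brief hedged illustration (the wrapper functions below are invented for the example):

```c
#include <linux/types.h>
#include <asm/memory.h>

void cpu_resume(void);			/* a kernel (image) symbol */

/* Kernel symbols live in the kernel image, not the linear map. */
static phys_addr_t pa_of_cpu_resume(void)
{
	return __pa_symbol(cpu_resume);	/* correct; virt_to_phys() would warn */
}

/* Linear-map addresses (e.g. from kmalloc()) keep using virt_to_phys(). */
static phys_addr_t pa_of_buffer(void *linear_ptr)
{
	return virt_to_phys(linear_ptr);
}
```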
-
Submitted by Geert Uytterhoeven
If CONFIG_DEBUG_VIRTUAL=y and CONFIG_ARM64_SW_TTBR0_PAN=y:

  virt_to_phys used for non-linear address: ffffff8008cc0000 (empty_zero_page+0x0/0x1000)
  WARNING: CPU: 0 PID: 0 at arch/arm64/mm/physaddr.c:14 __virt_to_phys+0x28/0x60
  ...
  [<ffffff800809abb4>] __virt_to_phys+0x28/0x60
  [<ffffff8008a02600>] setup_arch+0x46c/0x4d4

Fixes: 2077be67 ("arm64: Use __pa_symbol for kernel symbols")
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 23 January 2017, 1 commit
-
-
Submitted by Will Deacon
The arm64 DMA-mapping implementation sets the DMA ops to the IOMMU DMA ops if we detect that an IOMMU is present for the master and the DMA ranges are valid. In the case when the IOMMU domain for the device is not of type IOMMU_DOMAIN_DMA, then we have no business swizzling the ops, since we're not in control of the underlying address space. This patch leaves the DMA ops alone for masters attached to non-DMA IOMMU domains.

Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
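A conceptual sketch of the check this introduces, expressed with the generic IOMMU API; the real arm64 code sits in its DMA-ops setup path and differs in detail:

```c
#include <linux/iommu.h>

static bool want_iommu_dma_ops(struct device *dev)
{
	struct iommu_domain *domain = iommu_get_domain_for_dev(dev);

	/* Only swizzle the ops for domains managed by the DMA-IOMMU layer. */
	return domain && domain->type == IOMMU_DOMAIN_DMA;
}
```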
-
- 18 January 2017, 3 commits
-
-
Submitted by Mark Rutland
Some places in the kernel open-code sequences using ADRP for a symbol followed by another instruction using a :lo12: relocation for that same symbol. These sequences are easy to get wrong, and more painful to read than is necessary. For these reasons, it is preferable to use the {adr,ldr,str}_l macros for these cases. This patch makes use of these in entry-ftrace.S, removing open-coded sequences using adrp. This results in a minor code change, since a temporary register is not used when generating the address for some symbols, but this is fine, as the value of the temporary register is not used elsewhere.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: AKASHI Takahiro <takahiro.akashi@linaro.org>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Mark Rutland
Some places in the kernel open-code sequences using ADRP for a symbol followed by another instruction using a :lo12: relocation for that same symbol. These sequences are easy to get wrong, and more painful to read than is necessary. For these reasons, it is preferable to use the {adr,ldr,str}_l macros for these cases. This patch makes use of these in efi-entry.S, removing open-coded sequences using adrp.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Matt Fleming <matt@codeblueprint.co.uk>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Mark Rutland
Some places in the kernel open-code sequences using ADRP for a symbol followed by another instruction using a :lo12: relocation for that same symbol. These sequences are easy to get wrong, and more painful to read than is necessary. For these reasons, it is preferable to use the {adr,ldr,str}_l macros for these cases. This patch makes use of adr_l in head.S, removing an open-coded sequence using adrp.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 17 January 2017, 2 commits
-
-
Submitted by Sudeep Holla
The cache hierarchy can be identified through the Cache Level ID (CLIDR) architected system register. However, in some cases it will provide only the number of cache levels that are integrated into the processor itself. In other words, it can't provide any information about the caches that are external and/or transparent. Some platforms need to export the information about all such external caches to userspace applications via the sysfs interface. This patch adds support to override the cache levels using the device tree, to take such external non-architected caches into account.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Tested-by: Tan Xiaojun <tanxiaojun@huawei.com>
Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Sudeep Holla
It is useful to have a helper function just to get the number of cache levels for a given logical cpu. We can obtain the same by just checking the level at which the last cache is present. This patch adds support to find the level of the last cache for a given cpu. It will be used on the ARM64 platform, where the device tree provides the information for the additional non-architected/transparent/external last-level caches that are not integrated with the processors.

Cc: Mark Rutland <mark.rutland@arm.com>
Suggested-by: Rob Herring <robh+dt@kernel.org>
Acked-by: Rob Herring <robh+dt@kernel.org>
Tested-by: Tan Xiaojun <tanxiaojun@huawei.com>
Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
[will: use u32 instead of int for cache_level]
Signed-off-by: Will Deacon <will.deacon@arm.com>
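A hedged usage sketch of the new helper (the wrapper below is illustrative, not the actual arm64 call site): combine the architected level count derived from CLIDR_EL1 with the level of the last cache node described in the device tree, taking whichever is deeper:

```c
#include <linux/of.h>

static unsigned int total_cache_levels(unsigned int cpu,
				       unsigned int clidr_levels)
{
	int fw_level = of_find_last_cache_level(cpu);

	/* The DT may describe external caches beyond what CLIDR reports. */
	if (fw_level > 0 && (unsigned int)fw_level > clidr_levels)
		return (unsigned int)fw_level;

	return clidr_levels;
}
```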
-
- 13 January 2017, 2 commits
-
-
Submitted by Robert Richter
Definitions of cpu ranges are hard to read if the cpu variant is not zero. Provide a MIDR_CPU_VAR_REV() macro to describe the full hardware revision of a cpu, including variant and (minor) revision.

Signed-off-by: Robert Richter <rrichter@cavium.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
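An illustrative definition consistent with the description (MIDR_EL1 keeps the variant in bits [23:20] and the revision in bits [3:0]); the in-tree macro in asm/cputype.h may differ in detail:

```c
#define MIDR_VARIANT_SHIFT	20

#define MIDR_CPU_VAR_REV(var, rev) \
	(((var) << MIDR_VARIANT_SHIFT) | (rev))

/* Example: variant 1, revision 2 ("r1p2") encodes as 0x100002. */
```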
-
Submitted by Miles Chen
Cosmetic change to use phys_addr_t instead of unsigned long for the return value of __pa_symbol().

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Miles Chen <miles.chen@mediatek.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-