- 11 May 2017, 2 commits
-
-
Committed by Florian Fainelli
When CONFIG_ARM_MODULE_PLTS is enabled, the first allocation using the module space fails, because the module is too big, and then the module allocation is attempted from vmalloc space. Silence the first allocation failure in that case by setting __GFP_NOWARN.

Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
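As a sketch of the two-stage allocation described above (modeled loosely on the 32-bit ARM module_alloc(); the exact constants and flags in the tree may differ):

    #include <linux/vmalloc.h>
    #include <linux/moduleloader.h>

    void *module_alloc(unsigned long size)
    {
            gfp_t gfp_mask = GFP_KERNEL;
            void *p;

            /* Silence the initial allocation: a fallback exists below. */
            if (IS_ENABLED(CONFIG_ARM_MODULE_PLTS))
                    gfp_mask |= __GFP_NOWARN;

            p = __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
                                     gfp_mask, PAGE_KERNEL_EXEC, 0,
                                     NUMA_NO_NODE, __builtin_return_address(0));
            if (p || !IS_ENABLED(CONFIG_ARM_MODULE_PLTS))
                    return p;

            /* Retry from vmalloc space; PLTs absorb the long branches. */
            return __vmalloc_node_range(size, 1, VMALLOC_START, VMALLOC_END,
                                        GFP_KERNEL, PAGE_KERNEL_EXEC, 0,
                                        NUMA_NO_NODE, __builtin_return_address(0));
    }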
-
Committed by Florian Fainelli
If the caller has set __GFP_NOWARN, don't print the following message:

    vmap allocation for size 15736832 failed: use vmalloc=<size> to increase size.

This can happen with the ARM/Linux or ARM64/Linux module loader built with CONFIG_ARM{,64}_MODULE_PLTS=y, which does a first attempt at loading a large module from module space, then falls back to vmalloc space.

Acked-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 10 May 2017, 10 commits
-
-
Committed by Mark Rutland
Clang tries to warn when there's a mismatch between an operand's size and the size of the register it is held in, as this may indicate a bug. Specifically, clang warns when the operand's type is less than 64 bits wide and the register is used unqualified (i.e. %N rather than %xN or %wN).

Unfortunately clang can generate these warnings for unreachable code. For example, for code like:

    do {                                            \
            typeof(*(ptr)) __v = (v);               \
            switch (sizeof(*(ptr))) {               \
            case 1:                                 \
                    // assume __v is 1 byte wide    \
                    asm ("{op}b %w0" : : "r" (__v)); \
                    break;                          \
            case 8:                                 \
                    // assume __v is 8 bytes wide   \
                    asm ("{op} %0" : : "r" (__v));  \
                    break;                          \
            }                                       \
    } while (0)

... if op() were passed a char value and a pointer to char, clang may produce a warning for the unreachable case where sizeof(*(ptr)) is 8.

For the same reasons, clang produces warnings when __put_user_err() is used for types that are less than 64 bits wide.

We could avoid this with a cast to a fixed-width type in each of the cases. However, GCC will then warn that pointer types are being cast to mismatched integer sizes (in unreachable paths). Another option would be to use the same union trickery as we do for __smp_store_release() and __smp_load_acquire(), but this is fairly invasive.

Instead, this patch suppresses the clang warning by using an x modifier in the assembly for the 8 byte case of __put_user_err(). No additional work is necessary as the value has been cast to typeof(*(ptr)), so the compiler will have performed any necessary extension for the reachable case.

For consistency, __get_user_err() is also updated to use the x modifier for its 8 byte case.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reported-by: Matthias Kaehlcke <mka@chromium.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
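A trimmed sketch of the resulting pattern (hypothetical macro name; the 2- and 4-byte cases and the uaccess fault handling are omitted):

    #define __my_put_user(x, ptr)                                        \
    do {                                                                 \
            __typeof__(*(ptr)) __pu_val = (x);                           \
            switch (sizeof(*(ptr))) {                                    \
            case 1:                                                      \
                    asm volatile("strb %w1, %0"                          \
                                 : "=Q" (*(ptr)) : "r" (__pu_val));      \
                    break;                                               \
            case 8:                                                      \
                    /* 'x' modifier: clang sees a 64-bit operand */      \
                    asm volatile("str %x1, %0"                           \
                                 : "=Q" (*(ptr)) : "r" (__pu_val));      \
                    break;                                               \
            }                                                            \
    } while (0)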
-
Committed by Mark Rutland
The LSE atomic code uses asm register variables to ensure that parameters are allocated in specific registers. In the majority of cases we specifically ask for an x register when using 64-bit values, but in a couple of cases we use a w register for a 64-bit value.

For asm register variables, the compiler only cares about the register index, with wN and xN having the same meaning. The compiler determines the register size to use based on the type of the variable. Thus, this inconsistency is merely confusing, and not harmful to code generation.

For consistency, this patch updates those cases to use the x register alias. There should be no functional change as a result of this patch.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
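A contrived illustration (not the LSE code itself): the binding below names x1, but asm ("w1") would bind the very same register, and the generated access width comes from the C type either way, so the x alias is purely for the reader:

    static inline unsigned long reg_alias_demo(unsigned long v)
    {
            /* 64-bit value: spell the binding as x1, not w1. */
            register unsigned long val asm ("x1") = v;

            asm volatile("mov %0, %0" : "+r" (val));
            return val;
    }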
-
Committed by Mark Rutland
Our compat swp emulation holds the compat user address in an unsigned int, which it passes to __user_swpX_asm(). When a 32-bit value is passed in a register, the upper 32 bits of the register are unknown, and we must extend the value to 64 bits before we can use it as a base address.

This patch casts the address to unsigned long to ensure it has been suitably extended, avoiding the potential issue, and silencing a related warning from clang.

Fixes: bd35a4ad ("arm64: Port SWP/SWPB emulation support from arm")
Cc: <stable@vger.kernel.org> # 3.19.x-
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
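A sketch of the shape of the fix (hypothetical helper; the real code is __user_swpX_asm() with its fault handling and retry limits):

    static inline unsigned int swp_emulate(unsigned int newval,
                                           unsigned int compat_addr)
    {
            unsigned int oldval, fail;

            asm volatile(
            "1:     ldxr    %w0, [%2]\n"
            "       stxr    %w1, %w3, [%2]\n"
            "       cbnz    %w1, 1b\n"
            : "=&r" (oldval), "=&r" (fail)
            /* The cast is the fix: zero-extend before use as a base. */
            : "r" ((unsigned long)compat_addr), "r" (newval)
            : "memory");

            return oldval;
    }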
-
Committed by Mark Rutland
Our access_ok() simply hands its arguments over to __range_ok(), which implicitly assumes that the addr parameter is 64 bits wide. This isn't necessarily true for compat code, which might pass down a 32-bit address parameter.

In these cases, we don't have a guarantee that the address has been zero-extended to 64 bits, and the upper bits of the register may contain unknown values, potentially resulting in a spurious failure.

Avoid this by explicitly casting the addr parameter to an unsigned long (as is done on other architectures), ensuring that the parameter is widened appropriately.

Fixes: 0aea86a2 ("arm64: User access library functions")
Cc: <stable@vger.kernel.org> # 3.7.x-
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
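Schematically, the fix is a one-line cast at the access_ok() level (simplified; the real arm64 macro at the time also takes a 'type' argument and checks against the thread's addr_limit):

    /* Before, a 32-bit 'addr' reached __range_ok() with unknown upper bits. */
    #define my_access_ok(addr, size)                                \
            __range_ok((unsigned long)(addr), (size))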
-
Committed by Mark Rutland
When an inline assembly operand's type is narrower than the register it is allocated to, the least significant bits of the register (up to the operand type's width) are valid, and any other bits are permitted to contain any arbitrary value. This aligns with the AAPCS64 parameter passing rules.

Our __smp_store_release() implementation does not account for this, and implicitly assumes that operands have been zero-extended to the width of the type being stored to. Thus, we may store unknown values to memory when the value type is narrower than the pointer type (e.g. when storing a char to a long).

This patch fixes the issue by casting the value operand to the same width as the pointer operand in all cases, which ensures that the value is zero-extended as we expect. We use the same union trickery as __smp_load_acquire() and {READ,WRITE}_ONCE() to avoid GCC complaining that pointers are potentially cast to narrower width integers in unreachable paths.

A whitespace issue at the top of __smp_store_release() is also corrected.

No changes are necessary for __smp_load_acquire(). Load instructions implicitly clear any upper bits of the register, and the compiler will only consider the least significant bits of the register as valid regardless.

Fixes: 47933ad4 ("arch: Introduce smp_load_acquire(), smp_store_release()")
Fixes: 878a84d5 ("arm64: add missing data types in smp_load_acquire/smp_store_release")
Cc: <stable@vger.kernel.org> # 3.14.x-
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Matthias Kaehlcke <mka@chromium.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
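The shape of the union trick, reduced to the 1- and 8-byte cases (illustrative macro name; compare the real __smp_store_release() in arch/arm64/include/asm/barrier.h):

    #define my_store_release(p, v)                                        \
    do {                                                                  \
            union { __typeof__(*(p)) __val; char __c[1]; } __u =          \
                    { .__val = (__typeof__(*(p)))(v) };                   \
            switch (sizeof(*(p))) {                                       \
            case 1:                                                       \
                    asm volatile("stlrb %w1, %0"                          \
                                 : "=Q" (*(p))                            \
                                 : "r" (*(unsigned char *)__u.__c)        \
                                 : "memory");                             \
                    break;                                                \
            case 8:                                                       \
                    asm volatile("stlr %x1, %0"                           \
                                 : "=Q" (*(p))                            \
                                 : "r" (*(unsigned long *)__u.__c)        \
                                 : "memory");                             \
                    break;                                                \
            }                                                             \
    } while (0)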
-
Committed by Mark Rutland
The inline assembly in __XCHG_CASE() uses a +Q constraint to hazard against other accesses to the memory location being exchanged. However, the pointer passed to the constraint is a u8 pointer, and thus the hazard only applies to the first byte of the location. GCC can take advantage of this, assuming that other portions of the location are unchanged, as demonstrated with the following test case:

    union u {
            unsigned long l;
            unsigned int i[2];
    };

    unsigned long update_char_hazard(union u *u)
    {
            unsigned int a, b;

            a = u->i[1];
            asm ("str %1, %0" : "+Q" (*(char *)&u->l) : "r" (0UL));
            b = u->i[1];

            return a ^ b;
    }

    unsigned long update_long_hazard(union u *u)
    {
            unsigned int a, b;

            a = u->i[1];
            asm ("str %1, %0" : "+Q" (*(long *)&u->l) : "r" (0UL));
            b = u->i[1];

            return a ^ b;
    }

The linaro 15.08 GCC 5.1.1 toolchain compiles the above as follows when using -O2 or above:

    0000000000000000 <update_char_hazard>:
       0:   d2800001        mov     x1, #0x0        // #0
       4:   f9000001        str     x1, [x0]
       8:   d2800000        mov     x0, #0x0        // #0
       c:   d65f03c0        ret

    0000000000000010 <update_long_hazard>:
      10:   b9400401        ldr     w1, [x0,#4]
      14:   d2800002        mov     x2, #0x0        // #0
      18:   f9000002        str     x2, [x0]
      1c:   b9400400        ldr     w0, [x0,#4]
      20:   4a000020        eor     w0, w1, w0
      24:   d65f03c0        ret

This patch fixes the issue by passing an unsigned long pointer into the +Q constraint, as we do for our cmpxchg code. This may hazard against more than is necessary, but this is better than missing a necessary hazard.

Fixes: 305d454a ("arm64: atomics: implement native {relaxed, acquire, release} atomics")
Cc: <stable@vger.kernel.org> # 4.4.x-
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
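In miniature, the fix widens the type behind the +Q operand (an LL/SC variant is shown here for brevity; the point is the operand's type, not the instruction sequence):

    static inline unsigned char my_xchg_byte(unsigned char new,
                                             volatile void *ptr)
    {
            unsigned char old;
            unsigned int fail;

            asm volatile(
            "1:     ldxrb   %w0, %2\n"
            "       stxrb   %w1, %w3, %2\n"
            "       cbnz    %w1, 1b\n"
            : "=&r" (old), "=&r" (fail),
              "+Q" (*(unsigned long *)ptr)  /* was *(u8 *)ptr: 1-byte hazard */
            : "r" (new));

            return old;
    }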
-
Committed by Kristina Martsenko
Some kernel features don't currently work if a task puts a non-zero address tag in its stack pointer, frame pointer, or frame record entries (FP, LR). For example, with a tagged stack pointer, the kernel can't deliver signals to the process, and the task is killed instead. As another example, with a tagged frame pointer or frame records, perf fails to generate call graphs or resolve symbols.

For now, just document these limitations, instead of finding and fixing everything that doesn't work, as it's not known if anyone needs to use tags in these places anyway.

In addition, as requested by Dave Martin, generalize the limitations into a general kernel address tag policy, and refactor tagged-pointers.txt to include it.

Fixes: d50240a5 ("arm64: mm: permit use of tagged pointers at EL0")
Cc: <stable@vger.kernel.org> # 3.12.x-
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Kristina Martsenko
When handling a data abort from EL0, we currently zero the top byte of the faulting address, as we assume the address is a TTBR0 address, which may contain a non-zero address tag. However, the address may be a TTBR1 address, in which case we should not zero the top byte. This patch fixes that. The effect is that the full TTBR1 address is passed to the task's signal handler (or printed out in the kernel log).

When handling a data abort from EL1, we leave the faulting address intact, as we assume it's either a TTBR1 address or a TTBR0 address with tag 0x00. This is true as far as I'm aware; we don't seem to access a tagged TTBR0 address anywhere in the kernel. Regardless, it's easy to forget about address tags, and code added in the future may not always remember to remove tags from addresses before accessing them. So add tag handling to the EL1 data abort handler as well. This also makes it consistent with the EL0 data abort handler.

Fixes: d50240a5 ("arm64: mm: permit use of tagged pointers at EL0")
Cc: <stable@vger.kernel.org> # 3.12.x-
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
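The tag stripping itself amounts to a sign-extension from bit 55; a standalone sketch (the kernel expresses this as sign_extend64(addr, 55)):

    static inline unsigned long my_untagged_addr(unsigned long addr)
    {
            /*
             * Bit 55 selects TTBR0 (0) vs TTBR1 (1). Shifting the tag out
             * and arithmetic-shifting back replicates bit 55 into the top
             * byte: tags are cleared on user addresses, and the all-ones
             * top byte is restored on kernel addresses.
             */
            return (unsigned long)((long)(addr << 8) >> 8);
    }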
-
Committed by Kristina Martsenko
When we take a watchpoint exception, the address that triggered the watchpoint is found in FAR_EL1. We compare it to the address of each configured watchpoint to see which one was hit. The configured watchpoint addresses are untagged, while the address in FAR_EL1 will have an address tag if the data access was done using a tagged address. The tag needs to be removed to compare the address to the watchpoints.

Currently we don't remove it, and as a result can report the wrong watchpoint as being hit (specifically, always either the highest TTBR0 watchpoint or lowest TTBR1 watchpoint). This patch removes the tag.

Fixes: d50240a5 ("arm64: mm: permit use of tagged pointers at EL0")
Cc: <stable@vger.kernel.org> # 3.12.x-
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Kristina Martsenko
When we emulate userspace cache maintenance in the kernel, we can currently send the task a SIGSEGV even though the maintenance was done on a valid address. This happens if the address has a non-zero address tag, and happens to not be mapped in.

When we get the address from a user register, we don't currently remove the address tag before performing cache maintenance on it. If the maintenance faults, we end up in either __do_page_fault, where find_vma can't find the VMA if the address has a tag, or in do_translation_fault, where the tagged address will appear to be above TASK_SIZE. In both cases, the address is not mapped in, and the task is sent a SIGSEGV.

This patch removes the tag from the address before using it. With this patch, the fault is handled correctly, the address gets mapped in, and the cache maintenance succeeds.

As a second bug, if cache maintenance (correctly) fails on an invalid tagged address, the address gets passed into arm64_notify_segfault, where find_vma fails to find the VMA due to the tag, and the wrong si_code may be sent as part of the siginfo_t of the segfault. With this patch, the correct si_code is sent.

Fixes: 7dd01aef ("arm64: trap userspace "dc cvau" cache operation on errata-affected core")
Cc: <stable@vger.kernel.org> # 4.8.x-
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 05 May 2017, 1 commit
-
-
Committed by Catalin Marinas
While honouring the DMA_ATTR_FORCE_CONTIGUOUS attribute on arm64 (commit 44176bb3: "arm64: Add support for DMA_ATTR_FORCE_CONTIGUOUS to IOMMU"), the existing uses of dma_mmap_attrs() and dma_get_sgtable() have been broken by passing a physically contiguous vm_struct with an invalid pages pointer through the common iommu API.

Since the coherent allocation with DMA_ATTR_FORCE_CONTIGUOUS uses CMA, this patch simply reuses the existing swiotlb logic for mmap and get_sgtable.

Note that the current implementation of get_sgtable (both swiotlb and iommu) is broken if dma_declare_coherent_memory() is used, since such memory does not have a corresponding struct page. To be addressed in a subsequent patch.

Fixes: 44176bb3 ("arm64: Add support for DMA_ATTR_FORCE_CONTIGUOUS to IOMMU")
Reported-by: Andrzej Hajda <a.hajda@samsung.com>
Cc: Geert Uytterhoeven <geert+renesas@glider.be>
Acked-by: Robin Murphy <robin.murphy@arm.com>
Tested-by: Andrzej Hajda <a.hajda@samsung.com>
Reviewed-by: Andrzej Hajda <a.hajda@samsung.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 29 April 2017, 1 commit
-
-
Committed by Geert Uytterhoeven
On arm32, the machine model specified in the device tree is printed during boot-up, courtesy of of_flat_dt_match_machine(). On arm64, of_flat_dt_match_machine() is not called, and the machine model information is not available from the kernel log.

Print the machine model to make it easier to derive the machine model from an arbitrary kernel boot log.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 28 April 2017, 1 commit
-
-
Committed by Florian Fainelli
Add missing L2 cache events: read/write accesses and misses, as well as the DTLB refills.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 26 April 2017, 1 commit
-
-
Committed by Ard Biesheuvel
The arm64 module PLT code allocates all PLT entries in a single core section, since the overhead of having a separate init PLT section is not justified by the small number of PLT entries usually required for init code.

However, the core and init module regions are allocated independently, and there is a corner case where the core region may be allocated from the VMALLOC region if the dedicated module region is exhausted, but the init region, being much smaller, can still be allocated from the module region. This leads to relocation failures if the distance between those regions exceeds 128 MB. (In fact, this corner case is highly unlikely to occur on arm64, but the issue has been observed on ARM, whose module region is much smaller.)

So split the core and init PLT regions, and name the latter ".init.plt" so it gets allocated along with (and sufficiently close to) the .init sections that it serves. Also, given that init PLT entries may need to be emitted for branches that target the core module, modify the logic that disregards defined symbols to only disregard symbols that are defined in the same section as the relocated branch instruction.

Since there may now be two PLT entries associated with each entry in the symbol table, we can no longer hijack the symbol::st_size fields to record the addresses of PLT entries as we emit them for zero-addend relocations. So instead, perform an explicit comparison to check for duplicate entries.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 25 April 2017, 1 commit
-
-
Committed by Mark Rutland
Commit f1b36dcb ("arm64: pmuv3: handle !PMUv3 when probing") is a little too restrictive, and prevents the use of backwards-compatible PMUv3 extensions, which have a PMUver value other than 1. For instance, ARMv8.1 PMU extensions (as implemented by ThunderX2) are reported with PMUver value 4.

Per the usual ID register principles, at least 0x1-0x7 imply a PMUv3-compatible PMU. It's not currently clear whether 0x8-0xe imply the same. For the time being, treat the value as signed, with 0x1-0x7 treated as meaning PMUv3 is implemented. This may be relaxed by future patches.

Reported-by: Jayachandran C <jnair@caviumnetworks.com>
Tested-by: Jayachandran C <jnair@caviumnetworks.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
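A sketch of the resulting check (field helpers as in the arm64 cpufeature API; the exact call site in the PMUv3 probe is simplified here):

    #include <asm/cpufeature.h>
    #include <asm/sysreg.h>

    static bool pmuv3_implemented(u64 dfr0)
    {
            int pmuver = cpuid_feature_extract_signed_field(dfr0,
                                            ID_AA64DFR0_PMUVER_SHIFT);

            /*
             * 0x0: no PMU; 0xf (-1 when signed): IMPLEMENTATION DEFINED.
             * 0x1-0x7: PMUv3-compatible; 0x8-0xe left unclaimed for now.
             */
            return pmuver >= 1 && pmuver <= 7;
    }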
-
- 24 April 2017, 1 commit
-
-
Committed by Marc Zyngier
We now trap accesses to CNTVCT_EL0 when the counter is broken enough to require the kernel to mediate the access. But it turns out that some existing userspace (such as OpenMPI) probes for the counter frequency, leading to an UNDEF exception, as CNTVCT_EL0 and CNTFRQ_EL0 share the same control bit. The fix is to handle the exception the same way we do for CNTVCT_EL0.

Fixes: a86bd139 ("arm64: arch_timer: Enable CNTVCT_EL0 trap if workaround is enabled")
Reported-by: Hanjun Guo <guohanjun@huawei.com>
Tested-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 12 April 2017, 2 commits
-
-
Committed by Catalin Marinas
* will/for-next/perf:
  arm64: pmuv3: use arm_pmu ACPI framework
  arm64: pmuv3: handle !PMUv3 when probing
  drivers/perf: arm_pmu: add ACPI framework
  arm64: add function to get a cpu's MADT GICC table
  drivers/perf: arm_pmu: split out platform device probe logic
  drivers/perf: arm_pmu: move irq request/free into probe
  drivers/perf: arm_pmu: split cpu-local irq request/free
  drivers/perf: arm_pmu: rename irq request/free functions
  drivers/perf: arm_pmu: handle no platform_device
  drivers/perf: arm_pmu: simplify cpu_pmu_request_irqs()
  drivers/perf: arm_pmu: factor out pmu registration
  drivers/perf: arm_pmu: fold init into alloc
  drivers/perf: arm_pmu: define armpmu_init_fn
  drivers/perf: arm_pmu: remove pointless PMU disabling
  perf: qcom: Add L3 cache PMU driver
  drivers/perf: arm_pmu: split irq request from enable
  drivers/perf: arm_pmu: manage interrupts per-cpu
  drivers/perf: arm_pmu: rework per-cpu allocation
  MAINTAINERS: Add file patterns for perf device tree bindings
-
Committed by Marc Zyngier
Since bbb56c27 ("arm64: Add detection code for broken .inst support in binutils"), running any make target that doesn't involve the cross compiler results in a spurious warning:

    $ make ARCH=arm64 menuconfig
    arch/arm64/Makefile:43: Detected assembler with broken .inst; disassembly will be unreliable

while

    $ make ARCH=arm64 CROSS_COMPILE=aarch64-arm-linux- menuconfig

is silent (assuming your compiler is not affected). That's because the code that tests for the workaround is always run, irrespective of the current configuration being available or not.

An easy fix is to make the detection conditional on CONFIG_ARM64 being defined, which is only the case when actually building something.

Fixes: bbb56c27 ("arm64: Add detection code for broken .inst support in binutils")
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 11 April 2017, 14 commits
-
-
Committed by Mark Rutland
Now that we have a framework to handle the ACPI bits, make the PMUv3 code use this. The framework is a little different to what was originally envisaged, and we can drop some unused support code in the process of moving over to it.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
[will: make armv8_pmu_driver_init static]
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Mark Rutland
When probing via ACPI, we won't know up-front whether a CPU has a PMUv3-compatible PMU. Thus we need to consult ID registers during probe time.

This patch updates our PMUv3 probing code to test for the presence of PMUv3 functionality before touching any PMUv3-specific registers, and before updating the struct arm_pmu with PMUv3 data. When a PMUv3-compatible PMU is not present, probing will return -ENODEV.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Mark Rutland
This patch adds framework code to handle parsing PMU data out of the MADT, sanity checking this, and managing the association of CPUs (and their interrupts) with appropriate logical PMUs.

For the time being, we expect that only one PMU driver (PMUv3) will make use of this, and we simply pass in a single probe function.

This is based on an earlier patch from Jeremy Linton.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Mark Rutland
Currently the ACPI parking protocol code needs to parse each CPU's MADT GICC table to extract the mailbox address and so on. Each time we parse a GICC table, we call back to the parking protocol code to parse it.

This has been fine so far, but we're about to have more code that needs to extract data from the GICC tables, and adding a callback for each user is going to get unwieldy. Instead, this patch ensures that we stash a copy of each CPU's GICC table at boot time, such that anything needing to parse it can later request it. This will allow for other parsers of GICC, and for simplification to the ACPI parking protocol code.

Note that we must store a copy, rather than a pointer, since the core ACPI code temporarily maps/unmaps tables while iterating over them. Since we parse the MADT before we know how many CPUs we have (and hence before we set up the percpu areas), we must use an NR_CPUS-sized array.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
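A sketch of the stash-and-lookup (the lookup helper name follows the kernel's acpi_cpu_get_madt_gicc(); the stash function here is hypothetical):

    #include <linux/acpi.h>

    static struct acpi_madt_generic_interrupt cpu_madt_gicc[NR_CPUS];

    /*
     * Called while parsing the MADT: copy, don't keep a pointer, since
     * the core ACPI code unmaps the table after iterating over it.
     */
    static void stash_gicc(unsigned int cpu,
                           struct acpi_madt_generic_interrupt *gicc)
    {
            cpu_madt_gicc[cpu] = *gicc;
    }

    struct acpi_madt_generic_interrupt *acpi_cpu_get_madt_gicc(int cpu)
    {
            return &cpu_madt_gicc[cpu];
    }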
-
Committed by Mark Rutland
Now that we've split the pdev and DT probing logic from the runtime management, let's move the former into its own file. We gain a few lines due to the copyright header and includes, but this should keep the logic clearly separated, and paves the way for adding ACPI support in a similar fashion.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
[will: rename nr_irqs to avoid conflict with global variable]
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Mark Rutland
Currently we request (and potentially free) all IRQs for a given PMU in cpu_pmu_init(). This works for platform/DT probing today, but it doesn't fit ACPI well, as we don't have all our affinity data up-front.

In preparation for ACPI support, fold the IRQ request/free into arm_pmu_device_probe(), which will remain specific to platform/DT probing.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Mark Rutland
Currently we have functions to request/free all IRQs for a given PMU. While this works today, this won't work for ACPI, where we don't know the full set of IRQs up front, and need to request them separately.

To enable supporting ACPI, this patch splits out the cpu-local request/free into new functions, allowing us to request/free individual IRQs.

As this makes it possible/necessary to request a PPI once per cpu, an additional check is added to detect mismatched PPIs. This shouldn't matter for the DT / platform case, as we check this when parsing.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Mark Rutland
For historical reasons, portions of the arm_pmu code use a cpu_pmu_ prefix rather than an armpmu_ prefix. While a minor annoyance, this hasn't been a problem thus far. However, to enable ACPI support, we'll need to expose a few things in header files, and we should aim to keep those consistently namespaced.

In preparation for exporting our IRQ request/free functions, rename these to have an armpmu_ prefix. For consistency, the 'cpu_pmu' parameter is also renamed to 'armpmu'.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Mark Rutland
In armpmu_dispatch_irq() we look at arm_pmu::plat_device to acquire platdata, so that we can defer to platform-specific IRQ handling, required on some 32-bit parts. With the advent of ACPI we won't always have a platform_device, and so we must avoid trying to dereference fields from it.

This patch fixes up armpmu_dispatch_irq() to avoid doing so, introducing a new armpmu_get_platdata() helper.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
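The helper plausibly reduces to a NULL-safe dereference (a sketch; compare the real armpmu_get_platdata() in drivers/perf/arm_pmu.c):

    #include <linux/perf/arm_pmu.h>
    #include <linux/platform_device.h>

    static inline struct arm_pmu_platdata *
    armpmu_get_platdata(struct arm_pmu *armpmu)
    {
            struct platform_device *pdev = armpmu->plat_device;

            /* With ACPI there may be no platform_device at all. */
            return pdev ? dev_get_platdata(&pdev->dev) : NULL;
    }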
-
Committed by Mark Rutland
The ARM PMU framework code always uses armpmu_dispatch_irq as its common IRQ handler. Passing this down from cpu_pmu_init() is somewhat pointless, and gets in the way of refactoring.

This patch makes cpu_pmu_request_irqs() always use armpmu_dispatch_irq as the handler when requesting IRQs, and removes the handler parameter from its prototype.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Mark Rutland
Currently arm_pmu_device_probe contains probing logic specific to the platform_device infrastructure, and some logic required to safely register the PMU with various systems.

This patch factors out the logic relating to the registration of the PMU. This makes arm_pmu_device_probe a little easier to read, and will make it easier to reuse the logic for an ACPI-specific probing mechanism.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Mark Rutland
Given we always want to initialise common fields on an allocated PMU, this patch folds this common initialisation into armpmu_alloc(). This will make it simpler to reuse this code for an ACPI-specific probe path.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Mark Rutland
We expect an ARM PMU's init function to have a particular prototype, which we open-code in a few places. This is less than ideal, considering that we cast a void value to this type in one location, and a mismatch could easily be missed.

Add a typedef so that we can ensure this is consistent.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
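The typedef itself is one line; a sketch of it together with the kind of assignment it keeps honest (the of_device_id usage below is illustrative, not the exact call site):

    typedef int (*armpmu_init_fn)(struct arm_pmu *);

    /* Probing can now recover a checked function type from match data. */
    static int probe_one(const struct of_device_id *of_id,
                         struct arm_pmu *pmu)
    {
            const armpmu_init_fn init_fn = of_id->data;

            return init_fn(pmu);
    }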
-
Committed by Mark Rutland
We currently disable the PMU temporarily in armpmu_add(). We may have required this historically, but the perf core always disables an event's PMU when calling event::pmu::add(), so this is not necessary.

We don't do similarly in armpmu_del(), or elsewhere, so this is unnecessary and inconsistent, and only serves to confuse the reader.

Remove the pointless disable, simplifying armpmu_add() in the process.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 08 April 2017, 1 commit
-
-
Committed by Catalin Marinas
Merge tag 'arch-timer-errata-prereq' of git://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms into for-next/core

Pre-requisites for the arch timer errata workarounds:

- Allow checking of a CPU-local erratum
- Add CNTVCT_EL0 trap handler
- Define Cortex-A73 MIDR
- Allow an erratum to be matched for all revisions of a core
- Add capability to advertise Cortex-A73 erratum 858921

* tag 'arch-timer-errata-prereq' of git://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms:
  arm64: cpu_errata: Add capability to advertise Cortex-A73 erratum 858921
  arm64: cpu_errata: Allow an erratum to be match for all revisions of a core
  arm64: Define Cortex-A73 MIDR
  arm64: Add CNTVCT_EL0 trap handler
  arm64: Allow checking of a CPU-local erratum
-
- 07 April 2017, 5 commits
-
-
Committed by Marc Zyngier
In order to work around Cortex-A73 erratum 858921 in a subsequent patch, add the required capability that advertises the erratum. As the configuration option it depends on is not present yet, this has no immediate effect.

Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Committed by Marc Zyngier
Some minor errata may not be fixed in further revisions of a core, leading to a situation where the workaround needs to be updated each time an updated core is released.

Introduce a MIDR_ALL_VERSIONS match helper that will work for all versions of that MIDR, once and for all.

Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
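The helper can be expressed as a designated-initializer fragment, patterned after the existing MIDR_RANGE() entries in cpu_errata.c (field names are assumed to be those of struct arm64_cpu_capabilities at the time):

    #define MIDR_ALL_VERSIONS(model)                                \
            .def_scope = SCOPE_LOCAL_CPU,                           \
            .matches = is_affected_midr_range,                      \
            .midr_model = model,                                    \
            .midr_range_min = 0,                                    \
            .midr_range_max = (MIDR_VARIANT_MASK | MIDR_REVISION_MASK)

An arm64_errata[] entry can then say MIDR_ALL_VERSIONS(MIDR_CORTEX_A73) instead of spelling out a min/max revision pair.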
-
Committed by Marc Zyngier
As we're about to introduce a new workaround that is specific to Cortex-A73, let's define the corresponding MIDR.

Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Committed by Marc Zyngier
Since people seem to make a point of breaking the userspace-visible counter, we have no choice but to trap the access. Add the required handler.

Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Committed by Marc Zyngier
this_cpu_has_cap() only checks the feature array, and not the errata one. In order to be able to check for a CPU-local erratum, allow it to inspect the latter as well. This is consistent with cpus_have_cap()'s behaviour, which includes errata already.

Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
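A sketch of the extended lookup (simplified from the cpufeature code; the real arrays are arm64_features[] and arm64_errata[]):

    static bool __this_cpu_has_cap(const struct arm64_cpu_capabilities *caps,
                                   unsigned int cap)
    {
            for (; caps->matches; caps++)
                    if (caps->capability == cap &&
                        caps->matches(caps, SCOPE_LOCAL_CPU))
                            return true;
            return false;
    }

    bool this_cpu_has_cap(unsigned int cap)
    {
            /* Consult both arrays, not just the features. */
            return __this_cpu_has_cap(arm64_features, cap) ||
                   __this_cpu_has_cap(arm64_errata, cap);
    }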
-