- 26 April 2016, 5 commits
-
-
Committed by Ard Biesheuvel

When building a relocatable kernel, we currently rely on the fact that early 64-bit literal loads need to be deferred to after the relocation has been performed only if they involve symbol references, and not if they involve assembly-time constants. While this is not an unreasonable assumption to make, it is better to switch to movk/movz sequences, since these are guaranteed to be resolved at link time, simply because there are no dynamic relocation types to describe them.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Ard Biesheuvel
Implement a macro mov_q that can be used to move an immediate constant into a 64-bit register, using between 2 and 4 movz/movk instructions (depending on the operand).

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
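As a rough illustration of why between 2 and 4 instructions always suffice, here is a hypothetical C model of the expansion strategy (not the actual assembler macro; the real mov_q also exploits sign extension to save instructions for constants with all-ones upper halves):

    #include <stdint.h>
    #include <stdio.h>

    /* Pick the highest 16-bit chunk that movz must set (zeroing the rest
     * of the register), then fill in every lower chunk with movk. */
    static int mov_q_emit(uint64_t val)
    {
        int top, count = 0;

        if ((val >> 32) == 0)
            top = 1;        /* fits in 32 bits: 2 instructions */
        else if ((val >> 48) == 0)
            top = 2;        /* fits in 48 bits: 3 instructions */
        else
            top = 3;        /* full 64-bit constant: 4 instructions */

        for (int g = top; g >= 0; g--) {
            printf("%s x0, #0x%04x, lsl #%d\n",
                   g == top ? "movz" : "movk",
                   (unsigned)((val >> (16 * g)) & 0xffff), 16 * g);
            count++;
        }
        return count;
    }

    int main(void)
    {
        return mov_q_emit(0xffff000008080000ULL) == 4 ? 0 : 1;
    }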
-
Committed by Ard Biesheuvel
Refactor the relocation processing so that the code executes from the ID map while accessing the relocation tables via the virtual mapping. This way, we can use literals containing virtual addresses as before, instead of having to use convoluted absolute expressions.

For symmetry with the secondary code path, the relocation code and the subsequent jump to the virtual entry point are implemented in a function called __primary_switch(), and __mmap_switched() is renamed to __primary_switched(). Also, the call sequence in stext() is aligned with the one in secondary_startup(), by replacing the awkward 'adr_l lr' and 'b cpu_setup' sequence with a simple branch and link.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Ard Biesheuvel
We can simply use a relocated 64-bit literal to store the address of __secondary_switched(), and the relocation code will ensure that it holds the correct value at secondary entry time, as long as we make sure that the literal is not dereferenced until after we have enabled the MMU. So jump via a small __secondary_switch() function covered by the ID map that performs the literal load and branch-to-register.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Ard Biesheuvel
This unexports some symbols from head.S that are only used locally.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 25 April 2016, 9 commits
-
-
Committed by Suzuki K Poulose

maxcpus=n sets the number of CPUs activated at boot time to a maximum of n, while allowing the remaining CPUs to be brought up later if the user decides to do so. However, on arm64, for various reasons, we disallowed hotplugging CPUs beyond n by marking them not present. Now that we have checks in place to make sure the hotplugged CPUs have features compatible with the system and require no new errata workarounds, relax the restriction.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: James Morse <james.morse@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Suzuki K Poulose

CPU errata workarounds are detected and applied to the kernel code at boot time, and the data is then freed up. If a newly hotplugged CPU requires a workaround which was not applied at boot time, there is nothing we can do but simply fail to bring it online.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
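A minimal sketch of the resulting late-CPU check (kernel-context code; the names mirror the arm64 cpufeature framework, but treat the details as illustrative):

    /* A hotplugged CPU walks the errata table and dies early if it
     * matches a workaround that the boot-time pass did not enable.
     * The alternative code patching it would need has been freed. */
    static void verify_local_cpu_errata(const struct arm64_cpu_capabilities *caps)
    {
        for (; caps->matches; caps++)
            if (!cpus_have_cap(caps->capability) &&
                caps->matches(caps, SCOPE_LOCAL_CPU)) {
                pr_crit("CPU%d: requires workaround not applied at boot (%s)\n",
                        smp_processor_id(), caps->desc);
                cpu_die_early();
            }
    }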
-
Committed by Marc Zyngier
When introducing the whole CPU feature detection framework, we lost the capability to detect a mismatched GIC configuration (using the GICv2 MMIO interface, but having the system register interface enabled). In order to solve this, use the new this_cpu_has_cap() helper. Also move the check to the CPU interface path in order to catch systems where the first CPU has been correctly configured, but the secondaries are not.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Marc Zyngier

Now that the capabilities are only available once all the CPUs have booted, we're unable to check for a particular feature in any subsystem that gets initialized before then. In order to support this, introduce a this_cpu_has_cap() function that tests for the presence of a given capability independently of the whole framework.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
[ Added preemptible() check ]
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
[will: remove duplicate initialisation of caps in this_cpu_has_cap]
Signed-off-by: Will Deacon <will.deacon@arm.com>
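A sketch of what such a helper looks like (simplified from the eventual mainline code; the preemptible() check mentioned in the tags guards against the caller being migrated between querying and acting on the result):

    bool this_cpu_has_cap(const struct arm64_cpu_capabilities *caps,
                          unsigned int cap)
    {
        /* The answer describes the *current* CPU only, so the caller
         * must not be preemptible while relying on it. */
        if (WARN_ON(preemptible()))
            return false;

        for (; caps->matches; caps++)
            if (caps->capability == cap &&
                caps->matches(caps, SCOPE_LOCAL_CPU))
                return true;

        return false;
    }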
-
Committed by Suzuki K Poulose

Add a scope parameter to arm64_cpu_capabilities::matches(), so that it can be reused for checking a capability on a given CPU vs. system wide. The system uses the default scope associated with the capability for initialising the CPU_HWCAPs and ELF_HWCAPs.

Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
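The shape of the change, as a sketch (field and constant names follow the arm64 framework, but this is a simplified rendering, not the exact patch):

    enum {
        SCOPE_SYSTEM,       /* match against the sanitised, system-wide ID values */
        SCOPE_LOCAL_CPU,    /* match against the current CPU's own ID registers */
    };

    struct arm64_cpu_capabilities {
        const char *desc;
        u16 capability;
        int def_scope;      /* default scope, used for CPU_HWCAP/ELF_HWCAP setup */
        bool (*matches)(const struct arm64_cpu_capabilities *caps, int scope);
    };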
-
Committed by Mark Rutland
Improve the readability of dt_scan_depth1_nodes by removing the nested conditionals.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Stefano Stabellini <sstabellini@kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Shannon Zhao

When Xen Dom0 boots with ACPI, Xen supplies a /chosen and a /hypervisor node in the DT. So check for this combination to decide whether ACPI should be enabled.

Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Hanjun Guo <hanjun.guo@linaro.org>
Tested-by: Julien Grall <julien.grall@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Ard Biesheuvel
Annotate the KASAN shadow region with boundary markers, so that its mappings stand out in the page table dumper output.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Ard Biesheuvel
There is no need to initialize the vmemmap region boundaries dynamically, since they are compile time constants. So just add these constants to the global struct initializer, and drop the dynamic assignment and related code.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 22 April 2016, 5 commits
-
-
Committed by Ard Biesheuvel

The open coded conversion from struct page address to virtual address in lowmem_page_address() involves an intermediate conversion step to pfn number/physical address. Since the placement of the struct page array relative to the linear mapping may be completely independent from the placement of physical RAM (as is the case for arm64 after commit dfd55ad8 'arm64: vmemmap: use virtual projection of linear region'), the conversion to physical address and back again should factor out of the equation, but unfortunately, the shifting and pointer arithmetic involved prevent this from happening, and the resulting calculation essentially subtracts the address of the start of physical memory and adds it back again, in a way that prevents the compiler from optimizing it away.

Since the start of physical memory is not a build time constant on arm64, the resulting conversion involves an unnecessary memory access, which we would like to get rid of. So replace the open coded conversion with a call to page_to_virt(), and use the open coded conversion as its default definition, to be overridden by the architecture, if desired. The existing arch specific definitions of page_to_virt are all equivalent to this default definition, so by itself this patch is a no-op.

Acked-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
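The generic fallback introduced here is, in essence, the old open-coded form wrapped in an overridable macro (a sketch of the resulting definitions):

    /* Default definition; an architecture can provide its own. */
    #ifndef page_to_virt
    #define page_to_virt(x)  __va(PFN_PHYS(page_to_pfn(x)))
    #endif

    static __always_inline void *lowmem_page_address(const struct page *page)
    {
        /* arm64 can now override page_to_virt() to avoid the round
         * trip through the physical address entirely. */
        return page_to_virt(page);
    }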
-
Committed by Ard Biesheuvel
To align with generic code and other architectures that expect the macro page_to_virt to produce an expression whose type is 'void*', drop the arch specific definition, which is never referenced anyway.

Acked-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Ard Biesheuvel

To align with other architectures, the expression produced by expanding the macro page_to_virt() should be of type void*, since it returns a virtual address. Fix that, and also fix up an instance where page_to_virt was expected to return 'unsigned long', and drop another instance that was entirely unused (page_to_bus).

Acked-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Robin Murphy
With the IOMMU core now taking care of default domains for groups regardless of bus type, we can gleefully rip out this stop-gap, as slight recompense for having to expand the other one.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Robin Murphy
PCI devices now suffer the same hiccup as platform devices, in that they get their DMA ops configured before they have been added to their bus, and thus before we know whether they have successfully registered with an IOMMU or not. Until the necessary driver core changes to reorder calls during device creation have been worked out, extend our delayed notifier trick onto the PCI bus so as to avoid broken DMA ops once IOMMUs get plugged into the PCI code.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 20 April 2016, 10 commits
-
-
Committed by Suzuki K Poulose
Make sure we have AArch32 state available for running COMPAT binaries and also for switching the personality to PER_LINUX32.

Signed-off-by: Yury Norov <ynorov@caviumnetworks.com>
[ Added cap bit, checks for HWCAP, personality ]
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Tested-by: Yury Norov <ynorov@caviumnetworks.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Suzuki K Poulose
Add a cpu_hwcap bit for keeping track of the support for 32-bit EL0.

Tested-by: Yury Norov <ynorov@caviumnetworks.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Suzuki K Poulose

On ARMv8, support for AArch32 state is optional. Hence it is not safe to check the AArch32 ID registers for sanity, which could lead to false warnings. This patch makes sure that the AArch32 state is implemented before we keep track of the 32-bit ID registers. As per the ARM ARM (D.1.21.2 - Support for Exception Levels and Execution States, DDI0487A.h), checking the support for AArch32 at EL0 is good enough to check the support for AArch32 (i.e., AArch32 at EL1 => AArch32 at EL0, but not vice versa).

Tested-by: Yury Norov <ynorov@caviumnetworks.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Suzuki K Poulose

Add a helper to extract the support for AArch32 at EL0.

Tested-by: Yury Norov <ynorov@caviumnetworks.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
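The helper boils down to extracting the EL0 field of ID_AA64PFR0_EL1 and checking for the "AArch64 and AArch32" encoding (a sketch; the constant names follow kernel conventions, but treat the details as illustrative):

    /* ID_AA64PFR0_EL1.EL0 == 0b0010 means EL0 supports both AArch64
     * and AArch32. */
    static inline bool id_aa64pfr0_32bit_el0(u64 pfr0)
    {
        u32 val = cpuid_feature_extract_field(pfr0, ID_AA64PFR0_EL0_SHIFT);

        return val == ID_AA64PFR0_EL0_32BIT_64BIT;
    }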
-
Committed by Suzuki K Poulose

In order to handle systems which do not support 32-bit at EL0, split the COMPAT HWCAP entries into a separate table which is processed only if the support is available.

Tested-by: Yury Norov <ynorov@caviumnetworks.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Suzuki K Poulose

We use 'hwcaps' to refer to the ELF HWCAP capability information. However, this can be confused with 'cpu_hwcaps', which stands for the CPU capability bit field. This patch cleans up the names to make them a bit more readable.

Tested-by: Yury Norov <ynorov@caviumnetworks.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Mark Rutland
We haven't used the push/pop macros for a while now, as it's typically better to use immediate offsets for batches of accesses to the stack, as we now do in the entry assembly for the kernel and hyp code. Remove the unused macros.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Yang Shi

HAVE_ARCH_TRANSPARENT_HUGEPAGE is already defined in arch/Kconfig; the ARM64 version is identical to it and the default value is Y. So remove the redundant definition and just select it under CONFIG_ARM64.

Signed-off-by: Yang Shi <yang.shi@linaro.org>
[will: sort into alphabetical order whilst I'm resolving conflicts]
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Kefeng Wang

Show the bss segment information alongside text and data in the virtual kernel memory layout.

Acked-by: James Morse <james.morse@arm.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Kefeng Wang

Print each line of the virtual kernel memory layout with a single pr_cont(); otherwise the dump of the kernel memory layout in dmesg is misaligned when PRINTK_TIME is enabled, due to the missing timestamps on continuation lines.

Tested-by: James Morse <james.morse@arm.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 19 April 2016, 3 commits
-
-
Committed by Arnd Bergmann

memblock_remove() takes a phys_addr_t, which may be narrower than 64 bits, causing a harmless warning:

    drivers/firmware/efi/arm-init.c: In function 'reserve_regions':
    include/linux/kernel.h:29:20: error: large integer implicitly truncated to unsigned type [-Werror=overflow]
     #define ULLONG_MAX (~0ULL)
                        ^
    drivers/firmware/efi/arm-init.c:152:21: note: in expansion of macro 'ULLONG_MAX'
      memblock_remove(0, ULLONG_MAX);

This adds an explicit typecast to avoid the warning.

Fixes: 500899c2 ("efi: ARM/arm64: ignore DT memory nodes instead of removing them")
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Will Deacon <will.deacon@arm.com>
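The change itself is a one-line cast at the call site, sketched here:

    /* Before: ULLONG_MAX is implicitly truncated when phys_addr_t is
     * narrower than 64 bits, which trips -Werror=overflow. */
    memblock_remove(0, ULLONG_MAX);

    /* After: the truncation is explicit and intentional. */
    memblock_remove(0, (phys_addr_t)ULLONG_MAX);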
-
Committed by Jan Glauber

When CPUs are stopped during an abnormal operation like panic, a line is printed for each CPU and its stack trace is dumped. This information is only interesting for the aborting CPU, and on systems with many CPUs it only makes debugging harder when, after the aborting CPU, the log is flooded with data about all the other CPUs too. Therefore remove the stack dump and printk for the other CPUs, and only print a single line saying that the other CPUs are going to be stopped and, in case any CPUs remain online, list them.

Signed-off-by: Jan Glauber <jglauber@cavium.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Huang Shijie

We already re-enable interrupts where necessary in the entry code, so there is no need to do it again in do_page_fault(). This patch removes the redundant code.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Huang Shijie <shijie.huang@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 16 April 2016, 8 commits
-
-
Committed by Catalin Marinas

When hardware updates of the access and dirty states are enabled, the default ptep_set_access_flags() implementation based on calling set_pte_at() directly is potentially racy. This triggers the "racy dirty state clearing" warning in set_pte_at() because an existing writable PTE is overridden with a clean entry.

There are two main scenarios for this situation:

1. The CPU getting an access fault does not support hardware updates of the access/dirty flags. However, a different agent in the system (e.g. SMMU) can do this, therefore overriding a writable entry with a clean one could potentially lose the automatically updated dirty status.

2. A more complex situation is possible when all CPUs support hardware AF/DBM:

   a) Initial state: shareable + writable vma and pte_none(pte)
   b) Read fault taken by two threads of the same process on different CPUs
   c) CPU0 takes the mmap_sem and proceeds to handling the fault. It eventually reaches do_set_pte() which sets a writable + clean pte. CPU0 releases the mmap_sem
   d) CPU1 acquires the mmap_sem and proceeds to handle_pte_fault(). The pte entry it reads is present, writable and clean and it continues to pte_mkyoung()
   e) CPU1 calls ptep_set_access_flags()

   If between (d) and (e) the hardware (another CPU) updates the dirty state (clears PTE_RDONLY), CPU1 will override the PTE_RDONLY bit, marking the entry clean again.

This patch implements an arm64-specific ptep_set_access_flags() function to perform an atomic update of the PTE flags.

Fixes: 2f4b829c ("arm64: Add support for hardware updates of the access and dirty pte bits")
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Ming Lei <tom.leiming@gmail.com>
Tested-by: Julien Grall <julien.grall@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: <stable@vger.kernel.org> # 4.3+
[will: reworded comment]
Signed-off-by: Will Deacon <will.deacon@arm.com>
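Conceptually, the fix replaces a blind store with an atomic read-modify-write of the PTE. The sketch below renders the idea as a C cmpxchg() loop; the actual arm64 patch uses an ldxr/stxr loop in inline assembly and treats the PTE_RDONLY/dirty interaction more carefully than shown here:

    static int ptep_set_access_flags_sketch(pte_t *ptep, pte_t entry)
    {
        /* Only the access/permission bits may be updated by this path. */
        const pteval_t mask = PTE_AF | PTE_WRITE | PTE_RDONLY;
        pteval_t old, new;

        do {
            old = pte_val(*ptep);
            new = (old & ~mask) | (pte_val(entry) & mask);
            /* If hardware changed the entry since we read it, the
             * cmpxchg fails and we retry against the fresh value
             * instead of silently overwriting the update. */
        } while (cmpxchg(&pte_val(*ptep), old, new) != old);

        return old != new;
    }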
-
Committed by Ganapatrao Kulkarni
Enable NUMA balancing for arm64 platforms. Add pte, pmd protnone helpers for use by automatic NUMA balancing.

Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Robert Richter <rrichter@cavium.com>
Signed-off-by: Ganapatrao Kulkarni <gkulkarni@caviumnetworks.com>
Signed-off-by: David Daney <david.daney@cavium.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
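The helpers themselves are tiny (reconstructed from the mainline commit; treat as a sketch): a PROT_NONE entry carries the software PTE_PROT_NONE bit while being invalid to the hardware walker.

    #ifdef CONFIG_NUMA_BALANCING
    /* See the comment in include/asm-generic/pgtable.h: these entries
     * are still reported as present by pte_present(). */
    static inline int pte_protnone(pte_t pte)
    {
        return (pte_val(pte) & (PTE_VALID | PTE_PROT_NONE)) == PTE_PROT_NONE;
    }

    static inline int pmd_protnone(pmd_t pmd)
    {
        return pte_protnone(pmd_pte(pmd));
    }
    #endif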
-
Committed by Ganapatrao Kulkarni

Attempt to get the memory and CPU NUMA node via of_numa. If that fails, fall back to the dummy NUMA node and map all memory and CPUs to node 0.

Tested-by: Shannon Zhao <shannon.zhao@linaro.org>
Reviewed-by: Robert Richter <rrichter@cavium.com>
Signed-off-by: Ganapatrao Kulkarni <gkulkarni@caviumnetworks.com>
Signed-off-by: David Daney <david.daney@cavium.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
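The fallback ordering reads roughly as follows (a simplified sketch of the init path, not the exact code):

    void __init arm64_numa_init(void)
    {
        /* Try the device-tree description first... */
        if (!numa_off && !numa_init(of_numa_init))
            return;

        /* ...then fall back to a single dummy node covering all
         * memory and CPUs. */
        numa_init(dummy_numa_init);
    }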
-
Committed by David Daney

In order to extract NUMA information from the device tree, we need to have the tree in its unflattened form. Move the call to bootmem_init() in the tail of paging_init() into setup_arch(), and adjust header files so that its declaration is visible. Move the unflatten_device_tree() call between the calls to paging_init() and bootmem_init(). Follow-on patches add NUMA handling to bootmem_init().

Signed-off-by: David Daney <david.daney@cavium.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by David Daney

Add device tree parsing for NUMA topology, using the "numa-node-id" device property in distance-map and cpu nodes. This is a complete rewrite of a previous patch by Ganapatrao Kulkarni <gkulkarni@caviumnetworks.com>.

Signed-off-by: David Daney <david.daney@cavium.com>
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Ganapatrao Kulkarni

Add DT bindings for the NUMA mapping of memory, CPUs and IOs.

Reviewed-by: Robert Richter <rrichter@cavium.com>
Signed-off-by: Ganapatrao Kulkarni <gkulkarni@caviumnetworks.com>
Signed-off-by: David Daney <david.daney@cavium.com>
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Ard Biesheuvel
There are two problems with the UEFI stub DT memory node removal routine:

- it deletes nodes as it traverses the tree, which happens to work but is not supported, as deletion invalidates the node iterator;
- deleting memory nodes entirely may discard annotations in the form of additional properties on the nodes.

Since the discovery of DT memory nodes occurs strictly before the UEFI init sequence, we can simply clear the memblock memory table before parsing the UEFI memory map. This way, it is no longer necessary to remove the nodes, so we can remove that logic from the stub as well.

Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Acked-by: Steve Capper <steve.capper@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: David Daney <david.daney@cavium.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Suzuki K Poulose

With a VHE capable CPU, the kernel can run at EL2, and this is decided early at boot. If some of the CPUs didn't start at EL2 or don't have VHE, we could have CPUs running at different exception levels, all in the same kernel! This patch adds an early check for the secondary CPUs to detect such situations.

For each non-boot CPU, add a sanity check to make sure we don't have different run levels w.r.t. the boot CPU. We save the information on whether the boot CPU is running in hyp mode or not and ensure the remaining CPUs match it.

Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
[will: made boot_cpu_hyp_mode static]
Signed-off-by: Will Deacon <will.deacon@arm.com>
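A sketch of the mechanism (boot_cpu_hyp_mode is the variable named in the tags above; the function names here are illustrative): the boot CPU records whether it entered the kernel at EL2, and each secondary compares its own entry level against that record before proceeding.

    static bool boot_cpu_hyp_mode;

    /* Called once on the boot CPU during early setup. */
    static void save_boot_cpu_run_el(void)
    {
        boot_cpu_hyp_mode = is_hyp_mode_available();
    }

    /* Called by each secondary: a mismatch with the boot CPU's entry
     * level fails that CPU's bring-up early. */
    static bool secondary_matches_boot_cpu_el(void)
    {
        return is_hyp_mode_available() == boot_cpu_hyp_mode;
    }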
-