- 31 Aug 2020, 3 commits
-
-
Submitted by Andrew Pinski

hulk inclusion
category: feature
bugzilla: NA
CVE: NA

This patch adds the config option for ILP32.

Signed-off-by: Andrew Pinski <Andrew.Pinski@caviumnetworks.com>
Signed-off-by: Philipp Tomsich <philipp.tomsich@theobroma-systems.com>
Signed-off-by: Christoph Muellner <christoph.muellner@theobroma-systems.com>
Signed-off-by: Yury Norov <ynorov@caviumnetworks.com>
Reviewed-by: David Daney <ddaney@caviumnetworks.com>
Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
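For reference, a minimal sketch of the kind of Kconfig entry this introduces; the prompt and help text are illustrative assumptions, not copied from the patch:

    # illustrative sketch only, not the literal hunk from the patch
    config ARM64_ILP32
            bool "Kernel support for ILP32"
            help
              Enable the ILP32 user ABI: 32-bit pointers and longs
              running on top of the 64-bit arm64 kernel.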
-
Submitted by Yury Norov

hulk inclusion
category: feature
bugzilla: NA
CVE: NA

As we support more than one compat format, it looks more reasonable not to use fs/compat_binfmt.c. A custom binfmt_elf32.c allows us to move the aarch32-specific definitions there and makes the code more maintainable and readable.

Signed-off-by: Yury Norov <ynorov@caviumnetworks.com>
Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Andrew Pinski

hulk inclusion
category: feature
bugzilla: NA
CVE: NA

In this patchset ILP32 ABI support is added. In addition to AARCH32, which is binary-compatible with ARM, ILP32 is (mostly) ABI-compatible. From now on, the AARCH32_EL0 (formerly COMPAT) config option means support for AARCH32 userspace, ARM64_ILP32 means support for the ILP32 ABI (see the following patches), and COMPAT indicates that one of them, or both, is enabled. Where needed, CONFIG_COMPAT is changed over to use CONFIG_AARCH32_EL0 instead.

Reviewed-by: David Daney <ddaney@caviumnetworks.com>
Signed-off-by: Andrew Pinski <Andrew.Pinski@caviumnetworks.com>
Signed-off-by: Yury Norov <ynorov@caviumnetworks.com>
Signed-off-by: Philipp Tomsich <philipp.tomsich@theobroma-systems.com>
Signed-off-by: Christoph Muellner <christoph.muellner@theobroma-systems.com>
Signed-off-by: Bamvor Jian Zhang <bamv2005@gmail.com>

Conflicts: arch/arm64/kernel/cpufeature.c
[wangxiongfeng: conflicts below 'arm64_cpu_capabilities compat_elf_hwcaps' because we have the following commit:
119703e850 arm64: cpufeature: Set the FP/SIMD compat HWCAP bits properly
Fix conflicts by only changing 'CONFIG_COMPAT' to 'CONFIG_AARCH32_EL0'.]

Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
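The relationship described above can be sketched in Kconfig terms as follows; the prompt, default and exact dependencies are assumptions based on the description, not quoted from the patch:

    # illustrative sketch: COMPAT becomes an umbrella for either ABI
    config AARCH32_EL0
            bool "Kernel support for 32-bit (AArch32) EL0"
            default y

    config COMPAT
            def_bool y
            depends on AARCH32_EL0 || ARM64_ILP32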
-
- 01 Jun 2020, 1 commit
-
-
Submitted by Wang ShaoBo

hulk inclusion
category: feature
bugzilla: 28055
CVE: NA

Rename mpam_late_init() in arch/arm64/kernel/mpam.c to mpam_init(). Walk each MPAM ACPI cache/memory node and add it to a list; cache nodes are labelled with the NUMA node id they belong to, and memory nodes with their proximity_domain. Once the walk completes, call mpam_init() to initialize everything as before.

Code was partially borrowed from James's:
http://www.linux-arm.org/git?p=linux-jm.git;a=commit;h=10fe7d6363ae96b25f584d4a91f9d0f2fd5faf3b
"ACPI / MPAM: Parse the (draft) MPAM table [dead]"

v3->v5: mpam.c in drivers/acpi/arm64 should not be compiled when MPAM is disabled, so we add a CONFIG_ACPI_MPAM macro and make CONFIG_MPAM select it. Moreover, as the MPAM init procedure is strongly tied to ACPI for now (a follow-up might depend on the device tree) and CONFIG_ACPI is not always selected, we make CONFIG_MPAM depend on CONFIG_ACPI before selecting CONFIG_ACPI_MPAM.

Signed-off-by: Wang ShaoBo <bobo.shaobowang@huawei.com>
Reviewed-by: Xie XiuQi <xiexiuqi@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
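Based on the v3->v5 note above, the dependency chain looks roughly like the sketch below; the prompt text and file placement are illustrative, only the depends/select relationship is taken from the description:

    # illustrative sketch of the described dependency chain
    # drivers/acpi/arm64
    config ACPI_MPAM
            bool

    # arch/arm64/Kconfig
    config MPAM
            bool "Memory Partitioning And Monitoring support"
            depends on ACPI
            select ACPI_MPAM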
-
- 27 Dec 2019, 18 commits
-
-
Submitted by Mian Yousaf Kaukab

[ Upstream commit 61ae1321 ]

Enable the CPU vulnerability show functions for spectre_v1, spectre_v2, meltdown and store-bypass.

Signed-off-by: Mian Yousaf Kaukab <ykaukab@suse.de>
Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Stefan Wahren <stefan.wahren@i2se.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Julien Thierry

mainline inclusion from mainline-5.3-rc1
commit 48ce8f80f590
category: bugfix
bugzilla: 23209
CVE: NA

Using IRQ priority masking to enable/disable interrupts is a bit sensitive as it requires dealing with both ICC_PMR_EL1 and PSR.I. Introduce some validity checks to both highlight the states in which functions dealing with IRQ enabling/disabling can (not) be called, and bark a warning when called in an unexpected state. Since these checks are done on hotpaths, introduce a build option to choose whether to do the checking.

Cc: Will Deacon <will.deacon@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Wei Li <liwei391@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Wei Li

hulk inclusion
category: feature
bugzilla: 13227
CVE: NA

Enabling CNA is controlled via a new configuration option (NUMA_AWARE_SPINLOCKS). Add it for arm64 support.

Signed-off-by: Wei Li <liwei391@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Wei Li

hulk inclusion
category: feature
bugzilla: 13227
CVE: NA

This reverts commit 57b8f63a0a21 ("qspinlock: numaware: Add ARM64 support").

Signed-off-by: Wei Li <liwei391@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Hanjun Guo

hulk inclusion
category: feature
bugzilla: 13227
CVE: NA

Enabling CNA is controlled via a new configuration option (NUMA_AWARE_SPINLOCKS). Add it for arm64 support.

Signed-off-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Wei Li <liwei391@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Ard Biesheuvel

mainline inclusion from mainline-4.20-rc1
commit: 7481cddf
category: feature
feature: accelerated crc32 routines
bugzilla: 13702
CVE: NA

Unlike crc32c(), which is wired up to the crypto API internally so the optimal driver is selected based on the platform's capabilities, crc32_le() is implemented as a library function using a slice-by-8 table based C implementation. Even though few of the call sites may be bottlenecks, calling a time-variant implementation with a non-negligible D-cache footprint is a bit of a waste, given that ARMv8.1 and up mandates support for the CRC32 instructions that were optional in ARMv8.0, but are already widely available, even on the Cortex-A53 based Raspberry Pi.

So implement routines that use these instructions if available, and fall back to the existing generic routines otherwise. The selection is based on alternatives patching. Note that this unconditionally selects CONFIG_CRC32 as a builtin. Since CRC32 is relied upon by core functionality such as CONFIG_OF_FLATTREE, this just codifies the status quo.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Xie XiuQi <xiexiuqi@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Will Deacon

mainline inclusion from mainline-4.20-rc1
commit: ace8cb75
category: feature
feature: Reduce synchronous TLB invalidation on ARM64
bugzilla: NA
CVE: NA

By selecting HAVE_RCU_TABLE_INVALIDATE, we can rely on tlb_flush() being called if we fail to batch table pages for freeing. This in turn allows us to postpone walk-cache invalidation until tlb_finish_mmu(), which avoids lots of unnecessary DSBs and means we can shoot down the ASID if the range is large enough.

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Xuefeng Wang <wxf.wang@hisilicon.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Miles Chen

[ Upstream commit 0c1f14ed ]

This change makes CONFIG_ZONE_DMA32 default y and allows users to overwrite it only when CONFIG_EXPERT=y. For the SoCs that do not need CONFIG_ZONE_DMA32, this is the first step towards managing all available memory with a single zone (the normal zone) to reduce the overhead of multiple zones.

The change also fixes a build error when CONFIG_NUMA=y and CONFIG_ZONE_DMA32=n:

arch/arm64/mm/init.c:195:17: error: use of undeclared identifier 'ZONE_DMA32'
max_zone_pfns[ZONE_DMA32] = PFN_DOWN(max_zone_dma_phys());

Changes since v1:
1. only expose CONFIG_ZONE_DMA32 when CONFIG_EXPERT=y
2. remove redundant IS_ENABLED(CONFIG_ZONE_DMA32)

Cc: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Miles Chen <miles.chen@mediatek.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
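The described behaviour (default y, user-overridable only with CONFIG_EXPERT=y) maps onto the standard Kconfig idiom of a conditional prompt; a sketch, with the prompt wording assumed:

    # illustrative sketch: prompt only visible under EXPERT
    config ZONE_DMA32
            bool "Support DMA32 zone" if EXPERT
            default y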
-
Submitted by Jia He

hulk inclusion
category: performance
bugzilla: 11028
CVE: NA

Make CONFIG_HAVE_MEMBLOCK_PFN_VALID a new config option so that memblock_next_valid_pfn() can be moved to a generic code file. All the later optimizations are based on this config. The memblock initialization time on arm/arm64 can benefit from this.

Signed-off-by: Jia He <jia.he@hxt-semitech.com>
Reviewed-by: Pavel Tatashin <pasha.tatashin@oracle.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Reviewed-by: zhong jiang <zhongjiang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Will Deacon

commit 969f5ea6 upstream.

Revisions of the Cortex-A76 CPU prior to r4p0 are affected by an erratum that can prevent interrupts from being taken when single-stepping. This patch implements a software workaround to prevent userspace from effectively being able to disable interrupts.

Cc: <stable@vger.kernel.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

Conflicts: arch/arm64/include/asm/cpucaps.h
[yyl: adjust context]

Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Wei Li

hulk inclusion
category: bugfix
bugzilla: 13794
CVE: NA

Add a new config option, CONFIG_PMU_WATCHDOG, for configuring the watchdog implementation method.

Signed-off-by: Wei Li <liwei391@huawei.com>
Reviewed-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Wei Li

hulk inclusion
category: feature
bugzilla: 12805
CVE: NA

This patch is based on the "arm64: perf: add nmi support for pmu" patch series. This feature is an alternative to the SDEI NMI watchdog. As we cannot get the CPU frequency, the kernel cmdline parameter hardlockup_cpu_freq=xxx (in kHz) currently needs to be passed by the user. If it is not passed, the perf NMI watchdog is disabled by default.

Signed-off-by: Wei Li <liwei391@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Robin Murphy

mainline inclusion from mainline-5.0-rc1
commit 4ab215061554ae2a4b78744a5dd3b3c6639f16a7
category: feature
bugzilla: NA
CVE: NA

Wire up the basic support for hot-adding memory. Since memory hotplug is fairly tightly coupled to sparsemem, we tweak pfn_valid() to also cross-check the presence of a section in the manner of the generic implementation, before falling back to memblock to check for no-map regions within a present section as before. By having arch_add_memory() create the linear mapping first, this then makes everything work in the way that __add_section() expects. We expect hotplug to be ACPI-driven, so the swapper_pg_dir updates should be safe from races by virtue of the global device hotplug lock.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Jing xiangfeng <jingxiangfeng@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
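On the Kconfig side, wiring up hot-add support amounts to asserting the generic capability symbol from arch/arm64/Kconfig; a sketch, assuming the symbol name used by the generic memory-hotplug code:

    # illustrative sketch; symbol name assumed from the generic hotplug code
    config ARCH_ENABLE_MEMORY_HOTPLUG
            def_bool y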
-
Submitted by Julien Thierry

hulk inclusion
category: feature
bugzilla: 9291
CVE: NA
ported from https://lore.kernel.org/patchwork/patch/1037480/

Add a build option and a command line parameter to build and enable the support of pseudo-NMIs.

Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Suggested-by: Daniel Thompson <daniel.thompson@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Wei Li <liwei391@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
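A sketch of what such a build option looks like; the prompt text, the GICv3 select and the help text are assumptions based on the posted series, not quoted from this port:

    # illustrative sketch only
    config ARM64_PSEUDO_NMI
            bool "Support for NMI-like interrupts"
            select ARM_GIC_V3
            help
              Use GICv3 interrupt priority masking to implement
              NMI-like interrupts (pseudo-NMIs) on arm64.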
-
Submitted by Yang Yingliang

hulk inclusion
category: feature
bugzilla: 5510
CVE: NA

Signed-off-by: Xie XiuQi <xiexiuqi@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Cheng Jian

euler inclusion
category: bugfix
bugzilla: 5507
CVE: NA

Commit a257e025 ("arm64/kernel: don't ban ADRP to work around Cortex-A53 erratum #843419") implemented the #843419 workaround for modules in a way that allows us to switch back to the small code model, so we can use livepatch when CONFIG_ARM64_ERRATUM_843419 is enabled.

Signed-off-by: Cheng Jian <cj.chengjian@huawei.com>
Reviewed-by: Li Bin <huawei.libin@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Cheng Jian

euler inclusion
category: feature
bugzilla: 5507
CVE: N/A

Support livepatch without ftrace for ARM64.

Supported now:
- livepatch relocation when init_patch after load_module
- instruction patching when a patch is enabled
- activeness function check
- enforcing the patch stacking principle
- long jump (for both livepatch relocation and instruction patching); module PLTs are requested by livepatch relocation

Signed-off-by: Cheng Jian <cj.chengjian@huawei.com>
Reviewed-by: Li Bin <huawei.libin@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Xie XiuQi

euler inclusion
category: feature
bugzilla: 5289
CVE: NA

[PATCH 1/3] arm64/ras: support sea error recovery
[PATCH 2/3] GHES: add a notify chain for process memory section
[PATCH 3/3] arm64/ras: save error address from memory section for recovery

Add RAS support.

James's solution [1] has not been applied to mainline yet, and it cannot resolve the following problems:
1) no action when memory_failure() fails;
2) no processor error processing (L2, L3 ...).
To meet the product team's requirement, we apply our own solution for now. When mainline resolves those problems, we can backport it.

[1] [v6,00/18] APEI in_nmi() rework
    https://patchwork.kernel.org/cover/10611001/

Signed-off-by: Xie XiuQi <xiexiuqi@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
- 31 Aug 2018, 1 commit
-
-
Submitted by James Morse

Commit 6d526ee2 ("arm64: mm: enable CONFIG_HOLES_IN_ZONE for NUMA") only enabled HOLES_IN_ZONE for NUMA systems because the NUMA code was choking on the missing zone for nomap pages. This problem doesn't just apply to NUMA systems.

If the architecture doesn't set HAVE_ARCH_PFN_VALID, pfn_valid() will return true if the pfn is part of a valid sparsemem section. When working with multiple pages, the mm code uses pfn_valid_within() to test that each page it uses within the sparsemem section is valid. On most systems memory comes in MAX_ORDER_NR_PAGES chunks which all have valid/initialised struct pages. In this case pfn_valid_within() is optimised out.

Systems where this isn't true (e.g. due to nomap) should set HOLES_IN_ZONE and provide HAVE_ARCH_PFN_VALID so that mm tests each page as it works with it. Currently non-NUMA arm64 systems can't enable HOLES_IN_ZONE, leading to a VM_BUG_ON():

| page:fffffdff802e1780 is uninitialized and poisoned
| raw: ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
| raw: ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
| page dumped because: VM_BUG_ON_PAGE(PagePoisoned(p))
| ------------[ cut here ]------------
| kernel BUG at include/linux/mm.h:978!
| Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
[...]
| CPU: 1 PID: 25236 Comm: dd Not tainted 4.18.0 #7
| Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
| pstate: 40000085 (nZcv daIf -PAN -UAO)
| pc : move_freepages_block+0x144/0x248
| lr : move_freepages_block+0x144/0x248
| sp : fffffe0071177680
[...]
| Process dd (pid: 25236, stack limit = 0x0000000094cc07fb)
| Call trace:
|  move_freepages_block+0x144/0x248
|  steal_suitable_fallback+0x100/0x16c
|  get_page_from_freelist+0x440/0xb20
|  __alloc_pages_nodemask+0xe8/0x838
|  new_slab+0xd4/0x418
|  ___slab_alloc.constprop.27+0x380/0x4a8
|  __slab_alloc.isra.21.constprop.26+0x24/0x34
|  kmem_cache_alloc+0xa8/0x180
|  alloc_buffer_head+0x1c/0x90
|  alloc_page_buffers+0x68/0xb0
|  create_empty_buffers+0x20/0x1ec
|  create_page_buffers+0xb0/0xf0
|  __block_write_begin_int+0xc4/0x564
|  __block_write_begin+0x10/0x18
|  block_write_begin+0x48/0xd0
|  blkdev_write_begin+0x28/0x30
|  generic_perform_write+0x98/0x16c
|  __generic_file_write_iter+0x138/0x168
|  blkdev_write_iter+0x80/0xf0
|  __vfs_write+0xe4/0x10c
|  vfs_write+0xb4/0x168
|  ksys_write+0x44/0x88
|  sys_write+0xc/0x14
|  el0_svc_naked+0x30/0x34
| Code: aa1303e0 90001a01 91296421 94008902 (d4210000)
| ---[ end trace 1601ba47f6e883fe ]---

Remove the NUMA dependency.

Link: https://www.spinics.net/lists/arm-kernel/msg671851.html
Cc: <stable@vger.kernel.org>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reported-by: Mikulas Patocka <mpatocka@redhat.com>
Reviewed-by: Pavel Tatashin <pavel.tatashin@microsoft.com>
Tested-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
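In Kconfig terms, "remove the NUMA dependency" amounts to dropping the qualifier from the option; roughly (the exact before/after lines are inferred from the description):

    # illustrative sketch
    # before
    config HOLES_IN_ZONE
            def_bool y
            depends on NUMA

    # after
    config HOLES_IN_ZONE
            def_bool y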
-
- 23 Aug 2018, 1 commit
-
-
Submitted by Ard Biesheuvel

Patch series "add support for relative references in special sections", v10.

This adds support for emitting special sections such as initcall arrays, PCI fixups and tracepoints as relative references rather than absolute references. This reduces the size by 50% on 64-bit architectures, but more importantly, it removes the need for carrying relocation metadata for these sections in relocatable kernels (e.g., for KASLR) that needs to be fixed up at boot time. On arm64, this reduces the vmlinux footprint of such a reference by 8x (8 byte absolute reference + 24 byte RELA entry vs 4 byte relative reference).

Patch #3 was sent out before as a single patch. This series supersedes the previous submission. This version makes relative ksymtab entries dependent on the new Kconfig symbol HAVE_ARCH_PREL32_RELOCATIONS rather than trying to infer from kbuild test robot replies for which architectures it should be blacklisted.

Patch #1 introduces the new Kconfig symbol HAVE_ARCH_PREL32_RELOCATIONS, and sets it for the main architectures that are expected to benefit the most from this feature, i.e., 64-bit architectures or ones that use runtime relocations.

Patch #2 adds support for #define'ing __DISABLE_EXPORTS to get rid of ksymtab/kcrctab sections in decompressor and EFI stub objects when rebuilding existing C files to run in a different context.

Patches #4 - #6 implement relative references for initcalls, PCI fixups and tracepoints, respectively, all of which produce sections with order ~1000 entries on an arm64 defconfig kernel with tracing enabled. This means we save about 28 KB of vmlinux space for each of these patches.

[From the v7 series blurb, which included the jump_label patches as well]: For the arm64 kernel, all patches combined reduce the memory footprint of vmlinux by about 1.3 MB (using a config copied from Ubuntu that has KASLR enabled), of which ~1 MB is the size reduction of the RELA section in .init, and the remaining 300 KB is reduction of .text/.data.

This patch (of 6):

Before updating certain subsystems to use place-relative 32-bit relocations in special sections, to save space and reduce the number of absolute relocations that need to be processed at runtime by relocatable kernels, introduce the Kconfig symbol and define it for some architectures that should be able to support and benefit from it.

Link: http://lkml.kernel.org/r/20180704083651.24360-2-ard.biesheuvel@linaro.org
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Acked-by: Ingo Molnar <mingo@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Kees Cook <keescook@chromium.org>
Cc: Thomas Garnier <thgarnie@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "Serge E. Hallyn" <serge@hallyn.com>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Petr Mladek <pmladek@suse.com>
Cc: James Morris <jmorris@namei.org>
Cc: Nicolas Pitre <nico@linaro.org>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: James Morris <james.morris@microsoft.com>
Cc: Jessica Yu <jeyu@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
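On the arm64 side, the change in this Kconfig history boils down to selecting the new symbol; a sketch (the many unrelated selects under config ARM64 are elided):

    # illustrative sketch
    # arch/Kconfig
    config HAVE_ARCH_PREL32_RELOCATIONS
            bool

    # arch/arm64/Kconfig
    config ARM64
            def_bool y
            select HAVE_ARCH_PREL32_RELOCATIONS
            # ... other selects elided ...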
-
- 03 Aug 2018, 1 commit
-
-
Submitted by Palmer Dabbelt

It appears arm64 copied arm's GENERIC_IRQ_MULTI_HANDLER code, but made it unconditional. Converts the arm64 code to use the new generic code, which simply consists of deleting the arm64 code and setting MULTI_IRQ_HANDLER instead.

Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: linux@armlinux.org.uk
Cc: catalin.marinas@arm.com
Cc: Will Deacon <will.deacon@arm.com>
Cc: jonas@southpole.se
Cc: stefan.kristiansson@saunalahti.fi
Cc: shorne@gmail.com
Cc: jason@lakedaemon.net
Cc: marc.zyngier@arm.com
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: nicolas.pitre@linaro.org
Cc: vladimir.murzin@arm.com
Cc: keescook@chromium.org
Cc: jinb.park7@gmail.com
Cc: yamada.masahiro@socionext.com
Cc: alexandre.belloni@bootlin.com
Cc: pombredanne@nexb.com
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: kstewart@linuxfoundation.org
Cc: jhogan@kernel.org
Cc: mark.rutland@arm.com
Cc: ard.biesheuvel@linaro.org
Cc: james.morse@arm.com
Cc: linux-arm-kernel@lists.infradead.org
Cc: openrisc@lists.librecores.org
Link: https://lkml.kernel.org/r/20180622170126.6308-4-palmer@sifive.com
-
- 02 Aug 2018, 3 commits
-
-
Submitted by Christoph Hellwig

Almost all architectures include it. Add an ARCH_NO_PREEMPT symbol to disable preempt support for alpha, hexagon, non-coldfire m68k and user mode Linux.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
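A sketch of how such an opt-out symbol is typically wired up; the prompt text and file locations are assumptions, not quoted from the patch:

    # illustrative sketch only
    # arch/Kconfig
    config ARCH_NO_PREEMPT
            bool

    # kernel/Kconfig.preempt
    config PREEMPT
            bool "Preemptible Kernel (Low-Latency Desktop)"
            depends on !ARCH_NO_PREEMPT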
-
Submitted by Christoph Hellwig

Move the source of lib/Kconfig.debug and arch/$(ARCH)/Kconfig.debug to the top-level Kconfig. For two architectures that means moving their arch-specific symbols in that menu into a new arch Kconfig.debug file, and for a few more creating a dummy file so that we can include it unconditionally. Also move the actual 'Kernel hacking' menu to lib/Kconfig.debug, where it belongs.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
-
Submitted by Christoph Hellwig

Instead of duplicating the source statements in every architecture, just do it once in the top-level Kconfig file. Note that with this the inclusion of arch/$(SRCARCH)/Kconfig moves out of the top-level Kconfig into arch/Kconfig so that we don't violate ordering constraints while keeping a sensible menu structure.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
-
- 26 Jul 2018, 1 commit
-
-
Submitted by Laura Abbott

This adds support for the STACKLEAK gcc plugin to arm64 by implementing stackleak_check_alloca(), based heavily on the x86 version, and adding the two helpers used by the stackleak common code: current_top_of_stack() and on_thread_stack(). The stack erasure calls are made at syscall returns. Additionally, this disables the plugin in hypervisor and EFI stub code, which are out of scope for the protection.

Acked-by: Alexander Popov <alex.popov@linux.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 24 Jul 2018, 1 commit
-
-
Submitted by Arnd Bergmann

Kconfig reports a warning on x86 builds after the ARM64 dependency was added:

drivers/acpi/Kconfig:6:error: recursive dependency detected!
drivers/acpi/Kconfig:6: symbol ACPI depends on EFI

This rephrases the dependency to keep the ARM64 details out of the shared Kconfig file, so Kconfig no longer gets confused by it. For consistency, all three architectures that support ACPI now select ARCH_SUPPORTS_ACPI in exactly the configuration in which they allow it. We still need the 'default x86', as each one wants a different default: default-y on x86, default-n on arm64, and always-y on ia64.

Fixes: 5bcd4408 ("drivers: acpi: add dependency of EFI for arm64")
Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Will Deacon <will.deacon@arm.com>
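The rephrased dependency works roughly as sketched below; the prompt text is illustrative, the per-architecture default handling mentioned above is elided, and whether the arm64 select carries an EFI condition is not shown:

    # illustrative sketch
    # drivers/acpi/Kconfig (shared)
    menuconfig ACPI
            bool "ACPI (Advanced Configuration and Power Interface) Support"
            depends on ARCH_SUPPORTS_ACPI

    # arch/arm64/Kconfig
    config ARM64
            def_bool y
            select ARCH_SUPPORTS_ACPI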
-
- 12 Jul 2018, 1 commit
-
-
Submitted by Mark Rutland

To minimize the risk of userspace-controlled values being used under speculation, this patch adds pt_regs based syscall wrappers for arm64, which pass the minimum set of required userspace values to syscall implementations. For each syscall, a wrapper which takes a pt_regs argument is automatically generated, and this extracts the arguments before calling the "real" syscall implementation.

Each syscall has three functions generated:

* __do_<compat_>sys_<name> is the "real" syscall implementation, with the expected prototype.

* __se_<compat_>sys_<name> is the sign-extension/narrowing wrapper, inherited from common code. This takes a series of long parameters, casting each to the requisite types required by the "real" syscall implementation in __do_<compat_>sys_<name>. This wrapper *may* not be necessary on arm64 given the AAPCS rules on unused register bits, but it seemed safer to keep the wrapper for now.

* __arm64_<compat_>_sys_<name> takes a struct pt_regs pointer, and extracts *only* the relevant register values, passing these on to the __se_<compat_>sys_<name> wrapper.

The syscall invocation code is updated to handle the calling convention required by __arm64_<compat_>_sys_<name>, and passes a single struct pt_regs pointer. The compiler can fold the syscall implementation and its wrappers, such that the overhead of this approach is minimized.

Note that we play games with sys_ni_syscall(). It can't be defined with SYSCALL_DEFINE0() because we must avoid the possibility of error injection. Additionally, there are a couple of locations where we need to call it from C code, and we don't (currently) have a ksys_ni_syscall(). While it has no wrapper, passing in a redundant pt_regs pointer is benign per the AAPCS.

When ARCH_HAS_SYSCALL_WRAPPER is selected, no prototype is defined for sys_ni_syscall(). Since we need to treat it differently for in-kernel calls and the syscall tables, the prototype is defined as-required.

The wrappers are largely the same as their x86 counterparts, but simplified as we don't have a variety of compat calling conventions that require separate stubs. Unlike x86, we have some zero-argument compat syscalls, and must define COMPAT_SYSCALL_DEFINE0() to ensure that these are also given an __arm64_compat_sys_ prefix.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 11 Jul 2018, 2 commits
-
-
Submitted by Will Deacon

Implement calls to rseq_signal_deliver, rseq_handle_notify_resume and rseq_syscall so that we can select HAVE_RSEQ on arm64.

Acked-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Arnd Bergmann

Building without NUMA but with FLATMEM results in a link error because mem_map[] is not available:

aarch64-linux-ld -EB -maarch64elfb --no-undefined -X -pie -shared -Bsymbolic --no-apply-dynamic-relocs --build-id -o .tmp_vmlinux1 -T ./arch/arm64/kernel/vmlinux.lds --whole-archive built-in.a --no-whole-archive --start-group arch/arm64/lib/lib.a lib/lib.a --end-group
init/do_mounts.o: In function `mount_block_root':
do_mounts.c:(.init.text+0x1e8): undefined reference to `mem_map'
arch/arm64/kernel/vdso.o: In function `vdso_init':
vdso.c:(.init.text+0xb4): undefined reference to `mem_map'

This uses the same trick as the other architectures, making flatmem depend on !NUMA to avoid the broken configuration.

Fixes: e7d4bac4 ("arm64: add ARM64-specific support for flatmem")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Will Deacon <will.deacon@arm.com>
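The fix described above amounts to gating the flatmem enable symbol on !NUMA; a sketch, assuming the symbol name introduced by the flatmem patch being fixed:

    # illustrative sketch; symbol name assumed
    config ARCH_FLATMEM_ENABLE
            def_bool !NUMA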
-
- 09 Jul 2018, 1 commit
-
-
Submitted by Nikunj Kela

Flatmem is useful in reducing kernel memory usage. One use case is the kdump kernel. We are able to save ~14M by moving to the flatmem scheme.

Cc: xe-kernel@external.cisco.com
Cc: Nikunj Kela <nkela@cisco.com>
Signed-off-by: Nikunj Kela <nkela@cisco.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 05 Jul 2018, 2 commits
-
-
Submitted by Will Deacon

When running with CONFIG_PREEMPT=n, the spinlock fastpaths fit inside 64 bytes, which typically coincides with the L1 I-cache line size. Inline the spinlock fastpaths, like we do already for rwlocks.

Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Will Deacon

It's fair to say that our ticket lock has served us well over time, but it's time to bite the bullet and start using the generic qspinlock code so we can make use of explicit MCS queuing and potentially better PV performance in future.

Signed-off-by: Will Deacon <will.deacon@arm.com>
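On the Kconfig side, switching to the generic qspinlock implementation is expressed by selecting the generic symbol; a sketch, with the other ARM64 selects elided:

    # illustrative sketch
    config ARM64
            def_bool y
            select ARCH_USE_QUEUED_SPINLOCKS
            # ... other selects elided ...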
-
- 15 Jun 2018, 1 commit
-
-
Submitted by Masahiro Yamada

HAVE_CC_STACKPROTECTOR should be selected by architectures with a stack canary implementation. It is not about the compiler support. For consistency with commit 050e9baa ("Kbuild: rename CC_STACKPROTECTOR[_STRONG] config variables"), remove 'CC_' from the config symbol. I moved the 'select' lines to keep the alphabetical sorting.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Acked-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 08 Jun 2018, 2 commits
-
-
Submitted by Masahiro Yamada

This becomes much neater in Kconfig.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
-
Submitted by Laurent Dufour

Currently PTE special support is turned on in per-architecture header files. Most of the time, it is defined in arch/*/include/asm/pgtable.h, depending or not on some other per-architecture static definition. This patch introduces a new configuration variable to manage this directly in the Kconfig files. It would later replace __HAVE_ARCH_PTE_SPECIAL.

Here are notes for some architectures where the definition of __HAVE_ARCH_PTE_SPECIAL is not obvious:

arm
__HAVE_ARCH_PTE_SPECIAL is currently defined in arch/arm/include/asm/pgtable-3level.h, which is included by arch/arm/include/asm/pgtable.h when CONFIG_ARM_LPAE is set. So select ARCH_HAS_PTE_SPECIAL if ARM_LPAE.

powerpc
__HAVE_ARCH_PTE_SPECIAL is defined in 2 files:
- arch/powerpc/include/asm/book3s/64/pgtable.h
- arch/powerpc/include/asm/pte-common.h
The first one is included if (PPC_BOOK3S & PPC64) while the second is included in all the other cases. So select ARCH_HAS_PTE_SPECIAL all the time.

sparc
__HAVE_ARCH_PTE_SPECIAL is defined if defined(__sparc__) && defined(__arch64__), which are defined through the compiler in sparc/Makefile if !SPARC32, which I assume to mean if SPARC64. So select ARCH_HAS_PTE_SPECIAL if SPARC64.

There is no functional change introduced by this patch.

Link: http://lkml.kernel.org/r/1523433816-14460-2-git-send-email-ldufour@linux.vnet.ibm.com
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
Suggested-by: Jerome Glisse <jglisse@redhat.com>
Reviewed-by: Jerome Glisse <jglisse@redhat.com>
Acked-by: David Rientjes <rientjes@google.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: "Aneesh Kumar K . V" <aneesh.kumar@linux.vnet.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Cc: Rich Felker <dalias@libc.org>
Cc: David S. Miller <davem@davemloft.net>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Palmer Dabbelt <palmer@sifive.com>
Cc: Albert Ou <albert@sifive.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Christophe LEROY <christophe.leroy@c-s.fr>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
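For arm64, which provides pte_special() unconditionally, the new variable is simply selected; a sketch, with the file locations assumed:

    # illustrative sketch
    # mm/Kconfig (generic definition)
    config ARCH_HAS_PTE_SPECIAL
            bool

    # arch/arm64/Kconfig
    config ARM64
            def_bool y
            select ARCH_HAS_PTE_SPECIAL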
-
- 01 Jun 2018, 1 commit
-
-
Submitted by Marc Zyngier

As for Spectre variant-2, we rely on SMCCC 1.1 to provide the discovery mechanism for detecting the SSBD mitigation. A new capability is also allocated for that purpose, and a config option.

Reviewed-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
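A sketch of the new config option; the prompt text, the EXPERT gating and the help wording are assumptions in the spirit of the other mitigation options, not quoted from the patch:

    # illustrative sketch only
    config ARM64_SSBD
            bool "Speculative Store Bypass Disable" if EXPERT
            default y
            help
              Enable mitigation of Speculative Store Bypass, using the
              firmware interface discovered through SMCCC 1.1 when it
              is available.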
-