- 09 Apr 2021, 8 commits
-
-
Submitted by Ard Biesheuvel

mainline inclusion
from mainline-5.11-rc1
commit 2b865293
category: bugfix
bugzilla: 50423
CVE: NA

----------------------------------------

We recently introduced a 1 GB sized ZONE_DMA to cater for platforms incorporating masters that can address less than 32 bits of DMA, in particular the Raspberry Pi 4, which has 4 or 8 GB of DRAM, but has peripherals that can only address up to 1 GB (and its PCIe host bridge can only access the bottom 3 GB).

Instructing the DMA layer about these limitations is straightforward, even though we had to fix some issues regarding memory limits set in the IORT for named components, and regarding the handling of ACPI _DMA methods. However, the DMA layer also needs to be able to allocate memory that is guaranteed to meet those DMA constraints, for bounce buffering as well as allocating the backing for consistent mappings. This is why the 1 GB ZONE_DMA was introduced recently. Unfortunately, it turns out that having a 1 GB ZONE_DMA as well as a ZONE_DMA32 causes problems with kdump, and potentially in other places where allocations cannot cross zone boundaries. Therefore, we should avoid having two separate DMA zones when possible.

So let's do an early scan of the IORT, and only create the ZONE_DMA if we encounter any devices that need it. This puts the burden on the firmware to describe such limitations in the IORT, which may be redundant (and less precise) if _DMA methods are also being provided. However, it should be noted that this situation is highly unusual for arm64 ACPI machines. Also, the DMA subsystem still gives precedence to the _DMA method if implemented, and so we will not lose the ability to perform streaming DMA outside the ZONE_DMA if the _DMA method permits it.

[nsaenz: unified implementation with DT's counterpart]

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Acked-by: Hanjun Guo <guohanjun@huawei.com>
Cc: Jeremy Linton <jeremy.linton@arm.com>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Cc: Rob Herring <robh+dt@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Hanjun Guo <guohanjun@huawei.com>
Cc: Sudeep Holla <sudeep.holla@arm.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Link: https://lore.kernel.org/r/20201119175400.9995-7-nsaenzjulienne@suse.de
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Jing Xiangfeng <jingxiangfeng@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
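For orientation, a sketch of how the unified zone sizing ends up looking on arm64 once this patch and its DT counterpart are both applied. This is simplified, not the exact backported code: max_zone_phys() and arm64_dma_phys_limit are the arch-local helper/variable that clamp the limit to available memory, and the function name here is made up.

    #include <linux/acpi_iort.h>   /* acpi_iort_dma_get_max_cpu_address() */
    #include <linux/of.h>          /* of_dma_get_max_cpu_address() */
    #include <linux/dma-direct.h>  /* zone_dma_bits */
    #include <linux/init.h>
    #include <linux/mm.h>

    static void __init zone_sizes_init_sketch(unsigned long min_pfn, unsigned long max_pfn)
    {
            unsigned long max_zone_pfns[MAX_NR_ZONES] = { 0 };

    #ifdef CONFIG_ZONE_DMA
            /* Highest CPU address every DMA master can reach, per IORT and DT. */
            unsigned int acpi_bits = fls64(acpi_iort_dma_get_max_cpu_address());
            unsigned int dt_bits   = fls64(of_dma_get_max_cpu_address(NULL));

            /* Shrink ZONE_DMA below 32 bits only if some master actually needs it. */
            zone_dma_bits = min3(32U, dt_bits, acpi_bits);
            arm64_dma_phys_limit = max_zone_phys(zone_dma_bits);
            max_zone_pfns[ZONE_DMA] = PFN_DOWN(arm64_dma_phys_limit);
    #endif
    #ifdef CONFIG_ZONE_DMA32
            max_zone_pfns[ZONE_DMA32] = PFN_DOWN(max_zone_phys(32));
    #endif
            max_zone_pfns[ZONE_NORMAL] = max_pfn;

            free_area_init(max_zone_pfns);
    }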
-
Submitted by Nicolas Saenz Julienne

mainline inclusion
from mainline-5.11-rc1
commit 8424ecdd
category: bugfix
bugzilla: 50423
CVE: NA

----------------------------------------

We recently introduced a 1 GB sized ZONE_DMA to cater for platforms incorporating masters that can address less than 32 bits of DMA, in particular the Raspberry Pi 4, which has 4 or 8 GB of DRAM, but has peripherals that can only address up to 1 GB (and its PCIe host bridge can only access the bottom 3 GB).

The DMA layer also needs to be able to allocate memory that is guaranteed to meet those DMA constraints, for bounce buffering as well as allocating the backing for consistent mappings. This is why the 1 GB ZONE_DMA was introduced recently. Unfortunately, it turns out that having a 1 GB ZONE_DMA as well as a ZONE_DMA32 causes problems with kdump, and potentially in other places where allocations cannot cross zone boundaries. Therefore, we should avoid having two separate DMA zones when possible.

So, with the help of of_dma_get_max_cpu_address(), get the topmost physical address accessible to all DMA masters in the system and use that information to fine-tune ZONE_DMA's size. In the absence of addressing-limited masters, ZONE_DMA will span the whole 32-bit address space; otherwise, in the case of the Raspberry Pi 4, it'll only span the 30-bit address space and ZONE_DMA32 will cover the rest of the 32-bit address space.

Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Link: https://lore.kernel.org/r/20201119175400.9995-6-nsaenzjulienne@suse.de
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Jing Xiangfeng <jingxiangfeng@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Nicolas Saenz Julienne

mainline inclusion
from mainline-5.11-rc1
commit 07d13a1d
category: bugfix
bugzilla: 50423
CVE: NA

----------------------------------------

Introduce a test for of_dma_get_max_cpu_address(); it uses the same DT data as the rest of the dma-ranges unit tests.

Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Reviewed-by: Rob Herring <robh@kernel.org>
Link: https://lore.kernel.org/r/20201119175400.9995-5-nsaenzjulienne@suse.de
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Jing Xiangfeng <jingxiangfeng@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Nicolas Saenz Julienne

mainline inclusion
from mainline-5.11-rc1
commit 964db79d
category: bugfix
bugzilla: 50423
CVE: NA

----------------------------------------

Introduce of_dma_get_max_cpu_address(), which provides the highest CPU physical address addressable by all DMA masters in the system. It's especially useful for setting memory zone sizes at early boot time.

Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Reviewed-by: Rob Herring <robh@kernel.org>
Link: https://lore.kernel.org/r/20201119175400.9995-4-nsaenzjulienne@suse.de
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Jing Xiangfeng <jingxiangfeng@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
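For reference, the interface this adds and a hedged usage sketch. Passing NULL is taken to mean "consider the whole live device tree"; the example function name is made up.

    #include <linux/init.h>
    #include <linux/of.h>
    #include <linux/printk.h>
    #include <linux/types.h>

    /* Added by this patch (declared in <linux/of.h>):
     *   phys_addr_t of_dma_get_max_cpu_address(struct device_node *np);
     */
    static void __init report_dma_reachable_limit(void)
    {
            /* NULL: walk every dma-ranges property in the live device tree. */
            phys_addr_t limit = of_dma_get_max_cpu_address(NULL);

            pr_info("all DMA masters can reach CPU addresses up to %pa\n", &limit);
    }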
-
Submitted by Nicolas Saenz Julienne

mainline inclusion
from mainline-5.11-rc1
commit 9804f8c6
category: bugfix
bugzilla: 50423
CVE: NA

----------------------------------------

zone_dma_bits's initialization happens earlier than it's actually needed, in arm64_memblock_init(). So move it into the more suitable zone_sizes_init().

Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Link: https://lore.kernel.org/r/20201119175400.9995-3-nsaenzjulienne@suse.de
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Jing Xiangfeng <jingxiangfeng@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Nicolas Saenz Julienne

mainline inclusion
from mainline-5.11-rc1
commit 0a30c535
category: bugfix
bugzilla: 50423
CVE: NA

----------------------------------------

crashkernel might reserve memory located in ZONE_DMA. We plan to delay ZONE_DMA's initialization until after unflattening the devicetree and ACPI's boot table initialization, so move the crashkernel reservation later in the boot process, specifically into bootmem_init(), since request_standard_resources() depends on it.

Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Link: https://lore.kernel.org/r/20201119175400.9995-2-nsaenzjulienne@suse.de
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Jing Xiangfeng <jingxiangfeng@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Chen Jun

hulk inclusion
category: bugfix
bugzilla: 50067
CVE: NA

----------------------------------------

If the Image is built without CONFIG_PM_SLEEP, there is a compile warning:

  warning: ‘cdn_dp_resume’ defined but not used [-Wunused-function]

because SET_SYSTEM_SLEEP_PM_OPS does nothing without CONFIG_PM_SLEEP. Make cdn_dp_resume depend on CONFIG_PM_SLEEP.

Signed-off-by: Chen Jun <chenjun102@huawei.com>
Reviewed-by: Xie XiuQi <xiexiuqi@huawei.com>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
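A sketch of the usual shape of this kind of fix. cdn_dp_resume comes from the warning above; the suspend callback and the dev_pm_ops name are assumptions about the driver, and in the actual fix only the resume handler needed the guard.

    #include <linux/device.h>
    #include <linux/pm.h>

    #ifdef CONFIG_PM_SLEEP
    static int cdn_dp_suspend(struct device *dev)
    {
            /* suspend work elided; compiled only when sleep support exists */
            return 0;
    }

    static int cdn_dp_resume(struct device *dev)
    {
            /* resume work elided; compiled only when sleep support exists */
            return 0;
    }
    #endif

    static const struct dev_pm_ops cdn_dp_pm_ops = {
            /*
             * SET_SYSTEM_SLEEP_PM_OPS() expands to nothing without
             * CONFIG_PM_SLEEP, so an unguarded callback becomes an unused
             * static function and triggers the warning above.
             */
            SET_SYSTEM_SLEEP_PM_OPS(cdn_dp_suspend, cdn_dp_resume)
    };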
-
Submitted by Ard Biesheuvel

mainline inclusion
from mainline-5.11-rc1
commit 660d2062
category: bugfix
bugzilla: 49979
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=660d2062190db131d2feaf19914e90f868fe285c

----------------------------------------

Unlike many other structure types defined in the crypto API, the 'shash_desc' structure is permitted to live on the stack, which implies its contents may not be accessed by DMA masters. (This is due to the fact that the stack may be located in the vmalloc area, which requires a different virtual-to-physical translation than the one implemented by the DMA subsystem.)

Our definition of CRYPTO_MINALIGN_ATTR is based on ARCH_KMALLOC_MINALIGN, which may take DMA constraints into account on architectures that support non-cache coherent DMA such as ARM and arm64. In this case, the value is chosen to reflect the largest cacheline size in the system, in order to ensure that explicit cache maintenance as required by non-coherent DMA masters does not affect adjacent, unrelated slab allocations. On arm64, this value is currently set at 128 bytes.

This means that applying CRYPTO_MINALIGN_ATTR to struct shash_desc is both unnecessary (as it is never used for DMA), and undesirable, given that it wastes stack space (on arm64, performing the alignment costs 112 bytes in the worst case, and the hole between the 'tfm' and '__ctx' members takes up another 120 bytes, resulting in an increased stack footprint of up to 232 bytes). So instead, let's switch to the minimum SLAB alignment, which does not take DMA constraints into account.

Note that this is a no-op for x86.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Reviewed-by: Xie XiuQi <xiexiuqi@huawei.com>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
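A sketch of what the resulting definition looks like, assuming the surrounding layout in include/crypto/hash.h is otherwise unchanged:

    #include <linux/slab.h>   /* ARCH_SLAB_MINALIGN */

    struct crypto_shash;

    struct shash_desc {
            struct crypto_shash *tfm;

            /*
             * shash_desc lives on the stack and is never handed to DMA, so
             * the minimum slab alignment is enough; CRYPTO_MINALIGN_ATTR
             * (128 bytes on arm64) only wasted stack space here.
             */
            void *__ctx[] __aligned(ARCH_SLAB_MINALIGN);
    };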
-
- 08 Apr 2021, 32 commits
-
-
Submitted by Ye Bin
hulk inclusion
commit 76bbe667ce4ea3f02bd325ca8e8c999c15034079
category: feature
feature: ARM kaslr support
bugzilla: 47952
CVE: NA

----------------------------------------

Signed-off-by: Ye Bin <yebin10@huawei.com>
Reviewed-by: Jason Yan <yanaijie@huawei.com>
Signed-off-by: yangerkun <yangerkun@huawei.com>
Signed-off-by: Cui GaoSheng <cuigaosheng1@huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Ye Bin

hulk inclusion
commit 6337511516862e5a4d2d5a96481510e4a7a12b1b
category: feature
feature: ARM kaslr support
bugzilla: 47952
CVE: NA

----------------------------------------

If we do not hide the GOT, inserting a module that references a global variable fails with the error "Unknown symbol _GLOBAL_OFFSET_TABLE_ (err 0)".

Signed-off-by: Ye Bin <yebin10@huawei.com>
Reviewed-by: Jing Xiangfeng <jingxiangfeng@huawei.com>
Signed-off-by: zhangyi (F) <yi.zhang@huawei.com>
Signed-off-by: Cui GaoSheng <cuigaosheng1@huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Ye Bin

arm32: kaslr: when booting with vxboot, adjust the dtb address before kaslr_early_init, and store the dtb address after init

hulk inclusion
commit b20bc6211469919f2022884e9a1634d8e576c281
category: feature
feature: ARM kaslr support
bugzilla: 47952
CVE: NA

----------------------------------------

When booting with vxboot, we must adjust the dtb address before kaslr_early_init, and store the dtb address after init.

Signed-off-by: Ye Bin <yebin10@huawei.com>
Reviewed-by: Jing Xiangfeng <jingxiangfeng@huawei.com>
Signed-off-by: zhangyi (F) <yi.zhang@huawei.com>
Signed-off-by: Cui GaoSheng <cuigaosheng1@huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Ye Bin

hulk inclusion
commit 88bf5c03832d56c68fac61e4ae97158b3332bd63
category: feature
feature: ARM kaslr support
bugzilla: 47952
CVE: NA

----------------------------------------

Fix the memtop calculation: when there is no memory top info, we can't use zero in its place.

Signed-off-by: Ye Bin <yebin10@huawei.com>
Reviewed-by: Jing Xiangfeng <jingxiangfeng@huawei.com>
Signed-off-by: zhangyi (F) <yi.zhang@huawei.com>
Signed-off-by: Cui GaoSheng <cuigaosheng1@huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Ye Bin

maillist inclusion
from mainline-v5.7-rc1
commit 32830a05
category: bugfix
bugzilla: 47952
DTS: NA
CVE: NA

----------------------------------------

Fix the following warnings:

  arm-linux-gnueabihf-ld: warning: orphan section `.data.rel.local' from `net/sunrpc/xprt.o' being placed in section `.data.rel.local'.
  ......
  arm-linux-gnueabihf-ld: warning: orphan section `.got.plt' from `arch/arm/kernel/head.o' being placed in section `.got.plt'.
  arm-linux-gnueabihf-ld: warning: orphan section `.plt' from `arch/arm/kernel/head.o' being placed in section `.plt'.
  arm-linux-gnueabihf-ld: warning: orphan section `.data.rel.ro' from `arch/arm/kernel/head.o' being placed in section `.data.rel.ro'.
  ......

Fixes: ("ARM: kernel: make vmlinux buildable as a PIE executable")
Signed-off-by: Ye Bin <yebin10@huawei.com>
Signed-off-by: Cui GaoSheng <cuigaosheng1@huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Ard Biesheuvel

maillist inclusion
commit b4fa1dbef0cac754a6daec1dec575540967dc240
category: feature
feature: ARM kaslr support
bugzilla: 47952
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/ardb/linux.git/commit/?h=arm-kaslr-latest&id=b4fa1dbef0cac754a6daec1dec575540967dc240

The GCC flag '-fvisibility=hidden' specifies the visibility attribute for external linkage entities in object files. You can also selectively set visibility attributes for entities by using pairs of the #pragma GCC visibility push and #pragma GCC visibility pop compiler directives throughout your source program. When we include hidden.h, __bss_start and __bss_end go from global symbols to local symbols, so we need to modify the regular expression to accommodate this change.

----------------------------------------

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Cui GaoSheng <cuigaosheng1@huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
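The directives mentioned above work roughly as follows. This is illustrative C, not the actual hidden.h; __bss_start/__bss_end are the symbols named above, and some_internal_state is a made-up example.

    /* Everything declared between push and pop gets hidden ELF visibility,
     * so references are resolved locally and the symbols are no longer
     * treated as exported globals in the linked image. */
    #pragma GCC visibility push(hidden)

    extern char __bss_start[], __bss_end[];

    #pragma GCC visibility pop

    /* Per-symbol equivalent of building with -fvisibility=hidden: */
    extern int some_internal_state __attribute__((visibility("hidden")));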
-
Submitted by Ard Biesheuvel
maillist inclusion
commit b152e5c5054c3937211a541be50d8a7c98a59974
category: feature
feature: ARM kaslr support
bugzilla: 47952
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/ardb/linux.git/commit/?h=arm-kaslr-latest&id=b152e5c5054c3937211a541be50d8a7c98a59974

----------------------------------------

Add support to the decompressor to load the kernel at a randomized offset, and invoke the kernel proper while passing on the information about the offset at which the kernel was loaded.

This implementation will extract some pseudo-randomness from the low bits of the generic timer (if available), and use CRC-16 to combine it with the build ID string and the device tree binary (which ideally has a /chosen/kaslr-seed property, but may also have other properties that differ between boots).

This seed is used to select one of the candidate offsets in the lowmem region that don't overlap the zImage itself, the DTB, the initrd and /memreserve/s and/or /reserved-memory nodes that should be left alone.

When booting via the UEFI stub, it is left up to the firmware to supply a suitable seed and select an offset.

Cc: Russell King <linux@armlinux.org.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Cui GaoSheng <cuigaosheng1@huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
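A self-contained sketch of the seeding idea described above: weak timer entropy, the build ID and the DTB are mixed through CRC-16. The CRC-16/CCITT polynomial and the function/parameter names are assumptions, not the exact decompressor code.

    #include <stddef.h>
    #include <stdint.h>

    /* CRC-16/CCITT over a byte buffer, continuing from a previous value. */
    static uint16_t crc16_update(uint16_t crc, const uint8_t *buf, size_t len)
    {
            for (size_t i = 0; i < len; i++) {
                    crc ^= (uint16_t)buf[i] << 8;
                    for (int b = 0; b < 8; b++)
                            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                                 : (uint16_t)(crc << 1);
            }
            return crc;
    }

    /* Mix the generic-timer low bits, the build ID and the DTB into one seed. */
    static uint16_t kaslr_seed(uint32_t cntvct_lo, const uint8_t *build_id,
                               size_t build_id_len, const uint8_t *dtb, size_t dtb_len)
    {
            uint16_t seed = crc16_update(0, (const uint8_t *)&cntvct_lo,
                                         sizeof(cntvct_lo));

            seed = crc16_update(seed, build_id, build_id_len);
            seed = crc16_update(seed, dtb, dtb_len);
            return seed;   /* used to pick one 2 MB-aligned candidate offset */
    }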
-
Submitted by Ard Biesheuvel
maillist inclusion
commit a58cdcfbee11974669a651e3ce049ef729e81411
category: feature
feature: ARM kaslr support
bugzilla: 47952
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/ardb/linux.git/commit/?h=arm-kaslr-latest&id=a58cdcfbee11974669a651e3ce049ef729e81411

----------------------------------------

When randomizing the kernel load address, there may be a large distance in memory between the decompressor binary and its payload and the destination area in memory. Ensure that the decompressor itself is mapped cacheable in this case, by tweaking the existing routine that takes care of this for XIP decompressors.

Cc: Russell King <linux@armlinux.org.uk>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Cui GaoSheng <cuigaosheng1@huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Ard Biesheuvel

maillist inclusion
commit c11744cd7b351b0fbc5233c04c32822544c96fc1
category: feature
feature: ARM kaslr support
bugzilla: 47952
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/ardb/linux.git/commit/?h=arm-kaslr-latest&id=c11744cd7b351b0fbc5233c04c32822544c96fc1

Update the Kconfig so that RANDOMIZE_BASE depends on !JUMP_LABEL, to resolve compilation conflicts between fpic and JUMP_LABEL.

----------------------------------------

This implements randomization of the placement of the kernel image inside the lowmem region. It is intended to work together with the decompressor to place the kernel at an offset in physical memory that is a multiple of 2 MB, and to take the same offset into account when creating the virtual mapping.

This uses runtime relocation of the kernel built as a PIE binary, to fix up all absolute symbol references to refer to their runtime virtual address. The physical-to-virtual mapping remains unchanged.

In order to allow the decompressor to hand over to the core kernel without making assumptions that are not guaranteed to hold when invoking the core kernel directly using bootloaders that are not KASLR aware, the KASLR offset is expected to be placed in r3 when entering the kernel 4 bytes past the entry point, skipping the first instruction.

Cc: Russell King <linux@armlinux.org.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Cui GaoSheng <cuigaosheng1@huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Ard Biesheuvel

maillist inclusion
commit 11f8bbc5b0d4d76b3d7114bf9af1805607a20372
category: feature
feature: ARM kaslr support
bugzilla: 47952
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/ardb/linux.git/commit/?h=arm-kaslr-latest&id=11f8bbc5b0d4d76b3d7114bf9af1805607a20372

----------------------------------------

The location of the ARM vector table in virtual memory is not a compile time constant, and so the virtual addresses of the various entry points are rather meaningless (although they are most likely to reside at the offsets below):

  ffff1004 t vector_rst
  ffff1020 t vector_irq
  ffff10a0 t vector_dabt
  ffff1120 t vector_pabt
  ffff11a0 t vector_und
  ffff1220 t vector_addrexcptn
  ffff1240 T vector_fiq

However, when running with KASLR enabled, the virtual addresses are subject to runtime relocation, which means we should avoid taking absolute references to these symbols, not only directly (by taking the address in C code), but also via /proc/kallsyms or other kernel facilities that deal with ELF symbols. For instance, /proc/kallsyms will list their addresses as

  0abf1004 t vector_rst
  0abf1020 t vector_irq
  0abf10a0 t vector_dabt
  0abf1120 t vector_pabt
  0abf11a0 t vector_und
  0abf1220 t vector_addrexcptn
  0abf1240 T vector_fiq

when running randomized, which may confuse tools like perf that may use /proc/kallsyms to annotate stack traces.

So use .L prefixes for these symbols. This will prevent them from being visible at all outside the assembler source.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Cui GaoSheng <cuigaosheng1@huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Ard Biesheuvel
maillist inclusion
commit fe64d7efe89877bc52454f9f2bc9ab0ce01ae8fc
category: feature
feature: ARM kaslr support
bugzilla: 47952
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/ardb/linux.git/commit/?h=arm-kaslr-latest&id=fe64d7efe89877bc52454f9f2bc9ab0ce01ae8fc

----------------------------------------

The location of swapper_pg_dir is relative to the kernel, not to PAGE_OFFSET or PHYS_OFFSET. So define the symbol relative to the start of the kernel image, and refer to it via its name.

Cc: Russell King <linux@armlinux.org.uk>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Cui GaoSheng <cuigaosheng1@huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Ard Biesheuvel
maillist inclusion
commit c3ae0029ea41f4a26a40f592062155412d1b6d07
category: feature
feature: ARM kaslr support
bugzilla: 47952
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/ardb/linux.git/commit/?h=arm-kaslr-latest&id=c3ae0029ea41f4a26a40f592062155412d1b6d07

----------------------------------------

In order for the EFI stub to be able to decide over what range to randomize the load address of the kernel, expose the definition of the default vmalloc base address as VMALLOC_DEFAULT_BASE.

Cc: Russell King <linux@armlinux.org.uk>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Cui GaoSheng <cuigaosheng1@huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Ard Biesheuvel
maillist inclusion
commit 2c7e6b4d7cbff417ff96a24c243508e16168f90c
category: feature
feature: ARM kaslr support
bugzilla: 47952
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/ardb/linux.git/commit/?h=arm-kaslr-latest&id=2c7e6b4d7cbff417ff96a24c243508e16168f90c

----------------------------------------

Replace some unnecessary absolute references with relative ones.

Cc: Russell King <linux@armlinux.org.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Cui GaoSheng <cuigaosheng1@huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Ard Biesheuvel
maillist inclusion
commit 7e279c05992a88d0517df371a48e72d060b2ca21
category: feature
feature: ARM kaslr support
bugzilla: 47952
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/ardb/linux.git/commit/?h=arm-kaslr-latest&id=7e279c05992a88d0517df371a48e72d060b2ca21

----------------------------------------

To prepare for adding support for KASLR, which relocates all absolute symbol references at runtime after the caches have been enabled, update the MMU switch code to avoid using absolute symbol references where possible. This ensures these quantities are invariant under runtime relocation.

Cc: Russell King <linux@armlinux.org.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Cui GaoSheng <cuigaosheng1@huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Ard Biesheuvel

maillist inclusion
commit 04be01192973461cdd00ab47908a78f0e2f55ef8
category: feature
feature: ARM kaslr support
bugzilla: 47952
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/ardb/linux.git/commit/?h=arm-kaslr-latest&id=04be01192973461cdd00ab47908a78f0e2f55ef8

Update the Kconfig so that RELOCATABLE depends on !JUMP_LABEL, to resolve compilation conflicts between fpic and JUMP_LABEL.

----------------------------------------

Update the build flags and linker script to allow vmlinux to be built as a PIE binary, which retains relocation information about absolute symbol references so that they can be fixed up at runtime. This will be used for implementing KASLR.

Cc: Russell King <linux@armlinux.org.uk>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Cui GaoSheng <cuigaosheng1@huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Ard Biesheuvel
maillist inclusion
commit ccb456783dd71f474e5783a81d7f18c2cd4dda81
category: feature
feature: ARM kaslr support
bugzilla: 47952
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/ardb/linux.git/commit/?h=arm-kaslr-latest&id=ccb456783dd71f474e5783a81d7f18c2cd4dda81

----------------------------------------

To avoid having to relocate the contents of extable entries at runtime when running with KASLR enabled, wire up the existing support for emitting them as relative references. This ensures these quantities are invariant under runtime relocation.

Cc: Russell King <linux@armlinux.org.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Cui GaoSheng <cuigaosheng1@huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
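For context, relative extable entries work roughly as shown in this generic C sketch; the actual ARM patch wires this up through assembler and linker support, and these exact struct/helper names are illustrative.

    /* Each field stores an offset from its own address instead of an absolute
     * virtual address, so the table needs no fixups under KASLR relocation. */
    struct exception_table_entry {
            int insn;    /* offset to the faulting instruction */
            int fixup;   /* offset to the fixup handler */
    };

    static inline unsigned long ex_insn_addr(const struct exception_table_entry *e)
    {
            return (unsigned long)&e->insn + e->insn;
    }

    static inline unsigned long ex_fixup_addr(const struct exception_table_entry *e)
    {
            return (unsigned long)&e->fixup + e->fixup;
    }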
-
Submitted by Ard Biesheuvel
maillist inclusion
commit e2aa765c4eb9bbcdd3046744e6f73050d1175138
category: feature
feature: ARM kaslr support
bugzilla: 47952
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/ardb/linux.git/commit/?h=arm-kaslr-latest&id=e2aa765c4eb9bbcdd3046744e6f73050d1175138

----------------------------------------

This replaces a few copies of the open coded calculations of the physical address of 'pen_release' in the secondary startup code of a couple of platforms. This ensures these quantities are invariant under runtime relocation.

Cc: Russell King <linux@armlinux.org.uk>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Cui GaoSheng <cuigaosheng1@huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Ard Biesheuvel
maillist inclusion
commit 59dee05a68727a7fc3c62240542f8753797e38d6
category: feature
feature: ARM kaslr support
bugzilla: 47952
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/ardb/linux.git/commit/?h=arm-kaslr-latest&id=59dee05a68727a7fc3c62240542f8753797e38d6

----------------------------------------

This replaces an open coded calculation to obtain the physical address of a far symbol with a call to the new ldr_l etc macro.

Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Cui GaoSheng <cuigaosheng1@huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Ard Biesheuvel
maillist inclusion
commit 6b12f315331362a8ec9e8fe3f97d9ae09e43fd28
category: feature
feature: ARM kaslr support
bugzilla: 47952
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/ardb/linux.git/commit/?h=arm-kaslr-latest&id=6b12f315331362a8ec9e8fe3f97d9ae09e43fd28

----------------------------------------

This replaces a couple of open coded calculations to obtain the physical address of a far symbol with calls to the new adr_l etc macros.

Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Cui GaoSheng <cuigaosheng1@huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Ard Biesheuvel
maillist inclusion
commit 857ddf520d76d6516d5cdca396461141b7ca921b
category: feature
feature: ARM kaslr support
bugzilla: 47952
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/ardb/linux.git/commit/?h=arm-kaslr-latest&id=857ddf520d76d6516d5cdca396461141b7ca921b

----------------------------------------

When running in PIC mode, the compiler will emit const structures containing runtime relocatable quantities into .data.rel.ro.* sections, so that the linker can be smart about placing them together in a segment that is read-write initially, and is remapped read-only afterwards. This is exactly what __ro_after_init aims to provide, so move these sections together.

Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Cui GaoSheng <cuigaosheng1@huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Chen Jun

hulk inclusion
category: bugfix
bugzilla: 49892
CVE: NA

----------------------------------------

This reverts commit 318c3537ee8b3de2bf994008b2439ddc90d39687.

Commit 318c3537ee ("[Huawei] Microchip Polarfire SoC Clock Driver") introduced microchip_clk_pfsoc_init, which is never used. This driver is not used now, so revert it.

Reviewed-by: Mingwang Li <limingwang@huawei.com>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Chen Jun
hulk inclusion
category: bugfix
bugzilla: 49892
CVE: NA

----------------------------------------

This reverts commit 536f67936ebf0b2fdea99f04b31ceaec09cc6a8f.

Reviewed-by: Mingwang Li <limingwang@huawei.com>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Xiongfeng Wang

hulk inclusion
category: feature
bugzilla: 48046
CVE: NA

----------------------------------------

Firmware may not trigger the SDEI event at the required frequency. The SDEI event may be triggered too soon, which causes a false hardlockup report in the kernel. Check the time stamp in sdei_watchdog_callbak and skip the hardlockup check if it is invoked too soon.

Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
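A hedged sketch of this kind of check; the per-CPU variable, the function name and the "at least half the watchdog period" margin are illustrative assumptions, not the exact patch.

    #include <linux/ktime.h>
    #include <linux/percpu.h>

    static DEFINE_PER_CPU(ktime_t, last_check);

    /* Return true if the event fired suspiciously early and should be skipped. */
    static bool sdei_event_too_soon(unsigned int thresh_secs)
    {
            /* NMI-safe clock read */
            ktime_t now = ktime_get_mono_fast_ns();
            ktime_t min_gap = ms_to_ktime(thresh_secs * MSEC_PER_SEC / 2);
            bool too_soon = ktime_before(now,
                            ktime_add(__this_cpu_read(last_check), min_gap));

            if (!too_soon)
                    __this_cpu_write(last_check, now);
            return too_soon;
    }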
-
Submitted by Xiongfeng Wang

hulk inclusion
category: feature
bugzilla: 48046
CVE: NA

----------------------------------------

Functions called in sdei_handler are not allowed to be kprobed, so mark them with NOKPROBE_SYMBOL. There are many functions involved in 'watchdog_check_timestamp()'. Luckily, we don't need 'CONFIG_HARDLOCKUP_CHECK_TIMESTAMP' now, so just make CONFIG_SDEI_WATCHDOG depend on !CONFIG_HARDLOCKUP_CHECK_TIMESTAMP in case someone adds 'CONFIG_HARDLOCKUP_CHECK_TIMESTAMP' in the future.

Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
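Marking a function as off-limits to kprobes uses the existing NOKPROBE_SYMBOL() macro; a minimal sketch (the callback name follows this series, its body is elided):

    #include <linux/kprobes.h>
    #include <linux/ptrace.h>
    #include <linux/types.h>

    static int sdei_watchdog_callback(u32 event, struct pt_regs *regs, void *arg)
    {
            /* NMI-like context: run the hardlockup check here (body elided). */
            return 0;
    }
    /* Keeps kprobes from instrumenting a function that runs under the SDEI handler. */
    NOKPROBE_SYMBOL(sdei_watchdog_callback);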
-
Submitted by Xiongfeng Wang

hulk inclusion
category: feature
bugzilla: 48046
CVE: NA

----------------------------------------

The period of the secure timer is set to 3s by the BIOS. That means the secure timer interrupt will trigger every 3 seconds. To further decrease the NMI watchdog's effect on performance, this patch sets the period of the secure timer based on 'watchdog_thresh'. This variable is initialized to 10s. We can also set the period at runtime by modifying '/proc/sys/kernel/watchdog_thresh'.

Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Xiongfeng Wang

hulk inclusion
category: feature
bugzilla: 48046
CVE: NA

----------------------------------------

When we panic in hardlockup, the secure timer interrupt remains active because the firmware clears the EOI only after dispatch is completed. This causes the arm_arch_timer interrupt to fail to trigger in the second kernel. This patch adds a new SMC helper to clear the EOI of a given interrupt, and clears the EOI of the secure timer before booting the second kernel.

Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Xiongfeng Wang

hulk inclusion
category: feature
bugzilla: 48046
CVE: NA

----------------------------------------

The trigger period of the secure timer is set by firmware. We need to check the time_stamp every time the secure timer fires to make sure the hardlockup detection is not executed too soon. We also need to refresh 'last_timestamp' to the current time when we enable the nmi_watchdog; otherwise, a false hardlockup may be detected when the secure timer fires the first time.

Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Xiongfeng Wang
hulk inclusion
category: feature
bugzilla: 48046
CVE: NA

----------------------------------------

Add nmi_watchdog support for arm64 based on SDEI.

Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Conflicts:
	arch/arm64/kernel/Makefile
Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
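A sketch of how such a watchdog might hook into the SDEI API. sdei_event_register() and sdei_event_enable() are the existing public helpers from <linux/arm_sdei.h>; the event number, callback and init function here are placeholders, and the series additionally adds a per-core enable variant for private events (see the later entries below).

    #include <linux/arm_sdei.h>
    #include <linux/init.h>
    #include <linux/ptrace.h>
    #include <linux/types.h>

    static u32 sdei_watchdog_event;   /* event bound to the secure timer interrupt */

    static int sdei_watchdog_event_handler(u32 event, struct pt_regs *regs, void *arg)
    {
            /* hardlockup check goes here */
            return 0;
    }

    static int __init sdei_watchdog_sketch_init(void)
    {
            int err;

            err = sdei_event_register(sdei_watchdog_event,
                                      sdei_watchdog_event_handler, NULL);
            if (err)
                    return err;

            return sdei_event_enable(sdei_watchdog_event);
    }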
-
Submitted by Xiongfeng Wang

hulk inclusion
category: feature
bugzilla: 48046
CVE: NA

----------------------------------------

We call 'sdei_init' as a 'subsys_initcall_sync'. The lockup detector needs to be initialised after sdei_init. The consequence of this patch is that we cannot detect a hard lockup in initcalls.

Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Conflicts:
	init/main.c
Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Xiongfeng Wang

hulk inclusion
category: feature
bugzilla: 48046
CVE: NA

----------------------------------------

The NMI watchdog needs to enable the event for each core individually, but the existing public API 'sdei_event_enable' enables the event on all cores when the event type is private.

Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Xiongfeng Wang

hulk inclusion
category: feature
bugzilla: 48046
CVE: NA

----------------------------------------

This patch adds an interrupt binding API function which returns the bound event number.

Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Xiongfeng Wang

hulk inclusion
category: feature
bugzilla: 48046
CVE: NA

----------------------------------------

In the current code, the hardlockup detection code is guarded by CONFIG_HARDLOCKUP_DETECTOR_PERF. This patch makes this code public so that other architectures' hardlockup detectors can use it.

Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-