1. 08 February 2021 (1 commit)
    • ARM: 9015/2: Define the virtual space of KASan's shadow region · b85c1e0e
      Authored by Linus Walleij
      mainline inclusion
      from mainline-5.11-rc1
      commit c12366ba
      category: feature
      feature: ARM KASAN support
      bugzilla: 46872
      CVE: NA
      
      Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=c12366ba441da2f6f2b915410aca2b5b39c16514
      
      -------------------------------------------------
       Define KASAN_SHADOW_OFFSET, KASAN_SHADOW_START and KASAN_SHADOW_END for
       the Arm kernel address sanitizer. We are "stealing" lowmem (the 4GB
       addressable by a 32-bit architecture) out of the virtual address
       space to use as shadow memory for KASan as follows:
      
       +----+ 0xffffffff
       |    |
       |    | |-> Static kernel image (vmlinux) BSS and page table
       |    |/
       +----+ PAGE_OFFSET
       |    |
       |    | |->  Loadable kernel modules virtual address space area
       |    |/
       +----+ MODULES_VADDR = KASAN_SHADOW_END
       |    |
       |    | |-> The shadow area of kernel virtual address.
       |    |/
       +----+->  TASK_SIZE (start of kernel space) = KASAN_SHADOW_START the
       |    |   shadow address of MODULES_VADDR
       |    | |
       |    | |
       |    | |-> The user space area in lowmem. The kernel address
        |    | |   sanitizer does not use this space, nor does it map it.
       |    | |
       |    | |
       |    | |
       |    | |
       |    |/
       ------ 0
      
       0 .. TASK_SIZE is the memory that can be used by shared
       userspace/kernelspace. It is used for userspace processes and for
       passing parameters and memory buffers in system calls etc. We do not
       need to shadow this area.
      
       KASAN_SHADOW_START:
        This value is the shadow address of MODULES_VADDR and is the
        start of kernel virtual space. Since we have modules to load, we need
        to cover that area with shadow memory too, so we can find memory
        bugs in modules.
      
       KASAN_SHADOW_END:
        This value is the shadow address of 0x100000000: the mapping that
        would follow the end of kernel memory at 0xffffffff. It is the end of
        the kernel address sanitizer's shadow area, and also the start of the
        module area.
      
      KASAN_SHADOW_OFFSET:
       This value is used to map an address to the corresponding shadow
       address by the following formula:
      
         shadow_addr = (address >> 3) + KASAN_SHADOW_OFFSET;
      
        As you would expect, >> 3 is equal to dividing by 8, meaning each
        byte in the shadow memory covers 8 bytes of kernel memory, so, on
        average, one bit of shadow memory is used per byte of kernel memory.
      
       The KASAN_SHADOW_OFFSET is provided in a Kconfig option depending
       on the VMSPLIT layout of the system: the kernel and userspace can
       split up lowmem in different ways according to needs, so we calculate
       the shadow offset depending on this.
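
       The mapping is easy to sanity-check outside the kernel. Below is a
       minimal user-space C sketch, assuming the default 3G/1G split values
       worked through later in this message (constants copied from this text,
       not read from kernel headers):

          /* kasan-map.c: sketch of the ARM KASan address-to-shadow mapping */
          #include <stdio.h>

          #define KASAN_SHADOW_SCALE_SHIFT 3
          #define KASAN_SHADOW_OFFSET      0x9f000000UL  /* default VMSPLIT_3G */

          static unsigned long kasan_mem_to_shadow(unsigned long addr)
          {
                  return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
          }

          int main(void)
          {
                  /* MODULES_VADDR maps to KASAN_SHADOW_START ... */
                  printf("shadow(0xbf000000) = 0x%lx\n",
                         kasan_mem_to_shadow(0xbf000000UL)); /* 0xb6e00000 */
                  /* ... and the last kernel byte maps to the last shadow byte */
                  printf("shadow(0xffffffff) = 0x%lx\n",
                         kasan_mem_to_shadow(0xffffffffUL)); /* 0xbeffffff */
                  return 0;
          }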
      
       When KASan is enabled, TASK_SIZE is no longer an 8-bit rotated
       constant, so the assembly code that accesses TASK_SIZE in the *.S
       files needs to be modified.
      
      The kernel and modules may use different amounts of memory,
      according to the VMSPLIT configuration, which in turn
      determines the PAGE_OFFSET.
      
      We use the following KASAN_SHADOW_OFFSETs depending on how the
      virtual memory is split up:
      
      - 0x1f000000 if we have 1G userspace / 3G kernelspace split:
        - The kernel address space is 3G (0xc0000000)
        - PAGE_OFFSET is then set to 0x40000000 so the kernel static
          image (vmlinux) uses addresses 0x40000000 .. 0xffffffff
        - On top of that we have the MODULES_VADDR which under
          the worst case (using ARM instructions) is
          PAGE_OFFSET - 16M (0x01000000) = 0x3f000000
          so the modules use addresses 0x3f000000 .. 0x3fffffff
        - So the addresses 0x3f000000 .. 0xffffffff need to be
          covered with shadow memory. That is 0xc1000000 bytes
          of memory.
        - 1/8 of that is needed for its shadow memory, so
          0x18200000 bytes of shadow memory is needed. We
          "steal" that from the remaining lowmem.
        - The KASAN_SHADOW_START becomes 0x26e00000, to
          KASAN_SHADOW_END at 0x3effffff.
        - Now we can calculate the KASAN_SHADOW_OFFSET for any
          kernel address as 0x3f000000 needs to map to the first
          byte of shadow memory and 0xffffffff needs to map to
          the last byte of shadow memory. Since:
          SHADOW_ADDR = (address >> 3) + KASAN_SHADOW_OFFSET
          0x26e00000 = (0x3f000000 >> 3) + KASAN_SHADOW_OFFSET
          KASAN_SHADOW_OFFSET = 0x26e00000 - (0x3f000000 >> 3)
          KASAN_SHADOW_OFFSET = 0x26e00000 - 0x07e00000
          KASAN_SHADOW_OFFSET = 0x1f000000
      
      - 0x5f000000 if we have 2G userspace / 2G kernelspace split:
        - The kernel space is 2G (0x80000000)
        - PAGE_OFFSET is set to 0x80000000 so the kernel static
          image uses 0x80000000 .. 0xffffffff.
        - On top of that we have the MODULES_VADDR which under
          the worst case (using ARM instructions) is
          PAGE_OFFSET - 16M (0x01000000) = 0x7f000000
          so the modules use addresses 0x7f000000 .. 0x7fffffff
        - So the addresses 0x7f000000 .. 0xffffffff need to be
          covered with shadow memory. That is 0x81000000 bytes
          of memory.
        - 1/8 of that is needed for its shadow memory, so
          0x10200000 bytes of shadow memory is needed. We
          "steal" that from the remaining lowmem.
        - The KASAN_SHADOW_START becomes 0x6ee00000, to
          KASAN_SHADOW_END at 0x7effffff.
        - Now we can calculate the KASAN_SHADOW_OFFSET for any
          kernel address as 0x7f000000 needs to map to the first
          byte of shadow memory and 0xffffffff needs to map to
          the last byte of shadow memory. Since:
          SHADOW_ADDR = (address >> 3) + KASAN_SHADOW_OFFSET
          0x6ee00000 = (0x7f000000 >> 3) + KASAN_SHADOW_OFFSET
          KASAN_SHADOW_OFFSET = 0x6ee00000 - (0x7f000000 >> 3)
          KASAN_SHADOW_OFFSET = 0x6ee00000 - 0x0fe00000
          KASAN_SHADOW_OFFSET = 0x5f000000
      
      - 0x9f000000 if we have 3G userspace / 1G kernelspace split,
        and this is the default split for ARM:
        - The kernel address space is 1GB (0x40000000)
        - PAGE_OFFSET is set to 0xc0000000 so the kernel static
          image uses 0xc0000000 .. 0xffffffff.
        - On top of that we have the MODULES_VADDR which under
          the worst case (using ARM instructions) is
          PAGE_OFFSET - 16M (0x01000000) = 0xbf000000
          so the modules use addresses 0xbf000000 .. 0xbfffffff
        - So the addresses 0xbf000000 .. 0xffffffff need to be
          covered with shadow memory. That is 0x41000000 bytes
          of memory.
        - 1/8 of that is needed for its shadow memory, so
          0x08200000 bytes of shadow memory is needed. We
          "steal" that from the remaining lowmem.
        - The KASAN_SHADOW_START becomes 0xb6e00000, to
           KASAN_SHADOW_END at 0xbeffffff.
        - Now we can calculate the KASAN_SHADOW_OFFSET for any
          kernel address as 0xbf000000 needs to map to the first
          byte of shadow memory and 0xffffffff needs to map to
          the last byte of shadow memory. Since:
          SHADOW_ADDR = (address >> 3) + KASAN_SHADOW_OFFSET
          0xb6e00000 = (0xbf000000 >> 3) + KASAN_SHADOW_OFFSET
          KASAN_SHADOW_OFFSET = 0xb6e00000 - (0xbf000000 >> 3)
          KASAN_SHADOW_OFFSET = 0xb6e00000 - 0x17e00000
          KASAN_SHADOW_OFFSET = 0x9f000000
      
      - 0x8f000000 if we have 3G userspace / 1G kernelspace with
        full 1 GB low memory (VMSPLIT_3G_OPT):
        - The kernel address space is 1GB (0x40000000)
        - PAGE_OFFSET is set to 0xb0000000 so the kernel static
          image uses 0xb0000000 .. 0xffffffff.
        - On top of that we have the MODULES_VADDR which under
          the worst case (using ARM instructions) is
          PAGE_OFFSET - 16M (0x01000000) = 0xaf000000
           so the modules use addresses 0xaf000000 .. 0xafffffff
        - So the addresses 0xaf000000 .. 0xffffffff need to be
          covered with shadow memory. That is 0x51000000 bytes
          of memory.
        - 1/8 of that is needed for its shadow memory, so
          0x0a200000 bytes of shadow memory is needed. We
          "steal" that from the remaining lowmem.
        - The KASAN_SHADOW_START becomes 0xa4e00000, to
          KASAN_SHADOW_END at 0xaeffffff.
        - Now we can calculate the KASAN_SHADOW_OFFSET for any
          kernel address as 0xaf000000 needs to map to the first
          byte of shadow memory and 0xffffffff needs to map to
          the last byte of shadow memory. Since:
          SHADOW_ADDR = (address >> 3) + KASAN_SHADOW_OFFSET
          0xa4e00000 = (0xaf000000 >> 3) + KASAN_SHADOW_OFFSET
          KASAN_SHADOW_OFFSET = 0xa4e00000 - (0xaf000000 >> 3)
          KASAN_SHADOW_OFFSET = 0xa4e00000 - 0x15e00000
          KASAN_SHADOW_OFFSET = 0x8f000000
      
      - The default value of 0xffffffff for KASAN_SHADOW_OFFSET
        is an error value. We should always match one of the
        above shadow offsets.
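
       All four derivations above follow the same pattern, so they can be
       cross-checked mechanically. A small stand-alone C sketch (the table
       below is transcribed by hand from the cases above, not read from any
       kernel source):

          /* Reproduce the four KASAN_SHADOW_OFFSET values derived above. */
          #include <stdio.h>

          int main(void)
          {
                  static const struct {
                          const char *split;
                          unsigned long modules_vaddr; /* first shadowed byte */
                          unsigned long shadow_start;  /* its shadow address  */
                  } cases[] = {
                          { "VMSPLIT_1G",     0x3f000000UL, 0x26e00000UL },
                          { "VMSPLIT_2G",     0x7f000000UL, 0x6ee00000UL },
                          { "VMSPLIT_3G",     0xbf000000UL, 0xb6e00000UL },
                          { "VMSPLIT_3G_OPT", 0xaf000000UL, 0xa4e00000UL },
                  };

                  for (int i = 0; i < 4; i++) {
                          /* KASAN_SHADOW_OFFSET = SHADOW_ADDR - (address >> 3) */
                          unsigned long off = cases[i].shadow_start -
                                              (cases[i].modules_vaddr >> 3);
                          printf("%-14s 0x%08lx\n", cases[i].split, off);
                  }
                  return 0;
          }

       This prints 0x1f000000, 0x5f000000, 0x9f000000 and 0x8f000000,
       matching the Kconfig values above.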
      
       When we do this, TASK_SIZE will sometimes take odd values that do
       not fit into the immediate field of a mov assembly instruction.
       To account for this, we need to rewrite some assembly using
       TASK_SIZE like this:
      
      -       mov     r1, #TASK_SIZE
      +       ldr     r1, =TASK_SIZE
      
      or
      
      -       cmp     r4, #TASK_SIZE
      +       ldr     r0, =TASK_SIZE
      +       cmp     r4, r0
      
       This is done to avoid an immediate #TASK_SIZE, which would need to
       fit into a limited number of bits.
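
       "Fits" here refers to the ARM data-processing immediate encoding: an
       8-bit value rotated right by an even amount. A stand-alone sketch of
       that check (illustrative only; the assembler performs the
       authoritative test, and the example constants assume the default 3G
       split):

          #include <stdbool.h>
          #include <stdint.h>
          #include <stdio.h>

          /* True if v is encodable as an ARM #immediate: an 8-bit value
           * rotated right by an even amount within the 32-bit word. */
          static bool arm_valid_imm(uint32_t v)
          {
                  for (int rot = 0; rot < 32; rot += 2) {
                          /* rotate left to undo a rotate-right encoding */
                          uint32_t x = rot ? (v << rot) | (v >> (32 - rot)) : v;
                          if (x <= 0xff)
                                  return true;
                  }
                  return false;
          }

          int main(void)
          {
                  printf("0xbf000000 -> %d\n", arm_valid_imm(0xbf000000)); /* 1: fits   */
                  printf("0xb6e00000 -> %d\n", arm_valid_imm(0xb6e00000)); /* 0: no fit */
                  return 0;
          }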
      
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: kasan-dev@googlegroups.com
      Cc: Mike Rapoport <rppt@linux.ibm.com>
       Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
      Tested-by: Ard Biesheuvel <ardb@kernel.org> # QEMU/KVM/mach-virt/LPAE/8G
      Tested-by: Florian Fainelli <f.fainelli@gmail.com> # Brahma SoCs
      Tested-by: Ahmad Fatoum <a.fatoum@pengutronix.de> # i.MX6Q
       Reported-by: Ard Biesheuvel <ardb@kernel.org>
       Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
       Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
       Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
       Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
      (cherry picked from commit c12366ba)
       Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
       Reviewed-by: Jing Xiangfeng <jingxiangfeng@huawei.com>
       Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
      b85c1e0e
  2. 27 January 2021 (1 commit)
    • ARM: p2v: reduce p2v alignment requirement to 2 MiB · b9012d8b
      Authored by Ard Biesheuvel
      mainline inclusion
      from mainline-5.11-rc1
      commit 9443076e
      category: bugfix
      bugzilla: 46882
      CVE: NA
      
      -------------------------------------------------
      The ARM kernel's linear map starts at PAGE_OFFSET, which maps to a
      physical address (PHYS_OFFSET) that is platform specific, and is
      discovered at boot. Since we don't want to slow down translations
      between physical and virtual addresses by keeping the offset in a
      variable in memory, we implement this by patching the code performing
      the translation, and putting the offset between PAGE_OFFSET and the
      start of physical RAM directly into the instruction opcodes.
      
       As we only patch up to 8 bits of offset, yielding 4 GiB >> 8 == 16 MiB
       of granularity, we have to round PHYS_OFFSET up to the next 16 MiB
       boundary if the start of physical RAM is not 16 MiB aligned. This wastes some
      physical RAM, since the memory that was skipped will now live below
      PAGE_OFFSET, making it inaccessible to the kernel.
      
      We can improve this by changing the patchable sequences and the patching
      logic to carry more bits of offset: 11 bits gives us 4 GiB >> 11 == 2 MiB
      of granularity, and so we will never waste more than that amount by
      rounding up the physical start of DRAM to the next multiple of 2 MiB.
      (Note that 2 MiB granularity guarantees that the linear mapping can be
      created efficiently, whereas less than 2 MiB may result in the linear
      mapping needing another level of page tables)
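
       As a back-of-the-envelope illustration of the saving (the DRAM start
       address below is hypothetical, not taken from a real platform):

          #include <stdio.h>

          #define MiB (1024UL * 1024UL)

          /* Round addr up to the next multiple of gran (gran: power of two) */
          static unsigned long round_up_to(unsigned long addr, unsigned long gran)
          {
                  return (addr + gran - 1) & ~(gran - 1);
          }

          int main(void)
          {
                  unsigned long dram_start = 0x80a00000UL; /* 2 GiB + 10 MiB */

                  printf("16 MiB granularity wastes %lu MiB\n",
                         (round_up_to(dram_start, 16 * MiB) - dram_start) / MiB);
                  printf(" 2 MiB granularity wastes %lu MiB\n",
                         (round_up_to(dram_start, 2 * MiB) - dram_start) / MiB);
                  return 0;
          }

       With this example start address the old scheme skips 6 MiB of RAM
       while the new one skips none, since 10 MiB is already a multiple of
       2 MiB.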
      
      This helps Zhen Lei's scenario, where the start of DRAM is known to be
      occupied. It also helps EFI boot, which relies on the firmware's page
      allocator to allocate space for the decompressed kernel as low as
      possible. And if the KASLR patches ever land for 32-bit, it will give
      us 3 more bits of randomization of the placement of the kernel inside
      the linear region.
      
      For the ARM code path, it simply comes down to using two add/sub
      instructions instead of one for the carryless version, and patching
      each of them with the correct immediate depending on the rotation
      field. For the LPAE calculation, which has to deal with a carry, it
      patches the MOVW instruction with up to 12 bits of offset (but we only
       need 11 bits anyway).
      
      For the Thumb2 code path, patching more than 11 bits of displacement
      would be somewhat cumbersome, but the 11 bits we need fit nicely into
      the second word of the u16[2] opcode, so we simply update the immediate
      assignment and the left shift to create an addend of the right magnitude.
       Suggested-by: Zhen Lei <thunder.leizhen@huawei.com>
       Acked-by: Nicolas Pitre <nico@fluxnic.net>
       Acked-by: Linus Walleij <linus.walleij@linaro.org>
       Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
      (cherry picked from commit 9443076e)
       Signed-off-by: Zhao Hongjiang <zhaohongjiang@huawei.com>
       Acked-by: Xie XiuQi <xiexiuqi@huawei.com>
      b9012d8b
  3. 07 January 2021 (1 commit)
  4. 01 December 2020 (1 commit)
  5. 14 October 2020 (1 commit)
  6. 09 October 2020 (1 commit)
  7. 14 September 2020 (1 commit)
    • ARM: Allow IPIs to be handled as normal interrupts · 56afcd3d
      Authored by Marc Zyngier
      In order to deal with IPIs as normal interrupts, let's add
      a new way to register them with the architecture code.
      
      set_smp_ipi_range() takes a range of interrupts, and allows
       the arch code to request them as if they were normal interrupts.
      A standard handler is then called by the core IRQ code to deal
      with the IPI.
      
      This means that we don't need to call irq_enter/irq_exit, and
      that we don't need to deal with set_irq_regs either. So let's
      move the dispatcher into its own function, and leave handle_IPI()
      as a compatibility function.
      
      On the sending side, let's make use of ipi_send_mask, which
      already exists for this purpose.
      
       One major difference is that, in some cases (such as when
       performing IRQ time accounting on the scheduler IPI), we end
       up with nested irq_enter()/irq_exit() pairs.
       Other than the (relatively small) overhead, there should be
       no consequences to it (these pairs are designed to nest
       correctly, and the accounting shouldn't be off).
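
       A hedged sketch of the registration flow from an irqchip driver's
       point of view, modeled on the GIC changes in this series; the function
       name is made up, and the fwspec argument to the allocation call is
       elided for brevity:

          static void __init example_ipi_setup(struct irq_domain *domain)
          {
                  int base_sgi;

                  /* Allocate a contiguous block of Linux IRQs for the 8 SGIs */
                  base_sgi = __irq_domain_alloc_irqs(domain, -1, 8, NUMA_NO_NODE,
                                                     NULL /* fwspec elided */,
                                                     false, NULL);
                  if (WARN_ON(base_sgi <= 0))
                          return;

                  /* Hand the range to the arch code: IPIs become normal IRQs */
                  set_smp_ipi_range(base_sgi, 8);
          }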
       Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
       Signed-off-by: Marc Zyngier <maz@kernel.org>
      56afcd3d
  8. 09 September 2020 (1 commit)
  9. 28 August 2020 (1 commit)
  10. 21 August 2020 (1 commit)
  11. 20 August 2020 (2 commits)
  12. 29 July 2020 (1 commit)
  13. 28 July 2020 (2 commits)
  14. 22 July 2020 (1 commit)
  15. 21 July 2020 (2 commits)
    • ARM: 8993/1: remove it8152 PCI controller driver · 6da5238f
      Authored by Mike Rapoport
       The it8152 PCI host controller was only used by cm-x2xx platforms.
       Since these platforms were removed, there is no point in keeping the
       it8152 driver.
       Acked-by: Arnd Bergmann <arnd@arndb.de>
       Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
       Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
      6da5238f
    • ARM: 8991/1: use VFP assembler mnemonics if available · 2cbd1cc3
      Authored by Stefan Agner
      The integrated assembler of Clang 10 and earlier does not allow
      accessing the VFP registers through the coprocessor load/store
      instructions:
      arch/arm/vfp/vfpmodule.c:342:2: error: invalid operand for instruction
              fmxr(FPEXC, fpexc & ~(FPEXC_EX|FPEXC_DEX|FPEXC_FP2V|FPEXC_VV|FPEXC_TRAP_MASK));
              ^
      arch/arm/vfp/vfpinstr.h:79:6: note: expanded from macro 'fmxr'
              asm("mcr p10, 7, %0, " vfpreg(_vfp_) ", cr0, 0 @ fmxr   " #_vfp_ ", %0"
                  ^
      <inline asm>:1:6: note: instantiated into assembly here
              mcr p10, 7, r0, cr8, cr0, 0 @ fmxr      FPEXC, r0
                  ^
      
       This has been addressed with Clang 11 [0]. However, to support earlier
       versions of Clang and for better readability, the use of VFP assembler
       mnemonics is still preferred.
      
       Ideally we would replace this code with the unified assembler language
       mnemonics vmrs/vmsr at the call sites, along with .fpu assembler
       directives. The GNU assembler has supported the .fpu directive at
       least since 2.17 (when its documentation was added). Since Linux
       requires binutils 2.21, it is safe to use the .fpu directive. However,
       binutils up to 2.24 does not allow FPINST or FPINST2 as an argument to
       the vmrs/vmsr instructions (see binutils commit 16d02dc907c5):
      arch/arm/vfp/vfphw.S: Assembler messages:
      arch/arm/vfp/vfphw.S:162: Error: operand 0 must be FPSID or FPSCR pr FPEXC -- `vmsr FPINST,r6'
      arch/arm/vfp/vfphw.S:165: Error: operand 0 must be FPSID or FPSCR pr FPEXC -- `vmsr FPINST2,r8'
      arch/arm/vfp/vfphw.S:235: Error: operand 1 must be a VFP extension System Register -- `vmrs r3,FPINST'
      arch/arm/vfp/vfphw.S:238: Error: operand 1 must be a VFP extension System Register -- `vmrs r12,FPINST2'
      
       Use as-instr in Kconfig to check whether FPINST/FPINST2 can be used.
       If they can, make use of .fpu directives and UAL VFP mnemonics for
       register access.
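
       A hedged sketch of such a gate (the exact symbol name and file in the
       real patch may differ):

          # Kconfig: probe whether the assembler accepts FPINST with vmrs
          config AS_VFP_VMRS_FPINST
                  def_bool $(as-instr,.fpu vfpv2\nvmrs r0\, FPINST)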
      
       This allows building vfpmodule.c with Clang and its integrated assembler.
      
      [0] https://reviews.llvm.org/D59733
      
       Link: https://github.com/ClangBuiltLinux/linux/issues/905
       Signed-off-by: Stefan Agner <stefan@agner.ch>
       Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
      2cbd1cc3
  16. 19 July 2020 (1 commit)
  17. 05 July 2020 (1 commit)
  18. 14 June 2020 (1 commit)
    • treewide: replace '---help---' in Kconfig files with 'help' · a7f7f624
      Authored by Masahiro Yamada
      Since commit 84af7a61 ("checkpatch: kconfig: prefer 'help' over
      '---help---'"), the number of '---help---' has been gradually
      decreasing, but there are still more than 2400 instances.
      
      This commit finishes the conversion. While I touched the lines,
      I also fixed the indentation.
      
      There are a variety of indentation styles found.
      
        a) 4 spaces + '---help---'
        b) 7 spaces + '---help---'
        c) 8 spaces + '---help---'
        d) 1 space + 1 tab + '---help---'
        e) 1 tab + '---help---'    (correct indentation)
        f) 1 tab + 1 space + '---help---'
        g) 1 tab + 2 spaces + '---help---'
      
       In order to convert all of them to 1 tab + 'help', I ran the
       following command:
      
        $ find . -name 'Kconfig*' | xargs sed -i 's/^[[:space:]]*---help---/\thelp/'
       Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
      a7f7f624
  19. 13 June 2020 (1 commit)
  20. 26 May 2020 (2 commits)
  21. 19 May 2020 (1 commit)
  22. 16 May 2020 (1 commit)
  23. 15 May 2020 (1 commit)
  24. 06 May 2020 (2 commits)
    • clk: Allow the common clk framework to be selectable · bbd7ffdb
      Authored by Stephen Boyd
       Enable build testing and configuration control of the common clk
       framework so that more code coverage and testing can be done on it
       across various architectures. This also nicely removes the requirement
       that architectures select the framework even when they don't use it in
       architecture code.
      
      There's one snag with doing this, and that's making sure that randconfig
      builds don't select this option when some architecture or platform
      implements 'struct clk' outside of the common clk framework. Introduce a
      new config option 'HAVE_LEGACY_CLK' to indicate those platforms that
      haven't migrated to the common clk framework and therefore shouldn't be
      allowed to select this new config option. Also add a note that we hope
      one day to remove this config entirely.
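
       A hedged sketch of the resulting Kconfig shape (prompt and comment
       wording assumed, not quoted from the patch):

          config HAVE_LEGACY_CLK
                  bool # Selected by platforms with a pre-common-clk 'struct clk'

          menuconfig COMMON_CLK
                  bool "Common Clock Framework"
                  depends on !HAVE_LEGACY_CLK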
      
      Based on a patch by Mark Brown <broonie@kernel.org>.
      
      Cc: Mark Brown <broonie@kernel.org>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Aurelien Jacquiot <jacquiot.aurelien@gmail.com>
      Cc: Jiaxun Yang <jiaxun.yang@flygoat.com>
      Cc: Guan Xuetao <gxt@pku.edu.cn>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Cc: <linux-mips@vger.kernel.org>
      Cc: <linux-c6x-dev@linux-c6x.org>
      Cc: <linux-m68k@lists.linux-m68k.org>
      Cc: <linux-arm-kernel@lists.infradead.org>
      Cc: <linux-sh@vger.kernel.org>
       Link: https://lore.kernel.org/r/1470915049-15249-1-git-send-email-broonie@kernel.org
       Signed-off-by: Stephen Boyd <sboyd@kernel.org>
       Link: https://lkml.kernel.org/r/20200409064416.83340-8-sboyd@kernel.org
       Reviewed-by: Mark Brown <broonie@kernel.org>
       Reviewed-by: Arnd Bergmann <arnd@arndb.de>
      bbd7ffdb
    • ARM: Remove redundant CLKDEV_LOOKUP selects · e8bd633b
      Authored by Stephen Boyd
       These platforms select COMMON_CLK indirectly through use of the
       ARCH_MULTIPLATFORM config option, which they depend on implicitly via
       some V7/V6/V5 multi-platform config option. The COMMON_CLK config
       option already selects CLKDEV_LOOKUP, so it is redundant to have this
       selected again.
      
      Cc: Tony Prisk <linux@prisktech.co.nz>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: <linux-arm-kernel@lists.infradead.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
       Signed-off-by: Stephen Boyd <sboyd@kernel.org>
       Reviewed-by: Arnd Bergmann <arnd@arndb.de>
       Link: https://lkml.kernel.org/r/20200409064416.83340-3-sboyd@kernel.org
      e8bd633b
  25. 23 April 2020 (1 commit)
  26. 16 April 2020 (1 commit)
  27. 13 April 2020 (1 commit)
    • ARM: Prepare Realtek RTD1195 · 86aeee4d
      Authored by Andreas Färber
      Introduce ARCH_REALTEK Kconfig option also for 32-bit Arm.
      
       Override the text offset to cope with the boot ROM occupying the first
       0xa800 bytes and further reservations up to 0xf4000 (compare the Device Tree).
      
       Add a custom machine_desc to enforce a memory carveout for I/O
       registers, as sketched below.
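
       A hedged sketch of such a machine_desc; the carveout base and size
       below are placeholders, not the SoC's actual register window:

          #include <linux/init.h>
          #include <linux/memblock.h>
          #include <asm/mach/arch.h>

          static void __init rtd1195_reserve(void)
          {
                  /* Carve the memory-mapped I/O registers out of System RAM
                   * (placeholder range) */
                  memblock_remove(0x18000000, 0x00100000);
          }

          DT_MACHINE_START(RTD1195_EXAMPLE, "Realtek RTD1195 (sketch)")
                  .reserve = rtd1195_reserve,
          MACHINE_END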
       Signed-off-by: Andreas Färber <afaerber@suse.de>
      86aeee4d
  28. 24 March 2020 (1 commit)
  29. 18 February 2020 (1 commit)
  30. 14 February 2020 (2 commits)
    • arm: Remove TIF_NOHZ · 1acb2249
      Authored by Frederic Weisbecker
       Arm entry code calls context tracking from the fast path. TIF_NOHZ is
       unused and can be safely removed.
       Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Russell King <linux@armlinux.org.uk>
      1acb2249
    • context-tracking: Introduce CONFIG_HAVE_TIF_NOHZ · 490f561b
      Authored by Frederic Weisbecker
       A few archs (x86, arm, arm64) no longer rely on TIF_NOHZ to call
       into context tracking on user entry/exit; instead they use static keys
       (or nothing) to optimize those calls. Ideally every arch should
       migrate to that behaviour in the long run.
      
       Add a config option to let those archs remove their TIF_NOHZ
       definitions.
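
       A hedged sketch of what the opt-in symbol could look like in
       arch/Kconfig (help text paraphrased from this description):

          config HAVE_TIF_NOHZ
                  bool
                  help
                    Arch relies on TIF_NOHZ and the syscall slow path to
                    implement context tracking calls to user_enter()/user_exit().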
       Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paul Burton <paulburton@kernel.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: David S. Miller <davem@davemloft.net>
      490f561b
  31. 04 February 2020 (1 commit)
  32. 26 January 2020 (2 commits)
    • ARM: 8952/1: Disable kmemleak on XIP kernels · bc420c6c
      Authored by Vincenzo Frascino
       Kmemleak relies on specific symbols to register the read-only data
       during init (e.g. __start_ro_after_init).
       Trying to build an XIP kernel on arm results in the linking errors
       reported below, because when this option is selected read-only data
       after init is not allowed, since .data is already read-only (.rodata).
      
        arm-linux-gnueabihf-ld: mm/kmemleak.o: in function `kmemleak_init':
        kmemleak.c:(.init.text+0x148): undefined reference to `__end_ro_after_init'
        arm-linux-gnueabihf-ld: kmemleak.c:(.init.text+0x14c):
           undefined reference to `__end_ro_after_init'
        arm-linux-gnueabihf-ld: kmemleak.c:(.init.text+0x150):
           undefined reference to `__start_ro_after_init'
        arm-linux-gnueabihf-ld: kmemleak.c:(.init.text+0x156):
           undefined reference to `__start_ro_after_init'
        arm-linux-gnueabihf-ld: kmemleak.c:(.init.text+0x162):
           undefined reference to `__start_ro_after_init'
        arm-linux-gnueabihf-ld: kmemleak.c:(.init.text+0x16a):
           undefined reference to `__start_ro_after_init'
        linux/Makefile:1078: recipe for target 'vmlinux' failed
      
       Fix the issue by enabling kmemleak only on non-XIP kernels.
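
       A hedged sketch of the one-line Kconfig change, assuming the select
       sits under the top-level ARM entry as in mainline:

       -       select HAVE_DEBUG_KMEMLEAK
       +       select HAVE_DEBUG_KMEMLEAK if !XIP_KERNEL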
       Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
       Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
      bc420c6c
    • ARM: 8951/1: Fix Kexec compilation issue. · 76950f71
      Authored by Vincenzo Frascino
       To perform the reserve_crashkernel() operation, kexec uses
       SECTION_SIZE to find a memblock in a range.
       SECTION_SIZE is not defined on nommu systems, so trying to compile
       kexec under these conditions results in a build error:
      
        linux/arch/arm/kernel/setup.c: In function ‘reserve_crashkernel’:
        linux/arch/arm/kernel/setup.c:1016:25: error: ‘SECTION_SIZE’ undeclared
           (first use in this function); did you mean ‘SECTIONS_WIDTH’?
                   crash_size, SECTION_SIZE);
                               ^~~~~~~~~~~~
                               SECTIONS_WIDTH
        linux/arch/arm/kernel/setup.c:1016:25: note: each undeclared identifier
           is reported only once for each function it appears in
        linux/scripts/Makefile.build:265: recipe for target 'arch/arm/kernel/setup.o'
           failed
      
      Make KEXEC depend on MMU to fix the compilation issue.
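
       A hedged sketch of the dependency change (surrounding Kconfig context
       assumed):

        config KEXEC
                bool "Kexec system call (EXPERIMENTAL)"
       +        depends on MMU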
       Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
       Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
      76950f71
  33. 07 January 2020 (1 commit)