1. 25 February 2016: 6 commits
    • arm64: Add helper for extracting ASIDBits · 038dc9c6
      Committed by Suzuki K Poulose
      Add a helper to extract ASIDBits on the current CPU.
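
      A rough sketch of such a helper (names modelled on the arm64
      cpufeature code; treat the details as assumptions): read the
      ASIDBits field of ID_AA64MMFR0_EL1 and map the encoding to a count.

        /* Sketch: decode ID_AA64MMFR0_EL1.ASIDBits into a bit count. */
        static u32 get_cpu_asid_bits(void)
        {
                u32 fld = cpuid_feature_extract_field(
                                read_cpuid(ID_AA64MMFR0_EL1),
                                ID_AA64MMFR0_ASID_SHIFT);

                /* 0b0000 => 8 ASID bits, 0b0010 => 16 ASID bits */
                return (fld == 2) ? 16 : 8;
        }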
      
      Cc: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: Enable CPU capability verification unconditionally · fd9c2790
      Committed by Suzuki K Poulose
      We verify the capabilities of secondary CPUs only when hotplug is
      enabled; CPUs activated at boot time skip the verification, because
      it first checks whether the system-wide capabilities have been
      initialised.

      This patch removes the capability check's dependency on
      CONFIG_HOTPLUG_CPU, to make sure that all secondary CPUs go through
      the check. The boot-time activated CPUs will still skip the
      system-wide capability check. The plan is to hook in a check for
      CPU features used by the kernel at early boot, based on the boot
      CPU values.
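
      A schematic of the resulting flow on an incoming secondary CPU
      (function names are assumptions modelled on the arm64 cpufeature
      code, not quoted from the patch):

        /* Sketch only: every secondary CPU now reaches this check. */
        void check_local_cpu_capabilities(void)
        {
                /*
                 * Boot-time CPUs come up before the system-wide state
                 * is finalised and therefore skip only the verification
                 * step; late (e.g. hotplugged) CPUs are verified against
                 * the established capabilities.
                 */
                if (!sys_caps_initialised)
                        return;

                verify_local_cpu_capabilities();
        }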
      
      Cc: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: Handle early CPU boot failures · bb905274
      Committed by Suzuki K Poulose
      A secondary CPU could fail to come online due to insufficient
      capabilities and could simply die or loop in the kernel. For
      example, a CPU with no support for the selected kernel PAGE_SIZE
      loops in the kernel with the MMU turned off, and a hotplugged CPU
      which doesn't have one of the advertised system capabilities will
      die during activation.

      There is no way to synchronise the status of the failing CPU back
      to the master. This patch solves the issue by adding a field to the
      secondary_data which can be updated by the failing CPU. If the
      secondary CPU fails even before turning the MMU on, it updates the
      status in a special variable reserved in the .head.text section, to
      make sure that the update can be cache-invalidated safely without
      possible sharing of the cache writeback granule.
      
      Here are the possible states (a sketch of the matching definitions
      follows the list):
      
       -1. CPU_MMU_OFF - Initial value set by the master CPU; it
      indicates that the CPU could not turn the MMU on, hence the status
      could not be reliably updated in the secondary_data. Instead, the
      CPU has updated the status @ __early_cpu_boot_status.

        0. CPU_BOOT_SUCCESS - CPU has booted successfully.

        1. CPU_KILL_ME - CPU has invoked cpu_ops->cpu_die, indicating to
      the master CPU that it should synchronise by issuing a
      cpu_ops->cpu_kill.

        2. CPU_STUCK_IN_KERNEL - CPU couldn't invoke die(); instead it is
      looping in the kernel. This information could be used by, say,
      kexec to check whether it is really safe to do a kexec reboot.

        3. CPU_PANIC_KERNEL - CPU detected a serious issue which requires
      the kernel to crash immediately. The secondary CPU cannot call
      panic() until it has initialised the GIC. This flag can be used to
      instruct the master to do so.
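
      A sketch of the matching definitions (the values mirror the list
      above; the header placement is an assumption):

        /* Sketch: CPU boot status values, as enumerated above. */
        #define CPU_MMU_OFF             (-1)
        #define CPU_BOOT_SUCCESS        (0)
        #define CPU_KILL_ME             (1)
        #define CPU_STUCK_IN_KERNEL     (2)
        #define CPU_PANIC_KERNEL        (3)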
      
      Cc: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      [catalin.marinas@arm.com: conflict resolution]
      [catalin.marinas@arm.com: converted "status" from int to long]
      [catalin.marinas@arm.com: updated update_early_cpu_boot_status to use str_l]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: Move cpu_die_early to smp.c · fce6361f
      Committed by Suzuki K Poulose
      This patch moves cpu_die_early to smp.c, where it fits better.
      No functional changes, except for adding the necessary checks
      for CONFIG_HOTPLUG_CPU.
      
      Cc: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: Introduce cpu_die_early · ee02a159
      Committed by Suzuki K Poulose
      Or in other words, make fail_incapable_cpu() reusable.

      We use fail_incapable_cpu() to kill, early during bringup, a
      secondary CPU which doesn't have the system-advertised
      capabilities. This patch makes the routine more generic, so that it
      can kill any secondary booting CPU, getting rid of the dependency
      on the capability struct. This can be used by checks which are not
      necessarily attached to a capability struct (e.g., cpu ASIDBits).

      In the process, the function is renamed to cpu_die_early() to
      better match its functionality. It will be moved to
      arch/arm64/kernel/smp.c later.
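
      A simplified sketch of the resulting routine (error paths
      abbreviated; see also the parking helper in the entry below):

        /* Sketch: generic "this CPU cannot boot" path. */
        void cpu_die_early(void)
        {
                int cpu = smp_processor_id();

                pr_crit("CPU%d: will not boot\n", cpu);

                /* Mark this CPU absent */
                set_cpu_present(cpu, 0);

                /* Let the master kill us if it can, else park forever. */
                if (cpu_ops[cpu] && cpu_ops[cpu]->cpu_die)
                        cpu_ops[cpu]->cpu_die(cpu);

                cpu_park_loop();
        }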
      
      Cc: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: Add a helper for parking CPUs in a loop · c4bc34d2
      Committed by Suzuki K Poulose
      Adds a routine which can be used to park CPUs (spinning in the
      kernel) when they can't be killed.
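
      A minimal sketch of such a routine (this matches the description
      above; details assumed):

        /* Sketch: park an unkillable CPU in a low-power spin. */
        void cpu_park_loop(void)
        {
                for (;;) {
                        wfe();
                        wfi();
                }
        }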
      
      Cc: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  2. 24 February 2016: 12 commits
    • arm64: efi: invoke EFI_RNG_PROTOCOL to supply KASLR randomness · 2b5fe07a
      Committed by Ard Biesheuvel
      Since arm64 does not use a decompressor that could supply an
      execution environment in which it is feasible to provide a source
      of randomness, the arm64 KASLR kernel depends on the bootloader to
      supply some random bits in the /chosen/kaslr-seed DT property upon
      kernel entry.

      On UEFI systems, we can use the EFI_RNG_PROTOCOL, if supplied, to
      obtain some random bits. At the same time, use it to randomize the
      offset of the kernel Image in physical memory.
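
      A hedged sketch of how the stub can query the protocol (the GUID
      and call shape follow the UEFI spec; the function name and stub
      plumbing here are assumptions):

        /* Sketch: fill 'out' with 'size' random bytes, if the firmware
         * implements EFI_RNG_PROTOCOL. */
        static efi_status_t efi_get_random_bytes(unsigned long size, u8 *out)
        {
                efi_guid_t rng_proto = EFI_RNG_PROTOCOL_GUID;
                struct efi_rng_protocol *rng;
                efi_status_t status;

                status = efi_call_early(locate_protocol, &rng_proto, NULL,
                                        (void **)&rng);
                if (status != EFI_SUCCESS)
                        return status;

                return rng->get_rng(rng, NULL, size, out);
        }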
      Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: kaslr: randomize the linear region · c031a421
      Committed by Ard Biesheuvel
      When KASLR is enabled (CONFIG_RANDOMIZE_BASE=y), and entropy has been
      provided by the bootloader, randomize the placement of RAM inside the
      linear region if sufficient space is available. For instance, on a 4KB
      granule/3 levels kernel, the linear region is 256 GB in size, and we can
      choose any 1 GB aligned offset that is far enough from the top of the
      address space to fit the distance between the start of the lowest memblock
      and the top of the highest memblock.
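
      Schematically, the offset selection can be pictured as follows (a
      sketch based on the description above; memstart_offset_seed,
      ARM64_MEMSTART_ALIGN, and the memblock bounds are illustrative
      names):

        /* Sketch: slide memstart down by a random, aligned amount. */
        u64 slack = linear_region_size - (u64)(memblock_end - memblock_start);

        if (memstart_offset_seed > 0 && slack >= ARM64_MEMSTART_ALIGN) {
                u64 steps = slack / ARM64_MEMSTART_ALIGN + 1;

                /* scale the 16-bit seed into [0, steps) aligned units */
                memstart_addr -= ARM64_MEMSTART_ALIGN *
                                 ((steps * memstart_offset_seed) >> 16);
        }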
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: add support for kernel ASLR · f80fb3a3
      Committed by Ard Biesheuvel
      This adds support for KASLR, based on entropy provided by the
      bootloader in the /chosen/kaslr-seed DT property. Depending on the
      size of the address space (VA_BITS) and the page size, the entropy
      in the virtual displacement is up to 13 bits (16k/2 levels) and up
      to 25 bits (all 4 levels), with the side note that displacements
      resulting in the kernel image straddling a 1GB/32MB/512MB alignment
      boundary (for 4KB/16KB/64KB granule kernels, respectively) are not
      allowed, and will be rounded up to an acceptable value.
      
      If CONFIG_RANDOMIZE_MODULE_REGION_FULL is enabled, the module region is
      randomized independently from the core kernel. This makes it less likely
      that the location of core kernel data structures can be determined by an
      adversary, but causes all function calls from modules into the core kernel
      to be resolved via entries in the module PLTs.
      
      If CONFIG_RANDOMIZE_MODULE_REGION_FULL is not enabled, the module region is
      randomized by choosing a page aligned 128 MB region inside the interval
      [_etext - 128 MB, _stext + 128 MB). This gives between 10 and 14 bits of
      entropy (depending on page size), independently of the kernel randomization,
      but still guarantees that modules are within the range of relative branch
      and jump instructions (with the caveat that, since the module region is
      shared with other uses of the vmalloc area, modules may need to be loaded
      further away if the module region is exhausted).
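
      For the non-FULL case, the placement can be sketched as follows
      (assuming a u64 kaslr seed of which a 16-bit slice is consumed;
      identifiers illustrative):

        /* Sketch: pick a page-aligned 128 MB module window that still
         * covers the kernel text, i.e. a window inside the interval
         * [_etext - 128 MB, _stext + 128 MB). */
        u64 module_range = SZ_128M - (u64)(_etext - _stext);
        u64 module_alloc_base = (u64)_etext - SZ_128M;

        module_alloc_base += (module_range * (u16)(seed >> 48)) >> 16;
        module_alloc_base &= PAGE_MASK;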
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: add support for building vmlinux as a relocatable PIE binary · 1e48ef7f
      Committed by Ard Biesheuvel
      This implements CONFIG_RELOCATABLE, which links the final vmlinux
      image with a dynamic relocation section, allowing the early boot code
      to perform a relocation to a different virtual address at runtime.
      
      This is a prerequisite for KASLR (CONFIG_RANDOMIZE_BASE).
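
      The relocation pass itself runs in early assembly, but what it does
      can be modelled in C (illustrative, not the head.S code):

        /* Model: apply each R_AARCH64_RELATIVE entry in .rela.dyn by
         * storing the link-time value plus the runtime displacement. */
        for (rela = rela_start; rela < rela_end; rela++) {
                if (ELF64_R_TYPE(rela->r_info) != R_AARCH64_RELATIVE)
                        continue;
                *(u64 *)(rela->r_offset + offset) = rela->r_addend + offset;
        }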
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: switch to relative exception tables · 6c94f27a
      Committed by Ard Biesheuvel
      Instead of using absolute addresses for both the exception location
      and the fixup, use offsets relative to the exception table entry values.
      Not only does this cut the size of the exception table in half, it is
      also a prerequisite for KASLR, since absolute exception table entries
      are subject to dynamic relocation, which is incompatible with the sorting
      of the exception table that occurs at build time.
      
      This patch also introduces the _ASM_EXTABLE preprocessor macro (which
      exists on x86 as well) and its _asm_extable assembly counterpart, as
      shorthands to emit exception table entries.
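
      A sketch of the C-side macro (the two 32-bit words are offsets from
      the table entry itself, which is what keeps the table free of
      dynamic relocations):

        /* Sketch: emit a relative exception table entry from inline asm. */
        #define _ASM_EXTABLE(from, to)                                    \
                "       .pushsection    __ex_table, \"a\"\n"              \
                "       .align          3\n"                              \
                "       .long           (" #from " - .), (" #to " - .)\n" \
                "       .popsection\n"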
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: make asm/elf.h available to asm files · 4a2e034e
      Committed by Ard Biesheuvel
      This reshuffles some code in asm/elf.h and puts a #ifndef __ASSEMBLY__
      around its C definitions so that the CPP defines can be used in asm
      source files as well.
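
      Schematically (R_AARCH64_RELATIVE is shown as an example of a
      define that asm code can consume; the layout is a sketch):

        /* Sketch: CPP constants stay visible to .S files... */
        #define R_AARCH64_RELATIVE      1027

        /* ...while C-only definitions are fenced off. */
        #ifndef __ASSEMBLY__
        typedef unsigned long elf_greg_t;
        /* ... */
        #endif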
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: avoid dynamic relocations in early boot code · 2bf31a4a
      Committed by Ard Biesheuvel
      Before implementing KASLR for arm64 by building a self-relocating PIE
      executable, we have to ensure that values we use before the relocation
      routine is executed are not subject to dynamic relocation themselves.
      This applies not only to virtual addresses, but also to values that are
      supplied by the linker at build time and relocated using R_AARCH64_ABS64
      relocations.
      
      So instead, use assemble-time constants, or force the use of static
      relocations by folding the constants into the instructions.
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: avoid R_AARCH64_ABS64 relocations for Image header fields · 6ad1fe5d
      Committed by Ard Biesheuvel
      Unfortunately, the current way of using the linker to emit build time
      constants into the Image header will no longer work once we switch to
      the use of PIE executables. The reason is that such constants are emitted
      into the binary using R_AARCH64_ABS64 relocations, which are resolved at
      runtime, not at build time, and the places targeted by those relocations
      will contain zeroes before that.
      
      So refactor the endian swapping linker script constant generation code so
      that it emits the upper and lower 32-bit words separately.
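
      A sketch of the refactored generation, mirroring the description
      (macro names assumed):

        /* Sketch: emit a 64-bit header field as two LE 32-bit words. */
        #ifdef CONFIG_CPU_BIG_ENDIAN
        #define DATA_LE32(data)                         \
                ((((data) & 0x000000ff) << 24) |        \
                 (((data) & 0x0000ff00) << 8)  |        \
                 (((data) & 0x00ff0000) >> 8)  |        \
                 (((data) & 0xff000000) >> 24))
        #else
        #define DATA_LE32(data) ((data) & 0xffffffff)
        #endif

        #define DEFINE_IMAGE_LE64(sym, data)                    \
                sym##_lo32 = DATA_LE32((data) & 0xffffffff);    \
                sym##_hi32 = DATA_LE32((data) >> 32)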
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: add support for module PLTs · fd045f6c
      Committed by Ard Biesheuvel
      This adds support for emitting PLTs at module load time for relative
      branches that are out of range. This is a prerequisite for KASLR, which
      may place the kernel and the modules anywhere in the vmalloc area,
      making it more likely that branch target offsets exceed the maximum
      range of +/- 128 MB.
      
      In this version, I removed the distinction between relocations against
      .init executable sections and ordinary executable sections. The reason
      is that it is hardly worth the trouble, given that .init.text usually
      does not contain that many far branches, and this version now only
      reserves PLT entry space for jump and call relocations against undefined
      symbols (since symbols defined in the same module can be assumed to be
      within +/- 128 MB).
      
      For example, the mac80211.ko module (which is fairly sizable at ~400 KB)
      built with -mcmodel=large gives the following relocation counts:
      
                          relocs    branches   unique     !local
        .text              3925       3347       518        219
        .init.text           11          8         7          1
        .exit.text            4          4         4          1
        .text.unlikely       81         67        36         17
      
      ('unique' means branches to unique type/symbol/addend combos, of which
      !local is the subset referring to undefined symbols)
      
      IOW, we are only emitting a single PLT entry for the .init sections, and
      we are better off just adding it to the core PLT section instead.
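
      Each PLT entry can be pictured as a movn/movk/movk sequence that
      materialises the target address in x16, followed by an indirect
      branch (a sketch; the exact layout is an assumption):

        /* Sketch: one module PLT entry for an out-of-range branch. */
        struct plt_entry {
                __le32  mov0;   /* movn x16, #0x....           */
                __le32  mov1;   /* movk x16, #0x...., lsl #16  */
                __le32  mov2;   /* movk x16, #0x...., lsl #32  */
                __le32  br;     /* br   x16                    */
        };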
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: move brk immediate argument definitions to separate header · f98deee9
      Committed by Ard Biesheuvel
      Instead of reversing the header dependency between asm/bug.h and
      asm/debug-monitors.h, split off the brk instruction immediate value
      defines into a new header asm/brk-imm.h, and include it from both.
      
      This solves the circular dependency issue that prevents BUG() from
      being used in some header files, and keeps the definitions together.
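
      A sketch of the resulting header (the immediate values here are
      illustrative):

        /* Sketch: asm/brk-imm.h, included by both asm/bug.h and
         * asm/debug-monitors.h. */
        #ifndef __ASM_BRK_IMM_H
        #define __ASM_BRK_IMM_H

        #define KGDB_DYN_DBG_BRK_IMM            0x400
        #define KGDB_COMPILED_DBG_BRK_IMM       0x401
        #define BUG_BRK_IMM                     0x800

        #endif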
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: mm: use bit ops rather than arithmetic in pa/va translations · 8439e62a
      Committed by Ard Biesheuvel
      Since PAGE_OFFSET is chosen such that it cuts the kernel VA space right
      in half, and since the size of the kernel VA space itself is always a
      power of 2, we can treat PAGE_OFFSET as a bitmask and replace the
      additions/subtractions with 'or' and 'and-not' operations.
      
      For the comparison against PAGE_OFFSET, a mov/cmp/branch sequence ends
      up getting replaced with a single tbz instruction. For the additions and
      subtractions, we save a mov instruction since the mask is folded into the
      instruction's immediate field.
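
      Concretely, the translations become something like this (a sketch;
      PHYS_OFFSET handling simplified):

        /* Sketch. Previously __virt_to_phys subtracted PAGE_OFFSET and
         * __phys_to_virt added it; with PAGE_OFFSET splitting the VA
         * space at a power-of-2 boundary, the same maps become pure bit
         * operations: */
        #define __virt_to_phys(x) (((phys_addr_t)(x) & ~PAGE_OFFSET) + PHYS_OFFSET)
        #define __phys_to_virt(x) ((unsigned long)((x) - PHYS_OFFSET) | PAGE_OFFSET)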
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: mm: only perform memstart_addr sanity check if DEBUG_VM · a92405f0
      Committed by Ard Biesheuvel
      Checking whether memstart_addr has been assigned every time it is
      referenced adds a branch instruction that may hurt performance if
      the reference in question occurs on a hot path. So only perform the
      check if CONFIG_DEBUG_VM=y.
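
      The check then collapses to something like this (a sketch; the
      exact expression may differ):

        /* Sketch: sanity check that compiles away unless
         * CONFIG_DEBUG_VM=y. memstart_addr starts life as an odd
         * sentinel (-1), so use-before-assignment trips the assertion. */
        #define PHYS_OFFSET \
                ({ VM_BUG_ON(memstart_addr & 1); memstart_addr; })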
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      [catalin.marinas@arm.com: replaced #ifdef with VM_BUG_ON]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  3. 19 February 2016: 14 commits
  4. 18 February 2016: 2 commits
  5. 17 February 2016: 3 commits
    • arm64: use local label prefixes for __reg_num symbols · 7abc7d83
      Committed by Ard Biesheuvel
      The __reg_num_xNN symbols that are used to implement the msr_s and
      mrs_s macros are recorded in the ELF metadata of each object file.
      This does not affect the size of the final binary, but it does clutter
      the output of tools like readelf, i.e.,
      
        $ readelf -a vmlinux |grep -c __reg_num_x
        50976
      
      So let's use symbols with the .L prefix, these are strictly local,
      and don't end up in the object files.
      
        $ readelf -a vmlinux |grep -c __reg_num_x
        0
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: vdso: Mark vDSO code as read-only · 88d8a799
      Committed by David Brown
      Although the arm64 vDSO is cleanly separated by code/data with the
      code being read-only in userspace mappings, the code page is still
      writable from the kernel.  There have been exploits (such as
      http://itszn.com/blog/?p=21) that take advantage of this on x86 to go
      from a bad kernel write to full root.
      
      Prevent this specific exploit on arm64 by putting the vDSO code page
      in read-only memory as well.
      
      Before the change:
      [    3.138366] vdso: 2 pages (1 code @ ffffffc000a71000, 1 data @ ffffffc000a70000)
      ---[ Kernel Mapping ]---
      0xffffffc000000000-0xffffffc000082000         520K     RW NX SHD AF            UXN MEM/NORMAL
      0xffffffc000082000-0xffffffc000200000        1528K     ro x  SHD AF            UXN MEM/NORMAL
      0xffffffc000200000-0xffffffc000800000           6M     ro x  SHD AF        BLK UXN MEM/NORMAL
      0xffffffc000800000-0xffffffc0009b6000        1752K     ro x  SHD AF            UXN MEM/NORMAL
      0xffffffc0009b6000-0xffffffc000c00000        2344K     RW NX SHD AF            UXN MEM/NORMAL
      0xffffffc000c00000-0xffffffc008000000         116M     RW NX SHD AF        BLK UXN MEM/NORMAL
      0xffffffc00c000000-0xffffffc07f000000        1840M     RW NX SHD AF        BLK UXN MEM/NORMAL
      0xffffffc800000000-0xffffffc840000000           1G     RW NX SHD AF        BLK UXN MEM/NORMAL
      0xffffffc840000000-0xffffffc87ae00000         942M     RW NX SHD AF        BLK UXN MEM/NORMAL
      0xffffffc87ae00000-0xffffffc87ae70000         448K     RW NX SHD AF            UXN MEM/NORMAL
      0xffffffc87af80000-0xffffffc87af8a000          40K     RW NX SHD AF            UXN MEM/NORMAL
      0xffffffc87af8b000-0xffffffc87b000000         468K     RW NX SHD AF            UXN MEM/NORMAL
      0xffffffc87b000000-0xffffffc87fe00000          78M     RW NX SHD AF        BLK UXN MEM/NORMAL
      0xffffffc87fe00000-0xffffffc87ff50000        1344K     RW NX SHD AF            UXN MEM/NORMAL
      0xffffffc87ff90000-0xffffffc87ffa0000          64K     RW NX SHD AF            UXN MEM/NORMAL
      0xffffffc87fff0000-0xffffffc880000000          64K     RW NX SHD AF            UXN MEM/NORMAL
      
      After:
      [    3.138368] vdso: 2 pages (1 code @ ffffffc0006de000, 1 data @ ffffffc000a74000)
      ---[ Kernel Mapping ]---
      0xffffffc000000000-0xffffffc000082000         520K     RW NX SHD AF            UXN MEM/NORMAL
      0xffffffc000082000-0xffffffc000200000        1528K     ro x  SHD AF            UXN MEM/NORMAL
      0xffffffc000200000-0xffffffc000800000           6M     ro x  SHD AF        BLK UXN MEM/NORMAL
      0xffffffc000800000-0xffffffc0009b8000        1760K     ro x  SHD AF            UXN MEM/NORMAL
      0xffffffc0009b8000-0xffffffc000c00000        2336K     RW NX SHD AF            UXN MEM/NORMAL
      0xffffffc000c00000-0xffffffc008000000         116M     RW NX SHD AF        BLK UXN MEM/NORMAL
      0xffffffc00c000000-0xffffffc07f000000        1840M     RW NX SHD AF        BLK UXN MEM/NORMAL
      0xffffffc800000000-0xffffffc840000000           1G     RW NX SHD AF        BLK UXN MEM/NORMAL
      0xffffffc840000000-0xffffffc87ae00000         942M     RW NX SHD AF        BLK UXN MEM/NORMAL
      0xffffffc87ae00000-0xffffffc87ae70000         448K     RW NX SHD AF            UXN MEM/NORMAL
      0xffffffc87af80000-0xffffffc87af8a000          40K     RW NX SHD AF            UXN MEM/NORMAL
      0xffffffc87af8b000-0xffffffc87b000000         468K     RW NX SHD AF            UXN MEM/NORMAL
      0xffffffc87b000000-0xffffffc87fe00000          78M     RW NX SHD AF        BLK UXN MEM/NORMAL
      0xffffffc87fe00000-0xffffffc87ff50000        1344K     RW NX SHD AF            UXN MEM/NORMAL
      0xffffffc87ff90000-0xffffffc87ffa0000          64K     RW NX SHD AF            UXN MEM/NORMAL
      0xffffffc87fff0000-0xffffffc880000000          64K     RW NX SHD AF            UXN MEM/NORMAL
      
      Inspired by https://lkml.org/lkml/2016/1/19/494, based on work by
      the PaX Team, Brad Spengler, and Kees Cook.
      Signed-off-by: David Brown <david.brown@linaro.org>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      [catalin.marinas@arm.com: removed superfluous __PAGE_ALIGNED_DATA]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: ubsan: select ARCH_HAS_UBSAN_SANITIZE_ALL · f0b7f8a4
      Committed by Yang Shi
      To enable UBSAN on arm64, ARCH_HAS_UBSAN_SANITIZE_ALL needs to be
      selected.

      A basic kernel boot test passes on arm64 with
      CONFIG_UBSAN_SANITIZE_ALL enabled.
      Signed-off-by: Yang Shi <yang.shi@linaro.org>
      Acked-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Tested-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  6. 16 February 2016: 3 commits
    • arm64: replace read_lock to rcu lock in call_step_hook · cf0a2543
      Committed by Yang Shi
      BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:917
      in_atomic(): 1, irqs_disabled(): 128, pid: 383, name: sh
      Preemption disabled at:[<ffff800000124c18>] kgdb_cpu_enter+0x158/0x6b8
      
      CPU: 3 PID: 383 Comm: sh Tainted: G        W       4.1.13-rt13 #2
      Hardware name: Freescale Layerscape 2085a RDB Board (DT)
      Call trace:
      [<ffff8000000885e8>] dump_backtrace+0x0/0x128
      [<ffff800000088734>] show_stack+0x24/0x30
      [<ffff80000079a7c4>] dump_stack+0x80/0xa0
      [<ffff8000000bd324>] ___might_sleep+0x18c/0x1a0
      [<ffff8000007a20ac>] __rt_spin_lock+0x2c/0x40
      [<ffff8000007a2268>] rt_read_lock+0x40/0x58
      [<ffff800000085328>] single_step_handler+0x38/0xd8
      [<ffff800000082368>] do_debug_exception+0x58/0xb8
      Exception stack(0xffff80834a1e7c80 to 0xffff80834a1e7da0)
      7c80: ffffff9c ffffffff 92c23ba0 0000ffff 4a1e7e40 ffff8083 001bfcc4 ffff8000
      7ca0: f2000400 00000000 00000000 00000000 4a1e7d80 ffff8083 0049501c ffff8000
      7cc0: 00005402 00000000 00aaa210 ffff8000 4a1e7ea0 ffff8083 000833f4 ffff8000
      7ce0: ffffff9c ffffffff 92c23ba0 0000ffff 4a1e7ea0 ffff8083 001bfcc0 ffff8000
      7d00: 4a0fc400 ffff8083 00005402 00000000 4a1e7d40 ffff8083 00490324 ffff8000
      7d20: ffffff9c 00000000 92c23ba0 0000ffff 000a0000 00000000 00000000 00000000
      7d40: 00000008 00000000 00080000 00000000 92c23b8b 0000ffff 92c23b8e 0000ffff
      7d60: 00000038 00000000 00001cb2 00000000 00000005 00000000 92d7b498 0000ffff
      7d80: 01010101 01010101 92be9000 0000ffff 00000000 00000000 00000030 00000000
      [<ffff8000000833f4>] el1_dbg+0x18/0x6c
      
      This issue is similar to commit 62c6c61a ("arm64: replace read_lock
      to rcu lock in call_break_hook"), but it shows up in
      single_step_handler.

      This also solves a kgdbts boot-test silent hang on the 4.4 -rt
      kernel.
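
      A sketch of the converted read side (list and hook-struct names
      modelled on the arm64 debug-monitors code; treat the details as an
      approximation):

        static int call_step_hook(struct pt_regs *regs, unsigned int esr)
        {
                struct step_hook *hook;
                int retval = DBG_HOOK_ERROR;

                /* RCU read side instead of read_lock(): safe to enter
                 * from atomic debug-exception context, also on -rt. */
                rcu_read_lock();
                list_for_each_entry_rcu(hook, &step_hook, node) {
                        retval = hook->fn(regs, esr);
                        if (retval == DBG_HOOK_HANDLED)
                                break;
                }
                rcu_read_unlock();

                return retval;
        }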
      Signed-off-by: Yang Shi <yang.shi@linaro.org>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: ptdump: Indicate whether memory should be faulting · d7e9d594
      Committed by Laura Abbott
      With CONFIG_DEBUG_PAGEALLOC, pages do not have the valid bit set
      when free in the buddy allocator. Add an indication to the page
      table dumping code that the valid bit is not set, 'F' for fault, to
      make this easier to understand.
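
      The change itself is small; schematically (field names assumed from
      the ptdump code):

        /* Sketch: flag non-valid entries in the attribute column. */
        if (!(st->current_prot & PTE_VALID))
                seq_puts(st->seq, " F");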
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Laura Abbott <labbott@fedoraproject.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: Add support for ARCH_SUPPORTS_DEBUG_PAGEALLOC · 83863f25
      Committed by Laura Abbott
      ARCH_SUPPORTS_DEBUG_PAGEALLOC provides a hook to map and unmap
      pages for debugging purposes. This requires that memory be mapped
      with PAGE_SIZE mappings, since breaking down larger mappings at
      runtime would lead to TLB conflicts. Check if debug_pagealloc is
      enabled at runtime and, if so, map everything with PAGE_SIZE pages.
      Implement the functions to actually map/unmap the pages at runtime.
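
      A sketch of the hook behind the generic __kernel_map_pages()
      interface (set_pages_valid() is a hypothetical stand-in for the
      pageattr helper that toggles PTE_VALID):

        /* Sketch: (un)map pages in the linear map for DEBUG_PAGEALLOC. */
        void __kernel_map_pages(struct page *page, int numpages, int enable)
        {
                unsigned long addr = (unsigned long)page_address(page);

                /* set_pages_valid() is hypothetical: it sets or clears
                 * PTE_VALID across numpages pages so freed pages fault. */
                set_pages_valid(addr, numpages, enable);
        }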
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Laura Abbott <labbott@fedoraproject.org>
      [catalin.marinas@arm.com: static annotation block_mappings_allowed() and #ifdef]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>