1. 04 Jan, 2019 · 3 commits
  2. 03 Jan, 2019 · 1 commit
    • arm64: smp: Fix compilation error · 1236cd2b
      Shaokun Zhang authored
      In the arm64 updates for 4.21, there is a compilation error:
      arch/arm64/kernel/head.S: Assembler messages:
      arch/arm64/kernel/head.S:824: Error: missing ')'
      arch/arm64/kernel/head.S:824: Error: missing ')'
      arch/arm64/kernel/head.S:824: Error: missing ')'
      arch/arm64/kernel/head.S:824: Error: unexpected characters following instruction at operand 2 -- `mov x2,#(2)|(2U<<(8))'
      scripts/Makefile.build:391: recipe for target 'arch/arm64/kernel/head.o' failed
      make[1]: *** [arch/arm64/kernel/head.o] Error 1
      GCC version is gcc (Ubuntu/Linaro 5.4.0-6ubuntu1~16.04.10) 5.4.0 20160609
      
      Let's fix it using the UL() macro.
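
      For context, the GNU assembler cannot parse C integer suffixes such as
      "U" or "UL", so constants shared between C and assembly need a
      suffix-stripping helper. A minimal sketch of the UL() idiom, modelled
      on the kernel's const.h headers (the exact file layout varies by
      version):

          #ifdef __ASSEMBLY__
          #define _AC(X, Y)   X               /* assembly: drop the C suffix */
          #else
          #define __AC(X, Y)  (X##Y)
          #define _AC(X, Y)   __AC(X, Y)      /* C: paste the suffix on */
          #endif
          #define UL(x)       (_AC(x, UL))

      With this, an expression like (2 | (UL(2) << 8)) preprocesses to plain
      (2 | ((2) << (8))) in .S files, which the assembler accepts, while C
      code still sees an unsigned long constant.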
      
      Fixes: 66f16a24 ("arm64: smp: Rework early feature mismatched detection")
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Tested-by: John Stultz <john.stultz@linaro.org>
      Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
      [will: consistent use of UL() for all shifts in asm constants]
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  3. 14 Dec, 2018 · 17 commits
  4. 13 Dec, 2018 · 4 commits
  5. 12 Dec, 2018 · 6 commits
    • arm64: Add memory hotplug support · 4ab21506
      Robin Murphy authored
      Wire up the basic support for hot-adding memory. Since memory hotplug
      is fairly tightly coupled to sparsemem, we tweak pfn_valid() to also
      cross-check the presence of a section in the manner of the generic
      implementation, before falling back to memblock to check for no-map
      regions within a present section as before. By having arch_add_memory()
      create the linear mapping first, this then makes everything work in the
      way that __add_section() expects.
      
      We expect hotplug to be ACPI-driven, so the swapper_pg_dir updates
      should be safe from races by virtue of the global device hotplug lock.
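
      A hedged sketch of the shape this takes (the signatures of
      arch_add_memory() and __add_pages() have shifted across kernel
      versions, so treat the details as illustrative rather than a verified
      copy of the patch):

          int arch_add_memory(int nid, u64 start, u64 size,
                              struct vmem_altmap *altmap, bool want_memblock)
          {
                  /* Create the linear mapping for the hot-added range first... */
                  __create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start),
                                       size, PAGE_KERNEL, pgd_pgtable_alloc, 0);

                  /* ...so the generic sparsemem code finds it already mapped. */
                  return __add_pages(nid, start >> PAGE_SHIFT, size >> PAGE_SHIFT,
                                     altmap, want_memblock);
          }
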
      Signed-off-by: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: percpu: Fix LSE implementation of value-returning pcpu atomics · 6e4ede69
      Will Deacon authored
      Commit 959bf2fd ("arm64: percpu: Rewrite per-cpu ops to allow use of
      LSE atomics") introduced alternative code sequences for the arm64 percpu
      atomics, so that the LSE instructions can be patched in at runtime if
      they are supported by the CPU.
      
      Unfortunately, when patching in the LSE sequence for a value-returning
      pcpu atomic, the argument registers are the wrong way round. The
      implementation of this_cpu_add_return() therefore ends up adding
      uninitialised stack to the percpu variable and returning garbage.
      
      As it turns out, there aren't very many users of the value-returning
      percpu atomics in mainline and we only spotted this due to a failure in
      the kprobes selftests. In this case, when attempting to single-step over
      the out-of-line instruction slot, the debug monitors would not be
      enabled because calling this_cpu_inc_return() on the kernel debug
      monitor refcount would fail to detect the transition from 0. We would
      consequently execute past the slot and take an undefined instruction
      exception from the kernel, resulting in a BUG:
      
       | kernel BUG at arch/arm64/kernel/traps.c:421!
       | PREEMPT SMP
       | pc : do_undefinstr+0x268/0x278
       | lr : do_undefinstr+0x124/0x278
       | Process swapper/0 (pid: 1, stack limit = 0x(____ptrval____))
       | Call trace:
       |  do_undefinstr+0x268/0x278
       |  el1_undef+0x10/0x78
       |  0xffff00000803c004
       |  init_kprobes+0x150/0x180
       |  do_one_initcall+0x74/0x178
       |  kernel_init_freeable+0x188/0x224
       |  kernel_init+0x10/0x100
       |  ret_from_fork+0x10/0x1c
      
      Fix the argument order to get the value-returning pcpu atomics working
      correctly when implemented using the LSE instructions.
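
      For reference, LDADD Xs, Xt, [Xn] atomically loads the old value at
      [Xn] into Xt and stores back old + Xs, so the addend must be the first
      operand. A stand-alone sketch of a correct value-returning add (not
      the kernel's actual percpu macros; assumes LSE support, e.g.
      -march=armv8.1-a):

          /* Sketch only: illustrates LDADD operand order. */
          static inline unsigned long lse_add_return(unsigned long *ptr,
                                                     unsigned long addend)
          {
                  unsigned long old;

                  /* old = *ptr; *ptr = old + addend, atomically */
                  asm volatile("ldadd %[addend], %[old], %[v]"
                               : [old] "=&r" (old), [v] "+Q" (*ptr)
                               : [addend] "r" (addend)
                               : "memory");

                  return old + addend;    /* the value-returning result */
          }

      Swapping the first two operands, as the buggy sequence effectively
      did, adds whatever garbage happens to be in the destination register
      instead of the caller's addend.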
      Reported-by: Catalin Marinas <catalin.marinas@arm.com>
      Tested-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: add <asm/asm-prototypes.h> · c3296a13
      Mark Rutland authored
      While we can export symbols from assembly files, CONFIG_MODVERSIONS requires C
      declarations of anything that's exported.
      
      Let's account for this as other architectures do by placing these declarations
      in <asm/asm-prototypes.h>, which kbuild will automatically use to generate
      modversion information for assembly files.
      
      Since we already define most prototypes in existing headers, we simply need to
      include those headers in <asm/asm-prototypes.h>, and don't need to duplicate
      these.
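
      The resulting header is then little more than a list of includes. An
      illustrative sketch (the exact set of headers in the real file may
      differ):

          /* arch/arm64/include/asm/asm-prototypes.h (illustrative) */
          #ifndef __ASM_PROTOTYPES_H
          #define __ASM_PROTOTYPES_H
          /*
           * CONFIG_MODVERSIONS requires a C declaration for each symbol
           * exported from assembly so that kbuild can compute its CRC.
           * Reuse the headers that already declare the exported routines.
           */
          #include <asm-generic/asm-prototypes.h>

          #include <asm/ftrace.h>
          #include <asm/string.h>
          #include <asm/uaccess.h>

          #endif /* __ASM_PROTOTYPES_H */
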
      Reviewed-by: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: mm: Introduce MAX_USER_VA_BITS definition · 9b31cf49
      Will Deacon authored
      With the introduction of 52-bit virtual addressing for userspace, we are
      now in a position where the virtual addressing capability of userspace
      may exceed that of the kernel. Consequently, the VA_BITS definition
      cannot be used blindly, since it reflects only the size of kernel
      virtual addresses.
      
      This patch introduces MAX_USER_VA_BITS which is either VA_BITS or 52
      depending on whether 52-bit virtual addressing has been configured at
      build time, removing a few places where the 52 is open-coded based on
      explicit CONFIG_ guards.
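
      The definition itself is a two-line conditional; a sketch matching the
      description above (placement in asm/memory.h assumed):

          #ifdef CONFIG_ARM64_USER_VA_BITS_52
          #define MAX_USER_VA_BITS        52
          #else
          #define MAX_USER_VA_BITS        VA_BITS
          #endif
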
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: fix ARM64_USER_VA_BITS_52 builds · 4d08d20f
      Arnd Bergmann authored
      In some randconfig builds, the new CONFIG_ARM64_USER_VA_BITS_52
      triggered a build failure:
      
      arch/arm64/mm/proc.S:287: Error: immediate out of range
      
      As it turns out, we were incorrectly setting PGTABLE_LEVELS here, since
      no other default value applied to this combination of options. This
      fixes the calculation of CONFIG_PGTABLE_LEVELS to consider all
      combinations again.
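
      A hedged illustration of the Kconfig defaults involved: the 64k-page,
      48-bit line gains the 52-bit user-VA case, so every page-size/VA-size
      combination once again yields a value (lines adapted from the arm64
      Kconfig of that era; treat as illustrative):

          config PGTABLE_LEVELS
                  int
                  default 2 if ARM64_16K_PAGES && ARM64_VA_BITS_36
                  default 2 if ARM64_64K_PAGES && ARM64_VA_BITS_42
                  default 3 if ARM64_64K_PAGES && (ARM64_VA_BITS_48 || ARM64_USER_VA_BITS_52)
                  default 3 if ARM64_4K_PAGES && ARM64_VA_BITS_39
                  default 3 if ARM64_16K_PAGES && ARM64_VA_BITS_47
                  default 4 if !ARM64_64K_PAGES && ARM64_VA_BITS_48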
      
      Fixes: 68d23da4 ("arm64: Kconfig: Re-jig CONFIG options for 52-bit VA")
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: preempt: Fix big-endian when checking preempt count in assembly · 7faa313f
      Will Deacon authored
      Commit 39624469 ("arm64: preempt: Provide our own implementation of
      asm/preempt.h") extended the preempt count field in struct thread_info
      to 64 bits, so that it consists of a 32-bit count plus a 32-bit flag
      indicating whether or not the current task needs rescheduling.
      
      Whilst the asm-offsets definition of TSK_TI_PREEMPT was updated to point
      to this new field, the assembly usage was left untouched meaning that a
      32-bit load from TSK_TI_PREEMPT on a big-endian machine actually returns
      the reschedule flag instead of the count.
      
      Whilst we could fix this by pointing TSK_TI_PREEMPT at the count field,
      we're actually better off reworking the two assembly users so that they
      operate on the whole 64-bit value in favour of inspecting the thread
      flags separately in order to determine whether a reschedule is needed.
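
      For reference, the layout in question looks roughly like this (a
      sketch of the struct thread_info union introduced by the earlier
      commit; names follow the upstream header, but treat the details as
      illustrative):

          union {
                  u64     preempt_count;  /* zero => preemptible, resched needed */
                  struct {
          #ifdef CONFIG_CPU_BIG_ENDIAN
                          u32     need_resched;
                          u32     count;
          #else
                          u32     count;
                          u32     need_resched;
          #endif
                  } preempt;
          };

      The #ifdef keeps 'count' in the least-significant half on both
      endiannesses, but its byte offset differs, which is why a fixed-offset
      32-bit load breaks on big-endian. Loading the whole 64-bit word also
      lets a single comparison against zero check the count and the
      (inverted) need_resched flag together.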
      Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reported-by: "kernelci.org bot" <bot@kernelci.org>
      Tested-by: Kevin Hilman <khilman@baylibre.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  6. 11 Dec, 2018 · 9 commits