  1. 19 June 2019 (4 commits)
  2. 13 June 2019 (1 commit)
    • arm64/sve: Fix missing SVE/FPSIMD endianness conversions · 41040cf7
      Committed by Dave Martin
      The in-memory representation of SVE and FPSIMD registers is
      different: the FPSIMD V-registers are stored as single 128-bit
      host-endian values, whereas SVE registers are stored in an
      endianness-invariant byte order.
      
      This means that the two representations differ when running on a
      big-endian host.  But we blindly copy data from one representation
      to another when converting between the two, resulting in the
      register contents being unintentionally byteswapped in certain
      situations.  Currently this can be triggered by the first SVE
      instruction after a syscall, for example (though the potential
      trigger points may vary in future).
      
      So, fix the conversion functions fpsimd_to_sve(), sve_to_fpsimd()
      and sve_sync_from_fpsimd_zeropad() to swab where appropriate.
      
      There is no common swahl128() or swab128() that we could use here.
      Maybe it would be worth making this generic, but for now add a
      simple local hack.
      
      Since the byte order differences are exposed in ABI, also clarify
      the documentation.
      
      Cc: Alex Bennée <alex.bennee@linaro.org>
      Cc: Peter Maydell <peter.maydell@linaro.org>
      Cc: Alan Hayward <alan.hayward@arm.com>
      Cc: Julien Grall <julien.grall@arm.com>
      Fixes: bc0ee476 ("arm64/sve: Core task context handling")
      Fixes: 8cd969d2 ("arm64/sve: Signal handling support")
      Fixes: 43d4da2c ("arm64/sve: ptrace and ELF coredump support")
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      [will: Fix typos in comments and docs spotted by Julien]
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      41040cf7
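      A minimal sketch of the byte-swapping involved (a hypothetical standalone
      helper, not the kernel's actual fpsimd_to_sve()/sve_to_fpsimd() code): on
      a big-endian host, converting a 128-bit V-register image between the
      host-endian FPSIMD layout and SVE's fixed byte order amounts to reversing
      all 16 bytes, i.e. swapping the two 64-bit halves and the bytes within
      each half.
      
      #include <stdint.h>
      
      /*
       * Hypothetical helper (not the kernel's): byte-reverse a 128-bit register
       * image stored as two 64-bit words.  On a little-endian host the FPSIMD
       * and SVE layouts already agree and no swap is needed.
       */
      static void swab128(uint64_t dst[2], const uint64_t src[2])
      {
              uint64_t lo = __builtin_bswap64(src[1]); /* old high half -> new low */
              uint64_t hi = __builtin_bswap64(src[0]); /* old low half -> new high */
      
              dst[0] = lo;
              dst[1] = hi;
      }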
  3. 12 June 2019 (2 commits)
    • arm64: tlbflush: Ensure start/end of address range are aligned to stride · 01d57485
      Committed by Will Deacon
      Since commit 3d65b6bb ("arm64: tlbi: Set MAX_TLBI_OPS to
      PTRS_PER_PTE"), we resort to per-ASID invalidation when attempting to
      perform more than PTRS_PER_PTE invalidation instructions in a single
      call to __flush_tlb_range(). Whilst this is beneficial, the mmu_gather
      code does not ensure that the end address of the range is rounded up
      to the stride when freeing intermediate page tables in pXX_free_tlb(),
      which defeats our range checking.
      
      Align the bounds passed into __flush_tlb_range().
      
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Reported-by: Hanjun Guo <guohanjun@huawei.com>
      Tested-by: Hanjun Guo <guohanjun@huawei.com>
      Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      01d57485
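      A sketch of the idea behind the fix (generic helpers standing in for the
      kernel's round_down()/round_up(), and assuming the stride is a power of
      two, as it is for page-table levels): align both bounds outward to the
      stride before counting how many invalidation operations the range needs.
      
      #include <stdint.h>
      
      /* Round an address range [start, end) outward to a power-of-two stride
       * so that the TLBI-count check in __flush_tlb_range() sees the full
       * extent of the range being invalidated. */
      static inline uint64_t align_down(uint64_t addr, uint64_t stride)
      {
              return addr & ~(stride - 1);
      }
      
      static inline uint64_t align_up(uint64_t addr, uint64_t stride)
      {
              return (addr + stride - 1) & ~(stride - 1);
      }
      
      /* Usage, before computing (end - start) / stride:
       *     start = align_down(start, stride);
       *     end   = align_up(end, stride);
       */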
    • arm64: Don't unconditionally add -Wno-psabi to KBUILD_CFLAGS · fa63da2a
      Committed by Nathan Chancellor
      This is a GCC-only option, which warns about ABI changes within GCC, so
      unconditionally adding it breaks Clang with tons of:
      
      warning: unknown warning option '-Wno-psabi' [-Wunknown-warning-option]
      
      and link time failures:
      
      ld.lld: error: undefined symbol: __efistub___stack_chk_guard
      >>> referenced by arm-stub.c:73
      (/home/nathan/cbl/linux/drivers/firmware/efi/libstub/arm-stub.c:73)
      >>>               arm-stub.stub.o:(__efistub_install_memreserve_table)
      in archive ./drivers/firmware/efi/libstub/lib.a
      
      These failures come from the lack of -fno-stack-protector, which is
      added via cc-option in drivers/firmware/efi/libstub/Makefile. When an
      unknown flag is added to KBUILD_CFLAGS, clang noisily warns that it is
      ignoring the option, as shown above, unlike gcc, which simply errors
      out.
      
      $ echo "int main() { return 0; }" > tmp.c
      
      $ clang -Wno-psabi tmp.c; echo $?
      warning: unknown warning option '-Wno-psabi' [-Wunknown-warning-option]
      1 warning generated.
      0
      
      $ gcc -Wsometimes-uninitialized tmp.c; echo $?
      gcc: error: unrecognized command line option
      ‘-Wsometimes-uninitialized’; did you mean ‘-Wmaybe-uninitialized’?
      1
      
      For cc-option to work properly with clang and behave like gcc, -Werror
      is needed, which was done in commit c3f0d0bc ("kbuild, LLVMLinux:
      Add -Werror to cc-option to support clang").
      
      $ clang -Werror -Wno-psabi tmp.c; echo $?
      error: unknown warning option '-Wno-psabi'
      [-Werror,-Wunknown-warning-option]
      1
      
      As a consequence of this, when an unknown flag is unconditionally added
      to KBUILD_CFLAGS, it will cause cc-option to always fail and those flags
      will never get added:
      
      $ clang -Werror -Wno-psabi -fno-stack-protector tmp.c; echo $?
      error: unknown warning option '-Wno-psabi'
      [-Werror,-Wunknown-warning-option]
      1
      
      This can be seen when compiling the whole kernel as some warnings that
      are normally disabled (see below) show up. The full list of flags
      missing from drivers/firmware/efi/libstub are the following (gathered
      from diffing .arm64-stub.o.cmd):
      
      -fno-delete-null-pointer-checks
      -Wno-address-of-packed-member
      -Wframe-larger-than=2048
      -Wno-unused-const-variable
      -fno-strict-overflow
      -fno-merge-all-constants
      -fno-stack-check
      -Werror=date-time
      -Werror=incompatible-pointer-types
      -ffreestanding
      -fno-stack-protector
      
      Use cc-disable-warning so that it gets disabled for GCC and does nothing
      for Clang.
      
      Fixes: ebcc5928 ("arm64: Silence gcc warnings about arch ABI drift")
      Link: https://github.com/ClangBuiltLinux/linux/issues/511
      Reported-by: Qian Cai <cai@lca.pw>
      Acked-by: Dave Martin <Dave.Martin@arm.com>
      Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
      Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      fa63da2a
  4. 06 June 2019 (2 commits)
  5. 05 June 2019 (8 commits)
  6. 31 May 2019 (5 commits)
  7. 29 May 2019 (4 commits)
  8. 28 May 2019 (2 commits)
  9. 24 May 2019 (5 commits)
  10. 23 May 2019 (5 commits)
    • arm64: Handle erratum 1418040 as a superset of erratum 1188873 · a5325089
      Committed by Marc Zyngier
      We already mitigate erratum 1188873 affecting Cortex-A76 and
      Neoverse-N1 r0p0 to r2p0. It turns out that revisions r0p0 to
      r3p1 of the same cores are affected by erratum 1418040, which
      has the same workaround as 1188873.
      
      Let's expand the range of affected revisions to match 1418040,
      and repaint all occurrences of 1188873 to 1418040. Whilst we're
      there, do a bit of reformatting in silicon-errata.txt and drop
      a now unnecessary dependency on ARM_ARCH_TIMER_OOL_WORKAROUND.
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      a5325089
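      As an illustration of the widened revision window (a standalone sketch,
      not the kernel's cpu_errata.c MIDR-range tables): erratum 1188873 covered
      r0p0..r2p0, while 1418040 extends the match to any (variant, revision)
      pair up to r3p1.
      
      #include <stdbool.h>
      
      /* Sketch: is a Cortex-A76 / Neoverse-N1 part at rXpY (variant X, patch
       * revision Y) inside the r0p0..r3p1 window affected by erratum 1418040? */
      static bool affected_by_erratum_1418040(unsigned int variant,
                                              unsigned int revision)
      {
              return variant < 3 || (variant == 3 && revision <= 1);
      }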
    • arm64/module: deal with ambiguity in PRELxx relocation ranges · 1cf24a2c
      Committed by Ard Biesheuvel
      The R_AARCH64_PREL16 and R_AARCH64_PREL32 relocations are
      documented as permitting a range of [-2^15 .. 2^16), resp.
      [-2^31 .. 2^32). It is also documented that this means we
      cannot detect overflow in some cases, which is bad.
      
      Since we always interpret the targets of these relocations as
      signed quantities (e.g., in the ksymtab handling code), let's
      tighten the overflow checks so that targets that are out of
      range for our signed interpretation of the relocated quantity
      get flagged.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      1cf24a2c
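      A minimal sketch of the tightened check (hypothetical code, not the
      actual arch/arm64/kernel/module.c relocation handler): since the kernel
      always reads these fields back as signed values, a PRELxx result is only
      accepted if it fits the signed N-bit range, even though the documented
      relocation range nominally also allows the upper unsigned half.
      
      #include <stdbool.h>
      #include <stdint.h>
      
      /* Sketch: does a place-relative value fit the signed interpretation of a
       * "bits"-wide field?  bits == 16 for R_AARCH64_PREL16, 32 for PREL32. */
      static bool prel_in_signed_range(int64_t sval, unsigned int bits)
      {
              int64_t max =  ((int64_t)1 << (bits - 1)) - 1;
              int64_t min = -((int64_t)1 << (bits - 1));
      
              return sval >= min && sval <= max;
      }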
    • arm64/kernel: kaslr: reduce module randomization range to 2 GB · b2eed9b5
      Committed by Ard Biesheuvel
      The following commit
      
        7290d580 ("module: use relative references for __ksymtab entries")
      
      updated the ksymtab handling of some KASLR capable architectures
      so that ksymtab entries are emitted as pairs of 32-bit relative
      references. This reduces the size of the entries, but more
      importantly, it gets rid of statically assigned absolute
      addresses, which require fixing up at boot time if the kernel
      is self relocating (which takes a 24 byte RELA entry for each
      member of the ksymtab struct).
      
      Since ksymtab entries are always part of the same module as the
      symbol they export, it was assumed at the time that a 32-bit
      relative reference is always sufficient to capture the offset
      between a ksymtab entry and its target symbol.
      
      Unfortunately, this is not always true: in the case of per-CPU
      variables, a per-CPU variable's base address (which usually differs
      from the actual address of any of its per-CPU copies) is allocated
      in the vicinity of the .data..percpu section in the core kernel
      (i.e., in the per-CPU reserved region which follows the section
      containing the core kernel's statically allocated per-CPU variables).
      
      Since we randomize the module space over a 4 GB window covering
      the core kernel (based on the -/+ 4 GB range of an ADRP/ADD pair),
      we may end up putting the core kernel out of the -/+ 2 GB range of
      32-bit relative references of module ksymtab entries that refer to
      per-CPU variables.
      
      So reduce the module randomization range a bit further. We lose
      1 bit of randomization this way, but this is something we can
      tolerate.
      
      Cc: <stable@vger.kernel.org> # v4.19+
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      b2eed9b5
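      To see why the window matters, here is a sketch of how a 32-bit
      self-relative reference is resolved (an assumption-level illustration,
      not the kernel's exact ksymtab helper): because the stored offset is a
      signed 32-bit value, the target can only lie within about 2 GB either
      side of the entry, which is what bounds the usable randomization range.
      
      #include <stdint.h>
      
      /* Sketch: resolve a 32-bit self-relative reference.  The reachable
       * window is [entry - 2 GB, entry + 2 GB), so the module region holding
       * the entry and the core-kernel address it refers to must both stay
       * within that span. */
      static inline void *rel_ref_target(const int32_t *entry)
      {
              return (char *)entry + *entry;
      }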
    • arm64: errata: Add workaround for Cortex-A76 erratum #1463225 · 969f5ea6
      Committed by Will Deacon
      Revisions of the Cortex-A76 CPU prior to r4p0 are affected by an erratum
      that can prevent interrupts from being taken when single-stepping.
      
      This patch implements a software workaround to prevent userspace from
      effectively being able to disable interrupts.
      
      Cc: <stable@vger.kernel.org>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      969f5ea6
    • arm64: Remove useless message during oops · 3e29ead5
      Committed by Will Deacon
      During an oops, we print the name of the current task and its pid twice.
      We also helpfully advertise its stack limit as "0x(____ptrval____)".
      
      Drop these useless messages.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      3e29ead5
  11. 21 May 2019 (1 commit)
  12. 17 May 2019 (1 commit)