1. 14 Nov 2019, 1 commit
  2. 12 Nov 2019, 1 commit
  3. 09 Nov 2019, 2 commits
    • arm64: kaslr: Check command line before looking for a seed · 2203e1ad
      Committed by Mark Brown
      Now that we print diagnostics at boot, the reason why we do not
      initialise KASLR matters. Currently we check for a seed before we check
      whether the user has explicitly disabled KASLR on the command line,
      which results in misleading diagnostics, so reverse the order of those
      checks. We still parse the seed from the DT early so that, if the user
      has both provided a seed and disabled KASLR on the command line, the
      seed is still masked.
      Signed-off-by: Mark Brown <broonie@kernel.org>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
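      A minimal sketch of the reordered logic (helper names here are
      illustrative assumptions, not the kernel's exact code):

      | /* Check the explicit disable first, so the diagnostic names the
      |  * real reason KASLR is off; only then report a missing seed. */
      | seed = get_kaslr_seed(fdt);            /* parse (and mask) the DT seed early */
      | if (kaslr_disabled_on_cmdline(fdt))    /* hypothetical helper */
      |         return 0;                      /* reported: disabled by user */
      | if (!seed)
      |         return 0;                      /* reported: no seed provided */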
    • arm64: kaslr: Announce KASLR status on boot · 294a9ddd
      Committed by Mark Brown
      Currently the KASLR code is silent at boot unless it forces on KPTI, in
      which case a message is printed for that. This can lead users to
      incorrectly believe their system has the feature enabled when it in
      fact does not, and if they do notice, the lack of any diagnostics makes
      the problem harder to understand. Add an initcall which prints a
      message showing the status of KASLR during boot to make the status
      clear.
      
      This is particularly useful in cases where we don't have a seed. It
      seems to be a relatively common error for system integrators and
      administrators to enable KASLR in their configuration but not provide
      the seed at runtime, often due to seed provisioning breaking at some
      later point after it is initially enabled and verified.
      Signed-off-by: Mark Brown <broonie@kernel.org>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
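      A hedged sketch of such a status-reporting initcall (message strings
      and the function name are illustrative; kaslr_offset() is the existing
      arm64 helper):

      | static int __init kaslr_report_status(void)
      | {
      |         if (kaslr_offset())
      |                 pr_info("KASLR enabled\n");
      |         else
      |                 pr_info("KASLR disabled\n");  /* e.g. no seed was provided */
      |         return 0;
      | }
      | core_initcall(kaslr_report_status);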
  4. 07 Nov 2019, 1 commit
  5. 06 Nov 2019, 10 commits
    • arm64: ftrace: minimize ifdeffery · 7f08ae53
      Committed by Mark Rutland
      Now that we no longer refer to mod->arch.ftrace_trampolines in the body
      of ftrace_make_call(), we can use IS_ENABLED() rather than ifdeffery,
      and make the code easier to follow. Likewise in ftrace_make_nop().
      
      Let's do so.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Torsten Duwe <duwe@suse.de>
      Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Tested-by: Torsten Duwe <duwe@suse.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will@kernel.org>
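      The shape of the change, as a sketch (the condition shown is
      illustrative, not the exact kernel diff):

      | /* Before: the check vanishes entirely when the option is off. */
      | #ifdef CONFIG_ARM64_MODULE_PLTS
      |         if (!mod)
      |                 return -EINVAL;
      | #endif
      |
      | /* After: always parsed and type-checked, still compiled out
      |  * by the optimizer when the option is disabled. */
      |         if (IS_ENABLED(CONFIG_ARM64_MODULE_PLTS) && !mod)
      |                 return -EINVAL;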
    • arm64: implement ftrace with regs · 3b23e499
      Committed by Torsten Duwe
      This patch implements FTRACE_WITH_REGS for arm64, which allows a traced
      function's arguments (and some other registers) to be captured into a
      struct pt_regs, allowing these to be inspected and/or modified. This is
      a building block for live-patching, where a function's arguments may be
      forwarded to another function. This is also necessary to enable ftrace
      and in-kernel pointer authentication at the same time, as it allows the
      LR value to be captured and adjusted prior to signing.
      
      Using GCC's -fpatchable-function-entry=N option, we can have the
      compiler insert a configurable number of NOPs between the function entry
      point and the usual prologue. This also ensures functions are AAPCS
      compliant (e.g. disabling inter-procedural register allocation).
      
      For example, with -fpatchable-function-entry=2, GCC 8.1.0 compiles the
      following:
      
      | unsigned long bar(void);
      |
      | unsigned long foo(void)
      | {
      |         return bar() + 1;
      | }
      
      ... to:
      
      | <foo>:
      |         nop
      |         nop
      |         stp     x29, x30, [sp, #-16]!
      |         mov     x29, sp
      |         bl      0 <bar>
      |         add     x0, x0, #0x1
      |         ldp     x29, x30, [sp], #16
      |         ret
      
      This patch builds the kernel with -fpatchable-function-entry=2,
      prefixing each function with two NOPs. To trace a function, we replace
      these NOPs with a sequence that saves the LR into a GPR, then calls an
      ftrace entry assembly function which saves this and other relevant
      registers:
      
      | mov	x9, x30
      | bl	<ftrace-entry>
      
      Since patchable functions are AAPCS compliant (and the kernel does not
      use x18 as a platform register), x9-x18 can be safely clobbered in the
      patched sequence and the ftrace entry code.
      
      There are now two ftrace entry functions, ftrace_regs_entry (which saves
      all GPRs), and ftrace_entry (which saves the bare minimum). A PLT is
      allocated for each within modules.
      Signed-off-by: Torsten Duwe <duwe@suse.de>
      [Mark: rework asm, comments, PLTs, initialization, commit message]
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Torsten Duwe <duwe@suse.de>
      Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Tested-by: Torsten Duwe <duwe@suse.de>
      Cc: AKASHI Takahiro <takahiro.akashi@linaro.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Julien Thierry <jthierry@redhat.com>
      Cc: Will Deacon <will@kernel.org>
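      As a sketch, the patched sequence above could be generated with the
      existing insn helpers; this is a simplification (the real enabling
      logic lives in ftrace_make_call() and performs validation first):

      | u32 mov = aarch64_insn_gen_move_reg(AARCH64_INSN_REG_9,
      |                                     AARCH64_INSN_REG_30,
      |                                     AARCH64_INSN_VARIANT_64BIT);
      | u32 bl  = aarch64_insn_gen_branch_imm(pc, addr,
      |                                       AARCH64_INSN_BRANCH_LINK);
      |
      | /* "mov x9, x30" over the first NOP, "bl <ftrace-entry>" over the second. */
      | aarch64_insn_patch_text_nosync((void *)(pc - AARCH64_INSN_SIZE), mov);
      | aarch64_insn_patch_text_nosync((void *)pc, bl);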
    • arm64: asm-offsets: add S_FP · 1f377e04
      Committed by Mark Rutland
      So that assembly code can more easily manipulate the FP (x29) within a
      pt_regs, add an S_FP asm-offsets definition.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Torsten Duwe <duwe@suse.de>
      Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Tested-by: Torsten Duwe <duwe@suse.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will@kernel.org>
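      For reference, the addition is the usual asm-offsets one-liner (the FP
      lives in pt_regs::regs[29]); sketched:

      | DEFINE(S_FP, offsetof(struct pt_regs, regs[29]));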
    • arm64: insn: add encoder for MOV (register) · e3bf8a67
      Committed by Mark Rutland
      For FTRACE_WITH_REGS, we're going to want to generate a MOV (register)
      instruction as part of the callsite initialization. As MOV (register) is
      an alias for ORR (shifted register), we can generate this with
      aarch64_insn_gen_logical_shifted_reg(), but it's somewhat verbose and
      difficult to read in-context.
      
      Add an aarch64_insn_gen_move_reg() wrapper for this case so that we can
      write callers in a more straightforward way.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Torsten Duwe <duwe@suse.de>
      Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Tested-by: Torsten Duwe <duwe@suse.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will@kernel.org>
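      A sketch of the wrapper, relying on MOV (register) being an alias of
      ORR (shifted register) with XZR as the first source:

      | u32 aarch64_insn_gen_move_reg(enum aarch64_insn_register dst,
      |                               enum aarch64_insn_register src,
      |                               enum aarch64_insn_variant variant)
      | {
      |         return aarch64_insn_gen_logical_shifted_reg(dst, AARCH64_INSN_REG_ZR,
      |                                                     src, 0, variant,
      |                                                     AARCH64_INSN_LOGIC_ORR);
      | }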
    • arm64: module/ftrace: initialize PLT at load time · f1a54ae9
      Committed by Mark Rutland
      Currently we lazily-initialize a module's ftrace PLT at runtime when we
      install the first ftrace call. To do so we have to apply a number of
      sanity checks, transiently mark the module text as RW, and perform an
      IPI as part of handling Neoverse-N1 erratum #1542419.
      
      We only expect the ftrace trampoline to point at ftrace_caller() (AKA
      FTRACE_ADDR), so let's simplify all of this by initializing the PLT at
      module load time, before the module loader marks the module RO and
      performs the initial I-cache maintenance for the module.

      Thus we can rely on the module having been correctly initialized, and
      can simplify the runtime work necessary to install an ftrace call in a
      module. This will also allow for the removal of module_disable_ro().
      
      Tested by forcing ftrace_make_call() to use the module PLT, and then
      loading up a module after setting up ftrace with:
      
      | echo ":mod:<module-name>" > set_ftrace_filter;
      | echo function > current_tracer;
      | modprobe <module-name>
      
      Since FTRACE_ADDR is only defined when CONFIG_DYNAMIC_FTRACE is
      selected, we wrap its use along with most of module_init_ftrace_plt()
      with ifdeffery rather than using IS_ENABLED().
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Torsten Duwe <duwe@suse.de>
      Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Tested-by: Torsten Duwe <duwe@suse.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Will Deacon <will@kernel.org>
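      A condensed sketch of the load-time initialization (error handling
      trimmed; find_section() is the helper from the section-handling rework
      in the next entry):

      | static int module_init_ftrace_plt(const Elf_Ehdr *hdr,
      |                                   const Elf_Shdr *sechdrs,
      |                                   struct module *mod)
      | {
      | #if defined(CONFIG_ARM64_MODULE_PLTS) && defined(CONFIG_DYNAMIC_FTRACE)
      |         const Elf_Shdr *s = find_section(hdr, sechdrs, ".text.ftrace_trampoline");
      |         struct plt_entry *plt;
      |
      |         if (!s)
      |                 return -ENOEXEC;
      |
      |         /* Point the PLT at ftrace_caller() while the module is still RW. */
      |         plt = (void *)s->sh_addr;
      |         *plt = get_plt_entry(FTRACE_ADDR, plt);
      |         mod->arch.ftrace_trampoline = plt;
      | #endif
      |         return 0;
      | }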
    • arm64: module: rework special section handling · bd8b21d3
      Committed by Mark Rutland
      When we load a module, we have to perform some special work for a couple
      of named sections. To do this, we iterate over all of the module's
      sections, and perform work for each section we recognize.
      
      To make it easier to handle the unexpected absence of a section, and to
      make the section-specific logic easier to read, let's factor the section
      search into a helper. Something similar is already done in the core
      module loader and in other architectures (ideally we'd unify these in
      future).
      
      If we expect a module to have an ftrace trampoline section, but it
      doesn't have one, we'll now reject loading the module. When
      ARM64_MODULE_PLTS is selected, any correctly built module should have
      one (and this is assumed by arm64's ftrace PLT code) and the absence of
      such a section implies something has gone wrong at build time.
      
      Subsequent patches will make use of the new helper.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Torsten Duwe <duwe@suse.de>
      Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Tested-by: Torsten Duwe <duwe@suse.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will@kernel.org>
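      The factored-out search helper looks roughly like this:

      | static const Elf_Shdr *find_section(const Elf_Ehdr *hdr,
      |                                     const Elf_Shdr *sechdrs,
      |                                     const char *name)
      | {
      |         const Elf_Shdr *s, *se;
      |         const char *secstrs = (void *)hdr + sechdrs[hdr->e_shstrndx].sh_offset;
      |
      |         for (s = sechdrs, se = sechdrs + hdr->e_shnum; s < se; s++) {
      |                 if (strcmp(name, secstrs + s->sh_name) == 0)
      |                         return s;
      |         }
      |
      |         return NULL;
      | }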
    • module/ftrace: handle patchable-function-entry · a1326b17
      Committed by Mark Rutland
      When using patchable-function-entry, the compiler will record the
      callsites into a section named "__patchable_function_entries" rather
      than "__mcount_loc". Let's abstract this difference behind a new
      FTRACE_CALLSITE_SECTION, so that architectures don't have to handle this
      explicitly (e.g. with custom module linker scripts).
      
      As parisc currently handles this explicitly, it is fixed up accordingly,
      with its custom linker script removed. Since FTRACE_CALLSITE_SECTION is
      only defined when DYNAMIC_FTRACE is selected, the parisc module loading
      code is updated to only use the definition in that case. When
      DYNAMIC_FTRACE is not selected, modules shouldn't have this section, so
      this removes some redundant work in that case.
      
      To make sure that this is kept up-to-date for modules and the main
      kernel, a comment is added to vmlinux.lds.h, with the existing ifdeffery
      simplified for legibility.
      
      I built parisc generic-{32,64}bit_defconfig with DYNAMIC_FTRACE enabled,
      and verified that the section made it into the .ko files for modules.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Helge Deller <deller@gmx.de>
      Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Torsten Duwe <duwe@suse.de>
      Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Tested-by: Sven Schnelle <svens@stackframe.org>
      Tested-by: Torsten Duwe <duwe@suse.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: James E.J. Bottomley <James.Bottomley@HansenPartnership.com>
      Cc: Jessica Yu <jeyu@kernel.org>
      Cc: linux-parisc@vger.kernel.org
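      The abstraction amounts to a single conditional definition, roughly:

      | #ifdef CC_USING_PATCHABLE_FUNCTION_ENTRY
      | #define FTRACE_CALLSITE_SECTION "__patchable_function_entries"
      | #else
      | #define FTRACE_CALLSITE_SECTION "__mcount_loc"
      | #endif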
    • arm64: kpti: Add NVIDIA's Carmel core to the KPTI whitelist · 918e1946
      Committed by Rich Wiley
      NVIDIA Carmel CPUs don't implement ID_AA64PFR0_EL1.CSV3 but
      aren't susceptible to Meltdown, so add Carmel to kpti_safe_list[].
      Signed-off-by: Rich Wiley <rwiley@nvidia.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
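      The addition follows the existing midr_range table pattern (list
      abbreviated; surrounding entries shown only for context):

      | static const struct midr_range kpti_safe_list[] = {
      |         MIDR_ALL_VERSIONS(MIDR_CAVIUM_THUNDERX2),
      |         MIDR_ALL_VERSIONS(MIDR_BRCM_VULCAN),
      |         MIDR_ALL_VERSIONS(MIDR_NVIDIA_CARMEL),  /* new entry */
      |         /* ... */
      |         { /* sentinel */ }
      | };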
    • arm64: mm: Remove MAX_USER_VA_BITS definition · 218564b1
      Committed by Bhupesh Sharma
      commit 9b31cf49 ("arm64: mm: Introduce MAX_USER_VA_BITS definition")
      introduced the MAX_USER_VA_BITS definition, which was used to support
      the arm64 mm use-cases where user space could use 52-bit virtual
      addresses whereas kernel space was still limited to a maximum of 48-bit
      virtual addressing.

      But, now with commit b6d00d47 ("arm64: mm: Introduce 52-bit Kernel
      VAs"), we removed the 52-bit-user/48-bit-kernel kconfig option, and
      hence there is no longer any scenario where the user VA size differs
      from the kernel VA size (this remains true even with
      CONFIG_ARM64_FORCE_52BIT enabled).

      Hence we can do away with the MAX_USER_VA_BITS macro, as it is equal to
      VA_BITS (the maximum VA space size) in all possible use-cases. Note that
      even though 'vabits_actual' would be 48 on arm64 hardware which doesn't
      support the ARMv8.2 LVA extension (even when CONFIG_ARM64_VA_BITS_52 is
      enabled), VA_BITS would still be set to 52. Hence this change is safe in
      all possible VA address space combinations.
      
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Steve Capper <steve.capper@arm.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: linux-kernel@vger.kernel.org
      Cc: kexec@lists.infradead.org
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Bhupesh Sharma <bhsharma@redhat.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
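      Illustrative effect at a typical use site (assuming the PTRS_PER_PGD
      definition in pgtable-hwdef.h; a sketch, not the full diff):

      | /* Before: */
      | #define PTRS_PER_PGD (1 << (MAX_USER_VA_BITS - PGDIR_SHIFT))
      | /* After: user and kernel VA sizes agree, so VA_BITS suffices. */
      | #define PTRS_PER_PGD (1 << (VA_BITS - PGDIR_SHIFT))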
    • arm64: mm: simplify the page end calculation in __create_pgd_mapping() · 32d18708
      Committed by Masahiro Yamada
      Calculate the page-aligned end address more simply.
      
      The local variable "length" is no longer needed.
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
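      The simplification, sketched:

      | /* Before: */
      | addr   = virt & PAGE_MASK;
      | length = PAGE_ALIGN(size + (virt & ~PAGE_MASK));
      | end    = addr + length;
      |
      | /* After: the same page-aligned end, computed directly. */
      | end = PAGE_ALIGN(virt + size);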
  6. 01 Nov 2019, 2 commits
  7. 29 Oct 2019, 2 commits
  8. 28 Oct 2019, 8 commits
  9. 26 Oct 2019, 7 commits
  10. 25 Oct 2019, 3 commits
  11. 18 Oct 2019, 3 commits