1. 12 Jul, 2018 (9 commits)
    • arm64: convert raw syscall invocation to C · 4141c857
      Mark Rutland authored
      As a first step towards invoking syscalls with a pt_regs argument,
      convert the raw syscall invocation logic to C. We end up with a bit more
      register shuffling, but the unified invocation logic means we can unify
      the tracing paths, too.
      
      Previously, assembly had to open-code calls to ni_sys() when the system
      call number was out-of-bounds for the relevant syscall table. This case
      is now handled by invoke_syscall(), and the assembly no longer needs to
      handle this case explicitly. This allows the tracing paths to be
      simplified and unified, as we no longer need the __ni_sys_trace path and
      the __sys_trace_return label.
      
      This only converts the invocation of the syscall. The rest of the
      syscall triage and tracing is left in assembly for now, and will be
      converted in subsequent patches.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: NWill Deacon <will.deacon@arm.com>
    • arm64: introduce syscall_fn_t · 27d83e68
      Mark Rutland authored
      In preparation for invoking arbitrary syscalls from C code, let's define
      a type for an arbitrary syscall, matching the parameter passing rules of
      the AAPCS.
      
      There should be no functional change as a result of this patch.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: NWill Deacon <will.deacon@arm.com>
    • arm64: remove sigreturn wrappers · 3085e164
      Mark Rutland authored
      The arm64 sigreturn* syscall handlers are non-standard. Rather than
      taking a number of user parameters in registers as per the AAPCS,
      they expect the pt_regs as their sole argument.
      
      To make this work, we override the syscall definitions to invoke
      wrappers written in assembly, which mov the SP into x0, and branch to
      their respective C functions.
      
      On other architectures (such as x86), the sigreturn* functions take no
      argument and instead use current_pt_regs() to acquire the user
      registers. This requires less boilerplate code, and allows for other
      features such as interposing C code in this path.
      
      This patch takes the same approach for arm64.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Tentatively-reviewed-by: Dave Martin <dave.martin@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: NWill Deacon <will.deacon@arm.com>
    • arm64: move sve_user_{enable,disable} to <asm/fpsimd.h> · f9209e26
      Mark Rutland authored
      In subsequent patches, we'll want to make use of sve_user_enable() and
      sve_user_disable() outside of kernel/fpsimd.c. Let's move these to
      <asm/fpsimd.h> where we can make use of them.
      
      To avoid ifdeffery in sequences like:
      
      if (system_supports_sve() && some_condition)
      	sve_user_disable();
      
      ... empty stubs are provided when support for SVE is not enabled. Note
      that system_supports_sve() contains an IS_ENABLED(CONFIG_ARM64_SVE) check, so
      the sve_user_disable() call should be optimized away entirely when
      CONFIG_ARM64_SVE is not selected.
      
      To ensure that this is the case, the stub definitions contain a
      BUILD_BUG(), as we do for other stubs for which calls should always be
      optimized away when the relevant config option is not selected.
      
      At the same time, the include list of <asm/fpsimd.h> is sorted while
      adding <asm/sysreg.h>.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Reviewed-by: Dave Martin <dave.martin@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: NWill Deacon <will.deacon@arm.com>
    • arm64: kill change_cpacr() · 8d370933
      Mark Rutland authored
      Now that we have sysreg_clear_set(), we can use this instead of
      change_cpacr().
      
      Note that the order of the set and clear arguments differs between
      change_cpacr() and sysreg_clear_set(), so these are flipped as part of
      the conversion. Also, sve_user_enable() redundantly clears
      CPACR_EL1_ZEN_EL0EN before setting it; this is removed for clarity.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Dave Martin <dave.martin@arm.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: NWill Deacon <will.deacon@arm.com>
    • arm64: kill config_sctlr_el1() · 25be597a
      Mark Rutland authored
      Now that we have sysreg_clear_set(), we can consistently use this
      instead of config_sctlr_el1().
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Dave Martin <dave.martin@arm.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: NWill Deacon <will.deacon@arm.com>
    • arm64: move SCTLR_EL{1,2} assertions to <asm/sysreg.h> · 1c312e84
      Mark Rutland authored
      Currently we assert that the SCTLR_EL{1,2}_{SET,CLEAR} bits are
      self-consistent with an assertion in config_sctlr_el1(). This is a bit
      unusual, since config_sctlr_el1() doesn't make use of these definitions,
      and is far away from the definitions themselves.
      
      We can use the CPP #error directive to have equivalent assertions in
      <asm/sysreg.h>, next to the definitions of the set/clear bits, which is
      a bit clearer and simpler.
      
      At the same time, let's fill in the upper 32 bits for both registers in
      their respective RES0 definitions. This could be a little nicer with
      GENMASK_ULL(63, 32), but this currently lives in <linux/bitops.h>, which
      cannot safely be included from assembly, as <asm/sysreg.h> can.
      
      Note that when the preprocessor evaluates an expression for an #if
      directive, all signed or unsigned values are treated as intmax_t or
      uintmax_t respectively. To avoid ambiguity, we explicitly define the
      mask of all 64 bits.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Dave Martin <dave.martin@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: NWill Deacon <will.deacon@arm.com>
    • arm64: consistently use unsigned long for thread flags · 3eb6f1f9
      Mark Rutland authored
      In do_notify_resume, we manipulate thread_flags as a 32-bit unsigned
      int, whereas thread_info::flags is a 64-bit unsigned long, and elsewhere
      (e.g. in the entry assembly) we manipulate the flags as a 64-bit
      quantity.
      
      For consistency, and to avoid problems if we end up with more than 32
      flags, let's make do_notify_resume take the flags as a 64-bit unsigned
      long.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Dave Martin <dave.martin@arm.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: NWill Deacon <will.deacon@arm.com>
    • Revert "arm64: fix infinite stacktrace" · e87a4a92
      Will Deacon authored
      This reverts commit 7e7df71f.
      
      When unwinding out of the IRQ stack and onto the interrupted EL1 stack,
      we cannot rely on the frame pointer being strictly increasing, as this
      could terminate the backtrace early depending on how the stacks have
      been allocated.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  2. 11 Jul, 2018 (3 commits)
  3. 10 Jul, 2018 (1 commit)
    • arm64: numa: rework ACPI NUMA initialization · e1896249
      Lorenzo Pieralisi authored
      Current ACPI ARM64 NUMA initialization code in
      
      acpi_numa_gicc_affinity_init()
      
      carries out NUMA nodes creation and cpu<->node mappings at the same time
      in the arch backend so that a single SRAT walk is needed to parse both
      pieces of information.  This implies that the cpu<->node mappings must
      be stashed in an array (sized NR_CPUS) so that SMP code can later use
      the stashed values to avoid another SRAT table walk to set-up the early
      cpu<->node mappings.
      
      If the kernel is configured with a NR_CPUS value less than the actual
      processor entries in the SRAT (and MADT), the logic in
      acpi_numa_gicc_affinity_init() is broken in that the cpu<->node mapping
      is carried out (and stashed for future use) only for a number of
      SRAT entries up to NR_CPUS, which do not necessarily correspond to the
      possible cpus detected at SMP initialization in
      acpi_map_gic_cpu_interface() (ie MADT and SRAT processor entries order
      is not enforced), which leaves the kernel with broken cpu<->node
      mappings.
      
      Furthermore, given the current ACPI NUMA code parsing logic in
      acpi_numa_gicc_affinity_init(), PXM domains for CPUs that are not parsed
      because they exceed NR_CPUS entries are not mapped to NUMA nodes (ie the
      PXM corresponding node is not created in the kernel) leaving the system
      with a broken NUMA topology.
      
      Rework the ACPI ARM64 NUMA initialization process so that the NUMA
      nodes creation and cpu<->node mappings are decoupled. cpu<->node
      mappings are moved to SMP initialization code (where they are needed),
      at the cost of an extra SRAT walk so that ACPI NUMA mappings can be
      batched before being applied, fixing current parsing pitfalls.
      Acked-by: Hanjun Guo <hanjun.guo@linaro.org>
      Tested-by: John Garry <john.garry@huawei.com>
      Fixes: d8b47fca ("arm64, ACPI, NUMA: NUMA support based on SRAT and SLIT")
      Link: http://lkml.kernel.org/r/1527768879-88161-2-git-send-email-xiexiuqi@huawei.com
      Reported-by: Xie XiuQi <xiexiuqi@huawei.com>
      Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Cc: Punit Agrawal <punit.agrawal@arm.com>
      Cc: Jonathan Cameron <jonathan.cameron@huawei.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Hanjun Guo <guohanjun@huawei.com>
      Cc: Ganapatrao Kulkarni <gkulkarni@caviumnetworks.com>
      Cc: Jeremy Linton <jeremy.linton@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Xie XiuQi <xiexiuqi@huawei.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  4. 09 Jul, 2018 (2 commits)
  5. 06 Jul, 2018 (21 commits)
  6. 05 Jul, 2018 (4 commits)