1. 22 Nov 2016, 1 commit
    • arm64: Introduce uaccess_{disable,enable} functionality based on TTBR0_EL1 · 4b65a5db
      Authored by Catalin Marinas
      This patch adds the uaccess macros/functions to disable access to user
      space by setting TTBR0_EL1 to a reserved zeroed page. Since the value
      written to TTBR0_EL1 must be a physical address, for simplicity this
      patch introduces a reserved_ttbr0 page at a constant offset from
      swapper_pg_dir. The uaccess_disable code uses the ttbr1_el1 value
      adjusted by the reserved_ttbr0 offset.
      
      Enabling access to user space is done by restoring TTBR0_EL1 with the value
      from the struct thread_info ttbr0 variable. Interrupts must be disabled
      during the uaccess_ttbr0_enable code to ensure the atomicity of the
      thread_info.ttbr0 read and TTBR0_EL1 write. This patch also moves the
      get_thread_info asm macro from entry.S to assembler.h for reuse in the
      uaccess_ttbr0_* macros.
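      
      A minimal C-style sketch of the idea (the actual patch implements this as
      assembly macros; the _sketch names, the offset constant and the direction
      of the adjustment are illustrative assumptions, and thread_info.ttbr0 is
      the field this patch adds):
      
      #include <linux/irqflags.h>
      #include <linux/thread_info.h>
      
      /* Hypothetical constant: distance from swapper_pg_dir to reserved_ttbr0. */
      #define RESERVED_TTBR0_OFFSET_SKETCH 0x1000UL
      
      static inline void uaccess_ttbr0_disable_sketch(void)
      {
              unsigned long ttbr;
      
              /* TTBR1_EL1 points at swapper_pg_dir; reserved_ttbr0 lives at a
                 constant offset from it, so its physical address can be derived
                 without a separate lookup. */
              asm volatile("mrs %0, ttbr1_el1" : "=r" (ttbr));
              ttbr += RESERVED_TTBR0_OFFSET_SKETCH;
              asm volatile("msr ttbr0_el1, %0\n\tisb" : : "r" (ttbr));
      }
      
      static inline void uaccess_ttbr0_enable_sketch(void)
      {
              unsigned long flags;
      
              /* Interrupts off so the thread_info.ttbr0 read and the TTBR0_EL1
                 write are atomic with respect to a context switch. */
              local_irq_save(flags);
              asm volatile("msr ttbr0_el1, %0\n\tisb"
                           : : "r" (current_thread_info()->ttbr0));
              local_irq_restore(flags);
      }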
      
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      4b65a5db
  2. 12 Nov 2016, 2 commits
    • arm64: split thread_info from task stack · c02433dd
      Authored by Mark Rutland
      This patch moves arm64's struct thread_info from the task stack into
      task_struct. This protects thread_info from corruption in the case of
      stack overflows, and makes its address harder to determine if stack
      addresses are leaked, making a number of attacks more difficult. Precise
      detection and handling of overflow is left for subsequent patches.
      
      Largely, this involves changing code to store the task_struct in sp_el0,
      and acquire the thread_info from the task struct. Core code now
      implements current_thread_info(), and as noted in <linux/sched.h> this
      relies on offsetof(task_struct, thread_info) == 0, enforced by core
      code.
      
      This change means that the 'tsk' register used in entry.S now points to
      a task_struct, rather than a thread_info as it used to. To make this
      clear, the TI_* field offsets are renamed to TSK_TI_*, with asm-offsets
      appropriately updated to account for the structural change.
      
      Userspace clobbers sp_el0, and we can no longer restore this from the
      stack. Instead, the current task is cached in a per-cpu variable that we
      can safely access from early assembly as interrupts are disabled (and we
      are thus not preemptible).
      
      Both secondary entry and idle are updated to stash the sp and task
      pointer separately.
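      
      A sketch of how the current task and current_thread_info() can be derived
      once the task pointer lives in sp_el0 and thread_info is the first member
      of task_struct (simplified forms with _sketch names, not the exact kernel
      code):
      
      #include <linux/sched.h>
      
      static __always_inline struct task_struct *get_current_sketch(void)
      {
              unsigned long sp_el0;
      
              /* The current task_struct pointer is cached in sp_el0. */
              asm ("mrs %0, sp_el0" : "=r" (sp_el0));
              return (struct task_struct *)sp_el0;
      }
      
      /* Valid only because offsetof(struct task_struct, thread_info) == 0,
         which core code enforces. */
      #define current_thread_info_sketch() \
              ((struct thread_info *)get_current_sketch())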
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Laura Abbott <labbott@redhat.com>
      Cc: AKASHI Takahiro <takahiro.akashi@linaro.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: James Morse <james.morse@arm.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      c02433dd
    • arm64: asm-offsets: remove unused definitions · 3fe12da4
      Authored by Mark Rutland
      Subsequent patches will move the thread_info::{task,cpu} fields, and the
      current TI_{TASK,CPU} offset definitions are not used anywhere.
      
      This patch removes the redundant definitions.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Laura Abbott <labbott@redhat.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      3fe12da4
  3. 09 Sep 2016, 1 commit
    • arm64: Work around systems with mismatched cache line sizes · 116c81f4
      Authored by Suzuki K Poulose
      Systems whose CPUs have differing i-cache/d-cache line sizes can cause
      problems for software cache management when execution migrates from one
      CPU to another. Typically, an application reads the cache line size on
      one CPU and then uses that length to perform cache operations. However,
      if it is migrated to another CPU with a smaller cache line size, things
      can go completely wrong. To prevent such cases, always use the smallest
      cache line size among the CPUs. The kernel CPU feature infrastructure
      already keeps track of the safe value for all CPUID registers, including
      CTR. This patch works around the problem as follows:
      
      For the kernel, dynamically patch the kernel to read the cache size
      from the system-wide copy of CTR_EL0.
      
      For applications, trap read accesses to CTR_EL0 (by clearing the
      SCTLR.UCT bit) and emulate the mrs instruction to return the
      system-wide safe value of CTR_EL0.
      
      For faster access (i.e., to avoid looking up the system-wide value of
      CTR_EL0 via read_system_reg), we keep track of a pointer to the table
      entry for CTR_EL0 in the CPU feature infrastructure.
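      
      For reference, a small helper showing how a D-cache line size is derived
      from a CTR_EL0 value (the function name is illustrative). Applied to the
      system-wide safe copy of CTR_EL0 rather than the local register, this
      yields the smallest line size across all CPUs:
      
      #include <linux/types.h>
      
      /* CTR_EL0.DminLine (bits [19:16]) is log2 of the number of 4-byte words
         in the smallest D-cache line the core can access. */
      static inline unsigned int ctr_to_dcache_line_size_sketch(u64 ctr_el0)
      {
              unsigned int dminline = (ctr_el0 >> 16) & 0xf;
      
              return 4U << dminline;          /* line size in bytes */
      }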
      
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Andre Przywara <andre.przywara@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      116c81f4
  4. 19 Jul 2016, 1 commit
  5. 12 Jul 2016, 1 commit
    • arm64: Add support for CLOCK_MONOTONIC_RAW in clock_gettime() vDSO · 49eea433
      Authored by Kevin Brodsky
      So far the arm64 clock_gettime() vDSO implementation only supported
      the following clocks, falling back to the syscall for the others:
      - CLOCK_REALTIME{,_COARSE}
      - CLOCK_MONOTONIC{,_COARSE}
      
      This patch adds support for the CLOCK_MONOTONIC_RAW clock, taking
      advantage of the recent refactoring of the vDSO time functions. Like
      the non-_COARSE clocks, this only works when the "arch_sys_counter"
      clocksource is in use (allowing us to read the current time from the
      virtual counter register), otherwise we also have to fall back to the
      syscall.
      
      Most of the data is shared with CLOCK_MONOTONIC, and the algorithm is
      similar. The reference implementation in kernel/time/timekeeping.c
      shows that:
      - CLOCK_MONOTONIC = tk->wall_to_monotonic + tk->xtime_sec +
        timekeeping_get_ns(&tk->tkr_mono)
      - CLOCK_MONOTONIC_RAW = tk->raw_time + timekeeping_get_ns(&tk->tkr_raw)
      - tkr_mono and tkr_raw are identical (in particular, same
        clocksource), except these members:
        * mult (only mono's multiplier is NTP-adjusted)
        * xtime_nsec (always 0 for raw)
      
      Therefore, tk->raw_time and tkr_raw->mult are now also stored in the
      vDSO data page.
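      
      A hedged sketch of the nanosecond computation, mirroring
      timekeeping_get_ns() (parameter names are illustrative, not the actual
      vDSO data page fields). CLOCK_MONOTONIC_RAW is then tk->raw_time plus
      this delta, with the usual nanosecond-overflow normalisation:
      
      #include <linux/types.h>
      
      /* Nanoseconds elapsed since the last timekeeper update, using the raw
         (non-NTP-adjusted) multiplier that the patch exports for tkr_raw. */
      static inline u64 raw_ns_since_update_sketch(u64 cycle_now, u64 cycle_last,
                                                   u32 raw_mult, u32 shift)
      {
              return ((cycle_now - cycle_last) * raw_mult) >> shift;
      }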
      
      Cc: Ali Saidi <ali.saidi@arm.com>
      Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com>
      Reviewed-by: Dave Martin <dave.martin@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      49eea433
  6. 07 Jul 2016, 1 commit
  7. 28 Apr 2016, 3 commits
  8. 01 Mar 2016, 1 commit
  9. 25 Feb 2016, 1 commit
    • arm64: Handle early CPU boot failures · bb905274
      Authored by Suzuki K Poulose
      A secondary CPU could fail to come online due to insufficient
      capabilities and could simply die or loop in the kernel. For example,
      a CPU with no support for the selected kernel PAGE_SIZE loops in the
      kernel with the MMU turned off, and a hotplugged CPU that lacks one of
      the advertised system capabilities will die during activation.
      
      There is no way to synchronise the status of the failing CPU
      back to the master. This patch solves the issue by adding a
      field to secondary_data which can be updated by the failing
      CPU. If the secondary CPU fails even before turning the MMU on,
      it updates the status in a special variable reserved in the .head.text
      section, to make sure that the update can be cache-invalidated safely
      without possible sharing of the cache write-back granule.
      
      Here are the possible states (see the code sketch after the list):
      
       -1. CPU_MMU_OFF - Initial value set by the master CPU, this value
      indicates that the CPU could not turn the MMU on, hence the status
      could not be reliably updated in the secondary_data. Instead, the
      CPU has updated the status @ __early_cpu_boot_status.
      
       0. CPU_BOOT_SUCCESS - CPU has booted successfully.
      
       1. CPU_KILL_ME - CPU has invoked cpu_ops->die, asking the
      master CPU to synchronise by issuing cpu_ops->cpu_kill.
      
       2. CPU_STUCK_IN_KERNEL - CPU couldn't invoke die() and is instead
      looping in the kernel. This information could be used, for example,
      by kexec to check whether it is really safe to do a kexec reboot.
      
       3. CPU_PANIC_KERNEL - CPU detected a serious issue which requires
      the kernel to crash immediately. The secondary CPU cannot call
      panic() until it has initialised the GIC. This flag can be used to
      instruct the master to do so.
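      
      Restated as a sketch (the constant values follow the list above; the
      helper name, the cpu_ops use and the panic message are illustrative):
      
      #include <linux/kernel.h>
      #include <asm/cpu_ops.h>
      
      #define CPU_MMU_OFF             (-1)
      #define CPU_BOOT_SUCCESS         (0)
      #define CPU_KILL_ME              (1)
      #define CPU_STUCK_IN_KERNEL      (2)
      #define CPU_PANIC_KERNEL         (3)
      
      /* Master CPU acting on the status reported by the secondary. */
      static void handle_secondary_status_sketch(unsigned int cpu, long status)
      {
              switch (status) {
              case CPU_BOOT_SUCCESS:
                      break;                          /* CPU is online */
              case CPU_KILL_ME:
                      cpu_ops[cpu]->cpu_kill(cpu);    /* synchronise the kill */
                      break;
              case CPU_STUCK_IN_KERNEL:
                      /* record it so that e.g. kexec can refuse to reboot */
                      break;
              case CPU_PANIC_KERNEL:
                      panic("secondary CPU %u failed to boot", cpu);
              }
      }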
      
      Cc: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      [catalin.marinas@arm.com: conflict resolution]
      [catalin.marinas@arm.com: converted "status" from int to long]
      [catalin.marinas@arm.com: updated update_early_cpu_boot_status to use str_l]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      bb905274
  10. 05 Jan 2016, 1 commit
  11. 14 Dec 2015, 2 commits
  12. 07 Oct 2015, 1 commit
    • arm64: mm: rewrite ASID allocator and MM context-switching code · 5aec715d
      Authored by Will Deacon
      Our current switch_mm implementation suffers from a number of problems:
      
        (1) The ASID allocator relies on IPIs to synchronise the CPUs on a
            rollover event
      
        (2) Because of (1), we cannot allocate ASIDs with interrupts disabled
            and therefore make use of a TIF_SWITCH_MM flag to postpone the
            actual switch to finish_arch_post_lock_switch
      
        (3) We run context switch with a reserved (invalid) TTBR0 value, even
            though the ASID and pgd are updated atomically
      
        (4) We take a global spinlock (cpu_asid_lock) during context-switch
      
        (5) We use h/w broadcast TLB operations when they are not required
            (e.g. in flush_context)
      
      This patch addresses these problems by rewriting the ASID algorithm to
      match the bitmap-based arch/arm/ implementation more closely. This in
      turn allows us to remove much of the complication surrounding switch_mm,
      including the ugly thread flag.
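      
      An abbreviated sketch of the bitmap-based allocation idea (the _sketch
      names, the ASID count and the rollover handling are simplified
      assumptions; the real allocator also tracks per-ASID generations):
      
      #include <linux/bitmap.h>
      #include <linux/bitops.h>
      
      #define NUM_ASIDS_SKETCH 256            /* illustrative size */
      
      static DECLARE_BITMAP(asid_map_sketch, NUM_ASIDS_SKETCH);
      
      static int alloc_asid_sketch(void)
      {
              int asid = find_first_zero_bit(asid_map_sketch, NUM_ASIDS_SKETCH);
      
              if (asid >= NUM_ASIDS_SKETCH)
                      return -1;      /* rollover: bump generation, flush TLBs, retry */
              __set_bit(asid, asid_map_sketch);
              return asid;
      }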
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      5aec715d
  13. 12 Aug 2015, 1 commit
  14. 21 Jul 2015, 2 commits
    • KVM: arm64: introduce vcpu->arch.debug_ptr · 84e690bf
      Authored by Alex Bennée
      This introduces a level of indirection for the debug registers. Instead
      of using sys_regs[] directly, we store the registers in a structure in
      the vcpu. The new kvm_arm_reset_debug_ptr() sets the debug pointer to
      the guest context.
      
      Because the sys_reg_desc->reg field no longer holds an offset into
      sys_regs[] but an index into a debug-specific struct, we need to add a
      number of additional trap functions for each register. Also, as the
      generic user-space access code no longer works, we have introduced a
      new pair of function pointers in the sys_reg_desc structure to override
      the generic code when needed.
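      
      A simplified sketch of the indirection (struct and field names are
      modelled loosely on the patch; treat the exact shapes and register
      counts as assumptions):
      
      #include <linux/types.h>
      
      /* Debug registers now live in their own structure rather than sys_regs[]. */
      struct debug_regs_sketch {
              u64 dbg_bcr[16];
              u64 dbg_bvr[16];
              u64 dbg_wcr[16];
              u64 dbg_wvr[16];
      };
      
      struct vcpu_arch_sketch {
              struct debug_regs_sketch vcpu_debug_state;
              struct debug_regs_sketch *debug_ptr;    /* selects guest or host state */
      };
      
      /* kvm_arm_reset_debug_ptr(): point the vcpu back at its own guest context. */
      static inline void reset_debug_ptr_sketch(struct vcpu_arch_sketch *arch)
      {
              arch->debug_ptr = &arch->vcpu_debug_state;
      }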
      Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
      Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      84e690bf
    • KVM: arm: introduce kvm_arm_init/setup/clear_debug · 56c7f5e7
      Authored by Alex Bennée
      This is a precursor for later patches which will need to do more to
      set up debug state before entering the hyp.S switch code. The existing
      functionality for setting mdcr_el2 has been moved out of hyp.S and now
      uses the value kept in vcpu->arch.mdcr_el2.
      
      As the assembler previously masked and preserved MDCR_EL2.HPMN, I've
      had to add a mechanism to save the value of mdcr_el2 in a per-cpu
      variable during the initialisation code. The kernel never sets this
      number, so we assume the boot code has set up the correct value here.
      
      This also moves the conditional setting of the TDA bit from the hyp code
      into the C code which is currently used for the lazy debug register
      context switch code.
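      
      A hedged sketch of the per-cpu save described above (variable and
      function names are illustrative; the MDCR_EL2 read has to run from the
      initialisation path that still executes at EL2):
      
      #include <linux/percpu.h>
      #include <linux/types.h>
      
      static DEFINE_PER_CPU(u64, saved_mdcr_el2_sketch);
      
      /* Preserve the boot value of MDCR_EL2, notably MDCR_EL2.HPMN, which the
         kernel itself never programs. */
      static void save_boot_mdcr_el2_sketch(void)
      {
              u64 mdcr;
      
              asm volatile("mrs %0, mdcr_el2" : "=r" (mdcr));
              __this_cpu_write(saved_mdcr_el2_sketch, mdcr);
      }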
      Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
      Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      56c7f5e7
  15. 12 Jun 2015, 1 commit
  16. 13 Apr 2015, 1 commit
  17. 20 Mar 2015, 1 commit
  18. 27 Jan 2015, 1 commit
    • arm64: kernel: remove ARM64_CPU_SUSPEND config option · af3cfdbf
      Authored by Lorenzo Pieralisi
      The ARM64_CPU_SUSPEND config option was introduced to make the code
      providing context save/restore selectable only on platforms requiring
      power management capabilities.
      
      Currently ARM64_CPU_SUSPEND depends on the PM_SLEEP config option which
      in turn is set by the SUSPEND config option.
      
      The introduction of CPU_IDLE for arm64 requires that the code configured
      by ARM64_CPU_SUSPEND (context save/restore) be compiled in, so that the
      CPU idle driver can rely on the CPU operations carrying out context
      save/restore.
      
      The ARM64_CPUIDLE config option (the ARM64 generic idle driver) is
      therefore forced to select ARM64_CPU_SUSPEND even when its dependencies
      (i.e. PM_SLEEP) are not met, which is not a clean way of handling the
      kernel configuration options.
      
      For these reasons, this patch removes the ARM64_CPU_SUSPEND config option
      and makes the context save/restore dependent on CPU_PM, which is selected
      whenever either SUSPEND or CPU_IDLE is configured, cleaning up the
      dependencies in the process.
      
      This way, code previously configured through ARM64_CPU_SUSPEND is
      compiled in whenever a power management subsystem requires it to be
      present in the kernel (SUSPEND || CPU_IDLE), which is the behaviour
      expected on ARM64 kernels.
      
      The cpu_suspend and cpu_init_idle CPU operations are added only if
      CPU_IDLE is selected, since they are CPU_IDLE specific methods and
      should be grouped and defined accordingly.
      
      PSCI CPU operations are updated to reflect the introduced changes.
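      
      As a rough illustration of the resulting shape (the field list is
      abridged and the struct name is made up; treat the exact layout as an
      assumption rather than the mainline definition):
      
      #include <linux/of.h>
      
      struct cpu_operations_sketch {
              const char *name;
              int (*cpu_boot)(unsigned int cpu);
      #ifdef CONFIG_CPU_IDLE
              int (*cpu_init_idle)(struct device_node *dn, unsigned int cpu);
              int (*cpu_suspend)(unsigned long arg);
      #endif
      };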
      Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Krzysztof Kozlowski <k.kozlowski@samsung.com>
      Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      af3cfdbf
  19. 21 Jan 2015, 1 commit
  20. 11 Jul 2014, 4 commits
  21. 17 Dec 2013, 1 commit
    • arm64: kernel: cpu_{suspend/resume} implementation · 95322526
      Authored by Lorenzo Pieralisi
      Kernel subsystems like CPU idle and suspend-to-RAM require a generic
      mechanism to suspend a processor, save its context and put it into
      a quiescent state. The cpu_{suspend}/{resume} implementation provides
      such a framework through a kernel interface that allows saving/restoring
      registers, flushing the context to DRAM and suspending/resuming to/from
      low-power states where processor context may be lost.
      
      The CPU suspend implementation relies on the suspend protocol registered
      in CPU operations to carry out a suspend request after context is
      saved and flushed to DRAM. The cpu_suspend interface:
      
      int cpu_suspend(unsigned long arg);
      
      allows an opaque parameter to be passed that is handed over to the
      suspend CPU operations back-end so that it can take action according to
      the semantics attached to it. The arg parameter allows suspend-to-RAM
      and CPU idle drivers to communicate with suspend protocol back-ends; it
      requires standardization so that the interface can be reused seamlessly
      across systems, paving the way for generic drivers.
      
      Context memory is allocated on the stack, whose address is stashed in a
      per-cpu variable to keep track of it and passed to core functions that
      save/restore the registers required by the architecture.
      
      Even though, upon successful execution, the cpu_suspend function shuts
      down the suspending processor, the warm boot resume mechanism, based
      on the cpu_resume function, makes the resume path operate as a
      cpu_suspend function return, so that cpu_suspend can be treated as a C
      function by the caller, which simplifies coding the PM drivers that rely
      on the cpu_suspend API.
      
      Upon context save, the minimal amount of memory is flushed to DRAM so
      that it can be retrieved when the MMU is off and caches are not searched.
      
      The suspend CPU operation, depending on the required operations (e.g. CPU
      vs cluster shutdown), is in charge of flushing the cache hierarchy,
      either implicitly (by calling firmware implementations like PSCI) or
      explicitly by executing the required cache maintenance functions.
      
      Debug exceptions are disabled during cpu_{suspend}/{resume} operations
      so that debug registers can be saved and restored properly, preventing
      preemption by debug agents enabled in the kernel.
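      
      A minimal usage sketch from a PM driver's point of view (the driver
      function name is made up and the header is an assumed home for the
      cpu_suspend() prototype); on success the call returns only after the
      warm-boot resume path has run:
      
      #include <asm/suspend.h>    /* assumed location of the cpu_suspend() prototype */
      
      static int enter_lowpower_state_sketch(unsigned long state_idx)
      {
              /* The opaque argument is handed to the suspend protocol back-end
                 (e.g. PSCI) registered in the CPU operations. */
              return cpu_suspend(state_idx);
      }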
      Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      95322526
  22. 04 Jul 2013, 1 commit
  23. 12 Jun 2013, 1 commit
  24. 17 Sep 2012, 1 commit