1. 19 July 2014 (1 commit)
    • efi: efistub: Convert into static library · f4f75ad5
      Ard Biesheuvel authored
      This patch changes both x86 and arm64 efistub implementations
      from #including shared .c files under drivers/firmware/efi to
      building shared code as a static library.
      
      The x86 code uses a stub built into the boot executable which
      uncompresses the kernel at boot time. In this case, the library is
      linked into the decompressor.
      
      In the arm64 case, the stub is part of the kernel proper so the library
      is linked into the kernel proper as well.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
      f4f75ad5
  2. 08 July 2014 (5 commits)
  3. 31 May 2014 (1 commit)
    • arm64: kernel: initialize broadcast hrtimer based clock event device · 9358d755
      Lorenzo Pieralisi authored
      On platforms implementing CPU power management, the CPUidle subsystem
      can allow CPUs to enter idle states where the local timer logic is lost
      on power down. To keep the software timers functional, the kernel relies
      on an always-on broadcast timer being present in the platform to relay
      the interrupts signalling timer expiries.
      
      For platforms implementing CPU core gating that lack an always-on HW
      timer, or implement it in a broken way, this patch adds code to
      initialize the kernel's hrtimer-based clock event device at boot (which
      the kernel can then choose as the tick broadcast device).
      It relies on a dynamically chosen CPU remaining powered up; this CPU
      then relays the timer interrupt to CPUs in deep idle states through its
      local HW timer device.
      
      Keeping a CPU always on has implications for the platform's power
      management capabilities and makes CPUidle suboptimal, since the kernel
      keeps at least one CPU in a shallow idle state to relay timer
      interrupts, but it does leave the kernel with a functional system and
      some working power management capabilities.
      
      The hrtimer based clock event device is unconditionally registered, but
      has the lowest possible rating such that any broadcast-capable HW clock
      event device present will be chosen in preference as the tick broadcast
      device.
      Reviewed-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      9358d755
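
      A minimal sketch of how this could be wired up at boot, assuming the
      generic tick_setup_hrtimer_broadcast() helper is the registration point;
      the header choice and the call-site placement are assumptions, not taken
      from the patch:

        #include <linux/clockchips.h>
        #include <linux/init.h>

        void __init time_init(void)
        {
            /* ... existing clocksource / architected timer probing ... */

            /*
             * Register the hrtimer-based broadcast clock event device. It
             * carries the lowest possible rating, so any real broadcast-
             * capable HW device still wins as the tick broadcast device.
             */
            tick_setup_hrtimer_broadcast();
        }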
  4. 29 May 2014 (5 commits)
  5. 23 May 2014 (5 commits)
  6. 17 May 2014 (5 commits)
  7. 15 May 2014 (2 commits)
  8. 12 May 2014 (5 commits)
    • arm64: is_compat_task is defined both in asm/compat.h and linux/compat.h · fd92d4a5
      AKASHI Takahiro authored
      Some kernel files may include both linux/compat.h and asm/compat.h,
      directly or indirectly. Since both header files contain is_compat_task()
      under !CONFIG_COMPAT, compiling them with !CONFIG_COMPAT will eventually
      fail. Such files include kernel/auditsc.c, kernel/seccomp.c and
      init/do_mounts.c (do_mounts.c may read asm/compat.h via asm/ftrace.h
      once ftrace is implemented).
      
      So this patch proactively
      1) removes is_compat_task() under !CONFIG_COMPAT from asm/compat.h
      2) replaces asm/compat.h with linux/compat.h in kernel/*.c,
         but asm/compat.h is still necessary in ptrace.c and process.c because
         they use is_compat_thread().
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      fd92d4a5
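
      A simplified sketch of the clash being fixed; the exact forms below are
      illustrative (the real fallbacks may differ, e.g. a macro versus an
      inline), but the effect is the same: two conflicting definitions of
      is_compat_task() in a single translation unit when CONFIG_COMPAT is not
      set.

        /* fallback in linux/compat.h when CONFIG_COMPAT is not set */
        #ifndef CONFIG_COMPAT
        static inline int is_compat_task(void)
        {
            return 0;
        }
        #endif

        /* fallback in arm64's asm/compat.h before this patch */
        #ifndef CONFIG_COMPAT
        static inline int is_compat_task(void)  /* redefinition: build error */
        {
            return 0;
        }
        #endif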
    • arm64: split syscall_trace() into separate functions for enter/exit · 3157858f
      AKASHI Takahiro authored
      As done on arm, this change makes it easy to confirm that we invoke the
      syscall-related hooks, including the syscall tracepoint, audit and
      seccomp (to be implemented later), in the correct order: operations are
      undone on exit in the opposite order to that in which they were done on
      entry.
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      3157858f
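
      A hedged sketch of the split entry/exit hooks; the function names follow
      the commit title, while the bodies use the generic tracehook helpers
      purely for illustration and are not the patch's exact code.

        #include <linux/ptrace.h>
        #include <linux/sched.h>
        #include <linux/thread_info.h>
        #include <linux/tracehook.h>

        asmlinkage int syscall_trace_enter(struct pt_regs *regs)
        {
            /* entry hooks run first; audit/tracepoints/seccomp land here later */
            if (test_thread_flag(TIF_SYSCALL_TRACE) &&
                tracehook_report_syscall_entry(regs))
                return -1;  /* tracer asked us to abandon the syscall */

            return regs->syscallno;
        }

        asmlinkage void syscall_trace_exit(struct pt_regs *regs)
        {
            /* exit hooks undo the entry-side work in the opposite order */
            if (test_thread_flag(TIF_SYSCALL_TRACE))
                tracehook_report_syscall_exit(regs, 0);
        }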
    • arm64: make a single hook to syscall_trace() for all syscall features · 449f81a4
      AKASHI Takahiro authored
      Currently syscall_trace() is called only for ptrace.
      With additional TIF_xx flags defined, it is now also called for audit,
      ftrace and seccomp, in addition to ptrace.
      Acked-by: Richard Guy Briggs <rgb@redhat.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      449f81a4
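
      A hedged sketch of the mechanism: the syscall entry path tests one
      combined "work" mask, so a single syscall_trace() hook covers ptrace,
      audit, ftrace and seccomp. The flag and macro names are modelled on
      arm64's thread_info.h and are shown as an assumption, not a quote of
      the patch.

        #define _TIF_SYSCALL_WORK  (_TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT | \
                                    _TIF_SYSCALL_TRACEPOINT | _TIF_SECCOMP)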
    • arm64: debug: avoid accessing mdscr_el1 on fault paths where possible · 2a283070
      Will Deacon authored
      Since mdscr_el1 is part of the debug register group, it is highly likely
      to be trapped by a hypervisor to prevent virtual machines from debugging
      (buggering?) each other. Unfortunately, this absolutely destroys our
      performance, since we access the register on many of our low-level
      fault handling paths to keep track of the various debug state machines.
      
      This patch removes our dependency on mdscr_el1 in the case that debugging
      is not being used. More specifically we:
      
        - Use TIF_SINGLESTEP to indicate that a task is stepping at EL0 and
          avoid disabling step in the MDSCR when we don't need to.
          MDSCR_EL1.SS handling is moved to kernel_entry, when trapping from
          userspace.
      
        - Ensure debug exceptions are re-enabled on *all* exception entry
          paths, even the debug exception handling path (where we re-enable
          exceptions after invoking the handler). Since we can now rely on
          MDSCR_EL1.SS being cleared by the entry code, exception handlers can
          usually enable debug immediately before enabling interrupts.
      
        - Remove all debug exception unmasking from ret_to_user and
          el1_preempt, since we will never get here with debug exceptions
          masked.
      
      This results in a slight change to kernel debug behaviour, where we now
      step into interrupt handlers and data aborts from EL1 when debugging the
      kernel, which is actually a useful thing to do. A side-effect of this is
      that it *does* potentially prevent stepping off {break,watch}points when
      there is a high-frequency interrupt source (e.g. a timer), so a debugger
      would need to use either breakpoints or manually disable interrupts to
      get around this issue.
      
      With this patch applied, guest performance is restored under KVM when
      debug register accesses are trapped (and we get a measurable performance
      increase on the host on Cortex-A57 too).
      
      Cc: Ian Campbell <ian.campbell@citrix.com>
      Tested-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      2a283070
    • arm64: use cpu_online_mask when using forced irq_set_affinity · 601c9421
      Sudeep Holla authored
      Commit 01f8fa4f ("genirq: Allow forcing cpu affinity of interrupts")
      added a forced mode to irq_set_affinity, which previously refused to
      route an interrupt to an offline cpu.

      Commit ffde1de6 ("irqchip: Gic: Support forced affinity setting")
      implements this force logic and disables the cpu online check for the
      GIC interrupt controller.
      
      When __cpu_disable calls migrate_irqs, it disables the current cpu in
      cpu_online_mask and uses forced irq_set_affinity to migrate the IRQs
      away from the cpu, but it passes an affinity mask that still includes
      the cpu being offlined.
      
      When calling irq_set_affinity with force == true in a cpu hotplug path,
      the caller must ensure that the cpu being offlined is not present in the
      affinity mask or it may be selected as the target CPU, leading to the
      interrupt not being migrated.
      
      This patch uses cpu_online_mask when using forced irq_set_affinity so
      that the IRQs are properly migrated away.
      Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      601c9421
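
      A hedged sketch of the idea behind the fix (not the literal
      migrate_irqs() diff): when force-migrating IRQs off a CPU being
      unplugged, never hand the irqchip a mask that still contains that CPU.
      The helper name below is hypothetical.

        #include <linux/cpumask.h>
        #include <linux/interrupt.h>

        static void migrate_irq_off_dying_cpu(unsigned int irq,
                                              const struct cpumask *affinity)
        {
            struct cpumask online_affinity;

            cpumask_and(&online_affinity, affinity, cpu_online_mask);
            if (cpumask_empty(&online_affinity))
                cpumask_copy(&online_affinity, cpu_online_mask);

            /*
             * The forced path (commit 01f8fa4f) skips the irqchip's own
             * online check, so the mask passed in must already exclude the
             * CPU going offline.
             */
            irq_force_affinity(irq, &online_affinity);
        }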
  9. 10 May 2014 (2 commits)
    • arm64: head: fix cache flushing and barriers in set_cpu_boot_mode_flag · d0488597
      Will Deacon authored
      set_cpu_boot_mode_flag is used to identify which exception levels are
      encountered across the system by CPUs trying to enter the kernel. The
      basic algorithm is: if a CPU is booting at EL2, it will set a flag at
      an offset of #4 from __boot_cpu_mode, a cacheline-aligned variable.
      Otherwise, a flag is set at an offset of zero into the same cacheline.
      This enables us to check that all CPUs booted at the same exception
      level.
      
      This cacheline is written with the stage-1 MMU off (that is, via a
      strongly-ordered mapping) and will bypass any clean lines in the cache,
      leading to potential coherence problems when the variable is later
      checked via the normal, cacheable mapping of the kernel image.
      
      This patch reworks the broken flushing code so that we:
      
        (1) Use a DMB to order the strongly-ordered write of the cacheline
            against the subsequent cache-maintenance operation (by-VA
            operations only hazard against normal, cacheable accesses).
      
        (2) Use a single dc ivac instruction to invalidate any clean lines
            containing a stale copy of the line after it has been updated.
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      d0488597
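
      The same ordering, rendered as C with inline assembly purely for
      illustration; the real sequence lives in head.S and runs with the MMU
      off, and the helper name here is hypothetical.

        static void publish_boot_mode_flag(unsigned long *flag, unsigned long mode)
        {
            *flag = mode;   /* strongly-ordered write, bypasses clean cache lines */

            asm volatile("dmb sy" ::: "memory");  /* (1) order the write before the cache op */
            asm volatile("dc ivac, %0"            /* (2) invalidate any stale clean line by VA */
                         :: "r" (flag) : "memory");
        }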
    • arm64: barriers: make use of barrier options with explicit barriers · 98f7685e
      Will Deacon authored
      When calling our low-level barrier macros directly, we can often make do
      with more relaxed behaviour than the default "all accesses, full system"
      option.
      
      This patch updates the users of dsb() to specify the option which they
      actually require.
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      98f7685e
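
      A hedged example of the kind of conversion this enables; the function is
      hypothetical and only shows the before/after shape of a call site.

        #include <asm/barrier.h>

        static void order_page_table_update(void)
        {
            /* before this patch, the only spelling was a full-system barrier:
             * dsb();
             */

            /*
             * after this patch, callers ask for exactly what they need, e.g.
             * order prior stores within the inner-shareable domain only
             */
            dsb(ishst);
        }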
  10. 09 May 2014 (6 commits)
  11. 08 May 2014 (3 commits)
    • arm64: add support for kernel mode NEON in interrupt context · 190f1ca8
      Ard Biesheuvel authored
      This patch modifies kernel_neon_begin() and kernel_neon_end() so that
      they may be called from any context. To address the case where only a
      couple of registers are needed, kernel_neon_begin_partial(u32) is
      introduced, which takes as its parameter the number of bottom 'n' NEON
      q-registers required. To mark the end of such a partial section, the
      regular kernel_neon_end() should be used.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      190f1ca8
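
      A hedged usage sketch: claim only the bottom two q-registers so that the
      begin/end pair stays cheap in (soft)irq context. The surrounding
      function is hypothetical; the API names are those introduced by this
      patch.

        #include <asm/neon.h>

        static void small_simd_op_from_irq(void)
        {
            kernel_neon_begin_partial(2);   /* only q0 and q1 are needed */

            /* ... NEON code restricted to q0/q1 goes here ... */

            kernel_neon_end();              /* regular end call, as described above */
        }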
    • arm64: defer reloading a task's FPSIMD state to userland resume · 005f78cd
      Ard Biesheuvel authored
      If a task gets scheduled out and back in again and nothing has touched
      its FPSIMD state in the meantime, there is really no reason to reload
      it from memory. Similarly, repeated calls to kernel_neon_begin() and
      kernel_neon_end() will preserve and restore the FPSIMD state every time.
      
      This patch defers the FPSIMD state restore to the last possible moment,
      i.e., right before the task returns to userland. If a task does not return to
      userland at all (for any reason), the existing FPSIMD state is preserved
      and may be reused by the owning task if it gets scheduled in again on the
      same CPU.
      
      This patch adds two more functions to abstract away from straight FPSIMD
      register file saves and restores:
      - fpsimd_restore_current_state -> ensure current's FPSIMD state is loaded
      - fpsimd_flush_task_state -> invalidate live copies of a task's FPSIMD state
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      005f78cd
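
      A hedged sketch of where the two new helpers fit; the wrapper functions
      and call sites below are hypothetical, and only the fpsimd_* names come
      from the commit message.

        #include <linux/sched.h>
        #include <asm/fpsimd.h>

        /*
         * Return-to-userland path: reload only if this CPU no longer holds
         * current's most recent FPSIMD state (otherwise this is a no-op).
         */
        static void example_resume_userland(void)
        {
            fpsimd_restore_current_state();
        }

        /*
         * After modifying a task's saved FPSIMD state in memory (e.g. via
         * ptrace), drop any live register-file copy so the stale contents
         * cannot be reused.
         */
        static void example_after_regset_write(struct task_struct *task)
        {
            fpsimd_flush_task_state(task);
        }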
    • arm64: add abstractions for FPSIMD state manipulation · c51f9269
      Ard Biesheuvel authored
      There are two tacit assumptions in the FPSIMD handling code that will no
      longer hold after the next patch that optimizes away some FPSIMD state
      restores:
      - the FPSIMD registers of this CPU contain the userland FPSIMD state of
        task 'current';
      - when switching to a task, its FPSIMD state will always be restored
        from memory.
      
      This patch adds the following functions to abstract away from straight FPSIMD
      register file saves and restores:
      - fpsimd_preserve_current_state -> ensure current's FPSIMD state is saved
      - fpsimd_update_current_state -> replace current's FPSIMD state
      
      Where necessary, the signal handling and fork code are updated to use the above
      wrappers instead of poking into the FPSIMD registers directly.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      c51f9269
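
      A hedged sketch of how the signal path might use the new wrappers
      (simplified: the real code copies to and from the user signal frame with
      access checks, and the helper names and the thread.fpsimd_state field
      reference below are assumptions).

        #include <linux/sched.h>
        #include <asm/fpsimd.h>

        /* building a signal frame: flush the live registers to memory first */
        static void example_preserve_for_signal(struct fpsimd_state *dst)
        {
            fpsimd_preserve_current_state();
            *dst = current->thread.fpsimd_state;
        }

        /* sigreturn: replace current's state with what userspace handed back */
        static void example_restore_from_signal(struct fpsimd_state *src)
        {
            fpsimd_update_current_state(src);
        }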