1. 05 December 2014, 1 commit
    • clocksource: arch_timer: Fix code to use physical timers when requested · 0b46b8a7
      Authored by Sonny Rao
      This is a bug fix for using physical arch timers when
      the arch_timer_use_virtual boolean is false.  It restores the
      arch_counter_get_cntpct() function after its removal in
      
      0d651e4e "clocksource: arch_timer: use virtual counters"
      
      We need this on certain ARMv7 systems which are architected like this:
      
      * The firmware doesn't know and doesn't care about hypervisor mode and
        we don't want to add the complexity of a hypervisor there.
      
      * The firmware isn't involved in SMP bringup or resume.
      
      * The arch timer comes up with an uninitialized offset between the
        virtual and physical counters.  Each core gets a different random
        offset.
      
      * The device boots in "Secure SVC" mode.
      
      * Nothing has touched the reset value of CNTHCTL.PL1PCEN or
        CNTHCTL.PL1PCTEN (both default to 1 at reset).
      
      One example of such a system is RK3288, where it is much simpler to
      use the physical counter, since nobody is managing the offset and
      each time a core goes down and comes back up it will be reinitialized
      to some other random value.
      
      Fixes: 0d651e4e ("clocksource: arch_timer: use virtual counters")
      Cc: stable@vger.kernel.org
      Signed-off-by: Sonny Rao <sonnyrao@chromium.org>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
      Signed-off-by: Olof Johansson <olof@lixom.net>
      0b46b8a7
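      A minimal sketch of the restored helper, assuming the ARMv7 CP15
      encodings for the physical (CNTPCT) and virtual (CNTVCT) counters;
      the real driver additionally routes the choice between the two
      through its read-counter hook, which this sketch omits.

      /* Hedged sketch, not the exact upstream code. */
      static inline u64 arch_counter_get_cntpct(void)
      {
              u64 cval;

              isb();          /* order the read against prior accesses */
              asm volatile("mrrc p15, 0, %Q0, %R0, c14" : "=r" (cval));
              return cval;
      }

      static inline u64 arch_counter_get_cntvct(void)
      {
              u64 cval;

              isb();
              asm volatile("mrrc p15, 1, %Q0, %R0, c14" : "=r" (cval));
              return cval;
      }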
  2. 14 October 2014, 1 commit
    • arm64: KVM: Implement 48 VA support for KVM EL2 and Stage-2 · 38f791a4
      Authored by Christoffer Dall
      This patch adds the necessary support for all host kernel PGSIZE and
      VA_SPACE configuration options for both EL2 and the Stage-2 page tables.
      
      However, for 40-bit and 42-bit PARange systems, the architecture
      mandates that VTCR_EL2.SL0 is at most 1, resulting in fewer levels of
      stage-2 page tables than levels of host kernel page tables.  At the
      same time, on systems with a PARange > 42 bits, we limit the IPA
      range by always setting VTCR_EL2.T0SZ to 24.
      
      To handle Stage-2 translation using fewer levels of page tables than
      the host kernel uses, we allocate a dummy PGD with pointers to our
      actual initial-level Stage-2 page table, so that we can reuse the
      kernel's pgtable manipulation primitives.  Reproducing all of these
      in KVM would not look pretty and would unnecessarily complicate the
      32-bit side.
      
      Systems with a PARange < 40 bits are not yet supported.
      
       [ I have reworked this patch from its original form submitted by
         Jungseok to take the architecture constraints into consideration.
         There were too many changes from the original patch for me to
         preserve the authorship.  Thanks to Catalin Marinas for his help in
         figuring out a good solution to this challenge.  I have also fixed
         various bugs and missing error code handling from the original
         patch. - Christoffer ]
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Jungseok Lee <jungseoklee85@gmail.com>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
      38f791a4
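      Illustrative arithmetic only (not kernel code) for the constraint
      described above: with VTCR_EL2.T0SZ fixed at 24 and SL0 capped at 1,
      a 4K-granule stage-2 walk starts at level 1 and resolves 39 bits, so
      the remaining bit of the 40-bit IPA space is covered by concatenating
      initial-level tables, as the ARMv8 architecture permits.

      #include <stdio.h>

      int main(void)
      {
              int t0sz = 24;
              int ipa_bits = 64 - t0sz;       /* 40-bit IPA space */
              int levels = 3;                 /* SL0 = 1: levels 1..3 */
              int resolved = levels * 9 + 12; /* 9 bits/level + 12-bit page offset */
              int concat = 1 << (ipa_bits - resolved);

              printf("IPA %d bits, one walk resolves %d bits: "
                     "%d concatenated initial-level tables\n",
                     ipa_bits, resolved, concat);
              return 0;
      }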
  3. 10 October 2014, 4 commits
  4. 03 October 2014, 3 commits
  5. 01 October 2014, 1 commit
  6. 29 September 2014, 2 commits
  7. 26 September 2014, 2 commits
  8. 25 September 2014, 2 commits
    • arm64: Fix typos in KGDB macros · 7acf71d1
      Authored by Catalin Marinas
      Some of the KGDB macros used for generating the BRK instructions had the
      wrong spelling for DBG and KGDB abbreviations.
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      7acf71d1
    • arm64: insn: Add return statements after BUG_ON() · a9ae04c9
      Authored by Mark Brown
      Following a recent series of enhancements to the insn code, the ARMv8
      allnoconfig build has been generating a large number of warnings of
      the form:
      
      arch/arm64/kernel/insn.c:689:8: warning: 'insn' may be used uninitialized in this function [-Wmaybe-uninitialized]
      
      This is because BUG() and related macros can be compiled out, so
      execution paths that would normally result in a panic compile out to
      no-ops instead.
      
      I wasn't able to immediately identify a sensible return value to use
      in these cases, so just return AARCH64_BREAK_FAULT; this is all
      "should never happen" code, so hopefully it never has a practical
      impact.
      Signed-off-by: Mark Brown <broonie@kernel.org>
      [catalin.marinas@arm.com: AARCH64_BREAK_FAULT definition contributed by Daniel Borkmann]
      [catalin.marinas@arm.com: replace return 0 with AARCH64_BREAK_FAULT]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      a9ae04c9
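      A hedged sketch of the pattern the patch applies; the function body
      is illustrative, not the actual insn.c code. When CONFIG_BUG is
      disabled, BUG() compiles to nothing, so the explicit return keeps the
      encoder from falling through with an uninitialized value.

      static u32 aarch64_insn_encode_example(int type)
      {
              u32 insn;

              switch (type) {
              case 0:
                      insn = 0xd503201f;      /* A64 NOP, for illustration */
                      break;
              default:
                      BUG_ON(1);
                      /* Reached only when BUG() is compiled out. */
                      return AARCH64_BREAK_FAULT;
              }

              return insn;
      }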
  9. 24 September 2014, 2 commits
    • kvm: Add arch specific mmu notifier for page invalidation · fe71557a
      Authored by Tang Chen
      This will be used to let the guest run while the APIC access page is
      not pinned.  Because subsequent patches will fill in the function
      for x86, place the (still empty) x86 implementation in the x86.c file
      instead of adding an inline function in kvm_host.h.
      Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      fe71557a
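      The shape of the hook as described: a per-arch callback declared in
      the generic header with a deliberately empty x86 stub. The prototype
      is reconstructed from the commit text and should be treated as an
      assumption rather than the exact upstream signature.

      /* include/linux/kvm_host.h (sketch) */
      void kvm_arch_mmu_notifier_invalidate_page(struct kvm *kvm,
                                                 unsigned long address);

      /* arch/x86/kvm/x86.c (sketch): empty; later patches fill it in */
      void kvm_arch_mmu_notifier_invalidate_page(struct kvm *kvm,
                                                 unsigned long address)
      {
      }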
    • kvm: Fix page ageing bugs · 57128468
      Authored by Andres Lagar-Cavilla
      1. We were calling clear_flush_young_notify in unmap_one, but we are
      within an mmu notifier invalidate range scope. The spte no longer
      exists (due to range_start) and the accessed bit info has already
      been propagated (due to kvm_pfn_set_accessed). Simply call
      clear_flush_young.
      
      2. We clear_flush_young on a primary MMU PMD, but this may be mapped
      as a collection of PTEs by the secondary MMU (e.g. during log-dirty).
      This required expanding the interface of the clear_flush_young mmu
      notifier, so a lot of code has been trivially touched.
      
      3. In the absence of shadow_accessed_mask (e.g. EPT A bit), we
      emulate the accessed bit by blowing away the spte. This requires
      properly synchronizing with MMU notifier consumers, as every other
      removal of sptes does.
      Signed-off-by: Andres Lagar-Cavilla <andreslc@google.com>
      Acked-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      57128468
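      A sketch of the widened notifier callback from point 2, assuming the
      straightforward range form; the surrounding struct layout is
      illustrative, not the exact upstream definition.

      struct mmu_notifier_ops {
              /* ... other callbacks ... */

              /* Previously took a single 'unsigned long address'; the
               * range form lets callers age a PMD-sized region that the
               * secondary MMU maps as individual PTEs. */
              int (*clear_flush_young)(struct mmu_notifier *mn,
                                       struct mm_struct *mm,
                                       unsigned long start,
                                       unsigned long end);
      };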
  10. 23 September 2014, 1 commit
    • Revert "arm64: dmi: Add SMBIOS/DMI support" · 6f325eaa
      Authored by Catalin Marinas
      This reverts commit 668ebd10.
      
      ... because it generates lots of warnings during boot if Linux isn't
      started as an EFI application:
      
      WARNING: CPU: 4 PID: 1 at
      /work/Linux/linux-2.6-aarch64/drivers/firmware/dmi_scan.c:591 dmi_matches+0x10c/0x110()
      dmi check: not initialized yet.
      Modules linked in:
      CPU: 4 PID: 1 Comm: swapper/0 Not tainted 3.17.0-rc4+ #606
      Call trace:
      [<ffffffc000087fb0>] dump_backtrace+0x0/0x124
      [<ffffffc0000880e4>] show_stack+0x10/0x1c
      [<ffffffc0004d58f8>] dump_stack+0x74/0xb8
      [<ffffffc0000ab640>] warn_slowpath_common+0x8c/0xb4
      [<ffffffc0000ab6b4>] warn_slowpath_fmt+0x4c/0x58
      [<ffffffc0003f2d7c>] dmi_matches+0x108/0x110
      [<ffffffc0003f2da8>] dmi_check_system+0x24/0x68
      [<ffffffc0006974c4>] atkbd_init+0x10/0x34
      [<ffffffc0000814ac>] do_one_initcall+0x88/0x1a0
      [<ffffffc00067aab4>] kernel_init_freeable+0x148/0x1e8
      [<ffffffc0004d2c64>] kernel_init+0x10/0xd4
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      6f325eaa
  11. 22 September 2014, 3 commits
  12. 14 September 2014, 2 commits
  13. 12 September 2014, 2 commits
    • arm64: kernel: introduce cpu_init_idle CPU operation · d64f84f6
      Authored by Lorenzo Pieralisi
      The CPUidle subsystem on ARM64 machines requires the idle states
      implementation back-end to initialize idle state parameters at
      boot. This patch adds a hook in the CPU operations structure that
      should be initialized by the CPU operations back-end in order to
      provide a function that initializes CPU idle states.

      This patch also adds the infrastructure to the arm64 kernel required
      to export the CPU-operations-based initialization interface, so
      that drivers (i.e. CPUidle) can use it when they are initialized
      at probe time.
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      d64f84f6
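      A minimal sketch of the hook added to the CPU operations structure,
      assuming a prototype reconstructed from the commit text; the
      surrounding members are illustrative.

      struct device_node;

      struct cpu_operations {
              const char      *name;
              /* ... boot/hotplug methods ... */
              int             (*cpu_init_idle)(struct device_node *dn,
                                               unsigned int cpu);
      };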
    • arm64: kernel: refactor the CPU suspend API for retention states · 714f5992
      Authored by Lorenzo Pieralisi
      CPU suspend is the standard kernel interface used to enter low-power
      states on ARM64 systems. The current cpu_suspend implementation by
      default assumes that all low-power states lose the CPU context, so
      the CPU registers must be saved and cleaned to DRAM upon state
      entry. Furthermore, the current cpu_suspend() implementation assumes
      that if the CPU suspend back-end method returns when called, this has
      to be considered an error regardless of the return code (which may
      indicate success), since the CPU was not expected to return from a
      code path different from the cpu_resume code path, e.g. returning
      through the reset vector.
      
      All in all, this means that the current API does not cope well with
      low-power states that preserve the CPU context when entered (i.e.
      retention states): for those states the context is saved for nothing
      on entry, and a successful state entry can return as a normal
      function return, which the current CPU suspend implementation
      considers an error.
      
      This patch refactors the cpu_suspend() API so that it is split into
      two separate functionalities. The arm64 cpu_suspend API now just
      provides a wrapper around the CPU suspend operation hook. A new
      function is introduced (for architecture code use only) for states
      that require context saving upon entry:
      
      __cpu_suspend(unsigned long arg, int (*fn)(unsigned long))
      
      __cpu_suspend() saves the context on function entry and calls the
      so-called suspend finisher (i.e. fn) to complete the suspend
      operation. The finisher is not expected to return, unless it fails,
      in which case the error is propagated back to the __cpu_suspend
      caller.
      
      The API refactoring results in the following pseudo code call sequence for a
      suspending CPU, when triggered from a kernel subsystem:
      
      /*
       * int cpu_suspend(unsigned long idx)
       * @idx: idle state index
       */
      {
      -> cpu_suspend(idx)
      	|---> CPU operations suspend hook called, if present
      		|--> if (retention_state)
      			|--> direct suspend back-end call (eg PSCI suspend)
      		     else
      			|--> __cpu_suspend(idx, &back_end_finisher);
      }
      
      By refactoring the cpu_suspend API this way, the CPU operations
      back-end has a chance to detect whether idle states require state
      saving or not, and can call the required suspend operations
      accordingly, either through a simple function call or indirectly
      through __cpu_suspend(), which carries out state saving and suspend
      finisher dispatching to complete idle state entry.
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Reviewed-by: Hanjun Guo <hanjun.guo@linaro.org>
      Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      714f5992
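      A hedged sketch of how a back-end could use the split API, mirroring
      the pseudo code above; firmware_enter_idle_state(),
      state_is_retention() and the function names are illustrative, not
      the exact upstream code.

      extern int firmware_enter_idle_state(unsigned long index); /* assumed */
      extern bool state_is_retention(unsigned long index);       /* assumed */

      static int suspend_finisher(unsigned long index)
      {
              /* Not expected to return on success: the CPU wakes up
               * through the reset vector and re-enters via cpu_resume. */
              return firmware_enter_idle_state(index);
      }

      static int backend_cpu_suspend(unsigned long index)
      {
              if (state_is_retention(index))
                      /* Context is preserved: a plain function return
                       * from the back-end call means success. */
                      return firmware_enter_idle_state(index);

              /* Context is lost: save it first, then dispatch the
               * suspend finisher via __cpu_suspend(). */
              return __cpu_suspend(index, suspend_finisher);
      }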
  14. 11 September 2014, 1 commit
  15. 08 September 2014, 13 commits