1. 21 Jan 2015, 1 commit
  2. 16 Jan 2015, 3 commits
  3. 11 Jan 2015, 1 commit
    • E
      KVM: arm/arm64: vgic: add init entry to VGIC KVM device · 065c0034
      Committed by Eric Auger
      Since the advent of VGIC dynamic initialization, the VGIC is
      initialized quite late, on the first vcpu run or "on demand" when
      injecting an IRQ or when the guest sets its registers.
      
      This initialization could be initiated explicitly much earlier
      by userspace, as soon as it has provided the requested
      dimensioning parameters.
      
      This patch adds a new entry to the VGIC KVM device that allows
      the user to manually request the VGIC init:
      - A new KVM_DEV_ARM_VGIC_GRP_CTRL group is introduced.
      - Its first attribute is KVM_DEV_ARM_VGIC_CTRL_INIT.
      
      The rationale behind introducing a group is to be able to add other
      controls later on, if needed.
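      From userspace, the new attribute is exercised through the generic
      KVM device-attribute interface. The sketch below mirrors the
      `struct kvm_device_attr` layout and the two constants by hand for
      illustration (in real code both come from `<linux/kvm.h>`); the
      actual `KVM_SET_DEVICE_ATTR` ioctl is elided since it needs a live
      VGIC device fd.

      ```c
      #include <assert.h>
      #include <stdint.h>
      #include <string.h>

      /* Values and layout mirrored from the kernel UAPI headers for
       * illustration; on a real build, include <linux/kvm.h> instead. */
      #define KVM_DEV_ARM_VGIC_GRP_CTRL   4
      #define KVM_DEV_ARM_VGIC_CTRL_INIT  0

      struct kvm_device_attr_sketch {
          uint32_t flags;
          uint32_t group;  /* attribute group, e.g. KVM_DEV_ARM_VGIC_GRP_CTRL */
          uint64_t attr;   /* attribute within the group */
          uint64_t addr;   /* userspace address of payload (unused here) */
      };

      /* Build the attribute that asks the kernel to initialize the VGIC
       * now, rather than lazily on first vcpu run. */
      static struct kvm_device_attr_sketch vgic_init_attr(void)
      {
          struct kvm_device_attr_sketch a;
          memset(&a, 0, sizeof(a));
          a.group = KVM_DEV_ARM_VGIC_GRP_CTRL;
          a.attr  = KVM_DEV_ARM_VGIC_CTRL_INIT;
          return a;
      }
      ```

      In real code the struct would then be passed as
      `ioctl(vgic_fd, KVM_SET_DEVICE_ATTR, &attr)`.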
      Signed-off-by: Eric Auger <eric.auger@linaro.org>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
      065c0034
  4. 18 Dec 2014, 1 commit
  5. 13 Dec 2014, 3 commits
    • C
      arm/arm64: KVM: Introduce stage2_unmap_vm · 957db105
      Committed by Christoffer Dall
      Introduce a new function to unmap user RAM regions in the stage2 page
      tables.  This is needed on reboot (or when the guest turns off the MMU)
      to ensure we fault in pages again and make the dcache, RAM, and icache
      coherent.
      
      Using unmap_stage2_range for the whole guest physical range does not
      work, because that also unmaps IO regions (such as the GIC), which
      will not be recreated, or at best will be faulted back in on a
      page-by-page basis.
      
      Call this function on secondary and subsequent calls to the
      KVM_ARM_VCPU_INIT ioctl so that a reset VCPU will detect the guest
      Stage-1 MMU is off when faulting in pages and make the caches coherent.
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
      957db105
    • C
      arm/arm64: KVM: Clarify KVM_ARM_VCPU_INIT ABI · f7fa034d
      Committed by Christoffer Dall
      It is not clear that this ioctl can be called multiple times for a given
      vcpu.  Userspace already does this, so clarify the ABI.
      
      Also specify that userspace is expected to always make secondary and
      subsequent calls to the ioctl with the same parameters for the VCPU as
      the initial call (which userspace also already does).
      
      Add code to check that userspace doesn't violate that ABI in the future,
      and move the kvm_vcpu_set_target() function which is currently
      duplicated between the 32-bit and 64-bit versions in guest.c to a common
      static function in arm.c, shared between both architectures.
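      The ABI rule this patch enforces can be modelled in a few lines:
      a second KVM_ARM_VCPU_INIT call must carry exactly the same target
      and feature bitmap as the first, or the kernel rejects it. The
      struct layout below is mirrored by hand from `<linux/kvm.h>` for
      illustration, and the check is a userspace model of the kernel-side
      logic, not the kernel code itself.

      ```c
      #include <assert.h>
      #include <stdbool.h>
      #include <stdint.h>
      #include <string.h>

      /* Layout mirrored from struct kvm_vcpu_init in <linux/kvm.h>. */
      struct vcpu_init_sketch {
          uint32_t target;
          uint32_t features[7];
      };

      /* Model of the ABI check: a repeated KVM_ARM_VCPU_INIT call is only
       * accepted if it uses the same parameters as the initial call;
       * otherwise the kernel returns -EINVAL. */
      static bool vcpu_reinit_allowed(const struct vcpu_init_sketch *first,
                                      const struct vcpu_init_sketch *again)
      {
          return first->target == again->target &&
                 memcmp(first->features, again->features,
                        sizeof(first->features)) == 0;
      }
      ```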
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
      f7fa034d
    • C
      arm/arm64: KVM: Reset the HCR on each vcpu when resetting the vcpu · b856a591
      Committed by Christoffer Dall
      When userspace resets the vcpu using KVM_ARM_VCPU_INIT, we should also
      reset the HCR, because we now modify the HCR dynamically to
      enable/disable trapping of guest accesses to the VM registers.
      
      This is crucial for VM reboot to work, since otherwise we would not
      perform the necessary cache maintenance operations when faulting in
      pages with the guest MMU off.
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
      b856a591
  6. 12 Dec 2014, 1 commit
    • A
      arch: Add lightweight memory barriers dma_rmb() and dma_wmb() · 1077fa36
      Committed by Alexander Duyck
      There are a number of situations where the mandatory barriers rmb() and
      wmb() are used to order memory/memory operations in the device drivers
      and those barriers are much heavier than they actually need to be.  For
      example, in the case of PowerPC, wmb() uses the heavy-weight sync
      instruction when for coherent memory operations all that is really
      needed is an lwsync or eieio instruction.
      
      This commit adds a coherent only version of the mandatory memory barriers
      rmb() and wmb().  In most cases this should result in the barrier being the
      same as the SMP barriers for the SMP case, however in some cases we use a
      barrier that is somewhere in between rmb() and smp_rmb().  For example on
      ARM the rmb barriers break down as follows:
      
        Barrier   Call     Explanation
        --------- -------- ----------------------------------
        rmb()     dsb()    Data synchronization barrier - system
        dma_rmb() dmb(osh) Data memory barrier - outer shareable
        smp_rmb() dmb(ish) Data memory barrier - inner shareable
      
      These new barriers are not as safe as the standard rmb() and wmb().
      Specifically they do not guarantee ordering between coherent and incoherent
      memories.  The primary use case for these would be to enforce ordering of
      reads and writes when accessing coherent memory that is shared between the
      CPU and a device.
      
      It may also be noted that there is no dma_mb().  Most architectures don't
      provide a good mechanism for performing a coherent only full barrier without
      resorting to the same mechanism used in mb().  As such there isn't much to
      be gained in trying to define such a function.
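      The classic use case described above, ordering a DMA descriptor's
      payload against its status flag over coherent memory, can be
      modelled in plain C11. dma_rmb()/dma_wmb() are kernel-only, so this
      sketch substitutes atomic_thread_fence() for them purely to
      illustrate the ordering; it is not the kernel implementation.

      ```c
      #include <assert.h>
      #include <stdatomic.h>
      #include <stdbool.h>
      #include <stdint.h>

      /* A toy DMA-style descriptor: 'ready' is the flag the other side
       * polls; 'data' must be globally visible before 'ready' is set. */
      struct desc {
          uint32_t data;
          _Atomic uint32_t ready;
      };

      /* Producer: write the payload, then barrier, then publish the flag.
       * In a real driver the fence would be dma_wmb(). */
      static void publish(struct desc *d, uint32_t value)
      {
          d->data = value;
          atomic_thread_fence(memory_order_release); /* stands in for dma_wmb() */
          atomic_store_explicit(&d->ready, 1, memory_order_relaxed);
      }

      /* Consumer: check the flag, then barrier, then read the payload.
       * In a real driver the fence would be dma_rmb(). */
      static bool consume(struct desc *d, uint32_t *out)
      {
          if (!atomic_load_explicit(&d->ready, memory_order_relaxed))
              return false;
          atomic_thread_fence(memory_order_acquire); /* stands in for dma_rmb() */
          *out = d->data;
          return true;
      }
      ```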
      
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
      Cc: Michael Ellerman <michael@ellerman.id.au>
      Cc: Michael Neuling <mikey@neuling.org>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: David Miller <davem@davemloft.net>
      Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      1077fa36
  7. 11 Dec 2014, 1 commit
  8. 05 Dec 2014, 1 commit
    • S
      clocksource: arch_timer: Fix code to use physical timers when requested · 0b46b8a7
      Committed by Sonny Rao
      This is a bug fix for using physical arch timers when
      the arch_timer_use_virtual boolean is false.  It restores the
      arch_counter_get_cntpct() function after its removal in commit
      
      0d651e4e "clocksource: arch_timer: use virtual counters"
      
      We need this on certain ARMv7 systems which are architected like this:
      
      * The firmware doesn't know and doesn't care about hypervisor mode and
        we don't want to add the complexity of hypervisor there.
      
      * The firmware isn't involved in SMP bringup or resume.
      
      * The arch timer comes up with an uninitialized offset between the
        virtual and physical counters.  Each core gets a different random
        offset.
      
      * The device boots in "Secure SVC" mode.
      
      * Nothing has touched the reset value of CNTHCTL.PL1PCEN or
        CNTHCTL.PL1PCTEN (both default to 1 at reset)
      
      One example of such a system is RK3288, where it is much simpler to
      use the physical counter, since there is nobody managing the offset
      and each time a core goes down and comes back up it will be
      reinitialized to some other random value.
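      Why the random offsets matter can be shown with a toy model of the
      ARM generic timer, where each core reads CNTVCT = CNTPCT - CNTVOFF
      and CNTVOFF is normally programmed by a hypervisor. The struct and
      function names here are illustrative, not kernel code.

      ```c
      #include <assert.h>
      #include <stdint.h>

      /* On the systems this patch targets, firmware leaves CNTVOFF
       * uninitialized, so every core carries a different random value. */
      struct core {
          uint64_t cntvoff; /* per-core virtual offset (random at reset) */
      };

      /* The physical counter is global: every core reads the same value. */
      static uint64_t read_cntpct(uint64_t system_count, const struct core *c)
      {
          (void)c;
          return system_count;
      }

      /* The virtual counter subtracts the per-core offset, so with
       * unmanaged offsets two cores disagree about the current time. */
      static uint64_t read_cntvct(uint64_t system_count, const struct core *c)
      {
          return system_count - c->cntvoff;
      }
      ```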
      
      Fixes: 0d651e4e ("clocksource: arch_timer: use virtual counters")
      Cc: stable@vger.kernel.org
      Signed-off-by: Sonny Rao <sonnyrao@chromium.org>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
      Signed-off-by: Olof Johansson <olof@lixom.net>
      0b46b8a7
  9. 04 Dec 2014, 7 commits
  10. 03 Dec 2014, 1 commit
  11. 02 Dec 2014, 3 commits
  12. 28 Nov 2014, 2 commits
  13. 27 Nov 2014, 1 commit
  14. 25 Nov 2014, 1 commit
  15. 22 Nov 2014, 2 commits
  16. 21 Nov 2014, 1 commit
    • S
      ARM: 8197/1: vfp: Fix VFPv3 hwcap detection on CPUID based cpus · 6c96a4a6
      Committed by Stephen Boyd
      The subarchitecture field in the fpsid register is 7 bits wide on
      ARM CPUs using the CPUID identification scheme, spanning bits 22
      to 16. The topmost bit is used to designate that the
      subarchitecture designer is not ARM when it is set to 1. On
      non-CPUID scheme CPUs the subarchitecture field is only 4 bits
      wide and the higher bits are used to indicate no double precision
      support (bit 20) and the FTSMX/FLDMX format (bits 21-22).
      
      The VFP support code only looks at bits 19-16 to determine the
      VFP version. On Qualcomm's processors (Krait and Scorpion) we
      should see that we have HWCAP_VFPv3 but we don't because bit 22
      is set to 1 to indicate that the subarchitecture is not
      implemented by ARM and the rest of the bits are left as 0 because
      this is the first subarchitecture that Qualcomm has designed.
      Unfortunately we can't just widen the FPSID subarchitecture
      bitmask to consider all the bits on a CPUID-scheme CPU, because
      there may be CPUs without the CPUID scheme that have VFP without
      double precision support, and then the detected version would be a
      large, bogus number. Instead, update the version detection logic to
      consider whether the CPU is using the CPUID scheme.
      
      If the CPU is using CPUID scheme, use the MVFR registers to
      determine what version of VFP is supported. We already do this
      for VFPv4, so do something similar for VFPv3 and look for single
      or double precision support in MVFR0. Otherwise fall back to
      using FPSID to detect VFP support on non-CPUID scheme CPUs. We
      know that VFPv3 is only present in CPUs that have support for the
      CPUID scheme so this should be equivalent.
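      The misdetection described above is pure bit arithmetic and can be
      reproduced directly from the field positions the commit gives
      (subarchitecture in FPSID bits [22:16], bit 22 meaning a non-ARM
      designer). The FPSID value used below is a hypothetical
      Krait/Scorpion-like register, not a value read from hardware.

      ```c
      #include <assert.h>
      #include <stdint.h>

      #define FPSID_SUBARCH_SHIFT   16
      #define FPSID_SUBARCH_MASK_4  0xfu  /* old code: bits [19:16] only */
      #define FPSID_SUBARCH_MASK_7  0x7fu /* full field: bits [22:16] */

      /* What the old VFP support code saw: only the low 4 bits. */
      static uint32_t subarch_4bit(uint32_t fpsid)
      {
          return (fpsid >> FPSID_SUBARCH_SHIFT) & FPSID_SUBARCH_MASK_4;
      }

      /* The full 7-bit field on CPUID-scheme CPUs. */
      static uint32_t subarch_7bit(uint32_t fpsid)
      {
          return (fpsid >> FPSID_SUBARCH_SHIFT) & FPSID_SUBARCH_MASK_7;
      }
      ```

      A register with only bit 22 set (non-ARM designer, first vendor
      subarchitecture) reads as 0 under the 4-bit mask, i.e. it looks
      like the very first ARM subarchitecture, which is why the old code
      failed to report HWCAP_VFPv3.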
      Tested-by: Rob Clark <robdclark@gmail.com>
      Reviewed-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      6c96a4a6
  17. 20 Nov 2014, 1 commit
  18. 17 Nov 2014, 1 commit
    • G
      ARM: shmobile: Add early debugging support using SCIF(A) · 7a2071c5
      Committed by Geert Uytterhoeven
      Add serial port debug macros for the SCIF(A) serial ports.
      This includes all supported shmobile SoCs, except for EMEV2.
      
      The configuration logic (both Kconfig and #ifdef) is more complicated than
      one would expect, for several reasons:
        1. Not all SoCs have the same serial devices, and they're not always
           at the same addresses.
        2. There are two different types: SCIF and SCIFA. Fortunately they can
           easily be distinguished by physical address.
        3. Not all boards use the same serial port for the console.
           The defaults correspond to the boards that are supported in
           mainline. If you want to use a different serial port, just change
           the value of CONFIG_DEBUG_UART_PHYS, and the rest will auto-adapt.
        4. debug_ll_io_init() maps the SCIF(A) registers to a fixed virtual
           address. 0xfdxxxxxx was chosen, as it should lie below VMALLOC_END
           = 0xff000000, and must not conflict with the 2 MiB reserved region
           at PCI_IO_VIRT_BASE = 0xfee00000.
             - On SoCs not using the legacy machine_desc.map_io(),
               debug_ll_io_init() is called by the ARM core code.
             - On SoCs using the legacy machine_desc.map_io(),
               debug_ll_io_init() must be called explicitly. Calls are added
               for r8a7740, r8a7779, sh7372, and sh73a0.
      
      This was derived from the r8a7790 version by Laurent Pinchart.
      Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
      Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
      Acked-by: Arnd Bergmann <arnd@arndb.de>
      Tested-by: Simon Horman <horms+renesas@verge.net.au>
      Signed-off-by: Simon Horman <horms+renesas@verge.net.au>
      7a2071c5
  19. 14 Nov 2014, 4 commits
  20. 13 Nov 2014, 1 commit
    • D
      cpuidle: Invert CPUIDLE_FLAG_TIME_VALID logic · b82b6cca
      Committed by Daniel Lezcano
      The only case where the time is invalid is when the ACPI_CSTATE_FFH
      entry method is not set; for all the other drivers the time can be
      measured correctly.
      
      Instead of duplicating the CPUIDLE_FLAG_TIME_VALID flag in all the
      drivers for all the states, invert the logic: replace it with the
      CPUIDLE_FLAG_TIME_INVALID flag, which needs to be set only in the
      acpi idle driver, remove the former flag from all the drivers, and
      invert the check in the different governors.
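      The inverted check is small but worth spelling out: after this
      patch the governor treats residency time as measurable unless a
      state opts out. The bit value below is illustrative, not the one
      from the kernel header.

      ```c
      #include <assert.h>
      #include <stdbool.h>
      #include <stdint.h>

      #define CPUIDLE_FLAG_TIME_INVALID (1u << 0) /* illustrative bit value */

      /* Governor-side check after the inversion: time is measurable
       * unless the state explicitly sets the INVALID flag, instead of
       * requiring every driver to set a VALID flag on every state. */
      static bool residency_measurable(uint32_t state_flags)
      {
          return !(state_flags & CPUIDLE_FLAG_TIME_INVALID);
      }
      ```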
      Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      b82b6cca
  21. 10 Nov 2014, 1 commit
  22. 08 Nov 2014, 2 commits