1. 25 Nov, 2014 1 commit
  2. 14 Oct, 2014 1 commit
    • arm64: KVM: Implement 48 VA support for KVM EL2 and Stage-2 · 38f791a4
      Christoffer Dall committed
      This patch adds the necessary support for all host kernel PGSIZE and
      VA_SPACE configuration options for both EL2 and the Stage-2 page tables.
      
      However, for 40-bit and 42-bit PARange systems, the architecture
      mandates that VTCR_EL2.SL0 is at most 1, resulting in fewer levels of
      Stage-2 page tables than levels of host kernel page tables.  At the
      same time, on systems with a PARange > 42 bits, we limit the IPA range
      by always setting VTCR_EL2.T0SZ to 24.
      
      To solve the mismatch between the number of Stage-2 translation levels
      and the number of host kernel page table levels, we allocate a dummy
      PGD with pointers to our actual initial-level Stage-2 page table, so
      that we can reuse the kernel pgtable manipulation primitives.  Reproducing
      all these in KVM does not look pretty and unnecessarily complicates the
      32-bit side.
      
      Systems with a PARange < 40 bits are not yet supported.
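
      As a rough illustration of the T0SZ arithmetic above (plain C, not part
      of the patch; the relation IPA bits = 64 - T0SZ follows from the
      VTCR_EL2 register definition):

      #include <stdio.h>

      int main(void)
      {
          /* VTCR_EL2.T0SZ = 24 caps the Stage-2 input (IPA) space at
           * 64 - 24 = 40 bits, matching the limit described above. */
          const unsigned int t0sz = 24;
          const unsigned int ipa_bits = 64 - t0sz;

          printf("T0SZ=%u -> %u-bit IPA space (%llu GiB)\n",
                 t0sz, ipa_bits, (1ULL << ipa_bits) >> 30);
          return 0;
      }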
      
       [ I have reworked this patch from its original form submitted by
         Jungseok to take the architecture constraints into consideration.
         There were too many changes from the original patch for me to
         preserve the authorship.  Thanks to Catalin Marinas for his help in
         figuring out a good solution to this challenge.  I have also fixed
         various bugs and missing error code handling from the original
         patch. - Christoffer ]
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Jungseok Lee <jungseoklee85@gmail.com>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
  3. 10 Oct, 2014 5 commits
  4. 03 Oct, 2014 1 commit
  5. 01 Oct, 2014 1 commit
  6. 30 Sep, 2014 2 commits
    • ARM: 8178/1: fix set_tls for !CONFIG_KUSER_HELPERS · 9cc6d9e5
      Nathan Lynch committed
      Joachim Eastwood reports that commit fbfb872f "ARM: 8148/1: flush
      TLS and thumbee register state during exec" causes a boot-time crash
      on a Cortex-M4 nommu system:
      
      Freeing unused kernel memory: 68K (281e5000 - 281f6000)
      Unhandled exception: IPSR = 00000005 LR = fffffff1
      CPU: 0 PID: 1 Comm: swapper Not tainted 3.17.0-rc6-00313-gd2205fa30aa7 #191
      task: 29834000 ti: 29832000 task.ti: 29832000
      PC is at flush_thread+0x2e/0x40
      LR is at flush_thread+0x21/0x40
      pc : [<2800954a>] lr : [<2800953d>] psr: 4100000b
      sp : 29833d60 ip : 00000000 fp : 00000001
      r10: 00003cf8 r9 : 29b1f000 r8 : 00000000
      r7 : 29b0bc00 r6 : 29834000 r5 : 29832000 r4 : 29832000
      r3 : ffff0ff0 r2 : 29832000 r1 : 00000000 r0 : 282121f0
      xPSR: 4100000b
      CPU: 0 PID: 1 Comm: swapper Not tainted 3.17.0-rc6-00313-gd2205fa30aa7 #191
      [<2800afa5>] (unwind_backtrace) from [<2800a327>] (show_stack+0xb/0xc)
      [<2800a327>] (show_stack) from [<2800a963>] (__invalid_entry+0x4b/0x4c)
      
      The problem is that set_tls is attempting to clear the TLS location in
      the kernel-user helper page, which isn't set up on V7M.
      
      Fix this by guarding the write to the kuser helper page with
      a CONFIG_KUSER_HELPERS ifdef.
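
      The shape of the fix is a guard like the following (a sketch of the
      pattern rather than the exact hunk; tp_value indexing and the helper
      flags follow the 3.17-era code, and 0xffff0ff0 is the well-known TLS
      slot in the kuser helper page):

      static inline void set_tls(unsigned long val)
      {
          current_thread_info()->tp_value[0] = val;

          if (tls_emu)
              return;
          if (has_tls_reg) {
              /* V6K and later: write TPIDRURO directly. */
              asm("mcr p15, 0, %0, c13, c0, 3" : : "r" (val));
          }
      #ifdef CONFIG_KUSER_HELPERS
          else {
              /* The kuser helper page only exists when
               * CONFIG_KUSER_HELPERS is enabled; on V7M there is no
               * vector page at all, so this store must be compiled out
               * rather than faulting at boot. */
              *((unsigned int *)0xffff0ff0) = val;
          }
      #endif
      }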
      
      Fixes: fbfb872f ("ARM: 8148/1: flush TLS and thumbee register state during exec")
      Reported-by: Joachim Eastwood <manabian@gmail.com>
      Tested-by: Joachim Eastwood <manabian@gmail.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Nathan Lynch <nathan_lynch@mentor.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    • ARM: 8177/1: cacheflush: Fix v7_exit_coherency_flush exynos build breakage on ARMv6 · ebc77251
      Krzysztof Kozlowski committed
      This fixes a build breakage in platsmp.c when ARMv6 is selected by the
      compile-time options (e.g. when building allmodconfig):
      
      $ make allmodconfig
      $ make
        CC      arch/arm/mach-exynos/platsmp.o
      /tmp/ccdQM0Eg.s: Assembler messages:
      /tmp/ccdQM0Eg.s:432: Error: selected processor does not support ARM mode `isb '
      /tmp/ccdQM0Eg.s:437: Error: selected processor does not support ARM mode `isb '
      /tmp/ccdQM0Eg.s:438: Error: selected processor does not support ARM mode `dsb '
      make[1]: *** [arch/arm/mach-exynos/platsmp.o] Error 1
      
      The error was introduced in commit "ARM: EXYNOS: Move code from
      hotplug.c to platsmp.c".  Previously, code using the
      v7_exit_coherency_flush() macro was built with the '-march=armv7-a'
      flag, but this flag disappeared during the move.
      
      Fix this by annotating the v7_exit_coherency_flush() asm code with an
      armv7-a architecture directive.
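
      Concretely, the fix boils down to prefixing the inline assembly with an
      architecture directive, roughly as follows (a sketch of the pattern,
      with the body of the real macro elided):

      /* The .arch directive tells the assembler that isb/dsb are valid
       * opcodes here even when the file as a whole is compiled for
       * ARMv6. */
      #define v7_exit_coherency_flush(level) \
          asm volatile( \
          ".arch  armv7-a \n\t" \
          /* ... clear the C bit, flush the D-cache, exit coherency ... */ \
          "isb    \n\t" \
          "dsb    " \
          : : : "r0", "cc", "memory")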
      Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
      Reported-by: Mark Brown <broonie@kernel.org>
      Acked-by: Nicolas Pitre <nico@linaro.org>
      Signed-off-by: Kukjin Kim <kgene.kim@samsung.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  7. 29 Sep, 2014 2 commits
  8. 26 Sep, 2014 3 commits
  9. 25 Sep, 2014 1 commit
  10. 24 Sep, 2014 2 commits
    • kvm: Add arch specific mmu notifier for page invalidation · fe71557a
      Tang Chen committed
      This will be used to let the guest run while the APIC access page is
      not pinned.  Because subsequent patches will fill in the function
      for x86, place the (still empty) x86 implementation in the x86.c file
      instead of adding an inline function in kvm_host.h.
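
      The x86 stub starts out empty, along these lines (a sketch; the hook
      name kvm_arch_mmu_notifier_invalidate_page is assumed from the commit
      subject rather than quoted from the diff):

      /* Empty for now: later patches use this hook to react when the
       * APIC access page is invalidated while left unpinned. */
      void kvm_arch_mmu_notifier_invalidate_page(struct kvm *kvm,
                                                 unsigned long address)
      {
      }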
      Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • kvm: Fix page ageing bugs · 57128468
      Andres Lagar-Cavilla committed
      1. We were calling clear_flush_young_notify in unmap_one, but we are
      within an mmu notifier invalidate range scope. The spte no longer
      exists (due to range_start) and the accessed bit info has already been
      propagated (due to kvm_pfn_set_accessed). Simply call
      clear_flush_young.
      
      2. We clear_flush_young on a primary MMU PMD, but this may be mapped
      as a collection of PTEs by the secondary MMU (e.g. during log-dirty).
      This required expanding the interface of the clear_flush_young mmu
      notifier, so a lot of code has been trivially touched.
      
      3. In the absence of shadow_accessed_mask (e.g. EPT A bit), we emulate
      the accessed bit by blowing away the spte. This requires proper
      synchronization with MMU notifier consumers, like every other removal
      of sptes does.
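
      Point 2 implies a notifier-op signature change roughly like the
      following (a sketch; surrounding members elided):

      struct mmu_notifier_ops {
          /* ... */
          /* Before: a single 'unsigned long address'.  After: a
           * [start, end) range, so that a primary MMU PMD can be aged
           * via whatever set of PTEs backs it in the secondary MMU. */
          int (*clear_flush_young)(struct mmu_notifier *mn,
                                   struct mm_struct *mm,
                                   unsigned long start,
                                   unsigned long end);
          /* ... */
      };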
      Signed-off-by: Andres Lagar-Cavilla <andreslc@google.com>
      Acked-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  11. 19 Sep, 2014 1 commit
  12. 18 Sep, 2014 1 commit
  13. 16 Sep, 2014 1 commit
  14. 14 Sep, 2014 2 commits
  15. 13 Sep, 2014 1 commit
    • ARM: 8137/1: fix get_user BE behavior for target variable with size of 8 bytes · d9981380
      Victor Kamensky committed
      Commit e38361d0 ('ARM: 8091/2: add get_user() support for 8 byte
      types') broke the V7 BE get_user call when the target variable is 64
      bits wide but '*ptr' is 32 bits or smaller. e38361d0 changed the type
      of __r2 from 'register unsigned long' to 'register typeof(x) __r2
      asm("r2")', i.e. before the change __r2 was still 32 bits even when
      the target variable was 64 bits. After e38361d0, for a 64-bit target
      variable, __r2 becomes 64 bits and occupies the two registers r2 and
      r3. The issue in the BE case is that r3 holds the least significant
      word of __r2 and r2 holds the most significant word, yet __get_user_4
      still copies its result into r2 (the most significant word of __r2).
      Subsequent code copies from __r2 into x, but in the situation
      described it picks up only garbage from the r3 register.
      
      Special __get_user_64t_(124) functions are introduced. They are
      similar to the corresponding __get_user_(124) functions but store
      their result in the r3 register (the LSW of a 64-bit __r2 in a BE
      image). These functions are used by the get_user macro in the BE case
      when the target variable is 64 bits wide.

      Also, __get_user_lo8 is renamed to __get_user_32t_8 for consistent
      naming across all cases.
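
      The underlying word-order issue can be seen in plain C (an
      illustration, not kernel code; on a big-endian target the high half of
      a 64-bit object comes first in memory, mirroring r2/r3 in the register
      pair):

      #include <stdint.h>
      #include <stdio.h>

      int main(void)
      {
          union { uint64_t v; uint32_t w[2]; } u = { .v = 0x1111111122222222ULL };

          /* On big-endian ARM, w[0] plays the role of r2 (most
           * significant word) and w[1] that of r3 (least significant
           * word).  A 32-bit result written to the r2 half lands in the
           * wrong word of a 64-bit target, which is why the
           * __get_user_64t_* variants must target r3 instead. */
          printf("w[0]=%08x w[1]=%08x\n", u.w[0], u.w[1]);
          return 0;
      }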
      Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org>
      Suggested-by: Daniel Thompson <daniel.thompson@linaro.org>
      Reviewed-by: Daniel Thompson <daniel.thompson@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  16. 12 Sep, 2014 2 commits
    • xen/arm: remove mach_to_phys rbtree · d50582e0
      Stefano Stabellini committed
      Remove the rbtree used to keep track of machine to physical mappings:
      the frontend can grant the same page multiple times, leading to errors
      inserting or removing entries from the mach_to_phys tree.
      
      Linux only needed to know the physical address corresponding to a given
      machine address in swiotlb-xen. Now that swiotlb-xen can call the
      xen_dma_* functions passing the machine address directly, we can remove
      it.
      Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
      Tested-by: Denis Schneider <v1ne2go@gmail.com>
    • xen/arm: reimplement xen_dma_unmap_page & friends · 340720be
      Stefano Stabellini committed
      xen_dma_unmap_page, xen_dma_sync_single_for_cpu and
      xen_dma_sync_single_for_device are currently implemented by calling into
      the corresponding generic ARM implementations of these functions. To
      do this, the dma_addr_t handle, which on Xen is a machine address,
      first needs to be translated into a physical address.  This
      operation is expensive and inaccurate, given that a single machine
      address can correspond to multiple physical addresses in one domain,
      because the same page can be granted multiple times by the frontend.
      
      To avoid this problem, we introduce a Xen specific implementation of
      xen_dma_unmap_page, xen_dma_sync_single_for_cpu and
      xen_dma_sync_single_for_device, that can operate on machine addresses
      directly.
      
      The new implementation relies on the fact that the hypervisor creates a
      second p2m mapping of any grant pages at physical address == machine
      address of the page for dom0. Therefore we can access memory at physical
      address == dma_addr_t handle and perform the cache flushing there.
      Some cache maintenance operations require a virtual address. Instead
      of using ioremap_cache, which is not safe in interrupt context, we
      allocate a per-cpu PAGE_KERNEL scratch page and manually update its
      pte.
      
      arm64 doesn't need cache maintenance operations on unmap for now.
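
      A sketch of the scratch-page approach (all names here are
      hypothetical; the point is remapping a preallocated per-cpu kernel
      page onto the machine frame before doing cache maintenance, instead
      of calling ioremap_cache in interrupt context):

      #include <linux/dma-mapping.h>
      #include <linux/percpu.h>
      #include <asm/pgtable.h>
      #include <asm/tlbflush.h>

      /* One preallocated PAGE_KERNEL virtual page, and its pte, per CPU. */
      static DEFINE_PER_CPU(unsigned long, scratch_virt);
      static DEFINE_PER_CPU(pte_t *, scratch_ptep);

      /* Point this CPU's scratch page at the machine frame behind
       * 'handle' and return a virtual address usable for cache
       * maintenance.  Preemption must already be disabled so the scratch
       * slot cannot be stolen; that handling is elided here for brevity. */
      static void *map_for_cacheflush(dma_addr_t handle)
      {
          unsigned long virt = this_cpu_read(scratch_virt);
          pte_t *ptep = this_cpu_read(scratch_ptep);

          set_pte_ext(ptep, pfn_pte(handle >> PAGE_SHIFT, PAGE_KERNEL), 0);
          local_flush_tlb_kernel_page(virt);

          return (void *)(virt + (handle & ~PAGE_MASK));
      }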
      Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
      Tested-by: Denis Schneider <v1ne2go@gmail.com>
  17. 11 Sep, 2014 1 commit
  18. 03 Sep, 2014 1 commit
  19. 29 Aug, 2014 3 commits
  20. 28 Aug, 2014 2 commits
  21. 27 Aug, 2014 4 commits
  22. 26 Aug, 2014 1 commit
  23. 14 Aug, 2014 1 commit