1. 25 Nov 2014, 2 commits
  2. 15 Oct 2014, 1 commit
  3. 14 Oct 2014, 2 commits
    • arm/arm64: KVM: Ensure memslots are within KVM_PHYS_SIZE · c3058d5d
      Christoffer Dall authored
      When creating or moving a memslot, make sure the IPA space is within the
      addressable range of the guest.  Otherwise, user space can create too
      large a memslot and KVM would try to access potentially unallocated page
      table entries when inserting entries in the Stage-2 page tables.
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
      c3058d5d
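
      As a rough illustration of the bound check this commit describes, the test
      amounts to rejecting any memslot whose pages would extend beyond the
      guest's IPA space. A minimal standalone sketch (the constants and the
      helper name are illustrative, not the exact patch):

          #include <stdbool.h>
          #include <stdint.h>

          #define PAGE_SHIFT      12
          #define KVM_PHYS_SHIFT  40                        /* assumed 40-bit IPA space */
          #define KVM_PHYS_SIZE   (1ULL << KVM_PHYS_SHIFT)

          /* Reject a memslot whose last page would fall outside the IPA space. */
          static bool memslot_within_ipa_space(uint64_t base_gfn, uint64_t npages)
          {
                  return (base_gfn + npages) <= (KVM_PHYS_SIZE >> PAGE_SHIFT);
          }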
    • arm64: KVM: Implement 48 VA support for KVM EL2 and Stage-2 · 38f791a4
      Christoffer Dall authored
      This patch adds the necessary support for all host kernel PGSIZE and
      VA_SPACE configuration options for both EL2 and the Stage-2 page tables.
      
      However, for 40-bit and 42-bit PARange systems, the architecture
      mandates that VTCR_EL2.SL0 is at most 1, resulting in fewer levels of
      Stage-2 page tables than levels of host kernel page tables.  At the
      same time, on systems with a PARange > 42 bits, we limit the IPA range
      by always setting VTCR_EL2.T0SZ to 24.
      
      To handle having fewer levels of page tables for Stage-2 translation
      than for the host kernel page tables, we allocate a dummy PGD with
      pointers to our actual initial-level Stage-2 page table, which lets us
      reuse the kernel's pgtable manipulation primitives.  Reproducing all of
      these primitives in KVM would not look pretty and would unnecessarily
      complicate the 32-bit side.
      
      Systems with a PARange < 40 bits are not yet supported.
      
       [ I have reworked this patch from its original form submitted by
         Jungseok to take the architecture constraints into consideration.
         There were too many changes from the original patch for me to
         preserve the authorship.  Thanks to Catalin Marinas for his help in
         figuring out a good solution to this challenge.  I have also fixed
         various bugs and missing error code handling from the original
         patch. - Christoffer ]
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Jungseok Lee <jungseoklee85@gmail.com>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
      38f791a4
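
      One way to picture the "dummy PGD" trick described above: the hardware
      walks a physically contiguous (possibly concatenated) first-level Stage-2
      table, while software keeps an extra top-level table whose entry i simply
      points at the i-th page of that contiguous block, so the kernel's generic
      pgd/pud/pmd helpers keep working. The following is a purely illustrative
      sketch; the names, the entry count, and the descriptor encoding are
      assumptions, not the kernel's actual interfaces:

          #include <stdint.h>
          #include <stdlib.h>

          #define EXAMPLE_PAGE_SIZE   4096ULL
          #define FAKE_PGD_ENTRIES    16        /* e.g. 16 concatenated pages */

          /* hw_table_pa: physical address of the block the hardware really walks. */
          static uint64_t *make_fake_pgd(uint64_t hw_table_pa)
          {
                  uint64_t *fake = calloc(FAKE_PGD_ENTRIES, sizeof(*fake));

                  if (!fake)
                          return NULL;
                  for (unsigned int i = 0; i < FAKE_PGD_ENTRIES; i++) {
                          /* table descriptor (bits [1:0] = 0b11, assumed encoding)
                           * pointing into the hardware-walked block */
                          fake[i] = (hw_table_pa + i * EXAMPLE_PAGE_SIZE) | 0x3;
                  }
                  return fake;
          }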
  4. 13 Oct 2014, 1 commit
  5. 10 Oct 2014, 3 commits
  6. 26 Sep 2014, 1 commit
  7. 11 Sep 2014, 1 commit
  8. 28 Aug 2014, 1 commit
  9. 11 Jul 2014, 3 commits
  10. 28 Apr 2014, 1 commit
    • arm: KVM: fix possible misalignment of PGDs and bounce page · 5d4e08c4
      Mark Salter authored
      The kvm/mmu code shared by arm and arm64 uses kmalloc() to allocate
      a bounce page (if the hypervisor init code crosses a page boundary) and
      the hypervisor PGDs. The problem is that kmalloc() does not guarantee
      the proper alignment. In the case of the bounce page, the page-sized
      buffer allocated may itself cross a page boundary, negating the purpose
      and leading to a hang during KVM initialization. Likewise, the PGDs
      allocated may not meet the minimum alignment requirements of the
      underlying MMU. This patch uses __get_free_page() to guarantee the
      worst-case alignment needs of the bounce page and PGDs on both arm
      and arm64.
      
      Cc: <stable@vger.kernel.org> # 3.10+
      Signed-off-by: Mark Salter <msalter@redhat.com>
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
      5d4e08c4
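
      The underlying point is an allocator property: kmalloc() only guarantees a
      small minimum alignment, while page allocations from the buddy allocator
      are naturally aligned to their own size. A kernel-style sketch of the safe
      pattern (illustrative helper, not the exact patch):

          #include <linux/gfp.h>

          /* Allocate 2^order contiguous pages; the buddy allocator returns a
           * block naturally aligned to its size, which kmalloc() does not
           * promise, so a PGD or bounce page obtained this way cannot straddle
           * an unwanted boundary. */
          static unsigned long alloc_naturally_aligned(unsigned int order)
          {
                  return __get_free_pages(GFP_KERNEL | __GFP_ZERO, order);
          }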
  11. 03 Mar 2014, 4 commits
  12. 09 Jan 2014, 1 commit
    • arm/arm64: KVM: relax the requirements of VMA alignment for THP · 136d737f
      Marc Zyngier authored
      The THP code in KVM/ARM is a bit restrictive in not allowing a THP
      to be used if the VMA is not 2MB aligned. Actually, it is not so much
      the VMA that matters, but the associated memslot:
      
      A process can perfectly well mmap a region with no particular alignment
      restriction, and then pass a 2MB-aligned address to KVM. In this
      case, KVM will only use this 2MB-aligned region and will ignore
      the range between vma->vm_start and memslot->userspace_addr.
      
      It can also choose to place this memslot at whatever alignment it
      wants in the IPA space. In the end, what matters is the relative
      alignment of the user space and IPA mappings with respect to a
      2MB page: they absolutely must be the same if you want to use THP.
      
      Cc: Christoffer Dall <christoffer.dall@linaro.org>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
      136d737f
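
      The relative-alignment requirement boils down to the userspace VA and the
      IPA having the same offset within a 2MB block. A standalone sketch of that
      test (names and the 2MB constant are illustrative, assuming 4K base pages):

          #include <stdbool.h>
          #include <stdint.h>

          #define EXAMPLE_PMD_SIZE (1ULL << 21)   /* 2MB */

          /* THP can back the Stage-2 mapping only if the userspace address and
           * the IPA are congruent modulo 2MB. */
          static bool thp_offsets_match(uint64_t userspace_addr, uint64_t ipa)
          {
                  return ((userspace_addr ^ ipa) & (EXAMPLE_PMD_SIZE - 1)) == 0;
          }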
  13. 12 Dec 2013, 1 commit
  14. 17 Nov 2013, 1 commit
  15. 18 Oct 2013, 2 commits
  16. 14 Aug 2013, 1 commit
  17. 08 Aug 2013, 2 commits
    • arm64: KVM: fix 2-level page tables unmapping · 979acd5e
      Marc Zyngier authored
      When using 64kB pages, we only have two levels of page tables,
      meaning that the PGD, PUD and PMD are fused. In this case, trying
      to refcount PUDs and PMDs independently is a complete disaster,
      as they are the same.
      
      We manage to get it right for the allocation (stage2_set_pte uses
      {pmd,pud}_none), but the unmapping path clears both pud and pmd
      refcounts, which fails spectacularly with 2-level page tables.
      
      The fix is to avoid calling clear_pud_entry when both the pmd and
      pud pages are empty. For this, instead of introducing another
      pud_empty function, consolidate both pte_empty and pmd_empty into
      page_empty (the code is actually identical) and use that to also
      test the validity of the pud.
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
      979acd5e
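
      The consolidated helper amounts to checking the refcount of the page that
      holds the table: only the reference taken at allocation time remains when
      no entries are left. A kernel-style sketch of such a page_empty() (close
      in spirit to, though not necessarily identical with, the patch):

          #include <linux/mm.h>

          /* A page-table page is "empty" when nothing but the allocation-time
           * reference is held on it. */
          static bool page_empty(void *ptr)
          {
                  struct page *ptr_page = virt_to_page(ptr);

                  return page_count(ptr_page) == 1;
          }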
    • ARM: KVM: Fix unaligned unmap_range leak · d3840b26
      Christoffer Dall authored
      The unmap_range function did not properly cover the case when the start
      address was not aligned to PMD_SIZE or PUD_SIZE and an entire pte table
      or pmd table was cleared, causing us to leak memory when incrementing
      the addr.
      
      The fix is to always move onto the next page table entry boundary
      instead of adding the full size of the VA range covered by the
      corresponding table level entry.
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
      d3840b26
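
      The fix's idea, stepping to the next table-entry boundary instead of adding
      the entry's full size to a possibly unaligned address, mirrors what helpers
      such as pmd_addr_end() do. A simplified standalone sketch (names are
      illustrative; wrap-around at the very top of the address space is ignored):

          #include <stdint.h>

          /* Advance addr to the next entry boundary at this table level, clamped
           * to end. Adding the full entry size to an unaligned addr would
           * overshoot the boundary and skip entries that then never get freed. */
          static uint64_t entry_addr_end(uint64_t addr, uint64_t end,
                                         uint64_t entry_size)
          {
                  uint64_t boundary = (addr + entry_size) & ~(entry_size - 1);

                  return boundary < end ? boundary : end;
          }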
  18. 27 Jun 2013, 1 commit
  19. 03 Jun 2013, 1 commit
  20. 29 Apr 2013, 6 commits
    • ARM: KVM: perform HYP initialization for hotplugged CPUs · d157f4a5
      Marc Zyngier authored
      Now that we have the necessary infrastructure to boot a hotplugged CPU
      at any point in time, wire a CPU notifier that will perform the HYP
      init for the incoming CPU.
      
      Note that this depends on the platform code and/or firmware to boot the
      incoming CPU with HYP mode enabled and return to the kernel by following
      the normal boot path (HYP stub installed).
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Christoffer Dall <cdall@cs.columbia.edu>
      d157f4a5
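
      A sketch of wiring such a notifier with the CPU notifier API of that era
      (the per-CPU init hook below is an illustrative stand-in, not the actual
      KVM function):

          #include <linux/cpu.h>
          #include <linux/notifier.h>

          /* Illustrative stand-in for the per-CPU HYP setup. */
          static void example_cpu_init_hyp_mode(void *discard)
          {
          }

          static int hyp_init_cpu_notify(struct notifier_block *self,
                                         unsigned long action, void *cpu)
          {
                  switch (action) {
                  case CPU_STARTING:
                  case CPU_STARTING_FROZEN:
                          /* The incoming CPU is expected to arrive with HYP
                           * usable (stub installed), so it can be given its
                           * runtime HYP configuration here. */
                          example_cpu_init_hyp_mode(NULL);
                          break;
                  }
                  return NOTIFY_OK;
          }

          static struct notifier_block hyp_init_cpu_nb = {
                  .notifier_call = hyp_init_cpu_notify,
          };

          /* Registered once at init time with register_cpu_notifier(&hyp_init_cpu_nb). */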
    • ARM: KVM: switch to a dual-step HYP init code · 5a677ce0
      Marc Zyngier authored
      Our HYP init code suffers from two major design issues:
      - it cannot support CPU hotplug, as we tear down the idmap very early
      - it cannot perform a TLB invalidation when switching from init to
        runtime mappings, as pages are manipulated from PL1 exclusively
      
      The hotplug problem mandates that we keep two sets of page tables
      (boot and runtime). The TLB problem mandates that we're able to
      transition from one PGD to another while in HYP, invalidating the TLBs
      in the process.
      
      To be able to do this, we need to share a page between the two page
      tables. A page that will have the same VA in both configurations. All we
      need is a VA that has the following properties:
      - This VA can't be used to represent a kernel mapping.
      - This VA will not conflict with the physical address of the kernel text
      
      The vectors page seems to satisfy this requirement:
      - The kernel never maps anything else there
      - Since the kernel text is copied to the beginning of physical memory,
        it is unlikely to use the last 64kB (I doubt we'll ever support KVM
        on a system with something like 4MB of RAM, but patches are very
        welcome).
      
      Let's call this VA the trampoline VA.
      
      Now, we map our init page at 3 locations:
      - idmap in the boot pgd
      - trampoline VA in the boot pgd
      - trampoline VA in the runtime pgd
      
      The init scenario is now the following:
      - We jump in HYP with four parameters: boot HYP pgd, runtime HYP pgd,
        runtime stack, runtime vectors
      - Enable the MMU with the boot pgd
      - Jump to a target inside the trampoline page (remember, this is the same
        physical page!)
      - Now switch to the runtime pgd (same VA, and still the same physical
        page!)
      - Invalidate TLBs
      - Set stack and vectors
      - Profit! (or eret, if you only care about the code).
      
      Note that we keep the boot mapping permanently (it is not strictly an
      idmap anymore) to allow for CPU hotplug in later patches.
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Christoffer Dall <cdall@cs.columbia.edu>
      5a677ce0
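
      As a compact model of the mapping layout described above (the addresses
      and names here are made-up examples, not the kernel's actual values):

          #include <stdint.h>

          /* The single init page ends up visible at three (page table, VA)
           * locations: the idmap and the trampoline VA in the boot pgd, and the
           * trampoline VA again in the runtime pgd. */
          struct hyp_map { const char *pgd; uint64_t va; uint64_t pa; };

          #define INIT_PAGE_PA   0x40001000ULL   /* example physical address */
          #define TRAMPOLINE_VA  0xffff0000ULL   /* example trampoline VA    */

          static const struct hyp_map init_maps[] = {
                  { "boot_pgd",    INIT_PAGE_PA,  INIT_PAGE_PA },  /* idmap: VA == PA */
                  { "boot_pgd",    TRAMPOLINE_VA, INIT_PAGE_PA },
                  { "runtime_pgd", TRAMPOLINE_VA, INIT_PAGE_PA },
          };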
    • ARM: KVM: rework HYP page table freeing · 4f728276
      Marc Zyngier authored
      There is no point in freeing HYP page tables differently from Stage-2.
      They now have the same requirements, and should be dealt with in the same way.
      
      Promote unmap_stage2_range to be The One True Way, and get rid of a number
      of nasty bugs in the process (good thing we never actually called free_hyp_pmds
      before...).
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Christoffer Dall <cdall@cs.columbia.edu>
      4f728276
    • ARM: KVM: move to a KVM provided HYP idmap · 2fb41059
      Marc Zyngier authored
      After the HYP page table rework, it is pretty easy to let the KVM
      code provide its own idmap, rather than expecting the kernel to
      provide it. It actually takes less code to do so.
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Christoffer Dall <cdall@cs.columbia.edu>
      2fb41059
    • ARM: KVM: fix HYP mapping limitations around zero · 3562c76d
      Marc Zyngier authored
      The current code for creating HYP mappings doesn't like to wrap
      around zero, which prevents us from mapping anything into the last
      page of the virtual address space.
      
      It doesn't take much effort to remove this limitation, making
      the code more consistent with the rest of the kernel in the process.
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Christoffer Dall <cdall@cs.columbia.edu>
      3562c76d
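
      The wrap-around issue is easiest to see in the loop condition: for a range
      whose exclusive end has wrapped to zero (the last page of the address
      space), "addr < end" never iterates, while the kernel's usual
      do/while-with-"!=" idiom does. A simplified standalone sketch (page
      granularity only; end is assumed page-aligned and the range non-empty):

          #include <stdint.h>

          #define EXAMPLE_PAGE_SIZE 4096ULL

          /* Visit every page of [start, end), where end may have wrapped to 0
           * when the range touches the very top of the address space. */
          static void for_each_page(uint64_t start, uint64_t end,
                                    void (*fn)(uint64_t va))
          {
                  uint64_t addr = start & ~(EXAMPLE_PAGE_SIZE - 1);

                  do {
                          fn(addr);
                          addr += EXAMPLE_PAGE_SIZE;
                  } while (addr != end);
          }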
    • ARM: KVM: simplify HYP mapping population · 6060df84
      Marc Zyngier authored
      The way we populate HYP mappings is a bit convoluted, to say the least.
      Passing a pointer around to keep track of the current PFN is quite
      odd, and we end up having two different PTE accessors for no good
      reason.
      
      Simplify the whole thing by unifying the two PTE accessors, passing
      a pgprot_t around, and moving the various validity checks to the
      upper layers.
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Christoffer Dall <cdall@cs.columbia.edu>
      6060df84
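
      The unified accessor idea, one PTE-population helper parameterised by a
      pgprot_t instead of two near-identical variants chasing a PFN pointer, can
      be sketched in kernel style roughly as follows (close in spirit to, but not
      necessarily identical with, the patch):

          #include <linux/mm.h>
          #include <asm/kvm_mmu.h>

          /* Populate [start, end) under one pmd with consecutive pfns, using a
           * single accessor and an explicit pgprot_t. */
          static void example_create_hyp_pte_mappings(pmd_t *pmd,
                                                      unsigned long start,
                                                      unsigned long end,
                                                      unsigned long pfn,
                                                      pgprot_t prot)
          {
                  pte_t *pte;
                  unsigned long addr;

                  for (addr = start; addr < end; addr += PAGE_SIZE, pfn++) {
                          pte = pte_offset_kernel(pmd, addr);
                          kvm_set_pte(pte, pfn_pte(pfn, prot));
                          get_page(virt_to_page(pte));  /* refcount the pte table page */
                  }
          }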
  21. 07 Mar 2013, 4 commits