1. 13 Nov, 2019 (2 commits)
  2. 06 Aug, 2018 (3 commits)
  3. 17 Mar, 2018 (1 commit)
    • KVM: x86: Update the exit_qualification access bits while walking an address · ddd6f0e9
      Committed by KarimAllah Ahmed
      ... to avoid having a stale value when handling an EPT misconfig for MMIO
      regions.
      
      MMIO regions that are not passed-through to the guest are handled through
      EPT misconfigs. The first time a certain MMIO page is touched it causes an
      EPT violation, then KVM marks the EPT entry to cause an EPT misconfig
      instead. Any subsequent accesses to the entry will generate an EPT
      misconfig.
      
      Things get slightly complicated with nested-guest handling for MMIO
      regions that are not passed through from L0 (i.e. emulated by L0
      user-space).
      
      An EPT violation for one of these MMIO regions from L2 exits to the L0
      hypervisor. L0 then looks at the EPT12 mapping for the L1 hypervisor and
      realizes it is not present (or not sufficient to serve the request), so
      L0 injects an EPT violation to L1, and L1 then updates its EPT mappings.
      The EXIT_QUALIFICATION value for L1 comes from the exit_qualification
      variable in "struct vcpu". The problem is that this variable is only
      updated on an EPT violation and not on an EPT misconfig. So if an EPT
      violation caused by a read happens first and an EPT misconfig caused by
      a write happens afterwards, the L0 hypervisor still holds the
      exit_qualification value from the previous read instead of the write,
      and ends up injecting an EPT violation into the L1 hypervisor with an
      out-of-date EXIT_QUALIFICATION.
      
      The EPT violation that is injected from L0 to L1 needs to have the
      correct EXIT_QUALIFICATION, especially for the access bits, because the
      individual access bits for MMIO EPTs are updated only on an actual
      access of that specific type. So for the example above, the L1
      hypervisor keeps updating only the read bit in the EPT and then resumes
      the L2 guest. The L2 guest ends up causing another exit, where L0
      *again* injects another EPT violation into the L1 hypervisor with
      *again* an out-of-date exit_qualification that indicates a read and not
      a write. This ping-pong just keeps happening without making any forward
      progress.
      
      The behavior of mapping MMIO regions changed in:
      
         commit a340b3e2 ("kvm: Map PFN-type memory regions as writable (if possible)")
      
      ... where an EPT violation for a read would also fix up the write bits
      to avoid another EPT violation, which by accident would fix the bug
      mentioned above.
      
      This commit fixes this situation and ensures that the access bits in the
      exit_qualification are up to date (a simplified model of the update is
      sketched after this entry). That ensures that even an L1 hypervisor
      running a KVM version older than the commit mentioned above would still
      work.
      
      (The description above assumes that EPT is available and used by the L1
       hypervisor, and that the L1 hypervisor passes the MMIO region through
       to the L2 guest while this MMIO region is emulated by L0 user-space.)
      
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: x86@kernel.org
      Cc: kvm@vger.kernel.org
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: KarimAllah Ahmed <karahmed@amazon.de>
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
      ddd6f0e9
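      A minimal userspace model of the fix described above (illustrative only;
      the constants and the helper below are local to this sketch and are not
      copied from the kernel sources): on every guest EPT walk, recompute the
      access bits of the exit qualification from the access currently being
      handled, instead of reusing whatever the last EPT violation left behind.

          #include <stdbool.h>
          #include <stdint.h>

          /* Bits [2:0] of the EPT-violation exit qualification (per the SDM):
           * data read, data write, instruction fetch. */
          #define EPT_ACC_READ   (1ull << 0)
          #define EPT_ACC_WRITE  (1ull << 1)
          #define EPT_ACC_INSTR  (1ull << 2)

          /*
           * Clear the stale access bits and set them from the fault that is
           * being handled *now*.  Before the fix, the EPT misconfig path
           * reused the bits left over from the last EPT violation, which
           * could describe the wrong access type (e.g. a read when the
           * current access is a write).
           */
          static uint64_t refresh_access_bits(uint64_t exit_qual,
                                              bool write_fault, bool fetch_fault)
          {
                  exit_qual &= ~(EPT_ACC_READ | EPT_ACC_WRITE | EPT_ACC_INSTR);
                  if (fetch_fault)
                          exit_qual |= EPT_ACC_INSTR;
                  else if (write_fault)
                          exit_qual |= EPT_ACC_WRITE;
                  else
                          exit_qual |= EPT_ACC_READ;
                  return exit_qual;
          }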
  4. 12 Oct, 2017 (1 commit)
  5. 10 Oct, 2017 (1 commit)
    • KVM: MMU: always terminate page walks at level 1 · 829ee279
      Committed by Ladi Prosek
      is_last_gpte() is not equivalent to the pseudo-code given in commit
      6bb69c9b ("KVM: MMU: simplify last_pte_bitmap") because an incorrect
      value of last_nonleaf_level may override the result even if level == 1.
      
      It is critical for is_last_gpte() to return true at level == 1 to
      terminate page walks; otherwise memory corruption may occur, as level
      is used as an index into various data structures throughout the page
      walking code.  Even though the actual bug would be wherever the MMU is
      initialized (as in the previous patch), be defensive and ensure here
      that is_last_gpte() returns the correct value (a simplified model is
      sketched after this entry).
      
      This patch is also enough to fix CVE-2017-12188.
      
      Fixes: 6bb69c9b
      Cc: stable@vger.kernel.org
      Cc: Andy Honig <ahonig@google.com>
      Signed-off-by: Ladi Prosek <lprosek@redhat.com>
      [Panic if walk_addr_generic gets an incorrect level; this is a serious
       bug and it's not worth a WARN_ON where the recovery path might hide
       further exploitable issues; suggested by Andrew Honig. - Paolo]
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      829ee279
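      A simplified model of the invariant this patch enforces (this is not the
      kernel's actual bit-twiddling implementation; the structure and field
      below are illustrative):

          #include <stdbool.h>
          #include <stdint.h>

          #define PT_PAGE_SIZE_MASK  (1ull << 7)   /* PS bit in a guest PTE */

          struct mmu_model {
                  /* Levels at or above this can never hold a leaf entry. */
                  unsigned int last_nonleaf_level;
          };

          /*
           * A guest PTE terminates the walk if we are at the lowest level, or
           * if it is a large-page leaf at a level where large pages are
           * allowed.  The point of the fix: level 1 must return true no
           * matter what last_nonleaf_level contains, otherwise the walker
           * descends below level 1 and keeps using "level - 1" as an array
           * index.
           */
          static bool is_last_gpte_model(const struct mmu_model *mmu,
                                         unsigned int level, uint64_t gpte)
          {
                  if (level == 1)
                          return true;
                  if (level >= mmu->last_nonleaf_level)
                          return false;
                  return gpte & PT_PAGE_SIZE_MASK;
          }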
  6. 18 Aug, 2017 (1 commit)
    • KVM: x86: fix use of L1 MMIO areas in nested guests · 9034e6e8
      Committed by Paolo Bonzini
      There is currently some confusion between nested and L1 GPAs.  The
      assignment to "direct" in kvm_mmu_page_fault tries to fix that, but
      it is not enough.  What this patch does is fence off the MMIO cache
      completely when using shadow nested page tables, since we have neither
      a GVA nor an L1 GPA to put in the cache.  This also allows some
      simplifications in kvm_mmu_page_fault and FNAME(page_fault).
      
      The EPT misconfig likewise does not have an L1 GPA to pass to
      kvm_io_bus_write, so that must be skipped for guest mode.

      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Reviewed-by: David Hildenbrand <david@redhat.com>
      [Changed comment to say "GPAs" instead of "L1's physical addresses", as
       per David's review. - Radim]
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
      9034e6e8
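      A small model of the rule described above (the field names are
      illustrative, not the kernel's): with shadow nested page tables the page
      walker produces L2 GPAs, so neither the MMIO cache nor the EPT-misconfig
      fast path that writes to an L1 GPA may be used.

          #include <stdbool.h>

          struct vcpu_model {
                  bool guest_mode;       /* currently running L2 on behalf of L1 */
                  bool mmu_direct_map;   /* TDP in use, no shadow nested paging */
          };

          /*
           * The MMIO cache (and the kvm_io_bus fast path on EPT misconfig)
           * can only be used when the address at hand is a GVA or an L1 GPA;
           * with shadow nested page tables it is an L2 GPA, so fence it off.
           */
          static bool mmio_cache_usable(const struct vcpu_model *vcpu)
          {
                  return !(vcpu->guest_mode && !vcpu->mmu_direct_map);
          }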
  7. 12 Aug, 2017 (1 commit)
  8. 16 May, 2017 (1 commit)
  9. 09 May, 2017 (1 commit)
  10. 07 Apr, 2017 (2 commits)
    • kvm: nVMX: support EPT accessed/dirty bits · ae1e2d10
      Committed by Paolo Bonzini
      Now use bit 6 of EPTP to optionally enable EPT A/D bits.  Another
      thing to change is that, when EPT accessed and dirty bits are not in use,
      VMX treats accesses to guest paging structures as data reads.  When they
      are in use (bit 6 of EPTP is set), they are treated as writes and the
      corresponding EPT dirty bit is set.  The MMU didn't know this detail,
      so this patch adds it.
      
      We also have to fix up the exit qualification.  It may be wrong because
      KVM sets bit 6 but the guest might not.
      
      L1 emulates EPT A/D bits using write permissions, so in principle it may
      be possible for EPT A/D bits to be used by L1 even though not available
      in hardware.  The problem is that guest page-table walks will be treated
      as reads rather than writes, so they would not cause an EPT violation.

      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      [Fixed typo in walk_addr_generic() comment and changed bit clear +
       conditional-set pattern in handle_ept_violation() to conditional-clear]
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
      ae1e2d10
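      A standalone sketch of the exit-qualification fixup described above (the
      bit positions follow the SDM's EPT-violation exit-qualification layout;
      the names are local to this sketch): hardware runs with KVM's EPTP, in
      which KVM may enable A/D bits, so accesses to guest paging structures
      are reported as writes; if L1 did not set bit 6 in its own EPTP, such a
      fault must be reported to L1 as a data read instead.

          #include <stdbool.h>
          #include <stdint.h>

          #define EPTP_AD_ENABLE        (1ull << 6)   /* bit 6 of EPTP */
          #define EPT_ACC_WRITE         (1ull << 1)
          #define EPT_GVA_TRANSLATED    (1ull << 8)   /* fault on a translated GVA,
                                                         not on a paging structure */

          static uint64_t fixup_exit_qualification(uint64_t exit_qual,
                                                   uint64_t nested_eptp,
                                                   bool in_guest_mode)
          {
                  /* Only faults on guest paging-structure accesses need fixing. */
                  if (in_guest_mode &&
                      !(exit_qual & EPT_GVA_TRANSLATED) &&
                      !(nested_eptp & EPTP_AD_ENABLE))
                          exit_qual &= ~EPT_ACC_WRITE;
                  return exit_qual;
          }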
    • kvm: x86: MMU support for EPT accessed/dirty bits · 86407bcb
      Committed by Paolo Bonzini
      This prepares the MMU paging code for EPT accessed and dirty bits,
      which can be enabled optionally at runtime.  Code that updates the
      accessed and dirty bits will need a pointer to the struct kvm_mmu.

      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
      86407bcb
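      A rough model of the shape of this change (the names are illustrative,
      based on this description rather than on the kernel headers): whether
      accessed/dirty bits exist becomes a runtime property of the MMU, so the
      helpers that update them take the MMU as a parameter.

          #include <stdbool.h>

          struct mmu_model {
                  bool ept_ad;   /* did L1 enable A/D bits in its EPTP? */
          };

          /* Compile-time constant before this series; per-MMU at runtime after. */
          static bool have_accessed_dirty(const struct mmu_model *mmu)
          {
                  return mmu->ept_ad;
          }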
  11. 14 Jul, 2016 (2 commits)
    • kvm: mmu: track read permission explicitly for shadow EPT page tables · d95c5568
      Committed by Bandan Das
      To support execute only mappings on behalf of L1 hypervisors,
      reuse ACC_USER_MASK to signify if the L1 hypervisor has the R bit
      set.
      
      For the nested EPT case, we assumed that the U bit was always set
      since there was no equivalent in EPT page tables.  Strictly
      speaking, this was not necessary because handle_ept_violation
      never set PFERR_USER_MASK in the error code (uf=0 in the
      parlance of update_permission_bitmask).  We now have to set
      both U and UF correctly, respectively in FNAME(gpte_access)
      and in handle_ept_violation.
      
      Also, in handle_ept_violation, bit 3 of the exit qualification is
      not enough to detect a present PTE; all three of bits 3-5 have to
      be checked (see the sketch after this entry).

      Signed-off-by: Bandan Das <bsd@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      d95c5568
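      A minimal illustration of the present check mentioned above (the constant
      names are local to this sketch): bits 3-5 of the EPT-violation exit
      qualification report whether the faulting GPA was readable, writable and
      executable, and with execute-only mappings an entry can be present while
      the readable bit is clear.

          #include <stdbool.h>
          #include <stdint.h>

          #define EPT_Q_READABLE    (1ull << 3)
          #define EPT_Q_WRITABLE    (1ull << 4)
          #define EPT_Q_EXECUTABLE  (1ull << 5)

          /* "Present" means any of R/W/X was granted, not just R (bit 3). */
          static bool ept_entry_was_present(uint64_t exit_qualification)
          {
                  return exit_qualification &
                         (EPT_Q_READABLE | EPT_Q_WRITABLE | EPT_Q_EXECUTABLE);
          }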
    • kvm: mmu: remove is_present_gpte() · 812f30b2
      Committed by Bandan Das
      We have two versions of the above function.
      To prevent confusion and bugs in the future, remove
      the non-FNAME version entirely and replace all calls
      with the actual check.

      Signed-off-by: Bandan Das <bsd@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      812f30b2
  12. 11 Apr, 2016 (1 commit)
    • KVM: MMU: fix permission_fault() · 7a98205d
      Committed by Xiao Guangrong
      kvm-unit-tests complained that the PFEC was not set properly, e.g.:
      test pte.rw pte.d pte.nx pde.p pde.rw pde.pse user fetch: FAIL: error code 15
      expected 5
      Dump mapping: address: 0x123400000000
      ------L4: 3e95007
      ------L3: 3e96007
      ------L2: 2000083
      
      This happens because the PFEC returned to the guest is copied from the
      PFEC triggered by the shadow page table.

      This patch fixes it and makes the logic for updating the error code
      cleaner (a simplified illustration follows this entry).

      Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>
      [Do not assume pfec.p=1. - Paolo]
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      7a98205d
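      A simplified illustration of the idea (the bit values are the
      architectural x86 page-fault error-code bits; the helper itself is
      hypothetical): the error code injected into the guest should be rebuilt
      from the access being performed and from the guest's own page-table
      walk, not copied from the PFEC produced by the shadow-page-table fault.

          #include <stdbool.h>
          #include <stdint.h>

          #define PFERR_PRESENT_MASK  (1u << 0)
          #define PFERR_WRITE_MASK    (1u << 1)
          #define PFERR_USER_MASK     (1u << 2)
          #define PFERR_FETCH_MASK    (1u << 4)

          static uint32_t guest_pfec(bool write, bool user, bool fetch,
                                     bool guest_mapping_present)
          {
                  uint32_t pfec = 0;

                  if (write)
                          pfec |= PFERR_WRITE_MASK;
                  if (user)
                          pfec |= PFERR_USER_MASK;
                  if (fetch)
                          pfec |= PFERR_FETCH_MASK;
                  /* Permission fault on a present mapping vs. not-present fault. */
                  if (guest_mapping_present)
                          pfec |= PFERR_PRESENT_MASK;
                  return pfec;
          }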
  13. 22 Mar, 2016 (3 commits)
  14. 08 Mar, 2016 (2 commits)
  15. 03 Mar, 2016 (2 commits)
  16. 25 Feb, 2016 (1 commit)
    • KVM: x86: MMU: fix ubsan index-out-of-range warning · 17e4bce0
      Committed by Mike Krinkin
      UBSAN reports the following warning due to a typo in the
      update_accessed_dirty_bits template; the patch fixes the typo:
      
      [  168.791851] ================================================================================
      [  168.791862] UBSAN: Undefined behaviour in arch/x86/kvm/paging_tmpl.h:252:15
      [  168.791866] index 4 is out of range for type 'u64 [4]'
      [  168.791871] CPU: 0 PID: 2950 Comm: qemu-system-x86 Tainted: G           O L  4.5.0-rc5-next-20160222 #7
      [  168.791873] Hardware name: LENOVO 23205NG/23205NG, BIOS G2ET95WW (2.55 ) 07/09/2013
      [  168.791876]  0000000000000000 ffff8801cfcaf208 ffffffff81c9f780 0000000041b58ab3
      [  168.791882]  ffffffff82eb2cc1 ffffffff81c9f6b4 ffff8801cfcaf230 ffff8801cfcaf1e0
      [  168.791886]  0000000000000004 0000000000000001 0000000000000000 ffffffffa1981600
      [  168.791891] Call Trace:
      [  168.791899]  [<ffffffff81c9f780>] dump_stack+0xcc/0x12c
      [  168.791904]  [<ffffffff81c9f6b4>] ? _atomic_dec_and_lock+0xc4/0xc4
      [  168.791910]  [<ffffffff81da9e81>] ubsan_epilogue+0xd/0x8a
      [  168.791914]  [<ffffffff81daafa2>] __ubsan_handle_out_of_bounds+0x15c/0x1a3
      [  168.791918]  [<ffffffff81daae46>] ? __ubsan_handle_shift_out_of_bounds+0x2bd/0x2bd
      [  168.791922]  [<ffffffff811287ef>] ? get_user_pages_fast+0x2bf/0x360
      [  168.791954]  [<ffffffffa1794050>] ? kvm_largepages_enabled+0x30/0x30 [kvm]
      [  168.791958]  [<ffffffff81128530>] ? __get_user_pages_fast+0x360/0x360
      [  168.791987]  [<ffffffffa181b818>] paging64_walk_addr_generic+0x1b28/0x2600 [kvm]
      [  168.792014]  [<ffffffffa1819cf0>] ? init_kvm_mmu+0x1100/0x1100 [kvm]
      [  168.792019]  [<ffffffff8129e350>] ? debug_check_no_locks_freed+0x350/0x350
      [  168.792044]  [<ffffffffa1819cf0>] ? init_kvm_mmu+0x1100/0x1100 [kvm]
      [  168.792076]  [<ffffffffa181c36d>] paging64_gva_to_gpa+0x7d/0x110 [kvm]
      [  168.792121]  [<ffffffffa181c2f0>] ? paging64_walk_addr_generic+0x2600/0x2600 [kvm]
      [  168.792130]  [<ffffffff812e848b>] ? debug_lockdep_rcu_enabled+0x7b/0x90
      [  168.792178]  [<ffffffffa17d9a4a>] emulator_read_write_onepage+0x27a/0x1150 [kvm]
      [  168.792208]  [<ffffffffa1794d44>] ? __kvm_read_guest_page+0x54/0x70 [kvm]
      [  168.792234]  [<ffffffffa17d97d0>] ? kvm_task_switch+0x160/0x160 [kvm]
      [  168.792238]  [<ffffffff812e848b>] ? debug_lockdep_rcu_enabled+0x7b/0x90
      [  168.792263]  [<ffffffffa17daa07>] emulator_read_write+0xe7/0x6d0 [kvm]
      [  168.792290]  [<ffffffffa183b620>] ? em_cr_write+0x230/0x230 [kvm]
      [  168.792314]  [<ffffffffa17db005>] emulator_write_emulated+0x15/0x20 [kvm]
      [  168.792340]  [<ffffffffa18465f8>] segmented_write+0xf8/0x130 [kvm]
      [  168.792367]  [<ffffffffa1846500>] ? em_lgdt+0x20/0x20 [kvm]
      [  168.792374]  [<ffffffffa14db512>] ? vmx_read_guest_seg_ar+0x42/0x1e0 [kvm_intel]
      [  168.792400]  [<ffffffffa1846d82>] writeback+0x3f2/0x700 [kvm]
      [  168.792424]  [<ffffffffa1846990>] ? em_sidt+0xa0/0xa0 [kvm]
      [  168.792449]  [<ffffffffa185554d>] ? x86_decode_insn+0x1b3d/0x4f70 [kvm]
      [  168.792474]  [<ffffffffa1859032>] x86_emulate_insn+0x572/0x3010 [kvm]
      [  168.792499]  [<ffffffffa17e71dd>] x86_emulate_instruction+0x3bd/0x2110 [kvm]
      [  168.792524]  [<ffffffffa17e6e20>] ? reexecute_instruction.part.110+0x2e0/0x2e0 [kvm]
      [  168.792532]  [<ffffffffa14e9a81>] handle_ept_misconfig+0x61/0x460 [kvm_intel]
      [  168.792539]  [<ffffffffa14e9a20>] ? handle_pause+0x450/0x450 [kvm_intel]
      [  168.792546]  [<ffffffffa15130ea>] vmx_handle_exit+0xd6a/0x1ad0 [kvm_intel]
      [  168.792572]  [<ffffffffa17f6a6c>] ? kvm_arch_vcpu_ioctl_run+0xbdc/0x6090 [kvm]
      [  168.792597]  [<ffffffffa17f6bcd>] kvm_arch_vcpu_ioctl_run+0xd3d/0x6090 [kvm]
      [  168.792621]  [<ffffffffa17f6a6c>] ? kvm_arch_vcpu_ioctl_run+0xbdc/0x6090 [kvm]
      [  168.792627]  [<ffffffff8293b530>] ? __ww_mutex_lock_interruptible+0x1630/0x1630
      [  168.792651]  [<ffffffffa17f5e90>] ? kvm_arch_vcpu_runnable+0x4f0/0x4f0 [kvm]
      [  168.792656]  [<ffffffff811eeb30>] ? preempt_notifier_unregister+0x190/0x190
      [  168.792681]  [<ffffffffa17e0447>] ? kvm_arch_vcpu_load+0x127/0x650 [kvm]
      [  168.792704]  [<ffffffffa178e9a3>] kvm_vcpu_ioctl+0x553/0xda0 [kvm]
      [  168.792727]  [<ffffffffa178e450>] ? vcpu_put+0x40/0x40 [kvm]
      [  168.792732]  [<ffffffff8129e350>] ? debug_check_no_locks_freed+0x350/0x350
      [  168.792735]  [<ffffffff82946087>] ? _raw_spin_unlock+0x27/0x40
      [  168.792740]  [<ffffffff8163a943>] ? handle_mm_fault+0x1673/0x2e40
      [  168.792744]  [<ffffffff8129daa8>] ? trace_hardirqs_on_caller+0x478/0x6c0
      [  168.792747]  [<ffffffff8129dcfd>] ? trace_hardirqs_on+0xd/0x10
      [  168.792751]  [<ffffffff812e848b>] ? debug_lockdep_rcu_enabled+0x7b/0x90
      [  168.792756]  [<ffffffff81725a80>] do_vfs_ioctl+0x1b0/0x12b0
      [  168.792759]  [<ffffffff817258d0>] ? ioctl_preallocate+0x210/0x210
      [  168.792763]  [<ffffffff8174aef3>] ? __fget+0x273/0x4a0
      [  168.792766]  [<ffffffff8174acd0>] ? __fget+0x50/0x4a0
      [  168.792770]  [<ffffffff8174b1f6>] ? __fget_light+0x96/0x2b0
      [  168.792773]  [<ffffffff81726bf9>] SyS_ioctl+0x79/0x90
      [  168.792777]  [<ffffffff82946880>] entry_SYSCALL_64_fastpath+0x23/0xc1
      [  168.792780] ================================================================================
      Signed-off-by: Mike Krinkin <krinkin.m.u@gmail.com>
      Reviewed-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      17e4bce0
  17. 23 Feb, 2016 (1 commit)
  18. 16 Jan, 2016 (1 commit)
    • kvm: rename pfn_t to kvm_pfn_t · ba049e93
      Committed by Dan Williams
      To date, we have implemented two I/O usage models for persistent memory,
      PMEM (a persistent "ram disk") and DAX (mmap persistent memory into
      userspace).  This series adds a third, DAX-GUP, that allows DAX mappings
      to be the target of direct-i/o.  It allows userspace to coordinate
      DMA/RDMA from/to persistent memory.
      
      The implementation leverages the ZONE_DEVICE mm-zone that went into
      4.3-rc1 (also discussed at kernel summit) to flag pages that are owned
      and dynamically mapped by a device driver.  The pmem driver, after
      mapping a persistent memory range into the system memmap via
      devm_memremap_pages(), arranges for DAX to distinguish pfn-only versus
      page-backed pmem-pfns via flags in the new pfn_t type.
      
      The DAX code, upon seeing a PFN_DEV+PFN_MAP flagged pfn, flags the
      resulting pte(s) inserted into the process page tables with a new
      _PAGE_DEVMAP flag.  Later, when get_user_pages() is walking ptes it keys
      off _PAGE_DEVMAP to pin the device hosting the page range active.
      Finally, get_page() and put_page() are modified to take references
      against the device driver established page mapping.
      
      Finally, this need for "struct page" for persistent memory requires
      memory capacity to store the memmap array.  Given that the memmap array
      for a large pool of persistent memory may exhaust available DRAM,
      introduce a mechanism to allocate the memmap from persistent memory.
      The new
      "struct vmem_altmap *" parameter to devm_memremap_pages() enables
      arch_add_memory() to use reserved pmem capacity rather than the page
      allocator.
      
      This patch (of 18):
      
      The core has developed a need for a "pfn_t" type [1].  Move the existing
      pfn_t in KVM to kvm_pfn_t [2].
      
      [1]: https://lists.01.org/pipermail/linux-nvdimm/2015-September/002199.html
      [2]: https://lists.01.org/pipermail/linux-nvdimm/2015-September/002218.html

      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ba049e93
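      For this file the change is purely mechanical; the rename amounts to
      something along these lines (the exact underlying type is from my
      reading of include/linux/kvm_types.h and may differ):

          #include <stdint.h>

          /* KVM's host page-frame-number type, renamed so that the new
           * core-mm "pfn_t" can exist without clashing with it. */
          typedef uint64_t kvm_pfn_t;   /* was: pfn_t (KVM-internal) */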
  19. 26 Nov, 2015 (4 commits)
  20. 10 Nov, 2015 (1 commit)
  21. 19 Oct, 2015 (1 commit)
    • KVM: x86: MMU: Initialize force_pt_level before calling mapping_level() · 8c85ac1c
      Committed by Takuya Yoshikawa
      Commit fd136902 ("KVM: x86: MMU: Move mapping_level_dirty_bitmap()
      call in mapping_level()") forgot to initialize force_pt_level to false
      in FNAME(page_fault)() before calling mapping_level() like
      nonpaging_map() does.  This can sometimes result in forcing page table
      level mapping unnecessarily.
      
      Fix this and move the first *force_pt_level check in mapping_level() to
      before the kvm_vcpu_gfn_to_memslot() call, to make it a bit clearer that
      the variable must be initialized before mapping_level() gets called
      (see the sketch after this entry).

      This change also avoids calling kvm_vcpu_gfn_to_memslot() when the
      !check_hugepage_cache_consistency() check in tdp_page_fault() forces
      page-table-level mapping.

      Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      8c85ac1c
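      A standalone model of the calling pattern being fixed (the function names
      mirror the description above, but the code below is illustrative and not
      the kernel's): the out-parameter must hold a defined value before
      mapping_level() is allowed to read it.

          #include <stdbool.h>

          /* May read *force_pt_level early, and may also set it (for example
           * when the gfn falls in a slot that disallows huge pages). */
          static int mapping_level_model(unsigned long gfn, bool *force_pt_level)
          {
                  if (*force_pt_level)     /* must already be initialized */
                          return 1;        /* 4K (page-table level) mapping */
                  /* ... huge-page suitability checks would go here ... */
                  return 2;                /* e.g. a 2M mapping */
          }

          static int page_fault_model(unsigned long gfn)
          {
                  bool force_pt_level = false;   /* the missing initialization */

                  return mapping_level_model(gfn, &force_pt_level);
          }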
  22. 16 Oct, 2015 (3 commits)
  23. 05 Aug, 2015 (1 commit)
  24. 05 Jun, 2015 (1 commit)
  25. 11 May, 2015 (1 commit)
  26. 08 May, 2015 (1 commit)