1. 27 Jun 2013 (1 commit)
  2. 05 Jun 2013 (8 commits)
  3. 16 May 2013 (1 commit)
  4. 12 May 2013 (1 commit)
  5. 07 Apr 2013 (1 commit)
  6. 22 Mar 2013 (2 commits)
    • KVM: MMU: Rename kvm_mmu_free_some_pages() to make_mmu_pages_available() · 81f4f76b
      Committed by Takuya Yoshikawa
      The current name, "kvm_mmu_free_some_pages", suggests a function that
      actually frees some shadow pages, as the name implies, but what the
      function really does is make a minimum number of shadow pages,
      KVM_MIN_FREE_MMU_PAGES, available: it does nothing when enough pages
      are already available.
      
      This patch renames the function to reflect that meaning better; as part
      of the rename, the wrapper's code is inlined into the main body, since
      the whole function is now inlined into its only caller.
      Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
    • KVM: MMU: Move kvm_mmu_free_some_pages() into kvm_mmu_alloc_page() · 7ddca7e4
      Committed by Takuya Yoshikawa
      This function ensures that the number of shadow pages does not exceed
      the maximum limit stored in n_max_mmu_pages, so a call to it sits on
      every code path that can reach kvm_mmu_alloc_page().
      
      Spreading the call across all such paths may have made some sense back
      when it could be called before taking the mmu_lock, but the locking
      rule has since been changed so that it no longer can be.
      
      Taking this background into account, this patch moves it into
      kvm_mmu_alloc_page() and simplifies the code.
      
      Note: the unlikely hint in kvm_mmu_free_some_pages() guarantees that
      the overhead of this function is almost zero except when we actually
      need to allocate some shadow pages, so we need not worry about it being
      called multiple times in one path when kvm_mmu_get_page() is invoked a
      few times.
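
      A hedged sketch of the resulting call placement; the parameter list of
      kvm_mmu_alloc_page() is assumed from the KVM code of this era, not
      quoted from the patch:

              static struct kvm_mmu_page *kvm_mmu_alloc_page(struct kvm_vcpu *vcpu,
                                                             u64 *parent_pte,
                                                             int direct)
              {
                      /* The limit check now sits at the single allocation
                       * entry point instead of being repeated in every
                       * caller that can reach this function. */
                      kvm_mmu_free_some_pages(vcpu);

                      /* ... allocate and initialize the shadow page ... */
              }
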
      Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
  7. 14 Mar 2013 (2 commits)
  8. 08 Mar 2013 (3 commits)
  9. 28 Feb 2013 (1 commit)
    • hlist: drop the node parameter from iterators · b67bfe0d
      Committed by Sasha Levin
      I'm not sure why, but the hlist for-each-entry iterators were conceived
      differently from the plain list iterator:
      
              list_for_each_entry(pos, head, member)
      
      The hlist ones were greedy and wanted an extra parameter:
      
              hlist_for_each_entry(tpos, pos, head, member)
      
      Why did they need an extra pos parameter? I'm not quite sure. Not only
      do they not really need it, it also prevents the iterator from looking
      exactly like the list iterator, which is unfortunate.
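
      A before/after usage sketch (struct foo and use() are illustrative
      stand-ins, not taken from the patch):

              struct foo {
                      int value;
                      struct hlist_node member;
              };

              /* Before: a separate struct hlist_node cursor was required. */
              struct foo *tpos;
              struct hlist_node *pos;

              hlist_for_each_entry(tpos, pos, head, member)
                      use(tpos->value);

              /* After: only the typed cursor remains, mirroring
               * list_for_each_entry(pos, head, member) exactly. */
              hlist_for_each_entry(tpos, head, member)
                      use(tpos->value);
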
      
      Besides the semantic patch, there was some manual work required:
      
       - Fix up the actual hlist iterators in linux/list.h
       - Fix up the declaration of other iterators based on the hlist ones.
       - A very small number of places were using the 'node' parameter; these
       were modified to use 'obj->member' instead.
       - Coccinelle didn't handle the hlist_for_each_entry_safe iterator
       properly, so those had to be fixed up manually.
      
      The semantic patch, which is mostly the work of Peter Senna Tschudin, is here:
      
      @@
      iterator name hlist_for_each_entry, hlist_for_each_entry_continue, hlist_for_each_entry_from, hlist_for_each_entry_rcu, hlist_for_each_entry_rcu_bh, hlist_for_each_entry_continue_rcu_bh, for_each_busy_worker, ax25_uid_for_each, ax25_for_each, inet_bind_bucket_for_each, sctp_for_each_hentry, sk_for_each, sk_for_each_rcu, sk_for_each_from, sk_for_each_safe, sk_for_each_bound, hlist_for_each_entry_safe, hlist_for_each_entry_continue_rcu, nr_neigh_for_each, nr_neigh_for_each_safe, nr_node_for_each, nr_node_for_each_safe, for_each_gfn_indirect_valid_sp, for_each_gfn_sp, for_each_host;
      
      type T;
      expression a,c,d,e;
      identifier b;
      statement S;
      @@
      
      -T b;
          <+... when != b
      (
      hlist_for_each_entry(a,
      - b,
      c, d) S
      |
      hlist_for_each_entry_continue(a,
      - b,
      c) S
      |
      hlist_for_each_entry_from(a,
      - b,
      c) S
      |
      hlist_for_each_entry_rcu(a,
      - b,
      c, d) S
      |
      hlist_for_each_entry_rcu_bh(a,
      - b,
      c, d) S
      |
      hlist_for_each_entry_continue_rcu_bh(a,
      - b,
      c) S
      |
      for_each_busy_worker(a, c,
      - b,
      d) S
      |
      ax25_uid_for_each(a,
      - b,
      c) S
      |
      ax25_for_each(a,
      - b,
      c) S
      |
      inet_bind_bucket_for_each(a,
      - b,
      c) S
      |
      sctp_for_each_hentry(a,
      - b,
      c) S
      |
      sk_for_each(a,
      - b,
      c) S
      |
      sk_for_each_rcu(a,
      - b,
      c) S
      |
      sk_for_each_from
      -(a, b)
      +(a)
      S
      + sk_for_each_from(a) S
      |
      sk_for_each_safe(a,
      - b,
      c, d) S
      |
      sk_for_each_bound(a,
      - b,
      c) S
      |
      hlist_for_each_entry_safe(a,
      - b,
      c, d, e) S
      |
      hlist_for_each_entry_continue_rcu(a,
      - b,
      c) S
      |
      nr_neigh_for_each(a,
      - b,
      c) S
      |
      nr_neigh_for_each_safe(a,
      - b,
      c, d) S
      |
      nr_node_for_each(a,
      - b,
      c) S
      |
      nr_node_for_each_safe(a,
      - b,
      c, d) S
      |
      - for_each_gfn_sp(a, c, d, b) S
      + for_each_gfn_sp(a, c, d) S
      |
      - for_each_gfn_indirect_valid_sp(a, c, d, b) S
      + for_each_gfn_indirect_valid_sp(a, c, d) S
      |
      for_each_host(a,
      - b,
      c) S
      |
      for_each_host_safe(a,
      - b,
      c, d) S
      |
      for_each_mesh_entry(a,
      - b,
      c, d) S
      )
          ...+>
      
      [akpm@linux-foundation.org: drop bogus change from net/ipv4/raw.c]
      [akpm@linux-foundation.org: drop bogus hunk from net/ipv6/raw.c]
      [akpm@linux-foundation.org: checkpatch fixes]
      [akpm@linux-foundation.org: fix warnings]
      [akpm@linux-foundation.org: redo intrusive kvm changes]
      Tested-by: Peter Senna Tschudin <peter.senna@gmail.com>
      Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Gleb Natapov <gleb@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  10. 21 Feb 2013 (1 commit)
  11. 07 Feb 2013 (4 commits)
  12. 06 Feb 2013 (1 commit)
  13. 05 Feb 2013 (4 commits)
  14. 14 Jan 2013 (6 commits)
  15. 11 Jan 2013 (2 commits)
    • KVM: MMU: fix infinite fault access retry · 7751babd
      Committed by Xiao Guangrong
      We have two issues in the current code:
      - if the target gfn is used as its own page table, the guest will refault
        and kvm will then use a small page size to map it. Two #PFs are needed
        to fix its shadow page table.
      
      - sometimes, for example when an exception is triggered during a vm-exit
        caused by #PF (see handle_exception() in vmx.c), we remove all the
        shadow pages shadowed by the target gfn before going into the page
        fault path, which causes an infinite loop:
        delete the shadow pages shadowed by the gfn -> try to use a large page
        size to map the gfn -> retry the access -> ...
      
      To fix both issues, we can adjust the page size early if the target gfn
      is used as a page table.
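
      A minimal sketch of the idea; gfn_used_as_page_table() is a
      hypothetical name for the "is this gfn shadowed as a page table?"
      check, not the patch's actual identifier:

              /* Early in the page fault path: if the target gfn backs a
               * guest page table, force the 4K mapping level so the
               * zap -> large-page map -> refault loop cannot occur. */
              if (gfn_used_as_page_table(vcpu->kvm, gfn))
                      level = PT_PAGE_TABLE_LEVEL;
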
      Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
    • KVM: MMU: fix Dirty bit missed if CR0.WP = 0 · c2288505
      Committed by Xiao Guangrong
      If the write-fault access is from the supervisor and CR0.WP is not set
      on the vcpu, kvm fixes it up by adjusting the pte access: it sets the W
      bit on the pte and clears the U bit. This is the one place where kvm can
      change pte access from read-only to writable.
      
      Unfortunately, the pte access here is the access of the 'direct' shadow
      page table, meaning direct sp.role.access = pte_access, so we end up
      creating a writable spte entry on a read-only shadow page table. As a
      result, the Dirty bit is not tracked when two guest ptes point to the
      same large page. Note that there is no impact beyond the Dirty bit,
      since cr0.wp is encoded into sp.role.
      
      This can be fixed by adjusting the pte access before establishing the
      shadow page table. After that, no mmu-specific code remains in the
      common function, and two parameters of set_spte can be dropped.
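
      A hedged sketch of the fix-up, applied to pte_access before set_spte()
      runs; ACC_WRITE_MASK/ACC_USER_MASK are KVM's access-mask names, while
      the exact fault predicates used here are assumptions:

              /* CR0.WP=0 and a supervisor write hit a read-only gpte:
               * grant write and drop user access in pte_access itself,
               * so the 'direct' sp inherits a consistent role.access. */
              if (write_fault && !is_write_protection(vcpu) && !user_fault) {
                      pte_access |= ACC_WRITE_MASK;
                      pte_access &= ~ACC_USER_MASK;
              }
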
      Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
  16. 06 Dec 2012 (1 commit)
    • KVM: MMU: optimize for set_spte · c2193463
      Committed by Xiao Guangrong
      There are two cases in which we need to adjust the page size in set_spte:
      1) another vcpu creates a new sp in the window between mapping_level()
         and acquiring the mmu-lock.
      2) the new sp is created by the vcpu itself (page-fault path) when the
         guest uses the target gfn as its page table.
      
      In the current code, set_spte drops the spte and emulates the access in
      these cases, which does not work well:
      - in case 1, it may destroy the mapping established by the other vcpu
        and perform expensive instruction emulation.
      - in case 2, it may emulate the access even when the guest is accessing
        a page that is not used as a page table. For example, if 0~2M is used
        as a huge page in the guest and, within this huge page, only page 3 is
        used as a page table, then guest reads/writes on the other pages can
        cause instruction emulation.
      
      Both cases can be fixed by allowing the guest to retry the access: it
      will refault, and we can then establish the mapping using a small page.
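
      A hedged sketch of the retry path; RET_DO_RETRY is a made-up name for
      the "let the guest refault" result, and has_wrprotected_page() is
      assumed from the mmu code of this era:

              /* Instead of zapping the spte and emulating, tell the caller
               * to let the guest retry: the refault re-enters
               * mapping_level() and maps the gfn with a small page. */
              if (level > PT_PAGE_TABLE_LEVEL &&
                  has_wrprotected_page(vcpu->kvm, gfn, level))
                      return RET_DO_RETRY;
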
      Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
      Acked-by: Marcelo Tosatti <mtosatti@redhat.com>
      Signed-off-by: Gleb Natapov <gleb@redhat.com>
  17. 30 Oct 2012 (1 commit)