1. 24 Oct 2010 — 2 commits
    • KVM: MMU: introduce hva_to_pfn_atomic function · 887c08ac
      Xiao Guangrong authored
      Introduce hva_to_pfn_atomic(); it is the fast path and can be used in
      atomic context. A later patch will use it.
      Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
      887c08ac
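The caller pattern this commit enables can be sketched in userspace C. All names below are hypothetical stand-ins, not the kernel's: `lookup_present_page()` plays the part of the page-table walk and `PFN_FAULT` the failure sentinel. The point is that the atomic variant never sleeps, so it can only succeed for pages already faulted in, and the caller falls back to the sleeping path on failure.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t pfn_t;
#define PFN_FAULT ((pfn_t)-1)   /* sentinel: fast path could not resolve */

/* Hypothetical page-table stub: maps a host virtual address to a pfn,
 * or fails when the page is not already present. */
static pfn_t lookup_present_page(uint64_t hva, bool page_present)
{
    return page_present ? (hva >> 12) : PFN_FAULT;
}

/* Fast path: safe in atomic context because it never sleeps; it only
 * succeeds for pages that are already faulted in. */
static pfn_t hva_to_pfn_atomic_sketch(uint64_t hva, bool page_present)
{
    return lookup_present_page(hva, page_present);
}

/* Slow path: may sleep to fault the page in, so callers must not hold
 * spinlocks; here we simply pretend the fault always succeeds. */
static pfn_t hva_to_pfn_sketch(uint64_t hva)
{
    return hva >> 12;
}

/* Caller pattern: try the atomic path first, fall back to the sleeping
 * path outside the atomic section if it fails. */
static pfn_t resolve_pfn(uint64_t hva, bool page_present)
{
    pfn_t pfn = hva_to_pfn_atomic_sketch(hva, page_present);
    if (pfn == PFN_FAULT)
        pfn = hva_to_pfn_sketch(hva);  /* only when sleeping is allowed */
    return pfn;
}
```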
    • KVM: x86: Add clock sync request to hardware enable · ca84d1a2
      Zachary Amsden authored
      If there are active VCPUs which are marked as belonging to
      a particular hardware CPU, request a clock sync for them when
      enabling hardware; the TSC could be desynchronized on a newly
      arriving CPU, and we need to recompute guest system time
      relative to boot after a suspend event.
      
      This covers both cases.
      
      Note that it is acceptable to take the spinlock, as either
      no other tasks will be running and no locks held (BSP after
      resume), or other tasks will be guaranteed to drop the lock
      relatively quickly (AP on CPU_STARTING).
      
      Noting we now get clock synchronization requests for VCPUs
      which are starting up (or restarting), it is tempting to
      attempt to remove the arch/x86/kvm/x86.c CPU hot-notifiers
      at this time, however it is not correct to do so; they are
      required for systems with non-constant TSC as the frequency
      may not be known immediately after the processor has started
      until the cpufreq driver has had a chance to run and query
      the chipset.
      
      Updated: implement better locking semantics for hardware_enable
      
      Removed the hack of dropping and retaking the lock by adding the
      semantic that we always hold kvm_lock when hardware_enable is
      called.  The one place that does not need to worry about it is
      resume; when resuming a frozen CPU, the spinlock will not be taken.
      Signed-off-by: Zachary Amsden <zamsden@redhat.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
      ca84d1a2
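The request mechanism described above amounts to setting a per-VCPU request bit for every VCPU bound to the CPU whose hardware is being enabled; the VCPU then recomputes its clock before re-entering the guest. A minimal userspace sketch (struct and bit names are hypothetical, not KVM's):

```c
#include <assert.h>
#include <stdbool.h>

#define REQ_CLOCK_UPDATE 0          /* hypothetical request-bit index */

struct vcpu_sketch { int cpu; unsigned long requests; };

/* When hardware virtualization is enabled on `cpu`, flag every VCPU
 * bound to that CPU so it resynchronizes its clock before running
 * guest code again. */
static void request_clock_sync(struct vcpu_sketch *vcpus, int n, int cpu)
{
    for (int i = 0; i < n; i++)
        if (vcpus[i].cpu == cpu)
            vcpus[i].requests |= 1UL << REQ_CLOCK_UPDATE;
}

static bool has_clock_request(const struct vcpu_sketch *v)
{
    return v->requests & (1UL << REQ_CLOCK_UPDATE);
}
```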
  2. 23 Sep 2010 — 1 commit
    • KVM: Fix reboot on Intel hosts · ca242ac9
      Avi Kivity authored
      When we reboot, we disable VMX extensions; otherwise INIT gets blocked.
      If a task on another cpu hits a vmx instruction, it will fault if vmx is
      disabled.  We trap that to avoid a nasty oops and spin until the reboot
      completes.
      
      The problem is that we spin with interrupts disabled.  This blocks smp_send_stop()
      from running, and the reboot process halts.
      
      Fix by enabling interrupts before spinning.
      
      KVM-Stable-Tag.
      Signed-off-by: Avi Kivity <avi@redhat.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
      ca242ac9
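The shape of the fix, spin only after interrupts are back on so that IPIs such as the one behind smp_send_stop() can still be delivered, can be sketched with userspace stubs (`local_irq_enable_stub()` is a stand-in, not the kernel API):

```c
#include <assert.h>
#include <stdbool.h>

static bool irqs_enabled;                    /* stub state, not kernel API */
static void local_irq_enable_stub(void) { irqs_enabled = true; }

/* Spin until the reboot completes; interrupts must be enabled first so
 * IPIs (e.g. from smp_send_stop()) can still be delivered while we wait.
 * max_iters bounds the loop for this userspace sketch only. */
static int spin_until_reboot(const int *reboot_done, int max_iters)
{
    local_irq_enable_stub();                 /* the fix: enable, then spin */
    int iters = 0;
    while (!*reboot_done && iters < max_iters)
        iters++;                             /* cpu_relax() in real code */
    return iters;
}
```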
  3. 10 Sep 2010 — 1 commit
    • KVM: x86: Perform hardware_enable in CPU_STARTING callback · da908f2f
      Zachary Amsden authored
      The CPU_STARTING callback was added upstream with the intention
      of being used for KVM, specifically for the hardware enablement
      that must be done before we can run in hardware virt.  It had
      bugs on the x86_64 architecture at the time, where it was called
      after CPU_ONLINE.  The arches have since merged and the bug is
      gone.
      
      It might be noted that other features should probably start making
      use of this callback; microcode updates in particular, which may fix
      important errata, would be best applied before beginning to run user
      tasks.
      Signed-off-by: Zachary Amsden <zamsden@redhat.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
      da908f2f
  4. 02 Aug 2010 — 2 commits
  5. 01 Aug 2010 — 11 commits
  6. 19 May 2010 — 1 commit
  7. 17 May 2010 — 7 commits
    • KVM: Remove test-before-set optimization for dirty bits · d1476937
      Takuya Yoshikawa authored
      As Avi pointed out, the test-bit part of mark_page_dirty() was
      important in the days of shadow paging, but EPT and NPT have since
      become common and the chance of faulting a page more than once per
      iteration is small.  So remove the test to avoid the extra access.
      Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
      Signed-off-by: Avi Kivity <avi@redhat.com>
      d1476937
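The before/after of the optimization being removed can be sketched in plain C (helper names are hypothetical; the real code operates on the memslot dirty bitmap): the old form reads the bit first and writes only when it is clear, while the new form sets unconditionally, trading a rare redundant write for the removal of a read on every call.

```c
#include <assert.h>

/* Old pattern: test before set, saving a bitmap write when the bit is
 * already set (worthwhile when pages faulted repeatedly under shadow
 * paging). */
static void mark_dirty_test_first(unsigned long *bitmap, int bit,
                                  int *reads, int *writes)
{
    (*reads)++;
    if (!(*bitmap & (1UL << bit))) {
        *bitmap |= 1UL << bit;
        (*writes)++;
    }
}

/* New pattern: set unconditionally; with EPT/NPT a page rarely faults
 * twice per iteration, so the extra read was the real overhead. */
static void mark_dirty(unsigned long *bitmap, int bit, int *writes)
{
    *bitmap |= 1UL << bit;
    (*writes)++;
}
```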
    • KVM: do not call hardware_disable() on CPU_UP_CANCELED · 66cbff59
      Lai Jiangshan authored
      At CPU_UP_CANCELED time, hardware_enable() has not been called on the
      CPU that was coming up, because raw_notifier_call_chain(CPU_ONLINE)
      has not been called for that CPU.
      
      Drop the handling for CPU_UP_CANCELED.
      Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
      Signed-off-by: Avi Kivity <avi@redhat.com>
      66cbff59
    • KVM: use the correct RCU API for PROVE_RCU=y · 90d83dc3
      Lai Jiangshan authored
      The RCU/SRCU APIs have changed to support proving correct RCU usage.
      
      I got the following dmesg with PROVE_RCU=y because we used the wrong
      API.  This patch converts rcu_dereference() to srcu_dereference() or
      the related family of APIs.
      
      ===================================================
      [ INFO: suspicious rcu_dereference_check() usage. ]
      ---------------------------------------------------
      arch/x86/kvm/mmu.c:3020 invoked rcu_dereference_check() without protection!
      
      other info that might help us debug this:
      
      rcu_scheduler_active = 1, debug_locks = 0
      2 locks held by qemu-system-x86/8550:
       #0:  (&kvm->slots_lock){+.+.+.}, at: [<ffffffffa011a6ac>] kvm_set_memory_region+0x29/0x50 [kvm]
       #1:  (&(&kvm->mmu_lock)->rlock){+.+...}, at: [<ffffffffa012262d>] kvm_arch_commit_memory_region+0xa6/0xe2 [kvm]
      
      stack backtrace:
      Pid: 8550, comm: qemu-system-x86 Not tainted 2.6.34-rc4-tip-01028-g939eab1 #27
      Call Trace:
       [<ffffffff8106c59e>] lockdep_rcu_dereference+0xaa/0xb3
       [<ffffffffa012f6c1>] kvm_mmu_calculate_mmu_pages+0x44/0x7d [kvm]
       [<ffffffffa012263e>] kvm_arch_commit_memory_region+0xb7/0xe2 [kvm]
       [<ffffffffa011a5d7>] __kvm_set_memory_region+0x636/0x6e2 [kvm]
       [<ffffffffa011a6ba>] kvm_set_memory_region+0x37/0x50 [kvm]
       [<ffffffffa015e956>] vmx_set_tss_addr+0x46/0x5a [kvm_intel]
       [<ffffffffa0126592>] kvm_arch_vm_ioctl+0x17a/0xcf8 [kvm]
       [<ffffffff810a8692>] ? unlock_page+0x27/0x2c
       [<ffffffff810bf879>] ? __do_fault+0x3a9/0x3e1
       [<ffffffffa011b12f>] kvm_vm_ioctl+0x364/0x38d [kvm]
       [<ffffffff81060cfa>] ? up_read+0x23/0x3d
       [<ffffffff810f3587>] vfs_ioctl+0x32/0xa6
       [<ffffffff810f3b19>] do_vfs_ioctl+0x495/0x4db
       [<ffffffff810e6b2f>] ? fget_light+0xc2/0x241
       [<ffffffff810e416c>] ? do_sys_open+0x104/0x116
       [<ffffffff81382d6d>] ? retint_swapgs+0xe/0x13
       [<ffffffff810f3ba6>] sys_ioctl+0x47/0x6a
       [<ffffffff810021db>] system_call_fastpath+0x16/0x1b
      Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
      Signed-off-by: Avi Kivity <avi@redhat.com>
      90d83dc3
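The conversion pattern, with userspace stand-ins for the kernel macros (the `_sketch` names are hypothetical; the real macros also run lockdep checks): srcu_dereference() additionally names the srcu_struct protecting the pointer, which is what lets PROVE_RCU validate the access.

```c
#include <assert.h>

/* Userspace stand-in: the real srcu_dereference() ties the pointer load
 * to a specific srcu_struct so lockdep can verify the read side. */
struct srcu_struct_sketch { int dummy; };
#define srcu_dereference_sketch(p, sp) ((void)(sp), (p))

struct kvm_sketch { struct srcu_struct_sketch srcu; int *memslots; };

/* Before the patch: rcu_dereference(kvm->memslots), which PROVE_RCU
 * flags because the memslots are protected by SRCU, not plain RCU.
 * After: srcu_dereference(kvm->memslots, &kvm->srcu). */
static int *get_memslots(struct kvm_sketch *kvm)
{
    return srcu_dereference_sketch(kvm->memslots, &kvm->srcu);
}
```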
    • KVM: limit the number of pages per memory slot · 660c22c4
      Takuya Yoshikawa authored
      Limit the number of pages per memory slot so that we are freed from
      extra care about type (overflow) issues.
      Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
      660c22c4
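A sketch of the idea, with a hypothetical cap (the actual limit value is defined by the patch and is not reproduced here): keeping npages below a chosen bound means derived quantities such as page counts and dirty-bitmap sizes cannot overflow an int.

```c
#include <assert.h>
#include <limits.h>
#include <stdbool.h>

/* Hypothetical cap; the point is that npages stays within a range where
 * int-typed derived quantities cannot overflow. */
#define MAX_NR_PAGES_SKETCH ((unsigned long)INT_MAX)

/* Validate a slot's size at registration time instead of sprinkling
 * overflow checks over every user of npages. */
static bool slot_npages_ok(unsigned long npages)
{
    return npages <= MAX_NR_PAGES_SKETCH;
}
```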
    • KVM: coalesced_mmio: fix kvm_coalesced_mmio_init()'s error handling · 6ce5a090
      Takuya Yoshikawa authored
      kvm_coalesced_mmio_init() keeps holding the addresses of a coalesced
      mmio ring page and dev even after it has freed them.
      
      Also, if this function fails, though that may be rare, it suggests
      the system is in a serious state, so we had better stop the work
      that follows kvm_create_vm().
      
      This patch fixes both problems.
      
        We move the coalesced mmio initialization out of kvm_create_vm().
        This seems natural because it includes a registration that can be
        done only when the vm is successfully created.
      Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
      6ce5a090
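The error-handling pattern being fixed, free what was allocated and clear the stale pointer so a later teardown path cannot double-free, sketched in userspace C (struct and function names are hypothetical):

```c
#include <assert.h>
#include <stdlib.h>

struct coalesced_dev_sketch { void *ring; };

/* On failure, undo the allocation AND null the pointer; keeping a
 * dangling address around is what let teardown free it twice. */
static int coalesced_init_sketch(struct coalesced_dev_sketch *dev,
                                 int registration_fails)
{
    dev->ring = malloc(4096);
    if (!dev->ring)
        return -1;
    if (registration_fails) {        /* e.g. io-bus registration failed */
        free(dev->ring);
        dev->ring = NULL;            /* don't keep a stale address */
        return -1;
    }
    return 0;
}
```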
    • KVM: fix the errno of ioctl KVM_[UN]REGISTER_COALESCED_MMIO failure · a87fa355
      Wei Yongjun authored
      Change the errno of the KVM_[UN]REGISTER_COALESCED_MMIO ioctls from
      -EINVAL to -ENXIO when no coalesced mmio device exists.
      Signed-off-by: Wei Yongjun <yjwei@cn.fujitsu.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
      a87fa355
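The distinction in a sketch (struct and function names are hypothetical): -ENXIO ("no such device or address") describes the situation better than -EINVAL, since the caller's arguments were fine and the device simply is not there.

```c
#include <assert.h>
#include <errno.h>

struct kvm_state_sketch { void *coalesced_dev; };

/* Report a missing device as -ENXIO rather than -EINVAL, so userspace
 * can tell "bad arguments" apart from "feature not present". */
static int coalesced_register_sketch(struct kvm_state_sketch *kvm)
{
    if (!kvm->coalesced_dev)
        return -ENXIO;
    return 0;
}
```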
    • KVM: cleanup kvm trace · 2ed152af
      Xiao Guangrong authored
      This patch does the following:
      
       - there is no need to call tracepoint_synchronize_unregister() when
         the kvm module is unloaded, since ftrace can handle it
      
       - clean up ftrace's macros
      Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
      Signed-off-by: Avi Kivity <avi@redhat.com>
      2ed152af
  8. 25 Apr 2010 — 1 commit
  9. 21 Apr 2010 — 1 commit
    • KVM: Add missing srcu_read_lock() for kvm_mmu_notifier_release() · eda2beda
      Lai Jiangshan authored
      I got this dmesg because srcu_read_lock() is missing in
      kvm_mmu_notifier_release().
      
      ===================================================
      [ INFO: suspicious rcu_dereference_check() usage. ]
      ---------------------------------------------------
      arch/x86/kvm/x86.h:72 invoked rcu_dereference_check() without protection!
      
      other info that might help us debug this:
      
      rcu_scheduler_active = 1, debug_locks = 0
      2 locks held by qemu-system-x86/3100:
       #0:  (rcu_read_lock){.+.+..}, at: [<ffffffff810d73dc>] __mmu_notifier_release+0x38/0xdf
       #1:  (&(&kvm->mmu_lock)->rlock){+.+...}, at: [<ffffffffa0130a6a>] kvm_mmu_zap_all+0x21/0x5e [kvm]
      
      stack backtrace:
      Pid: 3100, comm: qemu-system-x86 Not tainted 2.6.34-rc3-22949-gbc8a97a-dirty #2
      Call Trace:
       [<ffffffff8106afd9>] lockdep_rcu_dereference+0xaa/0xb3
       [<ffffffffa0123a89>] unalias_gfn+0x56/0xab [kvm]
       [<ffffffffa0119600>] gfn_to_memslot+0x16/0x25 [kvm]
       [<ffffffffa012ffca>] gfn_to_rmap+0x17/0x6e [kvm]
       [<ffffffffa01300c1>] rmap_remove+0xa0/0x19d [kvm]
       [<ffffffffa0130649>] kvm_mmu_zap_page+0x109/0x34d [kvm]
       [<ffffffffa0130a7e>] kvm_mmu_zap_all+0x35/0x5e [kvm]
       [<ffffffffa0122870>] kvm_arch_flush_shadow+0x16/0x22 [kvm]
       [<ffffffffa01189e0>] kvm_mmu_notifier_release+0x15/0x17 [kvm]
       [<ffffffff810d742c>] __mmu_notifier_release+0x88/0xdf
       [<ffffffff810d73dc>] ? __mmu_notifier_release+0x38/0xdf
       [<ffffffff81040848>] ? exit_mm+0xe0/0x115
       [<ffffffff810c2cb0>] exit_mmap+0x2c/0x17e
       [<ffffffff8103c472>] mmput+0x2d/0xd4
       [<ffffffff81040870>] exit_mm+0x108/0x115
      [...]
      Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
      Signed-off-by: Avi Kivity <avi@redhat.com>
      eda2beda
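The shape of the fix, with userspace stand-ins for the SRCU read-side API (the `_stub` names are hypothetical; the real functions return and consume an index into per-CPU reader counters): wrap the shadow-page teardown in an SRCU read-side critical section so the memslot dereferences underneath it are protected.

```c
#include <assert.h>

/* Userspace stand-ins for srcu_read_lock()/srcu_read_unlock(). */
struct srcu_stub { int readers; };
static int srcu_read_lock_stub(struct srcu_stub *sp)
{
    sp->readers++;
    return 0;                       /* real API returns a reader index */
}
static void srcu_read_unlock_stub(struct srcu_stub *sp, int idx)
{
    (void)idx;
    sp->readers--;
}

static int zapped;                  /* stands in for kvm_mmu_zap_all() */
static void zap_all_stub(void) { zapped++; }

/* The fix, in shape: take the SRCU read lock around the zap so
 * SRCU-protected memslot lookups inside it are legal. */
static void mmu_notifier_release_sketch(struct srcu_stub *srcu)
{
    int idx = srcu_read_lock_stub(srcu);
    zap_all_stub();
    srcu_read_unlock_stub(srcu, idx);
}
```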
  10. 20 Apr 2010 — 1 commit
  11. 30 Mar 2010 — 1 commit
    • include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h · 5a0e3ad6
      Tejun Heo authored
      
      percpu.h is included by sched.h and module.h and thus ends up being
      included when building most .c files.  percpu.h includes slab.h which
      in turn includes gfp.h making everything defined by the two files
      universally available and complicating inclusion dependencies.
      
      The percpu.h -> slab.h dependency is about to be removed.  Prepare
      for this change by updating users of gfp and slab facilities to
      include those headers directly instead of assuming availability.  As
      this conversion needs to touch a large number of source files, the
      following script is used as the basis of the conversion.
      
        http://userweb.kernel.org/~tj/misc/slabh-sweep.py
      
      The script does the following.
      
      * Scan files for gfp and slab usages and update includes such that
        only the necessary includes are there.  ie. if only gfp is used,
        gfp.h, if slab is used, slab.h.
      
      * When the script inserts a new include, it looks at the include
        blocks and tries to put the new include such that its order
        conforms to its surroundings.  It is put in the include block
        which contains core kernel includes, in the same order that the
        rest are ordered - alphabetical, Christmas tree, rev-Xmas-tree or
        at the end if there doesn't seem to be any matching order.
      
      * If the script can't find a place to put a new include (mostly
        because the file doesn't have a fitting include block), it prints
        out an error message indicating which .h file needs to be added to
        the file.
      
      The conversion was done in the following steps.
      
      1. The initial automatic conversion of all .c files updated slightly
         over 4000 files, deleting around 700 includes and adding ~480 gfp.h
         and ~3000 slab.h inclusions.  The script emitted errors for ~400
         files.
      
      2. Each error was manually checked.  Some didn't need the inclusion,
         some needed manual addition while adding it to implementation .h or
         embedding .c file was more appropriate for others.  This step added
         inclusions to around 150 files.
      
      3. The script was run again and the output was compared to the edits
         from #2 to make sure no file was left behind.
      
      4. Several build tests were done and a couple of problems were fixed.
         e.g. lib/decompress_*.c used malloc/free() wrappers around slab
         APIs requiring slab.h to be added manually.
      
      5. The script was run on all .h files but without automatically
         editing them as sprinkling gfp.h and slab.h inclusions around .h
         files could easily lead to inclusion dependency hell.  Most gfp.h
         inclusion directives were ignored as stuff from gfp.h was usually
         widely available and often used in preprocessor macros.  Each
         slab.h inclusion directive was examined and added manually as
         necessary.
      
      6. percpu.h was updated not to include slab.h.
      
      7. Build test were done on the following configurations and failures
         were fixed.  CONFIG_GCOV_KERNEL was turned off for all tests (as my
         distributed build env didn't work with gcov compiles) and a few
         more options had to be turned off depending on archs to make things
         build (like ipr on powerpc/64 which failed due to missing writeq).
      
         * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
         * powerpc and powerpc64 SMP allmodconfig
         * sparc and sparc64 SMP allmodconfig
         * ia64 SMP allmodconfig
         * s390 SMP allmodconfig
         * alpha SMP allmodconfig
         * um on x86_64 SMP allmodconfig
      
      8. percpu.h modifications were reverted so that it could be applied as
         a separate patch and serve as bisection point.
      
      Given the fact that I had only a couple of failures from the build
      tests in step 7, I'm fairly confident about the coverage of this
      conversion patch.  If there is a breakage, it's likely to be
      something in one of the arch headers, which should be easily
      discoverable on most builds of the specific arch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
      5a0e3ad6
  12. 01 Mar 2010 — 11 commits