- 03 Jan 2009, 1 commit
-
-
Committed by Ingo Brueckl
Signed-off-by: Ingo Brueckl <ib@wupperonline.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 01 Jan 2009, 2 commits
-
-
Committed by Al Viro
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Committed by Nick Piggin
struct dentry is one of the most critical structures in the kernel, so it's sad to see it going neglected. With CONFIG_PROFILING turned on (which is probably the common case, at least for distros and kernel developers), sizeof(struct dentry) == 208 here (64-bit). This gives 19 objects per slab.

I packed d_mounted into a hole, and took another 4 bytes off the inline name length to take the padding out from the end of the structure. This shrinks it to 200 bytes. I could have gone the other way and increased the length to 40, but I'm aiming for a magic number; read on...

I then got rid of the d_cookie pointer. This shrinks it to 192 bytes. Rant: why was this ever a good idea? The cookie system should increase its hash size or use a tree or something if lookups are a problem. Also, the "fast dcookie lookups" in oprofile should be moved into the dcookie code -- how can oprofile possibly care about the dcookie_mutex? It gets dropped after get_dcookie() returns, so it can't be providing any sort of protection.

At 192 bytes, 21 objects fit into a 4K page, saving about 3MB on my system with ~140,000 entries allocated. 192 is also a multiple of 64, so we get nice cacheline alignment on 64- and 32-byte line systems -- any given dentry will now require 3 cachelines to touch all fields, whereas previously it would require 4.

I know the inline name size was chosen quite carefully, but with the reduction in cacheline footprint, a name lookup for a 36-character name should be just about as fast as it was before the patch (and faster for other sizes). The memory footprint savings for names which are <= 32 or > 36 bytes long should more than make up for the memory cost of 33-36 byte names.

Performance is a feature...

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
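A quick back-of-envelope check of the numbers quoted above (sizes taken from the text; standalone arithmetic, not kernel code):

```c
#include <stdio.h>

int main(void)
{
	unsigned int old_size = 208, new_size = 192;	/* sizeof(struct dentry), before/after */
	unsigned int page = 4096, line = 64;

	/* objects per 4K slab page: 19 -> 21 */
	printf("objects per slab: %u -> %u\n", page / old_size, page / new_size);
	/* cachelines touched to cover the whole structure: 4 -> 3 */
	printf("64-byte cachelines per dentry: %u -> %u\n",
	       (old_size + line - 1) / line, (new_size + line - 1) / line);
	return 0;
}
```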
-
- 31 Dec 2008, 37 commits
-
-
Committed by Marcelo Tosatti
The invlpg and sync walkers lack knowledge of large host sptes and descend to a non-existent pagetable level. Stop at the directory level in that case. Fixes SMP Windows XP with hugepages.
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Avi Kivity
While most accesses to the i8259 are made with the kvm mutex taken, the call to kvm_pic_read_irq() is not. We can't easily take the kvm mutex there since the function is called with interrupts disabled. Fix by adding a spinlock to the virtual interrupt controller. Since we can't send an IPI under the spinlock (we also take the same spinlock in an irq-disabled context), we defer the IPI until the spinlock is released. Similarly, we defer irq ack notifications until after spinlock release to avoid lock recursion.
Signed-off-by: Avi Kivity <avi@redhat.com>
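A minimal sketch of the defer-until-unlock pattern described here; the names below are illustrative, not the actual kvm i8259 code:

```c
/* Sketch only: record under the spinlock that a vcpu kick is needed,
 * and do the IPI-sending work after the lock is dropped, since the
 * same lock is also taken in irq-disabled context. */
struct pic_irq_lock_demo {
	spinlock_t lock;
	bool wakeup_needed;		/* written only while holding 'lock' */
};

static void pic_set_irq_demo(struct pic_irq_lock_demo *s)
{
	bool kick;

	spin_lock(&s->lock);
	/* ... update pending-interrupt state, possibly setting wakeup_needed ... */
	kick = s->wakeup_needed;
	s->wakeup_needed = false;
	spin_unlock(&s->lock);

	if (kick)
		wake_up_target_vcpu();	/* hypothetical helper; may send an IPI */
}
```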
-
Committed by Avi Kivity
The pte.g bit is meaningless if global pages are disabled; deferring mmu page synchronization on these ptes will lead to the guest using stale shadow ptes. Fixes the Vista x86 SMP bootloader failure.
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Jes Sorensen
Fix kvm_arch_vcpu_ioctl_[gs]et_regs() to do something meaningful on ia64. The old versions could never have worked, since they required pointers to be set in the ioctl payload which were never set by the ioctl handler for get_regs. In addition, reserve extra space for future extensions.

The change of layout of struct kvm_regs doesn't require adding a new CAP, since get/set regs never worked on ia64 until now.

This version doesn't support copying the KVM kernel stack in/out of the kernel. That should be implemented in a separate ioctl call if ever needed.

Signed-off-by: Jes Sorensen <jes@sgi.com>
Acked-by: Xiantao Zhang <xiantao.zhang@intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Jan Kiszka
There is no point in doing the ready_for_nmi_injection/request_nmi_window dance with user space. First, we don't do this for the in-kernel irqchip anyway, while the code path is the same as for user space irqchip mode. Second, there is nothing to lose if a pending NMI is overwritten by another one (in contrast to IRQs, where we have to save the number). Actually, there is even the risk of raising spurious NMIs this way, because the reason for the held-back NMI might already be handled while processing the first one.

Therefore this patch creates a simplified user space NMI injection interface, exporting it under KVM_CAP_USER_NMI and dropping the old KVM_CAP_NMI capability. And this time we also take care to provide the interface only on archs supporting NMIs via KVM (right now only x86).

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
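A hedged user-space sketch of the resulting interface: query KVM_CAP_USER_NMI on the /dev/kvm fd, then inject with the KVM_NMI vcpu ioctl (which takes no argument). VM and vcpu setup are omitted.

```c
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Returns 0 on success, -1 if the capability is absent or the ioctl fails. */
static int inject_user_nmi(int kvm_fd, int vcpu_fd)
{
	if (ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_USER_NMI) <= 0)
		return -1;		/* kernel too old, or arch without NMI support */
	return ioctl(vcpu_fd, KVM_NMI);	/* queue one NMI for this vcpu */
}
```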
-
Committed by Jan Kiszka
As with the kernel irqchip, don't allow an NMI to stomp over an already injected IRQ; instead wait for the IRQ injection to be completed.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Marcelo Tosatti
walk_shadow assumes the caller verified the validity of the pdptr pointer in question, which is not the case for the invlpg handler. Fixes an oops during Solaris 10 install.
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Avi Kivity
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Eduardo Habkost
kvm_get_tsc_khz() currently returns the previously calculated preset_lpj value, but that value is in loops per jiffy, not kHz. The current code works correctly only when HZ=1000.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
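The conversion the commit implies, shown as standalone arithmetic (illustrative, not the kvm code): loops per jiffy times jiffies per second gives cycles per second, and dividing by 1000 gives kHz; only at HZ=1000 does the raw lpj value happen to equal kHz.

```c
#include <stdio.h>

int main(void)
{
	unsigned long cpu_hz = 2400UL * 1000 * 1000;	/* example: 2.4 GHz TSC */
	unsigned int hz_values[] = { 100, 250, 1000 };

	for (int i = 0; i < 3; i++) {
		unsigned int hz = hz_values[i];
		unsigned long lpj = cpu_hz / hz;	/* cycles per jiffy */
		unsigned long khz = lpj * hz / 1000;	/* correct conversion */

		printf("HZ=%4u: lpj=%8lu  correct=%lu kHz  raw lpj would claim %lu kHz\n",
		       hz, lpj, khz, lpj);
	}
	return 0;
}
```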
-
Committed by Marcelo Tosatti
If the guest executes invlpg, peek into the pagetable and attempt to prepopulate the shadow entry. Also stop dirty fault updates from interfering with the fork detector. 2% improvement on RHEL3/AIM7.
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Marcelo Tosatti
Skip syncing global pages on cr3 switch (but not on cr4/cr0 writes). This is important for 32-bit Linux guests with PAE, where the kmap page is marked as global.
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
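One way to picture the rule, as a hedged sketch (the mask matches the x86 'G' bit, but the helper is hypothetical, not the actual kvm mmu code): on a cr3 write, guest ptes with the global bit survive the TLB flush, so their shadow entries can be left alone; cr0/cr4 writes still sync everything.

```c
#define PT_GLOBAL_MASK (1ULL << 8)	/* 'G' bit in an x86 pte */

/* Hypothetical helper: should this pte's shadow be resynced on a cr3 switch? */
static inline int need_sync_on_cr3_switch(unsigned long long guest_pte)
{
	return !(guest_pte & PT_GLOBAL_MASK);
}
```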
-
Committed by Marcelo Tosatti
Collapse remote TLB flushes on root sync. kernbench is 2.7% faster on a 4-way guest. Improvements have been seen with other loads such as AIM7.
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Marcelo Tosatti
Instead of invoking the handler directly, collect pages into an array so the caller can work with them. This simplifies TLB flush collapsing.
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
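A hedged sketch of the refactor being described (names are illustrative, not the actual mmu code): the walker gathers shadow pages into an array, and the caller syncs them and issues a single remote TLB flush afterwards.

```c
#define GATHER_MAX 16

struct sync_gather {
	struct kvm_mmu_page *pages[GATHER_MAX];
	unsigned int nr;
};

/* Called from the shadow walk instead of syncing/flushing immediately. */
static void gather_sync_page(struct sync_gather *g, struct kvm_mmu_page *sp)
{
	if (g->nr < GATHER_MAX)
		g->pages[g->nr++] = sp;
}

/* Caller, in outline:
 *	walk the roots, filling 'g';
 *	sync each g->pages[i];
 *	kvm_flush_remote_tlbs(kvm);	-- one flush for the whole batch
 */
```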
-
Committed by Amit Shah
The VMMCALL instruction is not recognised and not processed by the emulator. This is seen on an Intel host that tries to execute the VMMCALL instruction after a guest live-migrates from an AMD host.
Signed-off-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Guillaume Thouvenin
Add emulation of the shld and shrd instructions.
Signed-off-by: Guillaume Thouvenin <guillaume.thouvenin@ext.bull.net>
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Guillaume Thouvenin
Add the assembler code for instructions with three operands where one operand is stored in the ECX register.
Signed-off-by: Guillaume Thouvenin <guillaume.thouvenin@ext.bull.net>
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Guillaume Thouvenin
Add the SrcOne operand type for when we need to decode an implied '1', as with the regular shift instructions.
Signed-off-by: Guillaume Thouvenin <guillaume.thouvenin@ext.bull.net>
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Guillaume Thouvenin
Instructions like shld have three operands, so we need to add a Src2 decode set. We start with Src2None, Src2CL, Src2ImmByte, and Src2One to support shld/shrd, and we will expand it later.
Signed-off-by: Guillaume Thouvenin <guillaume.thouvenin@ext.bull.net>
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Guillaume Thouvenin
Extend the opcode descriptor to 32 bits. This is needed for the introduction of the new Src2 operand type.
Signed-off-by: Guillaume Thouvenin <guillaume.thouvenin@ext.bull.net>
Signed-off-by: Avi Kivity <avi@redhat.com>
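A hedged sketch of why the descriptor needs 32 bits: the Src2 operand type gets its own field in the high bits, alongside the existing destination/source/flag bits. The exact bit positions below are illustrative; the real encoding lives in the emulator's opcode tables.

```c
/* Illustrative Src2 field layout in a 32-bit opcode descriptor. */
#define Src2Shift   29
#define Src2None    (0u << Src2Shift)
#define Src2CL      (1u << Src2Shift)	/* count comes from CL, e.g. shld r/m, r, cl */
#define Src2ImmByte (2u << Src2Shift)	/* count is an immediate byte */
#define Src2One     (3u << Src2Shift)	/* implied count of 1 */
#define Src2Mask    (7u << Src2Shift)

/* During decode: switch (flags & Src2Mask) { case Src2CL: ... } */
```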
-
Committed by Hollis Blanchard
The only significant changes were to kvmppc_exit_timing_write() and kvmppc_exit_timing_show(), both of which were dramatically simplified.
Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Hollis Blanchard
Existing KVM statistics are either just counters (kvm_stat) reported for KVM generally, or trace-based approaches like kvm_trace. For KVM on powerpc we needed to track the timings of the different exit types. While this could be achieved by parsing data created with a kvm_trace extension, that adds too much overhead (at least on embedded PowerPC), slowing down the workloads we wanted to measure.

Therefore this patch adds an in-kernel exit timing statistic to the powerpc kvm code. The statistic is available per VM and per VCPU under the kvm debugfs directory. As it still adds some overhead, albeit small, it can be enabled via a .config entry and is off by default.

Since this patch touched all powerpc kvm_stat code anyway, that code is now merged and simplified together with the exit timing statistic code (and still works with exit timing disabled in .config).

Signed-off-by: Christian Ehrhardt <ehrhardt@linux.vnet.ibm.com>
Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
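A minimal sketch of what such a per-exit-type accumulator can look like (hypothetical names and fields, not the actual powerpc code):

```c
/* One slot per exit type; updated on every guest exit when the
 * timing statistic is compiled in. */
struct exit_timing_demo {
	unsigned long long count;
	unsigned long long sum_ns;
	unsigned long long min_ns;
	unsigned long long max_ns;
};

static void account_exit_demo(struct exit_timing_demo *t, unsigned long long ns)
{
	if (t->count == 0 || ns < t->min_ns)
		t->min_ns = ns;
	if (ns > t->max_ns)
		t->max_ns = ns;
	t->sum_ns += ns;
	t->count++;
}
```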
-
Committed by Hollis Blanchard
Store shadow TLB entries in memory, but only use them on host context switch (instead of on every guest entry). This improves performance for most workloads on 440 by reducing the guest TLB miss rate.
Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Hollis Blanchard
Formerly, we maintained a per-vcpu shadow TLB and on every entry to the guest would load this array into the hardware TLB. This consumed 1280 bytes of memory (64 entries of 16 bytes plus a struct page pointer each), and also required some assembly to loop over the array on every entry.

Instead of saving a copy in memory, we can just store shadow mappings directly into the hardware TLB, accepting that the host kernel will clobber these as part of the normal 440 TLB round robin. When we do that we need less than half the memory, and we have decreased the exit handling time for all guest exits, at the cost of an increased number of TLB misses because the host overwrites some guest entries.

These savings will be increased on processors with larger TLBs or which implement intelligent flush instructions like tlbivax (which will avoid the need to walk arrays in software). In addition to that and to the code simplification, we have a greater chance of leaving other host userspace mappings in the TLB, instead of forcing all subsequent tasks to re-fault all their mappings.

Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
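Checking the memory figure quoted above (ppc440 is 32-bit, so a struct page pointer is 4 bytes); standalone arithmetic, not kernel code:

```c
#include <stdio.h>

int main(void)
{
	unsigned int entries = 64, tlbe_bytes = 16, page_ptr_bytes = 4;

	/* 64 * (16 + 4) = 1280 bytes of per-vcpu shadow TLB state removed */
	printf("shadow TLB copy: %u bytes\n",
	       entries * (tlbe_bytes + page_ptr_bytes));
	return 0;
}
```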
-
Committed by Hollis Blanchard
KVM currently ignores the host's round robin TLB eviction selection, instead maintaining its own TLB state and its own round robin index. However, by participating in the normal 44x TLB selection, we can drop the alternate TLB processing in KVM. This results in a significant performance improvement, since that processing currently must be done on *every* guest exit.

Accordingly, KVM needs to be able to access and increment tlb_44x_index. (KVM on 440 cannot be a module, so there is no need to export this symbol.)

Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
Acked-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Hollis Blanchard
KVM on 440 has always been able to handle large guest mappings with 4K host pages -- we must, since the guest kernel uses 256MB mappings. This patch makes KVM work when the host has large pages too (tested with 64K).
Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Hannes Eder
Impact: make a global symbol static

arch/x86/kvm/vmx.c:134:3: warning: symbol 'vmx_capability' was not declared. Should it be static?

Signed-off-by: Hannes Eder <hannes@hanneseder.net>
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Avi Kivity
Noticed by Guillaume Thouvenin.
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Avi Kivity
Set the operand type and size to get correct writeback behavior.
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Avi Kivity
'ret' did not set the operand type or size for the destination, so writeback ignored it.
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Avi Kivity
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Avi Kivity
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Avi Kivity
Switch the 'pop r/m' instruction to use the new function.
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Christian Borntraeger
The s390 backend of kvm never calls kvm_vcpu_uninit. This causes a memory leak of vcpu->run pages. Let's call kvm_vcpu_uninit in kvm_arch_vcpu_destroy to free vcpu->run.
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Carsten Otte <cotte@de.ibm.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
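In outline, the fix is to call the generic uninit helper from the arch destroy path so the vcpu->run page is released; a simplified sketch, not the exact s390 code:

```c
/* Sketch: kvm_vcpu_uninit() frees generic per-vcpu state, including the
 * vcpu->run page that was previously being leaked on s390. */
void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
{
	kvm_vcpu_uninit(vcpu);
	/* ... followed by the s390-specific frees for the vcpu itself ... */
}
```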
-
Committed by Christian Borntraeger
Currently it is impossible to unload the kvm module on s390. This patch fixes kvm_arch_destroy_vm to release all cpus, which makes it possible to unload the module. In addition, we stop messing with the module refcount in arch code.
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Carsten Otte <cotte@de.ibm.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Avi Kivity
No need to repeat the same assembly block over and over.
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Avi Kivity
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Marcelo Tosatti
The write-protect verification in set_spte is unnecessary for page sync. It's guaranteed that, if the unsync spte was writable, the target page does not have a write-protected shadow (if it had, the spte would have been write-protected under mmu_lock by rmap_write_protect before).

The same reasoning applies to mark_page_dirty: the gfn has already been marked as dirty via the pagefault path.

The cost of the hash table and memslot lookups is quite significant if the workload is pagetable-write intensive, resulting in increased mmu_lock contention.

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
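A hedged sketch of the shortcut being argued for (hypothetical helper names, not the actual set_spte()): when called from the page-sync path, the write-protect check and the mark_page_dirty() call, and their hash-table/memslot lookups, can be skipped, because that bookkeeping already happened when the spte first became writable.

```c
/* Illustrative only. 'sync_page' is true when called from the unsync
 * page resynchronisation path under mmu_lock. */
static void demo_set_spte(unsigned long long *sptep, unsigned long long spte,
			  int writable, int sync_page)
{
	if (writable && !sync_page) {
		check_shadowed_and_write_protect();	/* hypothetical: hash table lookup */
		mark_gfn_dirty();			/* hypothetical: memslot lookup */
	}
	*sptep = spte;
}
```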
-