- 01 February 2008, 3 commits
-
-
By Huang, Ying
In the early i386 boot stage, boot_cpu_data may not be available yet, which sends clflush_cache_range(), called by change_page_attr(), into an infinite loop. This patch fixes that by setting boot_cpu_data.x86_clflush_size in early_cpu_detect().
Signed-off-by: Huang Ying <ying.huang@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
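A minimal sketch of the failure mode, for illustration only (the loop shape follows clflush_cache_range() loosely; it is not the kernel's exact code):

#include <asm/processor.h>	/* boot_cpu_data */

/* Steps through the range in units of the CPU's clflush line size. */
static void clflush_range_sketch(void *vaddr, unsigned int size)
{
	void *vend = vaddr + size - 1;
	void *p;

	/* If early boot left x86_clflush_size at 0, p += 0 never advances. */
	for (p = vaddr; p < vend; p += boot_cpu_data.x86_clflush_size)
		asm volatile("clflush (%0)" : : "r" (p) : "memory");
}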
-
By Huang, Ying
This patch replaces __change_page_attr_set_clr() with change_page_attr_set_clr() in change_page_attr_clear() to flush the TLB/cache properly.
Signed-off-by: Huang Ying <ying.huang@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
By Harvey Harrison
arch/x86/kernel/cpu/intel_cacheinfo.c:355:7: warning: symbol 'i' shadows an earlier one
arch/x86/kernel/cpu/intel_cacheinfo.c:296:39: originally declared here
arch/x86/kernel/cpu/intel_cacheinfo.c:367:18: warning: incorrect type in argument 2 (different signedness)
arch/x86/kernel/cpu/intel_cacheinfo.c:367:18:    expected unsigned int *eax
arch/x86/kernel/cpu/intel_cacheinfo.c:367:18:    got int *
arch/x86/kernel/cpu/intel_cacheinfo.c:367:28: warning: incorrect type in argument 3 (different signedness)
arch/x86/kernel/cpu/intel_cacheinfo.c:367:28:    expected unsigned int *ebx
arch/x86/kernel/cpu/intel_cacheinfo.c:367:28:    got int *
arch/x86/kernel/cpu/intel_cacheinfo.c:367:38: warning: incorrect type in argument 4 (different signedness)
arch/x86/kernel/cpu/intel_cacheinfo.c:367:38:    expected unsigned int *ecx
arch/x86/kernel/cpu/intel_cacheinfo.c:367:38:    got int *
arch/x86/kernel/cpu/intel_cacheinfo.c:367:48: warning: incorrect type in argument 5 (different signedness)
arch/x86/kernel/cpu/intel_cacheinfo.c:367:48:    expected unsigned int *edx
arch/x86/kernel/cpu/intel_cacheinfo.c:367:48:    got int *
Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 31 January 2008, 35 commits
-
-
By Michael Ellerman
This patch adds support for setting up a fixed IOMMU mapping on certain cell machines. For 64-bit devices this avoids the performance overhead of mapping and unmapping pages at runtime; 32-bit devices are unable to use the fixed mapping. The fixed mapping is established at boot and maps all of physical memory 1:1 into device space at some offset. On machines with less than 30GB of memory we set up the fixed mapping immediately above the normal IOMMU window. For example, a machine with 4GB of memory would end up with the normal IOMMU window from 0-2GB and the fixed mapping window from 2GB to 6GB. In this case a 64-bit device wishing to DMA to 1GB would be told to DMA to 3GB, plus any offset required by firmware. The firmware offset is encoded in the "dma-ranges" property. On machines with 30GB or more of memory, we are unable to place the fixed mapping above the normal IOMMU window, as we would run out of address space. Instead we move the normal IOMMU window to coincide with the hash page table; this region does not need to be part of the fixed mapping, as no device should ever DMA to it. We then set up the fixed mapping from 0 to 32GB.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
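The address arithmetic described above, as a hedged sketch (the function and constant names are assumptions, not the actual cell IOMMU code):

/* Dynamic window 0-2GB, fixed window starting at 2GB, as in the 4GB example. */
#define FIXED_WINDOW_BASE	(2UL << 30)

/* Physical 1GB maps to device address 3GB, plus any firmware offset. */
static unsigned long phys_to_fixed_dma(unsigned long paddr,
				       unsigned long fw_offset)
{
	/* fw_offset is whatever firmware encodes in the "dma-ranges" property */
	return FIXED_WINDOW_BASE + paddr + fw_offset;
}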
-
By Michael Ellerman
Split out the ioid fetching and checking logic so we can use it elsewhere in a subsequent patch.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
By Michael Ellerman
Add support to cell_iommu_setup_page_tables() for handling two windows, the dynamic window and the fixed window. A fixed window size of 0 indicates that there is no fixed window at all. Currently there are no callers who pass a non-zero fixed window, but the upcoming fixed IOMMU mapping patch will change that.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
By Michael Ellerman
Split the IOMMU logic out from cell_dma_dev_setup() into a separate function. If we're not using dma_direct_ops or dma_iommu_ops we don't know what the hell's going on, so BUG.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
By Michael Ellerman
Split cell_iommu_setup_hardware() into two parts: the page table setup goes into cell_iommu_setup_page_tables(), and the bits that kick the hardware go into cell_iommu_enable_hardware().
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
By Michael Ellerman
Split out the logic that allocates a struct iommu into a separate function. This can fail, but the calling code has never cared, so just return if we can't allocate an iommu.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
By Michael Ellerman
In order to support the fixed IOMMU mapping (in a subsequent patch), we need the hash table to be inside the IOMMU's DMA window. This is usually 2GB, but let's make sure the hash table is under 1GB, as that will satisfy the IOMMU requirements and also means the hash table will be on node 0.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
By Ingo Molnar
Fix this modular build bug:
> CC [M]  arch/x86/kernel/test_nx.o
> {standard input}: Assembler messages:
> {standard input}:58: Error: cannot represent relocation type BFD_RELOC_64
> {standard input}:59: Error: cannot represent relocation type BFD_RELOC_64
> make[2]: *** [arch/x86/kernel/test_nx.o] Error 1
> make[1]: *** [arch/x86/kernel] Error 2
Reported-by: Adrian Bunk <bunk@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
By travis@sgi.com
Sparc64 has a way of providing the base address for the per cpu area of the currently executing processor in a global register. Sparc64 also provides a way to calculate the address of a per cpu area from a base address instead of performing an array lookup.
Cc: David Miller <davem@davemloft.net>
Signed-off-by: Mike Travis <travis@sgi.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
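For reference, a sketch of the two facilities described, modeled on the era's sparc64 headers (simplified; treat the details as approximate):

/* The current cpu's per cpu base lives in a global register. */
register unsigned long __local_per_cpu_offset asm("g5");

/* Any cpu's per cpu area is computed, not looked up in an array. */
extern unsigned long __per_cpu_base;
extern unsigned long __per_cpu_shift;
#define __per_cpu_offset(cpu) \
	(__per_cpu_base + ((unsigned long)(cpu) << __per_cpu_shift))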
-
By travis@sgi.com
Change:
    config ARCH_SETS_UP_PER_CPU_AREA
to:
    config HAVE_SETUP_PER_CPU_AREA
Cc: Andi Kleen <ak@suse.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: David Miller <davem@davemloft.net>
Cc: Sam Ravnborg <sam@ravnborg.org>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
Cc: linuxppc-dev@ozlabs.org
Cc: linux-ia64@vger.kernel.org
Signed-off-by: Mike Travis <travis@sgi.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
By travis@sgi.com
percpu_modcopy() is defined multiple times in arch files. However, the only user is module.c. Put a static definition into module.c and remove the definitions from the arch files.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
By Andrew Morton
Cc: Neil Brown <neilb@cse.unsw.edu.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
By James Bottomley
With the sg table code, every SCSI driver is now either chain capable or broken (or has sg_tablesize set so chaining is never activated), so there's no need to have a check in the host template. Also tidy up the code by moving the scatterlist size defines into the SCSI includes and permit the last entry of the scatterlist pools not to be a power of two.
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
-
By Avi Kivity
Migrating the apic timer in the critical section is not very nice, and is absolutely horrible with the real-time port. Move migration to the regular vcpu execution path, triggered by a new bitflag.
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Avi Kivity <avi@qumranet.com>
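A sketch of the request-bit pattern this describes; KVM_REQ_MIGRATE_TIMER and vcpu->requests follow the era's KVM code, but treat the details as assumptions rather than the exact diff:

/* The critical section only notes the work:
 *	set_bit(KVM_REQ_MIGRATE_TIMER, &vcpu->requests);
 * The regular vcpu execution path then does it where sleeping is fine: */
static void handle_requests_sketch(struct kvm_vcpu *vcpu)
{
	if (test_and_clear_bit(KVM_REQ_MIGRATE_TIMER, &vcpu->requests))
		__kvm_migrate_apic_timer(vcpu);
}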
-
By Avi Kivity
When preparing to enter the guest, if an interrupt comes in while preemption is disabled but interrupts are still enabled, we miss a preemption point. Fix by explicitly checking whether we need to reschedule.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Avi Kivity <avi@qumranet.com>
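A hedged sketch of the entry-path check (the shape is assumed, not the exact diff):

static void vcpu_entry_sketch(void)
{
again:
	preempt_disable();

	/*
	 * An interrupt arriving here, with preemption off but interrupts
	 * still enabled, used to be a missed preemption point.
	 */
	if (need_resched()) {
		preempt_enable();
		cond_resched();		/* honor the pending reschedule */
		goto again;
	}

	/* ... enter the guest, then ... */
	preempt_enable();
}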
-
By Avi Kivity
Otherwise we re-initialize the mmu caches, which fails since the caches are already registered, and that in turn causes us to deinitialize said caches.
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Izik Eidus
Right now rmap_remove won't set the page as dirty if the shadow pte pointed to this page had write access and then became readonly. This patch fixes that by setting the page as dirty for spte changes from write to readonly access.
Signed-off-by: Izik Eidus <izike@qumranet.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
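The rule the patch adds, sketched (is_writeble_pte() matches the era's misspelled KVM helper; the dirty-marking call here is illustrative, not the patch's exact mechanism):

static void spte_went_readonly_sketch(u64 old_spte, struct page *page)
{
	/*
	 * The guest may have written through the old writable mapping, so
	 * the write->readonly transition must record the page as dirty
	 * rather than losing that fact by rmap_remove() time.
	 */
	if (is_writeble_pte(old_spte))
		set_page_dirty_lock(page);
}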
-
By Sheng Yang
While executing a test program called "crashme", we found that the KVM guest could not survive more than ten seconds before encountering a kernel panic. The basic concept of "crashme" is to generate random assembly code and try to execute it. After some fixes to the emulator's instruction-validity judgment, we found it hard to get the current emulator to handle invalid instructions correctly, because the #UD trap used for hypercall patching caused trouble. The problem is that if the opcode itself was OK, but the combination of opcode and modrm_reg was invalid, and one operand of the opcode was memory (SrcMem or DstMem), the emulator would fetch the memory operand first rather than checking validity, and could encounter an error there. For example, ".byte 0xfe, 0x34, 0xcd" has this problem. In the patch, we simply check that if the invalid opcode wasn't vmcall/vmmcall, then we return from emulate_instruction() and inject a #UD into the guest. With the patch, the guest ran for more than 12 hours.
Signed-off-by: Feng (Eric) Liu <eric.e.liu@intel.com>
Signed-off-by: Sheng Yang <sheng.yang@intel.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
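The added check, as a sketch (the field and flag names are assumptions about the emulator's decode state):

/* Runs after decode, before any memory operand is fetched. */
static int validate_decode_sketch(unsigned int d_flags, bool is_vm_call_insn)
{
	/* Invalid opcode/modrm combination, and not the hypercall pattern. */
	if (d_flags == 0 && !is_vm_call_insn)
		return -1;	/* caller bails out and injects #UD */
	return 0;		/* safe to fetch operands and emulate */
}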
-
By Dong, Eddie
Remove the redundant level check when fetching the shadow pte for present & non-present sptes.
Signed-off-by: Yaozu (Eddie) Dong <eddie.dong@intel.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Avi Kivity
If some other cpu steals mmu pages between our check and an attempt to allocate, we can run out of mmu pages. Fix by moving the check into the same critical section as the allocation.
Signed-off-by: Avi Kivity <avi@qumranet.com>
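The race and the fix, sketched (the lock name follows the series below; the helpers are hypothetical):

extern int free_mmu_pages_sketch(struct kvm *kvm);		/* hypothetical */
extern struct kvm_mmu_page *take_free_mmu_page_sketch(struct kvm *kvm);

static struct kvm_mmu_page *alloc_mmu_page_sketch(struct kvm *kvm)
{
	struct kvm_mmu_page *sp = NULL;

	/* Check and allocate inside one critical section; checking first
	 * and locking later lets another cpu drain the free list. */
	spin_lock(&kvm->mmu_lock);
	if (free_mmu_pages_sketch(kvm) > 0)
		sp = take_free_mmu_page_sketch(kvm);
	spin_unlock(&kvm->mmu_lock);

	return sp;
}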
-
By Marcelo Tosatti
Convert the synchronization of the shadow handling to a separate mmu_lock spinlock. Also guard fetch() by mmap_sem in read-mode to protect against alias and memslot changes.
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Avi Kivity
Since gfn_to_page() is a sleeping function, and we want to make the core mmu spinlocked, we need to pass the page from the walker context (which can sleep) to the shadow context (which cannot).
[marcelo: avoid recursive locking of mmap_sem]
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Marcelo Tosatti
In preparation for a mmu spinlock, add kvm_read_guest_atomic() and use it in fetch() and prefetch_page().
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Marcelo Tosatti
Do not hold the kvm->lock mutex across the entire pagefault code, only acquire it in places where it is necessary, such as mmu hash list, active list, rmap and parent pte handling. Allow concurrent guest walkers by switching walk_addr() to use mmap_sem in read-mode. And get rid of the lockless __gfn_to_page.
[avi: move kvm_mmu_pte_write() locking inside the function]
[avi: add locking for real mode]
[avi: fix cmpxchg locking]
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
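Putting this series together, the page fault path's locking shape looks roughly like the sketch below (signatures heavily simplified; walk_addr_sketch() and fetch_sketch() stand in for the real walker functions):

static int page_fault_sketch(struct kvm_vcpu *vcpu, gva_t addr)
{
	struct guest_walker walker;
	struct page *page;
	u64 *shadow_pte;

	down_read(&current->mm->mmap_sem);	/* guest walkers run concurrently */
	walk_addr_sketch(&walker, vcpu, addr);
	page = gfn_to_page(vcpu->kvm, walker.gfn); /* may sleep: before the lock */

	spin_lock(&vcpu->kvm->mmu_lock);	/* shadow state only under mmu_lock */
	shadow_pte = fetch_sketch(vcpu, addr, &walker, page);
	spin_unlock(&vcpu->kvm->mmu_lock);
	up_read(&current->mm->mmap_sem);

	return shadow_pte != NULL;
}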
-
By Avi Kivity
FlexPriority accelerates the tpr without any patching.
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Avi Kivity
This adds a mechanism for exposing the virtual apic tpr to the guest, and a protocol for letting the guest update the tpr without causing a vmexit if conditions allow (e.g. there is no interrupt pending with a higher priority than the new tpr).
Signed-off-by: Avi Kivity <avi@qumranet.com>
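A guest-side sketch of such a protocol; the structure, field names, and exit trigger are all assumptions for illustration, not the actual ABI:

#include <linux/types.h>

struct vapic_sketch {
	u8 tpr;			/* written by the guest                */
	u8 max_pending_prio;	/* kept up to date by the hypervisor   */
};

static void force_exit_sketch(void) { }	/* stub for any trapping access */

static void set_tpr_sketch(volatile struct vapic_sketch *v, u8 new_tpr)
{
	v->tpr = new_tpr;	/* common case: no vmexit at all */

	/* Trap to the host only if a higher-priority interrupt is pending. */
	if (v->max_pending_prio > new_tpr)
		force_exit_sketch();
}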
-
By Avi Kivity
Add a facility to report on accesses to the local apic tpr even if the local apic is emulated in the kernel. This is basically a hack that allows userspace to patch Windows, which tends to bang on the tpr a lot.
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Avi Kivity
This can help diagnose what the guest is trying to do. In many cases we can get away with partial emulation of msrs.
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Avi Kivity
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Eddie Dong
Host-side TLB flushes can be merged together when multiple sptes need to be write-protected.
Signed-off-by: Yaozu (Eddie) Dong <eddie.dong@intel.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
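The batching idea, sketched (kvm_flush_remote_tlbs() and PT_WRITABLE_MASK are the era's real names; the surrounding function is illustrative):

static void write_protect_batch_sketch(struct kvm *kvm, u64 **sptep, int n)
{
	bool need_flush = false;
	int i;

	for (i = 0; i < n; i++) {
		if (*sptep[i] & PT_WRITABLE_MASK) {
			*sptep[i] &= ~PT_WRITABLE_MASK;
			need_flush = true;	/* note it, don't flush yet */
		}
	}

	if (need_flush)
		kvm_flush_remote_tlbs(kvm);	/* one flush for the whole batch */
}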
-
By Zhang Xiantao
Move kvm_vcpu_kick() to x86.c. Since it should be common to all archs, put its declaration in <linux/kvm_host.h>.
Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Zhang Xiantao
Move ioapic code to common, since IA64 also needs it.
Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Zhang Xiantao
This allows reuse of ioapic in ia64.
Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Avi Kivity
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Avi Kivity
This paves the way for multiple architecture support. Note that while ioapic.c could potentially be shared with ia64, it is also moved.
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
- 30 January 2008, 2 commits
-
-
By Avi Kivity
Currently, make headers_check barfs because <asm/kvm.h>, which <linux/kvm.h> includes, does not exist. Rather than add a zillion <asm/kvm.h>s, export kvm.h only if the arch actually supports it.
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Yinghai Lu
memnode.map is now an s16 array, because node ids are 16 bits, so nodemap_size needs to be increased to match that entry width.
Signed-off-by: Yinghai Lu <yinghai.lu@sun.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
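The sizing consequence, as quick arithmetic (the variable names here are illustrative):

static unsigned long nodemap_size_sketch(unsigned long addr_span,
					 int memnode_shift)
{
	unsigned long entries = addr_span >> memnode_shift;

	return entries * sizeof(s16);	/* two bytes per entry now, not one */
}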
-