- 24 September 2019, 15 commits
-
-
Committed by Sean Christopherson
Immediately inject a #GP when VMware emulation fails and return EMULATE_DONE instead of propagating EMULATE_FAIL up the stack. This helps pave the way for removing EMULATE_FAIL altogether. Rename EMULTYPE_VMWARE to EMULTYPE_VMWARE_GP to document that the x86 emulator is called to handle VMware #GP interception, e.g. why a #GP is injected on emulation failure for EMULTYPE_VMWARE_GP. Drop EMULTYPE_NO_UD_ON_FAIL as a standalone type. The "no #UD on fail" is used only in the VMware case and is obsoleted by having the emulator itself reinject #GP. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Reviewed-by: Liran Alon <liran.alon@oracle.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
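A minimal user-space model of the failure path described above; the names (finish_emulation, inject_gp, the EMULTYPE_VMWARE_GP bit value) are illustrative stand-ins, not KVM's actual internals:

    /* Illustrative sketch only; not the real KVM code paths. */
    #include <stdio.h>

    enum emul_result { EMULATE_DONE, EMULATE_FAIL };
    #define EMULTYPE_VMWARE_GP (1u << 5)    /* placeholder bit value */

    static void inject_gp(void)
    {
        printf("#GP(0) injected into the guest\n");
    }

    static enum emul_result finish_emulation(unsigned int emulation_type, int emulation_ok)
    {
        if (emulation_ok)
            return EMULATE_DONE;
        if (emulation_type & EMULTYPE_VMWARE_GP) {
            /* VMware #GP interception: inject #GP and report success upward */
            inject_gp();
            return EMULATE_DONE;
        }
        return EMULATE_FAIL;    /* other callers still see the failure */
    }

    int main(void)
    {
        return finish_emulation(EMULTYPE_VMWARE_GP, 0) == EMULATE_DONE ? 0 : 1;
    }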
-
Committed by Sean Christopherson
The VMware backdoor hooks #GP faults on IN{S}, OUT{S}, and RDPMC, none of which generate a non-zero error code for their #GP. Re-injecting #GP instead of attempting emulation on a non-zero error code will allow a future patch to move #GP injection (for emulation failure) into kvm_emulate_instruction() without having to plumb in the error code. Reviewed-and-tested-by: Vitaly Kuznetsov <vkuznets@redhat.com> Reviewed-by: Liran Alon <liran.alon@oracle.com> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Sean Christopherson
Return the single-step emulation result directly instead of via an out param. Presumably at some point in the past kvm_vcpu_do_singlestep() could be called with *r == EMULATE_USER_EXIT, but that is no longer the case, i.e. all callers are happy to overwrite their own return variable. Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Reviewed-by: Liran Alon <liran.alon@oracle.com> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Sean Christopherson
When handling emulation failure, return the emulation result directly instead of capturing it in a local variable. Future patches will move additional cases into handle_emulation_failure(); clean up the cruft beforehand so there isn't an ugly mix of setting a local variable and returning directly. Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Reviewed-by: Liran Alon <liran.alon@oracle.com> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Sean Christopherson
Move the stat.mmio_exits update into x86_emulate_instruction(). This is both a bug fix, e.g. the current update flows will incorrectly increment mmio_exits on emulation failure, and a preparatory change to set the stage for eliminating EMULATE_DONE and company. Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Krish Sadhukhan
According to section "Checks Related to Address-Space Size" in Intel SDM vol 3C, the following checks are performed on vmentry of nested guests: If the logical processor is outside IA-32e mode (if IA32_EFER.LMA = 0) at the time of VM entry, the following must hold: - The "IA-32e mode guest" VM-entry control is 0. - The "host address-space size" VM-exit control is 0. If the logical processor is in IA-32e mode (if IA32_EFER.LMA = 1) at the time of VM entry, the "host address-space size" VM-exit control must be 1. If the "host address-space size" VM-exit control is 0, the following must hold: - The "IA-32e mode guest" VM-entry control is 0. - Bit 17 of the CR4 field (corresponding to CR4.PCIDE) is 0. - Bits 63:32 in the RIP field are 0. If the "host address-space size" VM-exit control is 1, the following must hold: - Bit 5 of the CR4 field (corresponding to CR4.PAE) is 1. - The RIP field contains a canonical address. On processors that do not support Intel 64 architecture, checks are performed to ensure that the "IA-32e mode guest" VM-entry control and the "host address-space size" VM-exit control are both 0. Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com> Reviewed-by: Karl Heubaum <karl.heubaum@oracle.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
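The checks above can be summarized as a single predicate. The sketch below is a stand-alone C model of that logic, assuming illustrative field names rather than the real vmcs12 layout (the canonical-address test on RIP is noted but not implemented):

    #include <stdbool.h>
    #include <stdint.h>

    struct host_state {
        bool efer_lma;          /* IA32_EFER.LMA at the time of VM entry */
        bool ia32e_mode_guest;  /* "IA-32e mode guest" VM-entry control */
        bool host_addr_space;   /* "host address-space size" VM-exit control */
        uint64_t host_cr4;
        uint64_t host_rip;
    };

    static bool nested_host_state_valid(const struct host_state *s)
    {
        if (!s->efer_lma && (s->ia32e_mode_guest || s->host_addr_space))
            return false;
        if (s->efer_lma && !s->host_addr_space)
            return false;
        if (!s->host_addr_space) {
            if (s->ia32e_mode_guest ||              /* must be 0 */
                (s->host_cr4 & (1ULL << 17)) ||     /* CR4.PCIDE must be 0 */
                (s->host_rip >> 32))                /* RIP[63:32] must be 0 */
                return false;
        } else {
            if (!(s->host_cr4 & (1ULL << 5)))       /* CR4.PAE must be 1 */
                return false;
            /* RIP must additionally be canonical; omitted for brevity. */
        }
        return true;
    }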
-
Committed by Vitaly Kuznetsov
Hyper-V 2019 doesn't expose the MD_CLEAR CPUID bit to guests when it cannot guarantee that two virtual processors won't end up running on sibling SMT threads without knowing about it. This is done as an optimization: in this case there is nothing the guest can do to protect itself against MDS, and issuing additional flush requests is just pointless. On bare metal the topology is known; however, when Hyper-V is running nested (e.g. on top of KVM) it needs an additional piece of information: a confirmation that the exposed topology (wrt vCPU placement on different SMT threads) is trustworthy. NoNonArchitecturalCoreSharing (CPUID 0x40000004 EAX bit 18) is described in the TLFS as follows: "Indicates that a virtual processor will never share a physical core with another virtual processor, except for virtual processors that are reported as sibling SMT threads." From KVM we can give such a guarantee in two cases: - SMT is unsupported or forcefully disabled (just 'disabled' doesn't work as it can become re-enabled during the lifetime of the guest). - vCPUs are properly pinned so the scheduler won't put them on sibling SMT threads (when they're not reported as such). This patch reports the NoNonArchitecturalCoreSharing bit to userspace in the first case. The second case is outside of KVM's domain of responsibility (as vCPU pinning is actually done by whoever manages KVM's userspace, e.g. libvirt pinning QEMU threads). Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
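A rough sketch of the condition under which the bit can be reported, with made-up helper and constant names standing in for the real KVM/Hyper-V definitions:

    #include <stdbool.h>

    #define HV_NO_NONARCH_CORESHARING (1u << 18)  /* CPUID 0x40000004 EAX bit 18 */

    /* Would come from the host topology code; false only when SMT is
     * unsupported or forcefully disabled. */
    static bool smt_possible;

    static unsigned int hv_cpuid_0x40000004_eax(void)
    {
        unsigned int eax = 0;

        /* Plain "SMT currently disabled" is not enough, because SMT could be
         * re-enabled during the lifetime of the guest. */
        if (!smt_possible)
            eax |= HV_NO_NONARCH_CORESHARING;
        return eax;
    }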
-
Committed by Wanpeng Li
Reported by syzkaller: kasan: GPF could be caused by NULL-ptr deref or user memory access general protection fault: 0000 [#1] PREEMPT SMP KASAN RIP: 0010:__apic_accept_irq+0x46/0x740 arch/x86/kvm/lapic.c:1029 Call Trace: kvm_apic_set_irq+0xb4/0x140 arch/x86/kvm/lapic.c:558 stimer_notify_direct arch/x86/kvm/hyperv.c:648 [inline] stimer_expiration arch/x86/kvm/hyperv.c:659 [inline] kvm_hv_process_stimers+0x594/0x1650 arch/x86/kvm/hyperv.c:686 vcpu_enter_guest+0x2b2a/0x54b0 arch/x86/kvm/x86.c:7896 vcpu_run+0x393/0xd40 arch/x86/kvm/x86.c:8152 kvm_arch_vcpu_ioctl_run+0x636/0x900 arch/x86/kvm/x86.c:8360 kvm_vcpu_ioctl+0x6cf/0xaf0 arch/x86/kvm/../../../virt/kvm/kvm_main.c:2765 The testcase programs HV_X64_MSR_STIMERn_CONFIG/HV_X64_MSR_STIMERn_COUNT; in addition, there is no lapic in the kernel, and the counter values are small enough that kvm_hv_process_stimers() tries to inject the already-expired timer interrupt into the guest through the in-kernel lapic path, which triggers the NULL dereference. This patch fixes it by not advertising direct mode synthetic timers and discarding the injection when the lapic is not in the kernel. syzkaller source: https://syzkaller.appspot.com/x/repro.c?x=1752fe0a600000 Reported-by: syzbot+dff25ee91f0c7d5c1695@syzkaller.appspotmail.com Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Radim Krčmář <rkrcmar@redhat.com> Signed-off-by: Wanpeng Li <wanpengli@tencent.com> Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
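A sketch of the guard this fix implies; the types and the helper bodies below are illustrative stand-ins, not the actual KVM hyperv.c code:

    #include <errno.h>
    #include <stdbool.h>

    struct vcpu { bool has_in_kernel_lapic; };

    static bool lapic_in_kernel(struct vcpu *vcpu)
    {
        return vcpu->has_in_kernel_lapic;
    }

    static int stimer_notify_direct_sketch(struct vcpu *vcpu)
    {
        /* Without an in-kernel lapic there is nowhere to deliver the timer
         * interrupt, so discard it instead of dereferencing a NULL apic. */
        if (!lapic_in_kernel(vcpu))
            return -ENOENT;
        /* ... build the interrupt and deliver it via the lapic ... */
        return 0;
    }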
-
Committed by Sean Christopherson
Zapping collapsible sptes, a.k.a. 4k sptes that can be promoted into a large page, is only necessary when changing only the dirty logging flag of a memory region. If the memslot is also being moved, then all sptes for the memslot are zapped when it is invalidated. When a memslot is being created, it is impossible for there to be existing dirty mappings, e.g. KVM can have MMIO sptes, but not present (and thus dirty) sptes. Note, the comment and logic are shamelessly borrowed from MIPS's version of kvm_arch_commit_memory_region(). Fixes: 3ea3b7fa ("kvm: mmu: lazy collapse small sptes into large sptes") Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
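The intended condition, modelled as a small helper; the enum and flag names are illustrative rather than KVM's exact definitions:

    enum memslot_change { MR_CREATE, MR_DELETE, MR_MOVE, MR_FLAGS_ONLY };

    /* Zap collapsible sptes only when an existing slot merely changed its
     * dirty-logging flag: CREATE has no existing mappings, and MOVE/DELETE
     * already zap everything when the old slot is invalidated. */
    static int should_zap_collapsible_sptes(enum memslot_change change,
                                            unsigned int old_flags,
                                            unsigned int new_flags,
                                            unsigned int log_dirty_flag)
    {
        return change == MR_FLAGS_ONLY &&
               ((old_flags ^ new_flags) & log_dirty_flag);
    }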
-
Committed by Vitaly Kuznetsov
It was discovered that after commit 65efa61d ("selftests: kvm: provide common function to enable eVMCS") the hyperv_cpuid selftest is failing on AMD. The reason is that the commit changed _vcpu_ioctl() to vcpu_ioctl() in the test, and the latter can't fail. Instead of fixing the test, it seems to make more sense to not announce KVM_CAP_HYPERV_ENLIGHTENED_VMCS support if it is definitely missing (on svm and in case kvm_intel.nested=0). Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Reviewed-by: Jim Mattson <jmattson@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Vitaly Kuznetsov
Since commit 5158917c ("KVM: x86: nVMX: Allow nested_enable_evmcs to be NULL") the code in x86.c is prepared to see nested_enable_evmcs being NULL, and in the VMX case it actually is when nesting is disabled. Remove the unneeded stub from the SVM code. Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Reviewed-by: Jim Mattson <jmattson@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Vitaly Kuznetsov
Hyper-V provides a direct tlb flush function which helps the L1 hypervisor handle Hyper-V tlb flush requests from L2 guests. Add support for this function to VMX. Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Tianyu Lan <Tianyu.Lan@microsoft.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Tianyu Lan
The Hyper-V direct tlb flush function should only be enabled for guests that use Hyper-V hypercalls exclusively. A user space hypervisor (e.g. QEMU) can disable the KVM identification in CPUID and expose just the Hyper-V identification to ensure this precondition. Add a new KVM capability, KVM_CAP_HYPERV_DIRECT_TLBFLUSH, for user space to enable the Hyper-V direct tlb flush function; the function is disabled in KVM by default. Signed-off-by: Tianyu Lan <Tianyu.Lan@microsoft.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
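From the userspace side, enabling the capability would look roughly like the following sketch, assuming the usual KVM_ENABLE_CAP flow and omitting error handling:

    #include <linux/kvm.h>
    #include <sys/ioctl.h>

    static int enable_direct_tlbflush(int vcpu_fd)
    {
        struct kvm_enable_cap cap = {
            .cap = KVM_CAP_HYPERV_DIRECT_TLBFLUSH,
        };

        /* Expected to fail unless userspace hides the KVM identification in
         * CPUID and exposes only the Hyper-V identification, as noted above. */
        return ioctl(vcpu_fd, KVM_ENABLE_CAP, &cap);
    }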
-
Committed by Tianyu Lan
The struct hv_vp_assist_page was defined incorrectly. The "vtl_control" should be u64[3], "nested_enlightenments_control" should be a u64, and there are 7 reserved bytes following "enlighten_vmentry". Fix the definition. Signed-off-by: Tianyu Lan <Tianyu.Lan@microsoft.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
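The fragment below illustrates the corrected layout implied by that description; the surrounding fields are elided and the exact field types are an assumption, so treat it as a reading aid rather than the authoritative hyperv-tlfs.h definition:

    #include <stdint.h>

    struct hv_vp_assist_page_sketch {
        /* ... earlier fields elided ... */
        uint64_t vtl_control[3];                 /* u64[3], larger than before the fix */
        uint64_t nested_enlightenments_control;  /* a single u64 */
        uint8_t  enlighten_vmentry;
        uint8_t  reserved[7];                    /* the 7 reserved bytes that follow */
        /* ... remaining fields elided ... */
    };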
-
Committed by Jim Mattson
These MSRs should be enumerated by KVM_GET_MSR_INDEX_LIST, so that userspace knows that these MSRs may be part of the vCPU state. Signed-off-by: Jim Mattson <jmattson@google.com> Reviewed-by: Eric Hankland <ehankland@google.com> Reviewed-by: Peter Shier <pshier@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 21 September 2019, 4 commits
-
-
Committed by Paul Burton
Two recent commits have fixed issues where _PFN_SHIFT grew too large due to the introduction of too many pgprot bits in our PTEs for some MIPS32 kernel configurations. Tracking down such issues can be tricky, so add a BUILD_BUG_ON() to help. Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: linux-mips@vger.kernel.org
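The guard is of the following shape; here it is modelled with a plain C11 static_assert and made-up values, whereas the kernel patch itself uses BUILD_BUG_ON():

    #include <assert.h>

    #define LAST_PGPROT_BIT  11   /* illustrative: highest pgprot/software bit */
    #define _PFN_SHIFT       12   /* PFN field of EntryLo starts here */

    /* Fail the build if pgprot bits ever grow into the PFN field, instead of
     * silently corrupting physical addresses at runtime. */
    static_assert(LAST_PGPROT_BIT < _PFN_SHIFT,
                  "pgprot bits overflow into the PFN field");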
-
Committed by Paul Burton
Commit 61cbfff4 ("MIPS: pte_special()/pte_mkspecial() support") added a _PAGE_SPECIAL bit to the pgprot bits of our PTEs. Unfortunately, for MIPS32 configurations with RiXi support this pushed the number of pgprot bits to 13. Since the PFN field in EntryLo begins at bit 12, this results in us shifting the most significant bit of the physical address beyond the end of the PTE, leading any mapped access to a physical address above 2GB to incorrectly access an address 2GB lower than intended. For now, disable the pte_special() support for MIPS32 configurations that support RiXi. Fixes: 61cbfff4 ("MIPS: pte_special()/pte_mkspecial() support") Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: Dmitry Korotin <dkorotin@wavecomp.com> Cc: linux-mips@vger.kernel.org
-
Committed by Vidya Sagar
Add regulator information for the 3.3V and 12V supplies of the x16 PCIe slot on the p2972-0000 platform, which is owned by the C5 controller, and also enable the C5 controller. Signed-off-by: Vidya Sagar <vidyas@nvidia.com> Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Reviewed-by: Andrew Murray <andrew.murray@arm.com>
-
Committed by Vidya Sagar
Add support to configure the PCIe C5 controller's sideband signals PERST# and CLKREQ# as output and bi-directional signals respectively; unlike for the other PCIe controllers, these sideband signals are not configured by default. Signed-off-by: Vidya Sagar <vidyas@nvidia.com> Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Reviewed-by: Andrew Murray <andrew.murray@arm.com>
-
- 19 September 2019, 1 commit
-
-
Committed by Aneesh Kumar K.V
__find_linux_mm_pte() returns a page table entry pointer after walking the page table without holding locks. To make it safe against a THP split and/or collapse, we disable interrupts around the lockless page table walk. However, we need to keep interrupts disabled for as long as we use the page table entry pointer that is returned. Fix addr_to_pfn() to do that. Fixes: ba41e1e1 ("powerpc/mce: Hookup derror (load/store) UE errors") Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> [mpe: Rearrange code slightly and tweak change log wording] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20190918145328.28602-1-aneesh.kumar@linux.ibm.com
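The shape of the fix is roughly the following: keep interrupts disabled not just across the walk but for as long as the returned PTE pointer is used. The helpers here are stubs standing in for the powerpc primitives, so this is a sketch of the pattern rather than the real mce code:

    #include <limits.h>

    typedef unsigned long pte_t;

    /* Stand-ins for local_irq_save()/local_irq_restore() and the lockless walk. */
    static void irq_save(unsigned long *flags) { *flags = 0; }
    static void irq_restore(unsigned long flags) { (void)flags; }
    static pte_t *find_pte_lockless(unsigned long addr) { (void)addr; return 0; }

    static unsigned long addr_to_pfn_sketch(unsigned long addr)
    {
        unsigned long flags, pfn = ULONG_MAX;
        pte_t *ptep;

        irq_save(&flags);               /* blocks a concurrent THP split/collapse */
        ptep = find_pte_lockless(addr); /* lockless page table walk */
        if (ptep)
            pfn = *ptep >> 12;          /* use the PTE while IRQs are still off */
        irq_restore(flags);             /* the pointer must not be used past here */

        return pfn;
    }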
-
- 18 September 2019, 3 commits
-
-
Committed by Jeremy Cline
The referenced file does not exist, but tagged-address-abi.rst does. Signed-off-by: Jeremy Cline <jcline@redhat.com> Signed-off-by: Will Deacon <will@kernel.org>
-
Committed by Naveen N. Rao
With support for HAVE_FUNCTION_GRAPH_RET_ADDR_PTR, ftrace_graph_ret_addr() provides more robust unwinding when function graph is in use. Update show_stack() to use the same. With dump_stack() added to sysrq_sysctl_handler(), before this patch: root@(none):/sys/kernel/debug/tracing# cat /proc/sys/kernel/sysrq CPU: 0 PID: 218 Comm: cat Not tainted 5.3.0-rc7-00868-g8453ad4a078c-dirty #20 Call Trace: [c0000000d1e13c30] [c00000000006ab98] return_to_handler+0x0/0x40 (dump_stack+0xe8/0x164) (unreliable) [c0000000d1e13c80] [c000000000145680] sysrq_sysctl_handler+0x48/0xb8 [c0000000d1e13cd0] [c00000000006ab98] return_to_handler+0x0/0x40 (proc_sys_call_handler+0x274/0x2a0) [c0000000d1e13d60] [c00000000006ab98] return_to_handler+0x0/0x40 (return_to_handler+0x0/0x40) [c0000000d1e13d80] [c00000000006ab98] return_to_handler+0x0/0x40 (__vfs_read+0x3c/0x70) [c0000000d1e13dd0] [c00000000006ab98] return_to_handler+0x0/0x40 (vfs_read+0xb8/0x1b0) [c0000000d1e13e20] [c00000000006ab98] return_to_handler+0x0/0x40 (ksys_read+0x7c/0x140) After this patch: Call Trace: [c0000000d1e33c30] [c00000000006ab58] return_to_handler+0x0/0x40 (dump_stack+0xe8/0x164) (unreliable) [c0000000d1e33c80] [c000000000145680] sysrq_sysctl_handler+0x48/0xb8 [c0000000d1e33cd0] [c00000000006ab58] return_to_handler+0x0/0x40 (proc_sys_call_handler+0x274/0x2a0) [c0000000d1e33d60] [c00000000006ab58] return_to_handler+0x0/0x40 (__vfs_read+0x3c/0x70) [c0000000d1e33d80] [c00000000006ab58] return_to_handler+0x0/0x40 (vfs_read+0xb8/0x1b0) [c0000000d1e33dd0] [c00000000006ab58] return_to_handler+0x0/0x40 (ksys_read+0x7c/0x140) [c0000000d1e33e20] [c00000000006ab58] return_to_handler+0x0/0x40 (system_call+0x5c/0x68) Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/dc89c9a887121342d9c7819482c3dabdece2a323.1567707399.git.naveen.n.rao@linux.vnet.ibm.com
-
Committed by Naveen N. Rao
This associates entries in the ftrace_ret_stack with corresponding stack frames, enabling more robust stack unwinding. Also update the only user of ftrace_graph_ret_addr() to pass the stack pointer. Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/0224f2d0971b069c678e2ff678cfc2cd1e114cfe.1567707399.git.naveen.n.rao@linux.vnet.ibm.com
-
- 17 September 2019, 3 commits
-
-
Committed by Ganesh Goudar
Since commit 4388c9b3 ("powerpc: Do not send system reset request through the oops path"), the pstore dmesg file is not updated when a dump is triggered from the HMC. That commit modified the system reset (sreset) handler to invoke fadump or kdump (if configured) without pushing dmesg to pstore. This leaves pstore with old dmesg data, which won't be much of a help if kdump fails to capture the dump. This patch fixes that by calling kmsg_dump() before heading to fadump or kdump. Fixes: 4388c9b3 ("powerpc: Do not send system reset request through the oops path") Reviewed-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com> Reviewed-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Ganesh Goudar <ganeshgr@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20190904075949.15607-1-ganeshgr@linux.ibm.com
-
Committed by Sami Tolvanen
Define a weak function in COND_SYSCALL instead of a weak alias to sys_ni_syscall, which has an incompatible type. This fixes indirect call mismatches with Control-Flow Integrity (CFI) checking. Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Sami Tolvanen <samitolvanen@google.com> Signed-off-by: Will Deacon <will@kernel.org>
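The difference can be sketched as follows: the weak stub keeps the syscall's own prototype, which is what CFI compares on an indirect call, whereas a weak alias to sys_ni_syscall would carry that function's unrelated type. The macro name and the __arm64_sys_ prefix below are illustrative, not the exact kernel definition:

    #include <errno.h>

    #define __weak __attribute__((weak))
    struct pt_regs;

    /* One weak, correctly-typed stub per conditional syscall. */
    #define COND_SYSCALL_SKETCH(name)                                   \
        long __weak __arm64_sys_##name(const struct pt_regs *regs)      \
        {                                                               \
            (void)regs;                                                 \
            return -ENOSYS;                                             \
        }

    COND_SYSCALL_SKETCH(example)   /* defines a weak __arm64_sys_example() */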
-
Committed by Arnd Bergmann
On arm64 builds with clang, sometimes __cmpxchg_mb is not inlined when CONFIG_OPTIMIZE_INLINING is set. Clang then fails a compile-time assertion, because it cannot tell at compile time what the size of the argument is: mm/memcontrol.o: In function `__cmpxchg_mb': memcontrol.c:(.text+0x1a4c): undefined reference to `__compiletime_assert_175' memcontrol.c:(.text+0x1a4c): relocation truncated to fit: R_AARCH64_CALL26 against undefined symbol `__compiletime_assert_175' Mark all of the cmpxchg() style functions as __always_inline to ensure that the compiler can see the result. Acked-by: Nick Desaulniers <ndesaulniers@google.com> Reported-by: Nathan Chancellor <natechancellor@gmail.com> Link: https://github.com/ClangBuiltLinux/linux/issues/648 Reviewed-by: Nathan Chancellor <natechancellor@gmail.com> Tested-by: Nathan Chancellor <natechancellor@gmail.com> Reviewed-by: Andrew Murray <andrew.murray@arm.com> Tested-by: Andrew Murray <andrew.murray@arm.com> Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Will Deacon <will@kernel.org>
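The sketch below illustrates why the attribute matters: the dead-branch elimination that satisfies the size check only happens when the body is inlined into a caller whose operand size is a compile-time constant. The names and the trap are simplified stand-ins for the real arm64 implementation:

    #define always_inline_attr inline __attribute__((__always_inline__))

    static always_inline_attr unsigned long
    cmpxchg_sketch(volatile void *ptr, unsigned long old, unsigned long new, int size)
    {
        (void)ptr; (void)new;
        switch (size) {
        case 4:
        case 8:
            /* ... the real ll/sc or LSE sequence is elided ... */
            return old;
        default:
            /* When inlined, size is constant in every caller and this branch
             * is provably dead; an out-of-line copy keeps it and trips the
             * compile-time size assertion instead. */
            __builtin_trap();
        }
    }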
-
- 16 September 2019, 14 commits
-
-
Committed by Linus Walleij
The D-Link DIR-685 had its clock polarity set as active low using the special SPI "spi-cpol" property. This is not correct: the datasheet clearly states "Fix SCL to GND level when not in use", which indicates that this line is active high. After a recent fix making the GPIO-based SPI driver force the clock line de-asserted at the beginning of each SPI transaction, this reared its ugly head: now de-asserted was taken to mean the line should be driven high, but it should be driven low. Fix this up in the DTS file and the display works again. Link: https://lore.kernel.org/r/20190915135444.11066-1-linus.walleij@linaro.org Cc: Mark Brown <broonie@kernel.org> Fixes: 2922d1cc ("spi: gpio: Add SPI_MASTER_GPIO_SS flag") Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
-
Committed by Thomas Richter
Rewrite some lines to stay within the line length limit and replace the format string 0x%x with %#x. Add and remove blank lines. Signed-off-by: Thomas Richter <tmricht@linux.ibm.com> Reviewed-by: Hendrik Brueckner <brueckner@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Committed by Sebastian Ott
After recent changes the MSI message data needs to specify the function-relative IRQ number. Reported-and-tested-by: Alexander Schmidt <alexs@linux.ibm.com> Signed-off-by: Sebastian Ott <sebott@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Committed by Erel Geron
LAST_IRQ was used incorrectly in init_IRQ. Commit 09ccf036 forgot to update the for loop. Fix this. Fixes: 49da7e64 ("High Performance UML Vector Network Driver") Fixes: 09ccf036 ("um: Fix off by one error in IRQ enumeration") Signed-off-by: Erel Geron <erelx.geron@intel.com> Signed-off-by: Johannes Berg <johannes.berg@intel.com> Acked-by: Anton Ivanov <anton.ivanov@cambridgegreys.co.uk> Signed-off-by: Richard Weinberger <richard@nod.at>
-
Committed by Alex Dewar
Convert files to use the SPDX header. All files are licensed under the GPLv2. Signed-off-by: Alex Dewar <alex.dewar@gmx.co.uk> Signed-off-by: Richard Weinberger <richard@nod.at>
-
Committed by Alex Dewar
Convert files to use the SPDX header. All files are licensed under the GPLv2. Signed-off-by: Alex Dewar <alex.dewar@gmx.co.uk> Signed-off-by: Richard Weinberger <richard@nod.at>
-
Committed by Alex Dewar
Convert files to use the SPDX header. All files are licensed under the GPLv2. Signed-off-by: Alex Dewar <alex.dewar@gmx.co.uk> Signed-off-by: Richard Weinberger <richard@nod.at>
-
Committed by Alex Dewar
Convert files to use the SPDX header. All files are licensed under the GPLv2. Signed-off-by: Alex Dewar <alex.dewar@gmx.co.uk> Signed-off-by: Richard Weinberger <richard@nod.at>
-
Committed by Johannes Berg
Implement the VHOST_USER_PROTOCOL_F_REPLY_ACK extension for both slave requests (previous patch), where we have to reply, and our own requests, where it helps understand whether the slave failed. Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Richard Weinberger <richard@nod.at>
-
Committed by Johannes Berg
Implement the communication channel for the device to notify us of some events, and notably implement the handling of the config updates needed for the combination of this feature and VHOST_USER_PROTOCOL_F_CONFIG. Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Richard Weinberger <richard@nod.at>
-
Committed by Erel Geron
This module allows virtio devices to be used over a vhost-user socket. Signed-off-by: Erel Geron <erelx.geron@intel.com> Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Richard Weinberger <richard@nod.at>
-
Committed by Johannes Berg
When we have virtio enabled, we must have real barriers since we may be running on an SMP machine (quite likely are, in fact), so the other process can be on another CPU. Since in any other case we don't really use DMA barriers, remove their override completely so real barriers get used. In the future we might need them for other cases as well. Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Richard Weinberger <richard@nod.at>
-
Committed by Johannes Berg
UML has its own platform-specific barrier.h under arch/x86/um/, which should get used. Fix the build system to use it, and then fix the barrier.h to actually compile. Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Richard Weinberger <richard@nod.at>
-
Committed by Johannes Berg
We currently do the time updates in the timer handler, even if we just call the timer handler ourselves. In basic mode we must in fact do it there, since otherwise the OS timer signal won't move time forward, but in inf-cpu mode we don't need to, and it's harder to understand. Restrict the update there to basic mode, adding a comment, and do it before calling the timer_handler() in inf-cpu mode. Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Richard Weinberger <richard@nod.at>
-