  1. 14 Sep 2016 (7 commits)
  2. 10 Sep 2016 (5 commits)
    • vhost-vsock: add virtio sockets device · fc0b9b0e
      Authored by Stefan Hajnoczi
      Implement the new virtio sockets device for host<->guest communication
      using the Sockets API.  Most of the work is done in a vhost kernel
      driver so that virtio-vsock can hook into the AF_VSOCK address family.
      The QEMU vhost-vsock device handles configuration and live migration
      while the rx/tx happens in the vhost_vsock.ko Linux kernel driver.
      
      The vsock device must be given a CID (host-wide unique address):
      
        # qemu -device vhost-vsock-pci,id=vhost-vsock-pci0,guest-cid=3 ...
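
      From inside the guest, applications then use the plain Sockets API with the
      AF_VSOCK address family. A minimal sketch of a guest-side client, assuming the
      standard Linux <linux/vm_sockets.h> interface (the port number is made up and
      not part of this patch):

        /* Connect from the guest to a service on the host (CID 2), port 1234. */
        #include <stdio.h>
        #include <unistd.h>
        #include <sys/socket.h>
        #include <linux/vm_sockets.h>

        int main(void)
        {
            struct sockaddr_vm addr = {
                .svm_family = AF_VSOCK,
                .svm_cid = VMADDR_CID_HOST,   /* CID 2: the host */
                .svm_port = 1234,             /* example port only */
            };
            int fd = socket(AF_VSOCK, SOCK_STREAM, 0);

            if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
                perror("vsock connect");
                return 1;
            }
            write(fd, "hello host\n", 11);
            close(fd);
            return 0;
        }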
      
      For more information see:
      http://qemu-project.org/Features/VirtioVsock
      
      [Endianness fixes and virtio-ccw support by Claudio Imbrenda
      <imbrenda@linux.vnet.ibm.com>]
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
      [mst: rebase to master]
      Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
    • virtio: add virtqueue_rewind() · 297a75e6
      Authored by Stefan Hajnoczi
      virtqueue_discard() requires a VirtQueueElement but virtio-balloon does
      not migrate its in-use element.  Introduce a new function that is
      similar to virtqueue_discard() but doesn't require a VirtQueueElement.
      
      This will allow virtio-balloon to access the element again after migration,
      with the usual proviso that the guest may have modified the vring since
      last time.
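
      Conceptually, rewinding just walks the consumed index back so the same
      descriptors can be popped again. A simplified standalone sketch of the idea
      (a toy model, not the actual QEMU VirtQueue code):

        #include <stdbool.h>

        struct toy_vq {
            unsigned int last_avail_idx;  /* next avail-ring entry to consume */
            unsigned int inuse;           /* elements popped but not yet pushed back */
        };

        /* Undo the last 'num' pops so those elements can be popped again later. */
        static bool toy_vq_rewind(struct toy_vq *vq, unsigned int num)
        {
            if (num > vq->inuse) {
                return false;             /* can't rewind more than is in flight */
            }
            vq->last_avail_idx -= num;
            vq->inuse -= num;
            return true;
        }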
      
      Cc: Michael S. Tsirkin <mst@redhat.com>
      Cc: Roman Kagan <rkagan@virtuozzo.com>
      Cc: Stefan Hajnoczi <stefanha@redhat.com>
      Signed-off-by: Ladi Prosek <lprosek@redhat.com>
      Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
    • virtio-pci: reduce modern_mem_bar size · d9997d89
      Authored by Marcel Apfelbaum
      Currently each VQ Notification Virtio Capability is allocated
      on a different page. The idea is to enable split drivers within
      guests; however, there are no known plans to do that.
      The allocation results in an 8MB BAR, more than various
      guest firmwares pre-allocate for the PCI bridge hotplug process.
      
      Reserve 4 bytes per VQ by default and add a new parameter
      "page-per-vq" to be used with split drivers.
      Signed-off-by: Marcel Apfelbaum <marcel@redhat.com>
      Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
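
      For example (assuming a virtio-net-pci device here; any virtio-pci device
      takes the same property), the old page-per-VQ layout can still be requested
      explicitly:

        # qemu -device virtio-net-pci,page-per-vq=on ...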
    • target-i386: present virtual L3 cache info for vcpus · 14c985cf
      Authored by Longpeng(Mike)
      Some software algorithms are based on the hardware's cache info. For example,
      in the x86 Linux kernel, when cpu1 wants to wake up a task on cpu2, cpu1 triggers
      a resched IPI telling cpu2 to do the wakeup if they don't share a low-level
      cache; conversely, cpu1 accesses cpu2's runqueue directly if they share the LLC.
      The relevant Linux kernel code is shown below:
      
      	static void ttwu_queue(struct task_struct *p, int cpu)
      	{
      		struct rq *rq = cpu_rq(cpu);
      		......
      		if (... && !cpus_share_cache(smp_processor_id(), cpu)) {
      			......
      			ttwu_queue_remote(p, cpu); /* will trigger RES IPI */
      			return;
      		}
      		......
      		ttwu_do_activate(rq, p, 0); /* access target's rq directly */
      		......
      	}
      
      In real hardware, the cpus on the same socket share the L3 cache, so one cpu
      won't trigger a resched IPI when waking up a task on another. But QEMU doesn't
      present virtual L3 cache info to the VM, so the Linux guest triggers lots of
      RES IPIs under some workloads even if the virtual cpus belong to the same
      virtual socket.
      
      For KVM, this means lots of vmexits because the guest sends IPIs.
      The workload is SAP HANA's test suite; we ran it for one round (about 40 minutes)
      and observed the number of RES IPIs triggered in the (SUSE 11 SP3) guest during
      that period:
               No-L3        With-L3 (this patch applied)
      cpu0:    363890       44582
      cpu1:    373405       43109
      cpu2:    340783       43797
      cpu3:    333854       43409
      cpu4:    327170       40038
      cpu5:    325491       39922
      cpu6:    319129       42391
      cpu7:    306480       41035
      cpu8:    161139       32188
      cpu9:    164649       31024
      cpu10:   149823       30398
      cpu11:   149823       32455
      cpu12:   164830       35143
      cpu13:   172269       35805
      cpu14:   179979       33898
      cpu15:   194505       32754
      avg:     268963.6     40129.8
      
      The VM's topology is "1*socket 8*cores 2*threads".
      After presenting virtual L3 cache info to the VM, the number of RES IPIs in the
      guest drops by about 85%.
      
      For KVM, vcpus sending IPIs cause vmexits, which are expensive, so this can
      cause severe performance degradation. We also tested the overall system
      performance when the vcpus actually run on separate physical sockets. With the
      L3 cache presented, performance improves by 7.2%~33.1% (avg: 15.7%).
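
      Assuming the new cache info is exposed as an x86 CPU property named "l3-cache"
      (the exact property name is not spelled out above), the tested topology would
      be started along these lines:

        # qemu -cpu host,l3-cache=on -smp 16,sockets=1,cores=8,threads=2 ...
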
      Signed-off-by: Longpeng(Mike) <longpeng2@huawei.com>
      Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
    • pc: Add 2.8 machine · a4d3c834
      Authored by Longpeng(Mike)
      This will be used by the next patch.
      Signed-off-by: Longpeng(Mike) <longpeng2@huawei.com>
      Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
  3. 08 Sep 2016 (6 commits)
  4. 07 Sep 2016 (4 commits)
  5. 06 Sep 2016 (16 commits)
  6. 05 Sep 2016 (2 commits)
    • linux-headers: update · dbdfea92
      Authored by Cornelia Huck
      Update headers against 4.8-rc2.
      Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
    • s390x/css: handle cssid 255 correctly · 882b3b97
      Authored by Cornelia Huck
      The cssid 255 is reserved but still valid from an architectural
      point of view. However, feeding a bogus schid of 0xffffffff into
      the virtio hypercall will lead to a crash:
      
      Stack trace of thread 138363:
              #0  0x00000000100d168c css_find_subch (qemu-system-s390x)
              #1  0x00000000100d3290 virtio_ccw_hcall_notify
              #2  0x00000000100cbf60 s390_virtio_hypercall
              #3  0x000000001010ff7a handle_hypercall
              #4  0x0000000010079ed4 kvm_cpu_exec (qemu-system-s390x)
              #5  0x00000000100609b4 qemu_kvm_cpu_thread_fn
              #6  0x000003ff8b887bb4 start_thread (libpthread.so.0)
              #7  0x000003ff8b78df0a thread_start (libc.so.6)
      
      This is because the css array was only allocated for 0..254
      instead of 0..255.
      
      Let's fix this by bumping MAX_CSSID to 255 and fencing off the
      reserved cssid of 255 during css image allocation.
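
      The pattern behind the crash is a classic inclusive-range off-by-one. A
      standalone illustration of the fixed bounds (not the actual css code):

        #include <stddef.h>

        #define MAX_CSSID 255                      /* 255 is reserved but still valid */

        static void *css_images[MAX_CSSID + 1];    /* cssids 0..255 need 256 slots */

        static void *css_image_lookup(unsigned int cssid)
        {
            if (cssid > MAX_CSSID) {
                return NULL;                       /* reject out-of-range ids */
            }
            return css_images[cssid];              /* safe: the array covers 0..255 */
        }
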
      Reported-by: Christian Borntraeger <borntraeger@de.ibm.com>
      Tested-by: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: qemu-stable@nongnu.org
      Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>