  1. 05 Sep 2014, 4 commits
  2. 04 Sep 2014, 1 commit
    • net: Forbid dealing with packets when VM is not running · e1d64c08
      Committed by zhanghailiang
      For all NICs (except virtio-net) emulated by QEMU,
      such as e1000, rtl8139, pcnet and ne2k_pci,
      QEMU can still receive packets when the VM is not running.
      
      If this happens during *migration's* final PAUSE-VM stage, but
      before the end of the migration, the newly received packets may dirty
      parts of RAM that have already been cached in an *iovec* (to be sent
      asynchronously), and may dirty other parts of RAM whose changes will
      then be missed.
      This leads to serious network faults in the VM.
      
      To avoid this, we forbid receiving packets in the generic net code when
      the VM is not running.
      
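      A minimal sketch of the kind of gate this adds in the generic net layer
      is shown below. It is modeled on QEMU's qemu_can_send_packet() in
      net/net.c; treat the exact hook point and the surrounding peer checks as
      assumptions rather than the verbatim patch.

          /* Refuse packet delivery while the VM is not in the running state.
           * runstate_is_running() is QEMU's global run-state query (declared
           * in sysemu/sysemu.h at the time); NetClientState comes from
           * net/net.h. */
          int qemu_can_send_packet(NetClientState *sender)
          {
              if (!runstate_is_running()) {
                  return 0;           /* VM paused/stopped: drop new packets */
              }

              if (!sender->peer) {
                  return 1;           /* no peer attached, nothing to protect */
              }

              if (sender->peer->receive_disabled) {
                  return 0;
              }
              if (sender->peer->info->can_receive &&
                  !sender->peer->info->can_receive(sender->peer)) {
                  return 0;
              }
              return 1;
          }
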
      Bug reproduction steps:
      (1) Start a VM configured with at least one NIC
      (2) In the VM, open several terminals and run *ping IP -i 0.1*
      (3) Migrate the VM repeatedly between two hosts
      The *ping* command in the VM will then very likely fail with the message
      'Destination Host Unreachable', and the NIC in the VM will stay
      unavailable unless you run 'service network restart'.
      Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
      Reviewed-by: Jason Wang <jasowang@redhat.com>
      Reviewed-by: Juan Quintela <quintela@redhat.com>
      Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
      e1d64c08
  3. 02 Sep 2014, 2 commits
    • Merge remote-tracking branch 'remotes/spice/tags/pull-spice-20140902-1' into staging · 30eaca3a
      Committed by Peter Maydell
      sanity check for qxl, minor spice display channel tweak.
      
      # gpg: Signature made Tue 02 Sep 2014 09:53:39 BST using RSA key ID D3E87138
      # gpg: Good signature from "Gerd Hoffmann (work) <kraxel@redhat.com>"
      # gpg:                 aka "Gerd Hoffmann <gerd@kraxel.org>"
      # gpg:                 aka "Gerd Hoffmann (private) <kraxel@gmail.com>"
      
      * remotes/spice/tags/pull-spice-20140902-1:
        spice: use console index as display id
        qxl-render: add more sanity checks
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      30eaca3a
    • implementing victim TLB for QEMU system emulated TLB · 88e89a57
      Committed by Xin Tong
      QEMU system-mode page table walks are expensive. Measured by running
      qemu-system-x86_64 under Intel PIN, a TLB miss followed by a walk of
      the 4-level page tables in the guest Linux OS takes ~450 x86
      instructions on average.
      
      The QEMU system-mode TLB is implemented as a direct-mapped hash table.
      This structure suffers from conflict misses. Simply increasing the
      associativity of the TLB is not necessarily the answer to conflict
      misses, as all the ways may have to be probed serially.
      
      A victim TLB holds translations evicted from the primary TLB upon
      replacement. It lies between the main TLB and its refill path, and it
      has greater associativity (fully associative in this patch). Looking up
      the victim TLB takes longer, but it is still likely cheaper than a full
      page table walk. The memory translation path is changed as follows
      (a sketch of the victim lookup follows the second list):
      
      Before Victim TLB:
      1. Inline TLB lookup
      2. Exit code cache on TLB miss.
      3. Check for unaligned, IO accesses
      4. TLB refill.
      5. Do the memory access.
      6. Return to code cache.
      
      After Victim TLB:
      1. Inline TLB lookup
      2. Exit code cache on TLB miss.
      3. Check for unaligned, IO accesses
      4. Victim TLB lookup.
      5. If the victim TLB misses, TLB refill.
      6. Do the memory access.
      7. Return to code cache.
      
      The advantage is that a victim TLB adds associativity to a
      direct-mapped TLB, and thus potentially avoids page table walks, while
      still keeping the time taken to flush the TLB within reasonable limits.
      However, placing the victim TLB before the refill path lengthens that
      path, as the victim TLB is consulted before the TLB refill. The
      performance results demonstrate that the pros outweigh the cons.
      
      Some performance results, taken on the SPECINT2006 train datasets, a
      kernel boot, and the qemu configure script on an
      Intel(R) Xeon(R) CPU E5620 @ 2.40GHz Linux machine, are shown in the
      Google Doc linked below.
      
      https://docs.google.com/spreadsheets/d/1eiItzekZwNQOal_h-5iJmC4tMDi051m9qidi5_nwvH4/edit?usp=sharing
      
      In summary, the victim TLB improves the performance of qemu-system-x86_64
      by 11% on average across SPECINT2006, the kernel boot and the qemu
      configure script, with the highest improvement of 26% in 456.hmmer. The
      victim TLB does not cause a performance degradation in any of the
      measured benchmarks. Furthermore, the implementation is architecture
      independent and is expected to benefit other architectures in QEMU as
      well.
      
      Although there are measurement fluctuations, the performance
      improvement is significant and well outside the range of measurement
      noise.
      Signed-off-by: Xin Tong <trent.tong@gmail.com>
      Message-id: 1407202523-23553-1-git-send-email-trent.tong@gmail.com
      Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      88e89a57
  4. 01 Sep 2014, 33 commits