1. 23 Sep 2016, 1 commit
  2. 16 Sep 2016, 1 commit
  3. 14 Sep 2016, 1 commit
  4. 13 Sep 2016, 1 commit
  5. 07 Sep 2016, 2 commits
  6. 06 Sep 2016, 1 commit
    • vhost-user-test: Use libqos instead of pxe-virtio.rom · cdafe929
      Committed by Eduardo Habkost
      vhost-user-test relies on iPXE just to initialize the virtio-net
      device, and doesn't do any actual packet tx/rx testing.
      
      In addition to that, the test relies on TCG, which is
      incompatible with vhost. The test only worked by accident: a bug
      in the memory backend initialization made memory regions not have
      the DIRTY_MEMORY_CODE bit set in dirty_log_mask.
      
      This changes vhost-user-test to initialize the virtio-net device
      using libqos, and not use TCG nor pxe-virtio.rom.
      Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
      cdafe929
  7. 08 Aug 2016, 1 commit
  8. 29 Jul 2016, 2 commits
  9. 22 Jul 2016, 1 commit
    • tests: introduce a framework for testing migration performance · 409437e1
      Committed by Daniel P. Berrange
      This introduces a moderately general purpose framework for
      testing performance of migration.
      
      The initial guest workload is provided by the included 'stress'
      program, which is configured to spawn one thread per guest CPU
      and run a maximally memory-intensive workload. It will loop
      over GB of memory, xor'ing each byte with data from a 4k array
      of random bytes. This ensures heavy read and write load across
      all of guest memory to stress the migration performance. While
      running, the 'stress' program records how long it takes to
      xor each GB of memory and prints this data for later reporting.
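
      For illustration only, the core of such a stress loop might look roughly
      like this (a minimal C sketch of the workload described above; names and
      sizes are assumptions, not the actual 'stress' source):

        /* Touch every byte of a RAM region, xor'ing it with a 4 KiB table of
         * random data, to keep both reads and writes hitting all of memory. */
        static void stress_one_pass(unsigned char *ram, size_t ram_bytes,
                                    const unsigned char *rand4k)
        {
            size_t i;
            for (i = 0; i < ram_bytes; i++) {
                ram[i] ^= rand4k[i & 4095];   /* wrap around the 4096-byte table */
            }
        }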
      
      The test engine will spawn a pair of QEMU processes, either on
      the same host, or with the target on a remote host via ssh,
      using the host kernel and a custom initrd built with 'stress'
      as the /init binary. Kernel command line args are set to ensure
      a fast kernel boot time (< 1 second) between launching QEMU and
      the stress program starting execution.
      
      Nonetheless, the test engine will initially wait N seconds for
      the guest workload to stabilize, before starting the migration
      operation. When migration is running, the engine will use pause,
      post-copy, autoconverge, xbzrle compression and multithread
      compression features, as well as downtime & bandwidth tuning
      to encourage completion. If migration completes, the test engine
      will wait N seconds again for the guest workload to stabilize on
      the target host. If migration does not complete after a preset
      number of iterations, it will be aborted.
      
      While the QEMU process is running on the source host, the test
      engine will sample the host CPU usage of QEMU as a whole, and
      each vCPU thread. While migration is running, it will record
      all the stats reported by 'query-migration'. Finally, it will
      capture the output of the stress program running in the guest.
      
      All the data produced from a single test execution is recorded
      in a structured JSON file. A separate program is then able to
      create interactive charts using the "plotly" python + javascript
      libraries, showing the characteristics of the migration.
      
      The data output provides visualization of the effect on guest
      vCPU workloads from the migration process, the corresponding
      vCPU utilization on the host, and the overall CPU hit from
      QEMU on the host. This is correlated with statistics from the
      migration process, such as downtime, vCPU throttling and iteration
      number.
      
      While the tests can be run individually with arbitrary parameters,
      there is also a facility for producing batch reports for a number
      of pre-defined scenarios / comparisons, in order to be able to
      get standardized results across different hardware configurations
      (eg TCP vs RDMA, or comparing different VCPU counts / memory
      sizes, etc).
      
      To use this, first you must build the initrd image
      
       $ make tests/migration/initrd-stress.img
      
      To run a one-shot test with all default parameters
      
       $ ./tests/migration/guestperf.py > result.json
      
      This has many command line args for varying its behaviour.
      For example, to increase the RAM size and CPU count and
      bind it to specific host NUMA nodes
      
       $ ./tests/migration/guestperf.py \
             --mem 4 --cpus 2 \
             --src-mem-bind 0 --src-cpu-bind 0,1 \
             --dst-mem-bind 1 --dst-cpu-bind 2,3 \
             > result.json
      
      Using mem + cpu binding is strongly recommended on NUMA
      machines, otherwise the guest performance results will
      vary wildly between runs of the test due to lucky/unlucky
      NUMA placement, making sensible data analysis impossible.
      
      To make it run across separate hosts:
      
       $ ./tests/migration/guestperf.py \
             --dst-host somehostname > result.json
      
      To request that post-copy is enabled, with switchover
      after 5 iterations
      
       $ ./tests/migration/guestperf.py \
             --post-copy --post-copy-iters 5 > result.json
      
      Once a result.json file is created, a graph of the data
      can be generated, showing guest workload performance per
      thread and the migration iteration points:
      
       $ ./tests/migration/guestperf-plot.py --output result.html \
              --migration-iters --split-guest-cpu result.json
      
      To further include host vCPU utilization and overall QEMU
      utilization
      
       $ ./tests/migration/guestperf-plot.py --output result.html \
              --migration-iters --split-guest-cpu \
               --qemu-cpu --vcpu-cpu result.json
      
      NB, the 'guestperf-plot.py' command requires that you have
      the plotly python library installed. eg you must do
      
       $ pip install --user  plotly
      
      Viewing the result.html file requires that you have the
      plotly.min.js file in the same directory as the HTML
      output. This js file is installed as part of the plotly
      python library, so can be found in
      
        $HOME/.local/lib/python2.7/site-packages/plotly/offline/plotly.min.js
      
      The guestperf-plot.py program can accept multiple json files
      to plot, enabling results from different configurations to
      be compared.
      
      Finally, to run the entire standardized set of comparisons
      
        $ ./tests/migration/guestperf-batch.py \
             --dst-host somehost \
             --mem 4 --cpus 2 \
             --src-mem-bind 0 --src-cpu-bind 0,1 \
             --dst-mem-bind 1 --dst-cpu-bind 2,3 \
             --output tcp-somehost-4gb-2cpu
      
      will store JSON files from all scenarios in the directory
      named tcp-somehost-4gb-2cpu
      Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
      Message-Id: <1469020993-29426-7-git-send-email-berrange@redhat.com>
      Signed-off-by: Amit Shah <amit.shah@redhat.com>
      409437e1
  10. 19 Jul 2016, 2 commits
    • qapi: Implement boxed types for commands/events · c818408e
      Committed by Eric Blake
      Turn on the ability to pass command and event arguments in
      a single boxed parameter, which must name a non-empty type
      (although the type can be a struct with all optional members).
      For structs, it makes it possible to pass a single qapi type
      instead of a breakout of all struct members (useful if the
      arguments are already in a struct or if the number of members
      is large); for other complex types, it is now possible to use
      a union or alternate as the data for a command or event.
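
      In terms of generated code, the difference is roughly the following (a
      hedged sketch for a hypothetical command 'foo' taking a struct type
      'FooArguments'; not copied from the tree):

        /* Without 'boxed', every member of FooArguments becomes its own
         * parameter of the handler: */
        void qmp_foo(bool has_speed, int64_t speed, const char *device,
                     Error **errp);

        /* With 'boxed': true, the handler receives the whole argument
         * struct through a single pointer: */
        void qmp_foo(FooArguments *arg, Error **errp);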
      
      The empty type may be technically feasible if needed down the
      road, but it's easier to forbid it now and relax things to allow
      it later, than it is to allow it now and have to special case
      how the generated 'q_empty' type is handled (see commit 7ce106a9
      for reasons why nothing is generated for the empty type).  An
      alternate type is never considered empty, but now that a boxed
      type can be either an object or an alternate, we have to provide
      a trivial QAPISchemaAlternateType.is_empty().  The new call to
      arg_type.is_empty() during QAPISchemaCommand.check() requires
      that we first check the type in question; but there is no chance
      of introducing a cycle since objects do not refer back to commands.
      
      We still have a split in syntax checking between ad-hoc parsing
      up front (merely validates that 'boxed' has a sane value) and
      during .check() methods (if 'boxed' is set, then 'data' must name
      a non-empty user-defined type).
      
      Generated code is unchanged, as long as no client uses the
      new feature.
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Message-Id: <1468468228-27827-10-git-send-email-eblake@redhat.com>
      Reviewed-by: Markus Armbruster <armbru@redhat.com>
      [Test files renamed to *-boxed-*]
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
      c818408e
    • qapi: Require all branches of flat union enum to be covered · d0b18239
      Committed by Eric Blake
      We were previously enforcing that all flat union branches were
      found in the corresponding enum, but not that all enum values
      were covered by branches.  The resulting generated code would
      abort() if the user passed an uncovered enum value.
      
      We don't automatically treat non-present branches in a flat
      union as empty types, for symmetry with simple unions (there,
      the enum type is generated from the list of all branches, so
      there is no way to omit a branch but still have it be part of
      the union).
      
      A later patch will add shorthand so that branches that are empty
      in flat unions can be declared as 'branch':{} instead of
      'branch':'Empty', to avoid the need for an otherwise useless
      explicit empty type.  [Such shorthand for simple unions is a bit
      harder to justify, since we would still have to generate a
      wrapper type that parses 'data':{}, rather than truly being an
      empty branch with no additional siblings to the 'type' member.]
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Message-Id: <1468468228-27827-3-git-send-email-eblake@redhat.com>
      Reviewed-by: Markus Armbruster <armbru@redhat.com>
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
      d0b18239
  11. 06 Jul 2016, 1 commit
    • qapi: Add new clone visitor · a15fcc3c
      Committed by Eric Blake
      We have a couple of places in the code base that want to deep-clone
      one QAPI object into another, and they were resorting to serializing
      the struct out to QObject then reparsing it.  A much more efficient
      version can be done by adding a new clone visitor.
      
      Since cloning is still relatively uncommon, expose the use of the
      new visitor via a QAPI_CLONE() macro that takes care of type-punning
      the underlying function pointer, rather than generating lots of
      unused functions for types that won't be cloned.  And yes, we're
      relying on the compiler treating all pointers equally, even though
      a strict C program cannot portably do so - but we're not the first
      one in the qemu code base to expect it to work (hello, glib!).
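
      As a usage sketch (assuming an existing QAPI-generated type such as
      SocketAddress; illustrative only), a deep clone then reduces to:

        /* Deep-clone a QAPI object via the clone visitor instead of
         * serializing it to QObject and reparsing. */
        SocketAddress *copy = QAPI_CLONE(SocketAddress, original);
        /* ... use copy ... */
        qapi_free_SocketAddress(copy);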
      
      The choice of adding a fourth visitor type deserves some explanation.
      On the surface, the clone visitor is mostly an input visitor (it
      takes arbitrary input - in this case, another QAPI object - and
      creates a new QAPI object during the course of the visit).  But
      ever since commit da72ab0 consolidated enum visits based on the
      visitor type, using VISITOR_INPUT would cause us to run
      visit_type_str(), even though for cloning there is nothing to do
      (we just copy the enum value across, without regard to its mapping
      to strings).  Also, since our input happens to be a QAPI object,
      we can also satisfy the internal checks for VISITOR_OUTPUT.  So in
      the end, I settled with a new VISITOR_CLONE, and chose its value
      such that many internal checks can use 'v->type & mask', sticking
      to 'v->type == value' where the difference matters.
      
      Note that we can only clone objects (including alternates) and lists,
      not built-ins or enums.  The visitor core hides integer width from
      the actual visitor (since commit 04e070d2), and as long as that's the
      case, we can't clone top-level integers.  Then again, those can
      always be cloned by direct copy, since they are not objects with
      deep pointers, so it's no real loss.  And restricting cloning to
      just objects and lists is cleaner than restricting it to non-integers.
      As such, I documented that the clone visitor is for direct use only
      by code internal to QAPI, and should not be used on incomplete objects
      (other than a hack to work around the fact that we allow NULL in place
      of "" in visit_type_str() in other output visitors).  Note that as
      written, the clone visitor will never fail on a complete object.
      
      Scalars (including enums) not at the root of the clone copy just fine
      with no additional effort while visiting the scalar, by virtue of a
      g_memdup() each time we push another struct onto the stack.  Cloning
      a string requires deduplication of a pointer, which means it can also
      provide the guarantee of an input visitor of never producing NULL
      even when still accepting NULL in place of "" the way the QMP output
      visitor does.
      
      Cloning an 'any' type could be possible by incrementing the QObject
      refcnt, but it's not obvious whether that is better than implementing
      a QObject deep clone.  So for now, we document it as unsupported,
      and intentionally omit the .type_any() callback to let a developer
      know their usage needs implementation.
      
      Add testsuite coverage for several different clone situations, to
      ensure that the code is working.  I also tested that valgrind was
      happy with the test.
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Message-Id: <1465490926-28625-14-git-send-email-eblake@redhat.com>
      Reviewed-by: Markus Armbruster <armbru@redhat.com>
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
      a15fcc3c
  12. 04 Jul 2016, 1 commit
    • crypto: switch hash code to use nettle/gcrypt directly · 0c16c056
      Committed by Daniel P. Berrange
      Currently the internal hash code is using the gnutls hash APIs.
      GNUTLS in turn is wrapping either nettle or gcrypt. Not only
      were the GNUTLS hash APIs not added until GNUTLS 2.9.10, but
      they don't expose support for all the algorithms QEMU needs
      to use with LUKS.
      
      Address this by directly wrapping nettle/gcrypt in QEMU and
      avoiding GNUTLS's extra layer of indirection. This gives us
      support for hash functions on a much wider range of platforms
      and opens up the ability to support more hash functions. It also
      avoids a GNUTLS bug which would not correctly handle hashing
      of large data blocks if int != size_t.
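
      The caller-facing hash API is unchanged by this switch; a typical use
      looks roughly like the following (a hedged sketch against crypto/hash.h;
      buf/buflen stand for whatever data the caller wants hashed):

        Error *err = NULL;
        char *digest = NULL;

        /* Hash a buffer with SHA-256; the work is now done by nettle or
         * gcrypt directly rather than through GNUTLS. */
        if (qcrypto_hash_digest(QCRYPTO_HASH_ALG_SHA256,
                                buf, buflen, &digest, &err) < 0) {
            error_report_err(err);
        } else {
            printf("sha256: %s\n", digest);
            g_free(digest);
        }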
      Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
      0c16c056
  13. 27 Jun 2016, 1 commit
  14. 22 Jun 2016, 1 commit
  15. 17 Jun 2016, 2 commits
  16. 16 Jun 2016, 1 commit
    • test: Postcopy · ea0c6d62
      Committed by Dr. David Alan Gilbert
      This is a postcopy test (x86 only) that actually runs the guest
      and checks the memory contents.
      
      The test runs from an x86 boot block with the hex embedded in the test;
      the source for this is:
      
      ...........
      
      .code16
      .org 0x7c00
      	.file	"fill.s"
      	.text
      	.globl	start
      	.type	start, @function
      start:             # at 0x7c00 ?
              cli
              lgdt gdtdesc
              mov $1,%eax
              mov %eax,%cr0  # Protected mode enable
              data32 ljmp $8,$0x7c20
      
      .org 0x7c20
      .code32
              # A20 enable - not sure I actually need this
              inb $0x92,%al
              or  $2,%al
              outb %al, $0x92
      
              # set up DS for the whole of RAM (needed on KVM)
              mov $16,%eax
              mov %eax,%ds
      
              mov $65,%ax
              mov $0x3f8,%dx
              outb %al,%dx
      
              # bl keeps a counter so we limit the output speed
              mov $0, %bl
      mainloop:
              # Start from 1MB
              mov $(1024*1024),%eax
      innerloop:
              incb (%eax)
              add $4096,%eax
              cmp $(100*1024*1024),%eax
              jl innerloop
      
              inc %bl
              jnz mainloop
      
              mov $66,%ax
              mov $0x3f8,%dx
              outb %al,%dx
      
      	jmp mainloop
      
              # GDT magic from old (GPLv2)  Grub startup.S
              .p2align        2       /* force 4-byte alignment */
      gdt:
              .word   0, 0
              .byte   0, 0, 0, 0
      
              /* -- code segment --
               * base = 0x00000000, limit = 0xFFFFF (4 KiB Granularity), present
               * type = 32bit code execute/read, DPL = 0
               */
              .word   0xFFFF, 0
              .byte   0, 0x9A, 0xCF, 0
      
              /* -- data segment --
               * base = 0x00000000, limit 0xFFFFF (4 KiB Granularity), present
               * type = 32 bit data read/write, DPL = 0
               */
              .word   0xFFFF, 0
              .byte   0, 0x92, 0xCF, 0
      
      gdtdesc:
              .word   0x27                    /* limit */
              .long   gdt                     /* addr */
      
      /* I'm a bootable disk */
      .org 0x7dfe
              .byte 0x55
              .byte 0xAA
      
      ...........
      
      and that can be assembled by the following magic:
          as --32 -march=i486 fill.s -o fill.o
          objcopy -O binary fill.o fill.boot
          dd if=fill.boot of=bootsect bs=256 count=2 skip=124
          xxd -i bootsect
      Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
      Reviewed-by: Marcel Apfelbaum <marcel@redhat.com>
      Message-id: 1465816605-29488-5-git-send-email-dgilbert@redhat.com
      Message-Id: <1465816605-29488-5-git-send-email-dgilbert@redhat.com>
      Signed-off-by: Amit Shah <amit.shah@redhat.com>
      ea0c6d62
  17. 12 Jun 2016, 4 commits
    • qht: add test-qht-par to invoke qht-bench from 'check' target · 896a9ee9
      Committed by Emilio G. Cota
      Signed-off-by: Emilio G. Cota <cota@braap.org>
      Message-Id: <1465412133-3029-14-git-send-email-cota@braap.org>
      Signed-off-by: Richard Henderson <rth@twiddle.net>
      896a9ee9
    • qht: add qht-bench, a performance benchmark · 515864a0
      Committed by Emilio G. Cota
      This serves as a performance benchmark as well as a stress test
      for QHT. We can tweak quite a number of things, including the
      number of resize threads and how frequently resizes are triggered.
      
      A performance comparison of QHT vs CLHT[1] and ck_hs[2] using
      this same benchmark program can be found here:
        http://imgur.com/a/0Bms4
      
      The tests are run on a 64-core AMD Opteron 6376, pinning threads
      to cores favoring same-socket cores. For each run, qht-bench is
      invoked with:
        $ tests/qht-bench -d $duration -n $n -u $u -g $range
      , where $duration is in seconds, $n is the number of threads,
      $u is the update rate (0.0 to 100.0), and $range is the number
      of keys.
      
      Note that ck_hs's performance drops significantly as writes go
      up, since it requires an external lock (I used a ck_spinlock)
      around every write.
      
      Also, note that CLHT, instead of using a seqlock, relies on an
      allocator that does not ever return the same address during the
      same read-critical section. This gives it a slight performance
      advantage over QHT on read-heavy workloads, since the seqlock
      writes aren't there.
      
      [1] CLHT: https://github.com/LPD-EPFL/CLHT
                https://infoscience.epfl.ch/record/207109/files/ascy_asplos15.pdf
      
      [2] ck_hs: http://concurrencykit.org/
                 http://backtrace.io/blog/blog/2015/03/13/workload-specialization/
      
      A few of those plots are shown in text here, since that site
      might not be online forever. Throughput is on Mops/s on the Y axis.
      
                                   200K keys, 0 % updates
      
        450 ++--+------+------+-------+-------+-------+-------+------+-------+--++
            |   +      +      +       +       +       +       +      +      +N+  |
        400 ++                                                           ---+E+ ++
            |                                                       +++----      |
        350 ++          9 ++------+------++                       --+E+    -+H+ ++
            |             |      +H+-     |                 -+N+----   ---- +++  |
        300 ++          8 ++     +E+     ++             -----+E+  --+H+         ++
            |             |      +++      |         -+N+-----+H+--               |
        250 ++          7 ++------+------++  +++-----+E+----                    ++
        200 ++                    1         -+E+-----+H+                        ++
            |                           ----                     qht +-E--+      |
        150 ++                      -+E+                        clht +-H--+     ++
            |                   ----                              ck +-N--+      |
        100 ++               +E+                                                ++
            |            ----                                                    |
         50 ++       -+E+                                                       ++
            |   +E+E+  +      +       +       +       +       +      +       +   |
          0 ++--E------+------+-------+-------+-------+-------+------+-------+--++
                1      8      16      24      32      40      48     56      64
                                      Number of threads
      
                                   200K keys, 1 % updates
      
        350 ++--+------+------+-------+-------+-------+-------+------+-------+--++
            |   +      +      +       +       +       +       +      +     -+E+  |
        300 ++                                                         -----+H+ ++
            |                                                       +E+--        |
            |           9 ++------+------++                  +++----             |
        250 ++            |      +E+   -- |                 -+E+                ++
            |           8 ++         --  ++             ----                     |
        200 ++            |      +++-     |  +++  ---+E+                        ++
            |           7 ++------N------++ -+E+--               qht +-E--+      |
            |                     1  +++----                    clht +-H--+      |
        150 ++                      -+E+                          ck +-N--+     ++
            |                   ----                                             |
        100 ++               +E+                                                ++
            |            ----                                                    |
            |        -+E+                                                        |
         50 ++    +H+-+N+----+N+-----+N+------                                  ++
            |   +E+E+  +      +       +      +N+-----+N+-----+N+----+N+-----+N+  |
          0 ++--E------+------+-------+-------+-------+-------+------+-------+--++
                1      8      16      24      32      40      48     56      64
                                      Number of threads
      
                                   200K keys, 20 % updates
      
        300 ++--+------+------+-------+-------+-------+-------+------+-------+--++
            |   +      +      +       +       +       +       +      +       +   |
            |                                                              -+H+  |
        250 ++                                                         ----     ++
            |           9 ++------+------++                       --+H+  ---+E+  |
            |           8 ++     +H+--   ++                 -+H+----+E+--        |
        200 ++            |      +E+    --|             -----+E+--  +++         ++
            |           7 ++      + ---- ++       ---+H+---- +++ qht +-E--+      |
        150 ++          6 ++------N------++ -+H+-----+E+        clht +-H--+     ++
            |                     1     -----+E+--                ck +-N--+      |
            |                       -+H+----                                     |
        100 ++                  -----+E+                                        ++
            |                +E+--                                               |
            |            ----+++                                                 |
         50 ++       -+E+                                                       ++
            |     +E+ +++                                                        |
            |   +E+N+-+N+-----+       +       +       +       +      +       +   |
          0 ++--E------+------N-------N-------N-------N-------N------N-------N--++
                1      8      16      24      32      40      48     56      64
                                      Number of threads
      
                                  200K keys, 100 % updates       qht +-E--+
                                                                clht +-H--+
        160 ++--+------+------+-------+-------+-------+-------+---ck-+-N-----+--++
            |   +      +      +       +       +       +       +      +   ----H   |
        140 ++                                                      +H+--  -+E+ ++
            |                                                +++----   ----      |
        120 ++          8 ++------+------++                 -+H+    +E+         ++
            |           7 ++     +H+---- ++             ---- +++----             |
        100 ++            |      +E+      |  +++  ---+H+    -+E+                ++
            |           6 ++     +++     ++ -+H+--   +++----                     |
         80 ++          5 ++------N----------+E+-----+E+                        ++
            |                     1 -+H+---- +++                                 |
            |                   -----+E+                                         |
         60 ++               +H+---- +++                                        ++
            |            ----+E+                                                 |
         40 ++        +H+----                                                   ++
            |       --+E+                                                        |
         20 ++    +E+                                                           ++
            |  +EE+    +      +       +       +       +       +      +       +   |
          0 ++--+N-N---N------N-------N-------N-------N-------N------N-------N--++
                1      8      16      24      32      40      48     56      64
                                      Number of threads
      Signed-off-by: Emilio G. Cota <cota@braap.org>
      Message-Id: <1465412133-3029-13-git-send-email-cota@braap.org>
      Signed-off-by: Richard Henderson <rth@twiddle.net>
      515864a0
    • qht: add test program · 1a95404f
      Committed by Emilio G. Cota
      Acked-by: Sergey Fedorov <sergey.fedorov@linaro.org>
      Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
      Reviewed-by: Richard Henderson <rth@twiddle.net>
      Signed-off-by: Emilio G. Cota <cota@braap.org>
      Message-Id: <1465412133-3029-12-git-send-email-cota@braap.org>
      Signed-off-by: Richard Henderson <rth@twiddle.net>
      1a95404f
    • qdist: add test program · ff9249b7
      Committed by Emilio G. Cota
      Acked-by: Sergey Fedorov <sergey.fedorov@linaro.org>
      Reviewed-by: Richard Henderson <rth@twiddle.net>
      Signed-off-by: Emilio G. Cota <cota@braap.org>
      Message-Id: <1465412133-3029-10-git-send-email-cota@braap.org>
      Signed-off-by: Richard Henderson <rth@twiddle.net>
      ff9249b7
  18. 07 Jun 2016, 1 commit
  19. 02 Jun 2016, 2 commits
  20. 26 May 2016, 3 commits
  21. 23 May 2016, 1 commit
  22. 12 May 2016, 1 commit
    • tests: Add check-qnull · 7d7a337e
      Committed by Eric Blake
      Add a new test, for checking reference counting of qnull(). As
      part of the new file, move a previous reference counting change
      added in commit a8615640 to a more logical place.
      
      Note that while most of the check-q*.c leave visitor stuff to
      the test-qmp-*-visitor.c, in this case we actually want the
      visitor tests in our new file because we are validating the
      reference count of qnull_, which is an internal detail that
      test-qmp-*-visitor should not be peeking into (or put another
      way, qnull() is the only special case where we don't have
      independent allocation of a QObject, so none of the other
      visitor tests require the layering violation present in this
      test).
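
      A rough sketch of the kind of check the new test performs (assuming the
      qnull_ singleton and its refcnt member mentioned above; not the exact
      test code):

        QObject *obj = qnull();

        /* qnull() hands back the one static QNull instance, so the count is
         * the static reference plus the one we just took. */
        g_assert(qnull_.refcnt == 2);
        qobject_decref(obj);
        g_assert(qnull_.refcnt == 1);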
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Message-Id: <1461879932-9020-14-git-send-email-eblake@redhat.com>
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
      7d7a337e
  23. 20 Apr 2016, 1 commit
    • qemu-ga: do not run qga test when guest agent disabled · fb91f30b
      Committed by Yang Hongyang
      When configured with --disable-guest-agent, make check will fail with:
      ERROR:tests/test-qga.c:74:fixture_setup: assertion failed (error == NULL):
       Failed to execute child process "/home/xx/qemu/qemu-ga" (No such file or
      directory) (g-exec-error-quark, 8)
      make: *** [check-tests/test-qga] Error 1
      
      This check was commented out by bab47d9a. I think that was by
      mistake, because the commit message of that commit didn't mention
      this change.
      Signed-off-by: Yang Hongyang <hongyang.yang@easystack.cn>
      Cc: Gerd Hoffmann <kraxel@redhat.com>
      Cc: Michael S. Tsirkin <mst@redhat.com>
      Cc: Michael Roth <mdroth@linux.vnet.ibm.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
      Cc: qemu-stable@nongnu.org
      fb91f30b
  24. 08 Apr 2016, 1 commit
    • Sort the fw_cfg file list · bab47d9a
      Committed by Gerd Hoffmann
      Entries are inserted in filename order instead of being
      appended to the end when sorting is enabled.
      
      This will avoid any future issues of moving the file creation
      around; it doesn't matter what order they are created in now,
      they will always be in filename order.
      Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
      
      Added machine type handling for compatibility.  This was
      a fairly complex change; it preserves the order of fw_cfg
      for older versions no matter what order the firmware files
      actually come in.  A list is kept of the correct legacy order
      and the entries will be inserted based upon their order in
      the list.  Except that some entries are ordered (in a specific
      area of the list) based upon what order they appear on the
      command line.  Special handling is added for those entries.
      Signed-off-by: Corey Minyard <cminyard@mvista.com>
      Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      bab47d9a
  25. 05 Apr 2016, 1 commit
  26. 30 Mar 2016, 2 commits
    • tests/test-filter-redirector: Add unit test for filter-redirector · 9fd3c5d5
      Committed by Zhang Chen
      In this unit test, we will test the filter redirector function.
      
      Case 1, tx traffic flow:
      
      qemu side              | test side
                             |
      +---------+            |  +-------+
      | backend <---------------+ sock0 |
      +----+----+            |  +-------+
           |                 |
      +----v----+  +-------+ |
      |  rd0    +->+chardev| |
      +---------+  +---+---+ |
                       |     |
      +---------+      |     |
      |  rd1    <------+     |
      +----+----+            |
           |                 |
      +----v----+            |  +-------+
      |  rd2    +--------------->sock1  |
      +---------+            |  +-------+
                             +
      
      a. we (sock0) inject a packet into the qemu socket backend
      b. the backend passes the packet to filter redirector0 (rd0)
      c. rd0 redirects the packet to its out_dev (chardev), which is connected to
      filter redirector1's (rd1) in_dev
      d. rd1 reads this packet from in_dev and passes it to the next filter, redirector2 (rd2)
      e. rd2 redirects the packet to rd2's out_dev, which is connected to an open socket (sock1)
      f. we read the packet from sock1 and compare it to what we injected
      
      Start qemu with:
      
      "-netdev socket,id=qtest-bn0,fd=%d "
      "-device rtl8139,netdev=qtest-bn0,id=qtest-e0 "
      "-chardev socket,id=redirector0,path=%s,server,nowait "
      "-chardev socket,id=redirector1,path=%s,server,nowait "
      "-chardev socket,id=redirector2,path=%s,nowait "
      "-object filter-redirector,id=qtest-f0,netdev=qtest-bn0,"
      "queue=tx,outdev=redirector0 "
      "-object filter-redirector,id=qtest-f1,netdev=qtest-bn0,"
      "queue=tx,indev=redirector2 "
      "-object filter-redirector,id=qtest-f2,netdev=qtest-bn0,"
      "queue=tx,outdev=redirector1 "
      
      --------------------------------------
      Case 2, rx traffic flow
      qemu side              | test side
                             |
      +---------+            |  +-------+
      | backend +---------------> sock1 |
      +----^----+            |  +-------+
           |                 |
      +----+----+  +-------+ |
      |  rd0    +<-+chardev| |
      +---------+  +---+---+ |
                       ^     |
      +---------+      |     |
      |  rd1    +------+     |
      +----^----+            |
           |                 |
      +----+----+            |  +-------+
      |  rd2    <---------------+sock0  |
      +---------+            |  +-------+
      
      a. we (sock0) insert a packet into filter redirector2 (rd2)
      b. rd2 passes the packet to filter redirector1 (rd1)
      c. rd1 redirects the packet to its out_dev (chardev), which is connected to
         filter redirector0's (rd0) in_dev
      d. rd0 reads this packet from in_dev and passes it to the qemu backend, which is
         connected to an open socket (sock1)
      e. we read the packet from sock1 and compare it to what we injected
      
      Start qemu with:
      
      "-netdev socket,id=qtest-bn0,fd=%d "
      "-device rtl8139,netdev=qtest-bn0,id=qtest-e0 "
      "-chardev socket,id=redirector0,path=%s,server,nowait "
      "-chardev socket,id=redirector1,path=%s,server,nowait "
      "-chardev socket,id=redirector2,path=%s,nowait "
      "-object filter-redirector,id=qtest-f0,netdev=qtest-bn0,"
      "queue=rx,outdev=redirector0 "
      "-object filter-redirector,id=qtest-f1,netdev=qtest-bn0,"
      "queue=rx,indev=redirector2 "
      "-object filter-redirector,id=qtest-f2,netdev=qtest-bn0,"
      "queue=rx,outdev=redirector1 "
      Signed-off-by: Zhang Chen <zhangchen.fnst@cn.fujitsu.com>
      Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
      Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
      Signed-off-by: Jason Wang <jasowang@redhat.com>
      9fd3c5d5
    • tests/test-filter-mirror:add filter-mirror unit test · 06809ecf
      Committed by Zhang Chen
      In this unit test we will test the mirror function.
      
      start qemu with:
            -netdev socket,id=qtest-bn0,fd=sockfd
            -device e1000,netdev=qtest-bn0,id=qtest-e0
            -chardev socket,id=mirror0,path=/tmp/filter-mirror-test.sock,server,nowait
            -object filter-mirror,id=qtest-f0,netdev=qtest-bn0,queue=tx,outdev=mirror0
      
      We inject a packet into the netdev socket (id = qtest-bn0);
      filter-mirror will copy and mirror the packet to mirror0.
      We then read the packet from mirror0 and compare it to what we injected.
      Signed-off-by: Zhang Chen <zhangchen.fnst@cn.fujitsu.com>
      Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
      Signed-off-by: Jason Wang <jasowang@redhat.com>
      06809ecf
  27. 23 Mar 2016, 1 commit
    • qemu-log: new option -dfilter to limit output · 3514552e
      Committed by Alex Bennée
      When debugging big programs or system emulation, sometimes you want
      the verbosity of cpu,exec et al. but don't want to generate lots of logs
      for unneeded stuff. This patch adds a new option -dfilter which allows
      you to specify interesting address ranges in the form:
      
        -dfilter 0x8000..0x8fff,0xffffffc000080000+0x200,...
      
      Then logging code can use the new qemu_log_in_addr_range() function to
      decide if it will output logging information for the given range.
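
      A call site would then look roughly like this (a minimal sketch; 'pc'
      stands for whatever guest address the logging site is reporting on):

        /* Only produce verbose output when the address falls inside one of
         * the ranges given via -dfilter. */
        if (qemu_log_in_addr_range(pc)) {
            qemu_log("executing block at " TARGET_FMT_lx "\n", pc);
        }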
      Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
      Message-Id: <1458052224-9316-7-git-send-email-alex.bennee@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      3514552e
  28. 22 Mar 2016, 1 commit
  29. 17 Mar 2016, 1 commit
    • crypto: add block encryption framework · 7d969014
      Committed by Daniel P. Berrange
      Add a generic framework for supporting different block encryption
      formats. When a QCryptoBlock object is instantiated, it reads
      the encryption header and extracts the encryption keys. It is
      then possible to call methods to encrypt/decrypt data buffers.
      
      There is also a mode whereby it will create/initialize a new
      encryption header on a previously unformatted volume.
      
      The initial framework comes with support for the legacy QCow
      AES based encryption. This enables code in the QCow driver to
      be consolidated later.
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
      7d969014