1. 18 Feb 2019 · 3 commits
  2. 05 Feb 2019 · 1 commit
    • monitor: do not use QTAILQ_FOREACH_SAFE across critical sections · 82e870ba
      Committed by Paolo Bonzini
      monitor_qmp_requests_pop_any_with_lock cannot modify the monitor list
      concurrently with monitor_cleanup, since the dispatch bottom half
      runs in the main thread, but anyway it is a bit ugly to keep
      "next" live across critical sections of monitor_lock and Coverity
      complains (CID 1397072).
      
      Replace QTAILQ_FOREACH_SAFE with a while loop and QTAILQ_FIRST;
      it is cleaner and more future-proof.
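
      To illustrate, a minimal sketch of the resulting shape (simplified;
      assumes QEMU's "qemu/queue.h" macros and qemu_mutex_* helpers, not
      the exact monitor.c code):

          qemu_mutex_lock(&monitor_lock);
          while ((mon = QTAILQ_FIRST(&mon_list)) != NULL) {
              /* unlink under the lock; no "next" pointer has to survive
               * the unlock/lock window below */
              QTAILQ_REMOVE(&mon_list, mon, entry);
              qemu_mutex_unlock(&monitor_lock);

              monitor_flush(mon);
              monitor_data_destroy(mon);
              g_free(mon);

              qemu_mutex_lock(&monitor_lock);
          }
          qemu_mutex_unlock(&monitor_lock);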
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  3. 24 Jan 2019 · 1 commit
    • qapi: Eliminate indirection through qmp_event_get_func_emit() · a9529100
      Committed by Markus Armbruster
      The qapi_event_send_FOO() functions emit events like this:
      
          QMPEventFuncEmit emit;
      
          emit = qmp_event_get_func_emit();
          if (!emit) {
              return;
          }
      
          qmp = qmp_event_build_dict("FOO");
          [put event arguments into @qmp...]
      
          emit(QAPI_EVENT_FOO, qmp);
      
      The value of qmp_event_get_func_emit() depends only on the program:
      
      * In qemu-system-FOO, it's always monitor_qapi_event_queue.
      
      * In tests/test-qmp-event, it's always event_test_emit.
      
      * In all other programs, it's always null.
      
      This is exactly the kind of dependence the linker is supposed to
      resolve; we don't actually need an indirection.
      
      Note that things would fall apart if we linked more than one QAPI
      schema into a single program: each set of qapi_event_send_FOO() uses
      its own event enumeration, yet they share a single emit function.
      Which takes the event enumeration as an argument.  Which one if
      there's more than one?
      
      More seriously: how does this work even now?  qemu-system-FOO wants
      QAPIEvent, and passes a function taking that to
      qmp_event_set_func_emit().  test-qmp-event wants test_QAPIEvent, and
      passes a function taking that to qmp_event_set_func_emit().
      
      It works by type trickery, of course:
      
          typedef void (*QMPEventFuncEmit)(unsigned event, QDict *dict);
      
          void qmp_event_set_func_emit(QMPEventFuncEmit emit);
      
          QMPEventFuncEmit qmp_event_get_func_emit(void);
      
      We use unsigned instead of the enumeration type.  Relies on both
      enumerations boiling down to unsigned, which happens to be true for
      the compilers we use.
      
      Clean this up as follows:
      
      * Generate qapi_event_send_FOO() that call PREFIX_qapi_event_emit()
        instead of the value of qmp_event_get_func_emit() (see the sketch
        after this list).
      
      * Generate a prototype for PREFIX_qapi_event_emit() into
        qapi-events.h.
      
      * PREFIX_ is empty for qapi/qapi-schema.json, and test_ for
        tests/qapi-schema/qapi-schema-test.json.  It's qga_ for
        qga/qapi-schema.json, and doc-good- for
        tests/qapi-schema/doc-good.json, but those don't define any events.
      
      * Rename monitor_qapi_event_queue() to qapi_event_emit() instead of
        passing it to qmp_event_set_func_emit().  This takes care of
        qemu-system-FOO.
      
      * Rename event_test_emit() to test_qapi_event_emit() instead of
        passing it to qmp_event_set_func_emit().  This takes care of
        tests/test-qmp-event.
      
      * Add a qapi_event_emit() that does nothing to stubs/monitor.c.  This
        takes care of all other programs that link code emitting QMP events.
      
      * Drop qmp_event_set_func_emit(), qmp_event_get_func_emit().
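
      A sketch of the shape after the change, for the empty PREFIX_
      (illustrative only; the generated code differs in detail):

          /* qapi-events.h declares the hook each program must provide: */
          void qapi_event_emit(QAPIEvent event, QDict *qdict);

          /* Generated emitters call it directly; the linker picks the
           * right definition: the monitor's queueing function in
           * qemu-system-FOO, test_qapi_event_emit() in test-qmp-event,
           * and a do-nothing stub from stubs/monitor.c elsewhere. */
          void qapi_event_send_foo(void)
          {
              QDict *qmp = qmp_event_build_dict("FOO");

              /* put event arguments into @qmp ... */
              qapi_event_emit(QAPI_EVENT_FOO, qmp);
              qobject_unref(qmp);
          }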
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
      Message-Id: <20181218182234.28876-3-armbru@redhat.com>
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      [Commit message typos fixed]
  4. 11 Jan 2019 · 1 commit
    • qemu/queue.h: leave head structs anonymous unless necessary · b58deb34
      Committed by Paolo Bonzini
      Most list head structs need not be given a name.  In most cases the
      name is given just in case one is going to use QTAILQ_LAST, QTAILQ_PREV
      or reverse iteration, but this does not apply to lists of other kinds,
      and even for QTAILQ in practice this is only rarely needed.  In addition,
      we will soon reimplement those macros completely so that they do not
      need a name for the head struct.  So clean up everything, not giving a
      name except in the rare case where it is necessary.
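
      For illustration, a small sketch of the two forms (assumes
      "qemu/queue.h"; the struct names are made up):

          typedef struct Request Request;
          struct Request {
              QTAILQ_ENTRY(Request) entry;
          };

          /* Common case after this cleanup: the head struct stays
           * anonymous. */
          static QTAILQ_HEAD(, Request) pending =
              QTAILQ_HEAD_INITIALIZER(pending);

          /* Name kept only where something still needs it, e.g.
           * QTAILQ_LAST() or reverse iteration (at this point): */
          static QTAILQ_HEAD(ReqHead, Request) done =
              QTAILQ_HEAD_INITIALIZER(done);
          /* ... QTAILQ_LAST(&done, ReqHead) ... */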
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  5. 14 Dec 2018 · 3 commits
  6. 12 Dec 2018 · 7 commits
    • monitor: Remove "x-oob", offer capability "oob" unconditionally · 8258292e
      Committed by Peter Xu
      Out-of-band command execution was introduced in commit cf869d53.
      Unfortunately, we ran into a regression, and had to turn it into an
      experimental option for 2.12 (commit be933ffc).
      
        http://lists.gnu.org/archive/html/qemu-devel/2018-03/msg06231.html
      
      The regression has since been fixed (commit 951702f3 "monitor: bind
      dispatch bh to iohandler context").  A thorough re-review of OOB
      commands led to a few more issues, which have also been addressed.
      
      This patch partly reverts be933ffc (monitor: new parameter "x-oob"),
      and makes QMP monitors again offer capability "oob" whenever they can
      provide it, i.e. when the monitor's character device is capable of
      running in an I/O thread.
      
      Some trivial touch-up in the test code is required to make sure qmp-test
      won't break.
      Reviewed-by: Markus Armbruster <armbru@redhat.com>
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Message-Id: <20181009062718.1914-4-peterx@redhat.com>
      [Conflict with "monitor: check if chardev can switch gcontext for OOB"
      resolved, commit message updated]
      Signed-off-by: NMarkus Armbruster <armbru@redhat.com>
    • monitor: Suspend monitor instead of dropping commands · 9ab84470
      Committed by Peter Xu
      When a QMP client sends in-band commands more quickly than we can
      process them, we can either queue them without limit (QUEUE), drop
      commands when the queue is full (DROP), or suspend receiving commands
      when the queue is full (SUSPEND).  None of them is ideal:
      
      * QUEUE lets a misbehaving client make QEMU eat memory without bounds.
      Not such a hot idea.
      
      * With DROP, the client has to cope with dropped in-band commands.  To
      inform the client, we send a COMMAND_DROPPED event then.  The event is
      flawed by design in two ways: it's ambiguous (see commit d621cfe0),
      and it brings back the "eat memory without bounds" problem.
      
      * With SUSPEND, the client has to manage the flow of in-band commands to
      keep the monitor available for out-of-band commands.
      
      We currently DROP.  Switch to SUSPEND.
      
      Managing the flow of in-band commands to keep the monitor available for
      out-of-band commands isn't really hard: just count the number of
      "outstanding" in-band commands (commands sent minus replies received),
      and if it exceeds the limit, hold back additional ones until it drops
      below the limit again.
      
      Note that we need to be careful pairing the suspend with a resume, or
      else the monitor will hang, possibly forever.  Since we need to make
      sure that both
      
           (1) popping a request from the request queue, and
           (2) reading the length of the request queue
      
      happen in the same critical section, we let the pop function take the
      corresponding queue lock when there is a request, and the caller
      releases the lock.
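
      A minimal, self-contained sketch of that flow control (pthread-based,
      names invented for the example; the real code uses QEMU's own locks
      and request queue):

          #include <pthread.h>
          #include <stdbool.h>

          enum { QUEUE_LEN_MAX = 8 };

          static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;
          static int queue_len;      /* in-band commands not yet replied to */
          static bool suspended;     /* true: stop reading from the chardev */

          /* Called after queueing one more in-band command. */
          static void push_one(void)
          {
              pthread_mutex_lock(&queue_lock);
              if (++queue_len >= QUEUE_LEN_MAX) {
                  suspended = true;              /* monitor_suspend() */
              }
              pthread_mutex_unlock(&queue_lock);
          }

          /* Pop helper: returns with the lock still held when there was
           * work, so popping and reading the remaining length stay in one
           * critical section; the caller releases the lock. */
          static bool pop_one_locked(void)
          {
              pthread_mutex_lock(&queue_lock);
              if (queue_len == 0) {
                  pthread_mutex_unlock(&queue_lock);
                  return false;
              }
              queue_len--;
              return true;
          }

          static void dispatch_one(void)
          {
              if (!pop_one_locked()) {
                  return;
              }
              if (suspended && queue_len < QUEUE_LEN_MAX) {
                  suspended = false;             /* monitor_resume() */
              }
              pthread_mutex_unlock(&queue_lock);
              /* run the command outside the lock */
          }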
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Message-Id: <20181009062718.1914-2-peterx@redhat.com>
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
    • monitor: avoid potential dead-lock when cleaning up · 34f1f3e0
      Committed by Marc-André Lureau
      When a monitor is connected to a Spice chardev, the monitor cleanup
      can dead-lock:
      
       #0  0x00007f43446637fd in __lll_lock_wait () at /lib64/libpthread.so.0
       #1  0x00007f434465ccf4 in pthread_mutex_lock () at /lib64/libpthread.so.0
       #2  0x0000556dd79f22ba in qemu_mutex_lock_impl (mutex=0x556dd81c9220 <monitor_lock>, file=0x556dd7ae3648 "/home/elmarco/src/qq/monitor.c", line=645) at /home/elmarco/src/qq/util/qemu-thread-posix.c:66
       #3  0x0000556dd7431bd5 in monitor_qapi_event_queue (event=QAPI_EVENT_SPICE_DISCONNECTED, qdict=0x556dd9abc850, errp=0x7fffb7bbddd8) at /home/elmarco/src/qq/monitor.c:645
       #4  0x0000556dd79d476b in qapi_event_send_spice_disconnected (server=0x556dd98ee760, client=0x556ddaaa8560, errp=0x556dd82180d0 <error_abort>) at qapi/qapi-events-ui.c:149
       #5  0x0000556dd7870fc1 in channel_event (event=3, info=0x556ddad1b590) at /home/elmarco/src/qq/ui/spice-core.c:235
       #6  0x00007f434560a6bb in reds_handle_channel_event (reds=<optimized out>, event=3, info=0x556ddad1b590) at reds.c:316
       #7  0x00007f43455f393b in main_dispatcher_self_handle_channel_event (info=0x556ddad1b590, event=3, self=0x556dd9a7d8c0) at main-dispatcher.c:197
       #8  0x00007f43455f393b in main_dispatcher_channel_event (self=0x556dd9a7d8c0, event=event@entry=3, info=0x556ddad1b590) at main-dispatcher.c:197
       #9  0x00007f4345612833 in red_stream_push_channel_event (s=s@entry=0x556ddae2ef40, event=event@entry=3) at red-stream.c:414
       #10 0x00007f434561286b in red_stream_free (s=0x556ddae2ef40) at red-stream.c:388
       #11 0x00007f43455f9ddc in red_channel_client_finalize (object=0x556dd9bb21a0) at red-channel-client.c:347
       #12 0x00007f434b5f9fb9 in g_object_unref () at /lib64/libgobject-2.0.so.0
       #13 0x00007f43455fc212 in red_channel_client_push (rcc=0x556dd9bb21a0) at red-channel-client.c:1341
       #14 0x0000556dd76081ba in spice_port_set_fe_open (chr=0x556dd9925e20, fe_open=0) at /home/elmarco/src/qq/chardev/spice.c:241
       #15 0x0000556dd796d74a in qemu_chr_fe_set_open (be=0x556dd9a37c00, fe_open=0) at /home/elmarco/src/qq/chardev/char-fe.c:340
       #16 0x0000556dd796d4d9 in qemu_chr_fe_set_handlers (b=0x556dd9a37c00, fd_can_read=0x0, fd_read=0x0, fd_event=0x0, be_change=0x0, opaque=0x0, context=0x0, set_open=true) at /home/elmarco/src/qq/chardev/char-fe.c:280
       #17 0x0000556dd796d359 in qemu_chr_fe_deinit (b=0x556dd9a37c00, del=false) at /home/elmarco/src/qq/chardev/char-fe.c:233
       #18 0x0000556dd7432240 in monitor_data_destroy (mon=0x556dd9a37c00) at /home/elmarco/src/qq/monitor.c:786
       #19 0x0000556dd743b968 in monitor_cleanup () at /home/elmarco/src/qq/monitor.c:4683
       #20 0x0000556dd75ce776 in main (argc=3, argv=0x7fffb7bbe458, envp=0x7fffb7bbe478) at /home/elmarco/src/qq/vl.c:4660
      
      This happens because the Spice code tries to emit a "disconnected"
      signal on the monitors. Fix the dead-lock by releasing the monitor
      lock for flush/destroy.
      
      monitor_lock protects mon_list, monitor_qapi_event_state and
      monitor_destroyed. monitor_flush() and monitor_data_destroy() don't
      access any of those variables.
      
      monitor_cleanup()'s loop is safe because it uses
      QTAILQ_FOREACH_SAFE(), and no further monitor can be added after
      calling monitor_cleanup(), thanks to the monitor_destroyed check in
      monitor_list_append().
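
      Roughly, the cleanup loop then looks like this (simplified; the
      2019 commit 82e870ba further up replaces the QTAILQ_FOREACH_SAFE()
      with a while loop):

          qemu_mutex_lock(&monitor_lock);
          QTAILQ_FOREACH_SAFE(mon, &mon_list, entry, next) {
              QTAILQ_REMOVE(&mon_list, mon, entry);
              /* Drop monitor_lock: flushing/destroying may re-enter
               * monitor code, e.g. Spice emitting SPICE_DISCONNECTED. */
              qemu_mutex_unlock(&monitor_lock);
              monitor_flush(mon);
              monitor_data_destroy(mon);
              g_free(mon);
              qemu_mutex_lock(&monitor_lock);
          }
          qemu_mutex_unlock(&monitor_lock);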
      Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Reviewed-by: Markus Armbruster <armbru@redhat.com>
      Message-Id: <20181205203737.9011-8-marcandre.lureau@redhat.com>
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
    • monitor: prevent inserting new monitors after cleanup · 8dac00bb
      Committed by Marc-André Lureau
      monitor_cleanup() is one of the last things main() calls before it
      returns.  In the following patch, monitor_cleanup() will release the
      monitor_lock during flushing. There may be pending commands to insert
      new monitors, which would modify the mon_list during iteration, and
      the clean-up could thus miss those new insertions.
      
      Add a monitor_destroyed global to check whether monitor_cleanup() has
      already been called. If it has, don't insert the new monitor in the
      list, but free it instead. A cleaner solution would involve the main
      thread telling other threads to terminate, waiting for their
      termination.
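
      A hedged sketch of that guard (simplified from the description
      above, not the verbatim patch):

          static bool monitor_destroyed;

          static void monitor_list_append(Monitor *mon)
          {
              qemu_mutex_lock(&monitor_lock);
              if (!monitor_destroyed) {
                  QTAILQ_INSERT_HEAD(&mon_list, mon, entry);
                  mon = NULL;            /* ownership passed to the list */
              }
              qemu_mutex_unlock(&monitor_lock);

              if (mon) {                 /* cleanup already ran */
                  monitor_data_destroy(mon);
                  g_free(mon);
              }
          }

          void monitor_cleanup(void)
          {
              qemu_mutex_lock(&monitor_lock);
              monitor_destroyed = true;
              qemu_mutex_unlock(&monitor_lock);
              /* ... flush and destroy the monitors already listed ... */
          }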
      Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Reviewed-by: Markus Armbruster <armbru@redhat.com>
      Message-Id: <20181205203737.9011-7-marcandre.lureau@redhat.com>
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
    • monitor: check if chardev can switch gcontext for OOB · a9a0d9b9
      Committed by Marc-André Lureau
      Not all backends are able to switch gcontext. Those backends cannot
      drive an OOB monitor (the monitor would then be blocking on the main
      thread).
      
      For example, ringbuf, spice, or more esoteric input chardevs like
      braille or MUX.
      
      We already forbid MUX because not all frontends are ready to run outside
      the main loop.  Replace that with a context-switching feature check.
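
      The check then becomes something along these lines (sketch; the
      feature-flag name, assumed here to be QEMU_CHAR_FEATURE_GCONTEXT,
      and the error wording may differ from the actual patch):

          if (use_oob &&
              !qemu_chr_has_feature(chr, QEMU_CHAR_FEATURE_GCONTEXT)) {
              error_setg(errp, "the chardev cannot switch GMainContext "
                         "and so cannot drive an out-of-band monitor");
              return;
          }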
      Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Message-Id: <20181205203737.9011-5-marcandre.lureau@redhat.com>
      [Error condition simplified, commit message adjusted accordingly]
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
    • monitor: accept chardev input from iothread · ef12a703
      Committed by Marc-André Lureau
      Chardev backends may not handle I/O events from concurrent threads
      safely (only the write path is thread-safe since commit 9005b2a7).
      Better to wake up the chardev from the monitor I/O thread if it's
      being used as the chardev context.
      
      Unify code paths by using a BH in all cases.
      
      Drop the now redundant aio_notify() call.
      
      Clean up control flow not to rely on mon->use_io_thread implying
      monitor_is_qmp(mon).
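
      A sketch of the unified wake-up path (simplified; helper names
      follow monitor.c of that era but are not guaranteed verbatim):

          static void monitor_accept_input(void *opaque)
          {
              Monitor *mon = opaque;

              /* Runs in the chardev's own context, so this is safe. */
              qemu_chr_fe_accept_input(&mon->chr);
          }

          static void monitor_kick(Monitor *mon)
          {
              AioContext *ctx = mon->use_io_thread
                  ? iothread_get_aio_context(mon_iothread) /* OOB QMP */
                  : iohandler_get_aio_context();           /* main loop */

              aio_bh_schedule_oneshot(ctx, monitor_accept_input, mon);
          }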
      Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Reviewed-by: Peter Xu <peterx@redhat.com>
      Reviewed-by: Markus Armbruster <armbru@redhat.com>
      Message-Id: <20181205203737.9011-3-marcandre.lureau@redhat.com>
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
    • monitor: inline ambiguous helper functions · 88e40e43
      Committed by Marc-André Lureau
      The functions were not named after "mon_iothread", nor did they follow
      the AIO vs GMainContext distinction. Inline them instead.
      Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Reviewed-by: Peter Xu <peterx@redhat.com>
      Reviewed-by: Markus Armbruster <armbru@redhat.com>
      Message-Id: <20181205203737.9011-2-marcandre.lureau@redhat.com>
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
  7. 07 Nov 2018 · 2 commits
  8. 19 Oct 2018 · 1 commit
    • tcg: distribute tcg_time into TCG contexts · 72fd2efb
      Committed by Emilio G. Cota
      When we implemented per-vCPU TCG contexts, we forgot to also
      distribute the tcg_time counter, which has remained as a global
      accessed without any serialization, leading to potentially missed
      counts.
      
      Fix it by distributing the field over the TCG contexts, embedding
      it into TCGProfile with a field called "cpu_exec_time", which is more
      descriptive than "tcg_time". Add a function to query this value
      directly, and for completeness, fill in the field in
      tcg_profile_snapshot, even though its callers do not use it.
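
      The query helper then boils down to summing the per-context fields,
      roughly (sketch; assumes QEMU's atomic_read() of that period and the
      tcg_ctxs[] array from tcg.c):

          int64_t tcg_cpu_exec_time(void)
          {
              unsigned int n_ctxs = atomic_read(&n_tcg_ctxs);
              int64_t ret = 0;

              for (unsigned int i = 0; i < n_ctxs; i++) {
                  const TCGContext *s = atomic_read(&tcg_ctxs[i]);

                  ret += atomic_read(&s->prof.cpu_exec_time);
              }
              return ret;
          }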
      Signed-off-by: Emilio G. Cota <cota@braap.org>
      Message-Id: <20181010144853.13005-5-cota@braap.org>
      Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  9. 05 Oct 2018 · 1 commit
  10. 25 Sep 2018 · 1 commit
  11. 30 Aug 2018 · 3 commits
  12. 29 Aug 2018 · 3 commits
  13. 25 Aug 2018 · 3 commits
    • json: Clean up headers · 86cdf9ec
      Committed by Markus Armbruster
      The JSON parser has three public headers, json-lexer.h, json-parser.h,
      json-streamer.h.  They all contain stuff that is of no interest
      outside qobject/json-*.c.
      
      Collect the public interface in include/qapi/qmp/json-parser.h, and
      everything else in qobject/json-parser-int.h.
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Message-Id: <20180823164025.12553-54-armbru@redhat.com>
    • json: Pass lexical errors and limit violations to callback · 84a56f38
      Committed by Markus Armbruster
      The callback to consume JSON values takes QObject *json, Error *err.
      If both are null, the callback is supposed to make up an error by
      itself.  This sucks.
      
      qjson.c's consume_json() neglects to do so, which makes
      qobject_from_json() return null without setting an error.  I consider
      that a bug.
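
      For reference, a sketch of a callback following the old contract
      (simplified; the leading opaque parameter is an assumption, and the
      real callbacks live in monitor.c, qga/main.c, qobject/qjson.c and
      tests/libqtest.c):

          static void consume_json(void *opaque, QObject *json, Error *err)
          {
              if (!json && !err) {
                  /* Old contract: the callback must invent an error
                   * itself; qjson.c's consume_json() forgot to, hence
                   * the silent null result. */
                  error_setg(&err, "JSON parse error");
              }
              /* ... store json, or report err ... */
          }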
      
      The culprit is json_message_process_token(): it passes two null
      pointers when it runs into a lexical error or a limit violation.  Fix
      it to pass a proper Error object then.  Update the callbacks:
      
      * monitor.c's handle_qmp_command(): the code to make up an error is
        now dead, drop it.
      
      * qga/main.c's process_event(): lumps the "both null" case together
        with the "not a JSON object" case.  The former is now gone.  The
        error message "Invalid JSON syntax" is misleading for the latter.
        Improve it to "Input must be a JSON object".
      
      * qobject/qjson.c's consume_json(): no update; check-qjson
        demonstrates qobject_from_json() now sets an error on lexical
        errors, but still doesn't on some other errors.
      
      * tests/libqtest.c's qmp_response(): the Error object is now reliable,
        so use it to improve the error message.
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Message-Id: <20180823164025.12553-40-armbru@redhat.com>
    • json: Redesign the callback to consume JSON values · 62815d85
      Committed by Markus Armbruster
      The classical way to structure parser and lexer is to have the client
      call the parser to get an abstract syntax tree, the parser call the
      lexer to get the next token, and the lexer call some function to get
      input characters.
      
      Another way to structure them would be to have the client feed
      characters to the lexer, the lexer feed tokens to the parser, and the
      parser feed abstract syntax trees to some callback provided by the
      client.  This way is more easily integrated into an event loop that
      dispatches input characters as they arrive.
      
      Our JSON parser is kind of between the two.  The lexer feeds tokens to
      a "streamer" instead of a real parser.  The streamer accumulates
      tokens until it got the sequence of tokens that comprise a single JSON
      value (it counts curly braces and square brackets to decide).  It
      feeds those token sequences to a callback provided by the client.  The
      callback passes each token sequence to the parser, and gets back an
      abstract syntax tree.
      
      I figure it was done that way to make a straightforward recursive
      descent parser possible.  "Get next token" becomes "pop the first
      token off the token sequence".  Drawback: we need to store a complete
      token sequence.  Each token eats 13 + input characters + malloc
      overhead bytes.
      
      Observations:
      
      1. This is not the only way to use recursive descent.  If we replaced
         "get next token" by a coroutine yield, we could do without a
         streamer.
      
      2. The lexer reports errors by passing a JSON_ERROR token to the
         streamer.  This communicates the offending input characters and
         their location, but no more.
      
      3. The streamer reports errors by passing a null token sequence to the
         callback.  The (already poor) lexical error information is thrown
         away.
      
      4. Having the callback receive a token sequence duplicates the code to
         convert token sequence to abstract syntax tree in every callback.
      
      5. Known bug: the streamer silently drops incomplete token sequences.
      
      This commit rectifies 4. by lifting the call of the parser from the
      callbacks into the streamer.  Later commits will address 3. and 5.
      
      The lifting removes a bug from qjson.c's parse_json(): it passed a
      pointer to a non-null Error * in certain cases, as demonstrated by
      check-qjson.c.
      
      json_parser_parse() is now unused.  It's a stupid wrapper around
      json_parser_parse_err().  Drop it, and rename json_parser_parse_err()
      to json_parser_parse().
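
      In callback terms, the change looks roughly like this (sketch;
      exact parameter lists are simplified, and the first parameter of the
      new-style callback evolved over the series):

          /* Before: every client re-ran the parser on the token queue. */
          static void handle_tokens(JSONMessageParser *parser, GQueue *tokens)
          {
              Error *err = NULL;
              QObject *obj = json_parser_parse_err(tokens, NULL, &err);

              /* ... use obj, or report err ... */
          }

          /* After: the streamer runs the parser itself and hands the
           * callback a finished value (or an Error). */
          static void handle_value(void *opaque, QObject *obj, Error *err)
          {
              /* ... use obj, or report err ... */
          }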
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Message-Id: <20180823164025.12553-35-armbru@redhat.com>
  14. 24 Aug 2018 · 1 commit
    • hmp-commands-info: add sync-profile · 97bfafe2
      Committed by Emilio G. Cota
      The command introduced here is just for developers. This means that:
      
      - the info displayed and the output format could change in the future
      - the command is only meant to be used from HMP, not from QMP
      
      Sample output:
      
      (qemu) sync-profile
      sync-profile is off
      (qemu) info sync-profile
      Type               Object  Call site  Wait Time (s)         Count  Average (us)
      -------------------------------------------------------------------------------
      -------------------------------------------------------------------------------
      (qemu) sync-profile on
      (qemu) sync-profile
      sync-profile is on
      (qemu) info sync-profile 15
      Type               Object  Call site                 Wait Time (s)         Count  Average (us)
      ----------------------------------------------------------------------------------------------
      condvar    0x55a01813ced0  cpus.c:1165                    91.38235          2842      32154.24
      BQL mutex  0x55a0171b7140  cpus.c:1434                    12.56490          5787       2171.23
      BQL mutex  0x55a0171b7140  accel/tcg/cpu-exec.c:432        7.75846          2844       2728.01
      BQL mutex  0x55a0171b7140  accel/tcg/cputlb.c:870          5.09889          2884       1767.99
      BQL mutex  0x55a0171b7140  accel/tcg/cpu-exec.c:529        3.46140          3254       1063.74
      BQL mutex  0x55a0171b7140  accel/tcg/cputlb.c:804          0.76333          8655         88.20
      BQL mutex  0x55a0171b7140  cpus.c:1466                     0.60893          2941        207.05
      BQL mutex  0x55a0171b7140  util/main-loop.c:236            0.00894          6425          1.39
      mutex      [           3]  util/qemu-timer.c:520           0.00342         50611          0.07
      mutex      [           2]  util/qemu-timer.c:426           0.00254         31336          0.08
      mutex      [           3]  util/qemu-timer.c:234           0.00107         19275          0.06
      mutex      0x55a0171d9960  vl.c:763                        0.00043          6425          0.07
      mutex      0x55a0180d1bb0  monitor.c:458                   0.00015          1603          0.09
      mutex      0x55a0180e4c78  chardev/char.c:109              0.00002           217          0.08
      mutex      0x55a0180d1bb0  monitor.c:448                   0.00001           162          0.08
      ----------------------------------------------------------------------------------------------
      (qemu) info sync-profile -m 15
      Type               Object  Call site                 Wait Time (s)         Count  Average (us)
      ----------------------------------------------------------------------------------------------
      condvar    0x55a01813ced0  cpus.c:1165                    95.11196          3051      31174.03
      BQL mutex  0x55a0171b7140  accel/tcg/cpu-exec.c:432        7.92108          3052       2595.37
      BQL mutex  0x55a0171b7140  cpus.c:1434                    13.38253          6210       2155.00
      BQL mutex  0x55a0171b7140  accel/tcg/cputlb.c:870          5.09901          3093       1648.57
      BQL mutex  0x55a0171b7140  accel/tcg/cpu-exec.c:529        4.21123          3468       1214.31
      BQL mutex  0x55a0171b7140  cpus.c:1466                     0.60895          3156        192.95
      BQL mutex  0x55a0171b7140  accel/tcg/cputlb.c:804          0.76337          9282         82.24
      BQL mutex  0x55a0171b7140  util/main-loop.c:236            0.00944          6889          1.37
      mutex      0x55a01813ce80  tcg/tcg.c:397                   0.00000            24          0.15
      mutex      0x55a0180d1bb0  monitor.c:458                   0.00018          1922          0.09
      mutex      [           2]  util/qemu-timer.c:426           0.00266         32710          0.08
      mutex      0x55a0180e4c78  chardev/char.c:109              0.00002           260          0.08
      mutex      0x55a0180d1bb0  monitor.c:448                   0.00001           187          0.08
      mutex      0x55a0171d9960  vl.c:763                        0.00047          6889          0.07
      mutex      [           3]  util/qemu-timer.c:520           0.00362         53377          0.07
      ----------------------------------------------------------------------------------------------
      (qemu) info sync-profile -m -n 15
      Type               Object  Call site                 Wait Time (s)         Count  Average (us)
      ----------------------------------------------------------------------------------------------
      condvar    0x55a01813ced0  cpus.c:1165                   101.39331          3398      29839.12
      BQL mutex  0x55a0171b7140  accel/tcg/cpu-exec.c:432        7.92112          3399       2330.43
      BQL mutex  0x55a0171b7140  cpus.c:1434                    14.28280          6922       2063.39
      BQL mutex  0x55a0171b7140  accel/tcg/cputlb.c:870          5.77505          3445       1676.36
      BQL mutex  0x55a0171b7140  accel/tcg/cpu-exec.c:529        5.66139          3883       1457.99
      BQL mutex  0x55a0171b7140  cpus.c:1466                     0.60901          3519        173.06
      BQL mutex  0x55a0171b7140  accel/tcg/cputlb.c:804          0.76351         10338         73.85
      BQL mutex  0x55a0171b7140  util/main-loop.c:236            0.01032          7664          1.35
      mutex      0x55a0180e4f08  util/qemu-timer.c:426           0.00041           901          0.45
      mutex      0x55a01813ce80  tcg/tcg.c:397                   0.00000            24          0.15
      mutex      0x55a0180d1bb0  monitor.c:458                   0.00022          2319          0.09
      mutex      0x55a0180e4c78  chardev/char.c:109              0.00003           306          0.08
      mutex      0x55a0180e4f08  util/qemu-timer.c:520           0.00068          8565          0.08
      mutex      0x55a0180d1bb0  monitor.c:448                   0.00002           215          0.08
      mutex      0x55a0180e4f78  util/qemu-timer.c:426           0.00247         34224          0.07
      ----------------------------------------------------------------------------------------------
      (qemu) sync-profile reset
      (qemu) info sync-profile -m 2
      Type               Object  Call site               Wait Time (s)         Count  Average (us)
      --------------------------------------------------------------------------------------------
      condvar    0x55a01813ced0  cpus.c:1165                   2.78756            99      28157.12
      BQL mutex  0x55a0171b7140  accel/tcg/cputlb.c:870        0.33054           102       3240.55
      --------------------------------------------------------------------------------------------
      (qemu) sync-profile off
      (qemu) sync-profile
      sync-profile is off
      (qemu) sync-profile reset
      (qemu) info sync-profile
      Type               Object  Call site  Wait Time (s)         Count  Average (us)
      -------------------------------------------------------------------------------
      -------------------------------------------------------------------------------
      Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
      Signed-off-by: Emilio G. Cota <cota@braap.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  15. 15 Aug 2018 · 1 commit
    • monitor: fix oob command leak · cb9ec42f
      Committed by Marc-André Lureau
      Spotted by ASAN, during make check...
      
      Direct leak of 40 byte(s) in 1 object(s) allocated from:
          #0 0x7f8e27262c48 in malloc (/lib64/libasan.so.5+0xeec48)
          #1 0x7f8e26a5f3c5 in g_malloc (/lib64/libglib-2.0.so.0+0x523c5)
          #2 0x555ab67078a8 in qstring_from_str /home/elmarco/src/qq/qobject/qstring.c:67
          #3 0x555ab67071e4 in qstring_new /home/elmarco/src/qq/qobject/qstring.c:24
          #4 0x555ab6713fbf in qstring_from_escaped_str /home/elmarco/src/qq/qobject/json-parser.c:144
          #5 0x555ab671738c in parse_literal /home/elmarco/src/qq/qobject/json-parser.c:506
          #6 0x555ab67179c3 in parse_value /home/elmarco/src/qq/qobject/json-parser.c:569
          #7 0x555ab6715123 in parse_pair /home/elmarco/src/qq/qobject/json-parser.c:306
          #8 0x555ab6715483 in parse_object /home/elmarco/src/qq/qobject/json-parser.c:357
          #9 0x555ab671798b in parse_value /home/elmarco/src/qq/qobject/json-parser.c:561
          #10 0x555ab6717a6b in json_parser_parse_err /home/elmarco/src/qq/qobject/json-parser.c:592
          #11 0x555ab4fd4dcf in handle_qmp_command /home/elmarco/src/qq/monitor.c:4257
          #12 0x555ab6712c4d in json_message_process_token /home/elmarco/src/qq/qobject/json-streamer.c:105
          #13 0x555ab67e01e2 in json_lexer_feed_char /home/elmarco/src/qq/qobject/json-lexer.c:323
          #14 0x555ab67e0af6 in json_lexer_feed /home/elmarco/src/qq/qobject/json-lexer.c:373
          #15 0x555ab6713010 in json_message_parser_feed /home/elmarco/src/qq/qobject/json-streamer.c:124
          #16 0x555ab4fd58ec in monitor_qmp_read /home/elmarco/src/qq/monitor.c:4337
          #17 0x555ab6559df2 in qemu_chr_be_write_impl /home/elmarco/src/qq/chardev/char.c:175
          #18 0x555ab6559e95 in qemu_chr_be_write /home/elmarco/src/qq/chardev/char.c:187
          #19 0x555ab6560127 in fd_chr_read /home/elmarco/src/qq/chardev/char-fd.c:66
          #20 0x555ab65d9c73 in qio_channel_fd_source_dispatch /home/elmarco/src/qq/io/channel-watch.c:84
          #21 0x7f8e26a598ac in g_main_context_dispatch (/lib64/libglib-2.0.so.0+0x4c8ac)
      Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Message-Id: <20180809114417.28718-4-marcandre.lureau@redhat.com>
      [Screwed up in commit b2731456]
      Cc: qemu-stable@nongnu.org
      Reviewed-by: Markus Armbruster <armbru@redhat.com>
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
  16. 31 Jul 2018 · 1 commit
    • monitor: temporary fix for dead-lock on event recursion · 9a105406
      Committed by Marc-André Lureau
      With a Spice port chardev, it is possible to reenter
      monitor_qapi_event_queue() (when the client disconnects for
      example). This will dead-lock on monitor_lock.
      
      Instead, use some TLS variables to check for recursion and queue the
      events.
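
      A hedged sketch of the re-entrancy guard (simplified; reference
      counting and error propagation elided, structure not verbatim):

          typedef struct QAPIEventPending {
              QAPIEvent event;
              QDict *qdict;
          } QAPIEventPending;

          static __thread bool monitor_event_in_flight;
          static __thread GSList *monitor_event_pending;

          void monitor_qapi_event_queue(QAPIEvent event, QDict *qdict)
          {
              if (monitor_event_in_flight) {
                  /* Re-entered from inside an emit (e.g. Spice dropping a
                   * client while we flush): park the event for later. */
                  QAPIEventPending *p = g_new(QAPIEventPending, 1);

                  p->event = event;
                  p->qdict = qdict;
                  monitor_event_pending =
                      g_slist_append(monitor_event_pending, p);
                  return;
              }

              monitor_event_in_flight = true;
              monitor_qapi_event_queue_no_reenter(event, qdict);

              while (monitor_event_pending) {
                  QAPIEventPending *p = monitor_event_pending->data;

                  monitor_event_pending =
                      g_slist_delete_link(monitor_event_pending,
                                          monitor_event_pending);
                  monitor_qapi_event_queue_no_reenter(p->event, p->qdict);
                  g_free(p);
              }
              monitor_event_in_flight = false;
          }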
      
      Fixes:
       (gdb) bt
       #0  0x00007fa69e7217fd in __lll_lock_wait () at /lib64/libpthread.so.0
       #1  0x00007fa69e71acf4 in pthread_mutex_lock () at /lib64/libpthread.so.0
       #2  0x0000563303567619 in qemu_mutex_lock_impl (mutex=0x563303d3e220 <monitor_lock>, file=0x5633036589a8 "/home/elmarco/src/qq/monitor.c", line=645) at /home/elmarco/src/qq/util/qemu-thread-posix.c:66
       #3  0x0000563302fa6c25 in monitor_qapi_event_queue (event=QAPI_EVENT_SPICE_DISCONNECTED, qdict=0x56330602bde0, errp=0x7ffc6ab5e728) at /home/elmarco/src/qq/monitor.c:645
       #4  0x0000563303549aca in qapi_event_send_spice_disconnected (server=0x563305afd630, client=0x563305745360, errp=0x563303d8d0f0 <error_abort>) at qapi/qapi-events-ui.c:149
       #5  0x00005633033e600f in channel_event (event=3, info=0x5633061b0050) at /home/elmarco/src/qq/ui/spice-core.c:235
       #6  0x00007fa69f6c86bb in reds_handle_channel_event (reds=<optimized out>, event=3, info=0x5633061b0050) at reds.c:316
       #7  0x00007fa69f6b193b in main_dispatcher_self_handle_channel_event (info=0x5633061b0050, event=3, self=0x563304e088c0) at main-dispatcher.c:197
       #8  0x00007fa69f6b193b in main_dispatcher_channel_event (self=0x563304e088c0, event=event@entry=3, info=0x5633061b0050) at main-dispatcher.c:197
       #9  0x00007fa69f6d0833 in red_stream_push_channel_event (s=s@entry=0x563305ad8f50, event=event@entry=3) at red-stream.c:414
       #10 0x00007fa69f6d086b in red_stream_free (s=0x563305ad8f50) at red-stream.c:388
       #11 0x00007fa69f6b7ddc in red_channel_client_finalize (object=0x563304df2360) at red-channel-client.c:347
       #12 0x00007fa6a56b7fb9 in g_object_unref () at /lib64/libgobject-2.0.so.0
       #13 0x00007fa69f6ba212 in red_channel_client_push (rcc=0x563304df2360) at red-channel-client.c:1341
       #14 0x00007fa69f68b259 in red_char_device_send_msg_to_client (client=<optimized out>, msg=0x5633059b6310, dev=0x563304e08bc0) at char-device.c:305
       #15 0x00007fa69f68b259 in red_char_device_send_msg_to_clients (msg=0x5633059b6310, dev=0x563304e08bc0) at char-device.c:305
       #16 0x00007fa69f68b259 in red_char_device_read_from_device (dev=0x563304e08bc0) at char-device.c:353
       #17 0x000056330317d01d in spice_chr_write (chr=0x563304cafe20, buf=0x563304cc50b0 "{\"timestamp\": {\"seconds\": 1532944763, \"microseconds\": 326636}, \"event\": \"SHUTDOWN\", \"data\": {\"guest\": false}}\r\n", len=111) at /home/elmarco/src/qq/chardev/spice.c:199
       #18 0x00005633034deee7 in qemu_chr_write_buffer (s=0x563304cafe20, buf=0x563304cc50b0 "{\"timestamp\": {\"seconds\": 1532944763, \"microseconds\": 326636}, \"event\": \"SHUTDOWN\", \"data\": {\"guest\": false}}\r\n", len=111, offset=0x7ffc6ab5ea70, write_all=false) at /home/elmarco/src/qq/chardev/char.c:112
       #19 0x00005633034df054 in qemu_chr_write (s=0x563304cafe20, buf=0x563304cc50b0 "{\"timestamp\": {\"seconds\": 1532944763, \"microseconds\": 326636}, \"event\": \"SHUTDOWN\", \"data\": {\"guest\": false}}\r\n", len=111, write_all=false) at /home/elmarco/src/qq/chardev/char.c:147
       #20 0x00005633034e1e13 in qemu_chr_fe_write (be=0x563304dbb800, buf=0x563304cc50b0 "{\"timestamp\": {\"seconds\": 1532944763, \"microseconds\": 326636}, \"event\": \"SHUTDOWN\", \"data\": {\"guest\": false}}\r\n", len=111) at /home/elmarco/src/qq/chardev/char-fe.c:42
       #21 0x0000563302fa6334 in monitor_flush_locked (mon=0x563304dbb800) at /home/elmarco/src/qq/monitor.c:425
       #22 0x0000563302fa6520 in monitor_puts (mon=0x563304dbb800, str=0x563305de7e9e "") at /home/elmarco/src/qq/monitor.c:468
       #23 0x0000563302fa680c in qmp_send_response (mon=0x563304dbb800, rsp=0x563304df5730) at /home/elmarco/src/qq/monitor.c:517
       #24 0x0000563302fa6905 in qmp_queue_response (mon=0x563304dbb800, rsp=0x563304df5730) at /home/elmarco/src/qq/monitor.c:538
       #25 0x0000563302fa6b5b in monitor_qapi_event_emit (event=QAPI_EVENT_SHUTDOWN, qdict=0x563304df5730) at /home/elmarco/src/qq/monitor.c:624
       #26 0x0000563302fa6c4b in monitor_qapi_event_queue (event=QAPI_EVENT_SHUTDOWN, qdict=0x563304df5730, errp=0x7ffc6ab5ed00) at /home/elmarco/src/qq/monitor.c:649
       #27 0x0000563303548cce in qapi_event_send_shutdown (guest=false, errp=0x563303d8d0f0 <error_abort>) at qapi/qapi-events-run-state.c:58
       #28 0x000056330313bcd7 in main_loop_should_exit () at /home/elmarco/src/qq/vl.c:1822
       #29 0x000056330313bde3 in main_loop () at /home/elmarco/src/qq/vl.c:1862
       #30 0x0000563303143781 in main (argc=3, argv=0x7ffc6ab5f068, envp=0x7ffc6ab5f088) at /home/elmarco/src/qq/vl.c:4644
      
      Note that the error report is now moved to the first caller, which may
      receive an error for a recursed event. This is probably fine (95% of
      callers use &error_abort, the rest have a NULL error and ignore it).
      Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Message-Id: <20180731150144.14022-1-marcandre.lureau@redhat.com>
      Reviewed-by: Markus Armbruster <armbru@redhat.com>
      [*_no_recurse renamed to *_no_reenter, local variables reordered]
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
  17. 23 Jul 2018 · 1 commit
    • monitor: Fix unsafe sharing of @cur_mon among threads · 62aa1d88
      Committed by Peter Xu
      @cur_mon is null unless the main thread is running monitor code, either
      HMP code within monitor_read(), or QMP code within
      monitor_qmp_dispatch().
      
      Use of @cur_mon outside the main thread is therefore unsafe.
      
      Most of its uses are in monitor command handlers.  These run in the main
      thread.
      
      However, there are also uses hiding elsewhere, such as in
      error_vprintf(), and thus error_report(), making these functions unsafe
      outside the main thread.  No such unsafe uses are known at this time.
      Regardless, this is an unnecessary trap.  It's an ancient trap, though.
      
      More recently, commit cf869d53 "qmp: support out-of-band (oob)
      execution" spiced things up: the monitor I/O thread assigns to @cur_mon
      when executing commands out-of-band.  Having two threads save, set and
      restore @cur_mon without synchronization is definitely unsafe.  We can
      end up with @cur_mon null while the main thread runs monitor code, or
      non-null while it runs non-monitor code.
      
      We could fix this by making the I/O thread not mess with @cur_mon, but
      that would leave the trap armed and ready.
      
      Instead, make @cur_mon thread-local.  It's now reliably null unless the
      thread is running monitor code.
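
      Concretely, the change is about as small as it sounds; a sketch
      (assuming GCC/Clang __thread, which QEMU's headers wrap):

          /* Before: one global, shared unsafely between the main thread
           * and the monitor I/O thread. */
          /* Monitor *cur_mon; */

          /* After: one slot per thread, reliably null unless that thread
           * is currently running monitor code. */
          __thread Monitor *cur_mon;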
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
      [peterx: update subject and commit message written by Markus]
      Reviewed-by: Markus Armbruster <armbru@redhat.com>
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Message-Id: <20180720033451.32710-1-peterx@redhat.com>
  18. 16 Jul 2018 · 1 commit
  19. 12 Jul 2018 · 1 commit
  20. 04 Jul 2018 · 4 commits