1. 15 Dec 2015, 6 commits
    • qemu: add bootindex option to hostdev network interface commandline · a8e3247e
      Authored by Laine Stump
      When appropriate, of course. If the config for a domain specifies the
      boot order with <boot dev='blah'/> elements, e.g.:
      
           <os>
             ...
             <boot dev='hd'/>
             <boot dev='network'/>
           </os>
      
      Then the first disk device in the config will have ",bootindex=1"
      appended to its qemu commandline -device options, and the first (and
      *only* the first) network interface device will get ",bootindex=2".
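      
      The effect on the generated command line is roughly the following
      (an illustrative sketch; the device and drive names are made up):
      
           -device virtio-blk-pci,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
           -device virtio-net-pci,netdev=hostnet0,id=net0,bootindex=2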
      
      However, if the first network interface device is a "hostdev" device
      (an SR-IOV Virtual Function (VF) being assigned to the domain with
      vfio), then the bootindex option will *not* be appended. This happens
      because the bootindex=n option corresponding to the order of "<boot
      dev='network'/>" is added to the -device for the first network device
      when network device commandline args are constructed, but if it's a
      hostdev network device, its commandline arg is instead constructed in
      the loop for hostdevs.
      
      This patch fixes that omission by noticing (in bootHostdevNet) if the
      first network device was a hostdev, and if so passing on the proper
      bootindex to the commandline generator for hostdev devices - the
      result is that ",bootindex=2" will be properly appended to the first
      "network" device in the config even if it is really a hostdev
      (including one assigned from a libvirt network pool). (Note that
      this is only the case if there is no <bootmenu enabled='yes'/> element
      in the config ("-boot menu=on" in qemu), since the two are mutually
      exclusive: when the bootmenu is enabled, the individual per-device
      bootindex options can't be used by qemu, and we revert to using "-boot
      order=xyz" instead.)
      
      If a greater level of control over boot order is desired (e.g., more
      than one network device should be tried, or a network device other
      than the first one encountered in the config should be used), then
      <boot dev='network'/> in the <os> element should not be used; instead,
      the individual device elements in the config should be given a
      "<boot order='n'/>" element, as in the example below.
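      
      For example (an illustrative hostdev interface definition; the PCI
      address is made up):
      
           <interface type='hostdev' managed='yes'>
             <source>
               <address type='pci' domain='0x0000' bus='0x02' slot='0x10' function='0x0'/>
             </source>
             <boot order='2'/>
           </interface>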
      
      Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1278421
    • qemuMonitorJSONEjectMedia: don't stringify the reply at all · cbd3d065
      Authored by Pavel Hrdina
      Commit 256496e1 introduced detection of an "is locked" error reply
      from the qemu monitor. Commit c4073657 then fixed a memory leak, but
      Peter pointed out that this could be done more cleanly, without
      stringifying the reply at all.
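      
      A minimal sketch of the cleaner approach (assumed shape, not the exact
      code of the commit; virJSONValueObjectGet and virJSONValueObjectGetString
      are existing virjson helpers):
      
           /* Look at the error description field directly instead of
            * serializing the whole reply into a string. */
           virJSONValuePtr error = virJSONValueObjectGet(reply, "error");
           const char *desc = error ? virJSONValueObjectGetString(error, "desc") : NULL;
           bool islocked = desc && strstr(desc, "is locked");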
      Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
    • qemuMonitorJSONEjectMedia: Don't leak stringified reply · c4073657
      Authored by Michal Privoznik
      The return value of virJSONValueToString() should be freed when
      no longer needed. This is not the case after 256496e1.
      
      ==26902== 138 bytes in 2 blocks are definitely lost in loss record 1,051 of 1,239
      ==26902==    at 0x4C29F80: malloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
      ==26902==    by 0xAA5F599: strdup (in /lib64/libc-2.21.so)
      ==26902==    by 0x552BAD9: virStrdup (virstring.c:726)
      ==26902==    by 0x54F60A7: virJSONValueToString (virjson.c:1790)
      ==26902==    by 0x1DF6EBB9: qemuMonitorJSONEjectMedia (qemu_monitor_json.c:2225)
      ==26902==    by 0x1DF57A4C: qemuMonitorEjectMedia (qemu_monitor.c:1985)
      ==26902==    by 0x1DF1EF2D: qemuDomainChangeEjectableMedia (qemu_hotplug.c:199)
      ==26902==    by 0x1DF90314: qemuDomainChangeDiskLive (qemu_driver.c:7985)
      ==26902==    by 0x1DF90476: qemuDomainUpdateDeviceLive (qemu_driver.c:8030)
      ==26902==    by 0x1DF91ED7: qemuDomainUpdateDeviceFlags (qemu_driver.c:8677)
      ==26902==    by 0x561785F: virDomainUpdateDeviceFlags (libvirt-domain.c:8559)
      ==26902==    by 0x134210: remoteDispatchDomainUpdateDeviceFlags (remote_dispatch.h:10966)
      
      ==26902== 106 bytes in 1 blocks are definitely lost in loss record 1,033 of 1,239
      ==26902==    at 0x4C29F80: malloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
      ==26902==    by 0xAA5F599: strdup (in /lib64/libc-2.21.so)
      ==26902==    by 0x552BAD9: virStrdup (virstring.c:726)
      ==26902==    by 0x54F60A7: virJSONValueToString (virjson.c:1790)
      ==26902==    by 0x1DF6EC0C: qemuMonitorJSONEjectMedia (qemu_monitor_json.c:2227)
      ==26902==    by 0x1DF57A4C: qemuMonitorEjectMedia (qemu_monitor.c:1985)
      ==26902==    by 0x1DF1EF2D: qemuDomainChangeEjectableMedia (qemu_hotplug.c:199)
      ==26902==    by 0x1DF90314: qemuDomainChangeDiskLive (qemu_driver.c:7985)
      ==26902==    by 0x1DF90476: qemuDomainUpdateDeviceLive (qemu_driver.c:8030)
      ==26902==    by 0x1DF91ED7: qemuDomainUpdateDeviceFlags (qemu_driver.c:8677)
      ==26902==    by 0x561785F: virDomainUpdateDeviceFlags (libvirt-domain.c:8559)
      ==26902==    by 0x134210: remoteDispatchDomainUpdateDeviceFlags (remote_dispatch.h:10966)
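      
      The fix is simply to free the stringified reply once it has been
      inspected (a sketch, assuming the surrounding variables of
      qemuMonitorJSONEjectMedia):
      
           char *replyStr = virJSONValueToString(reply, false);
           if (replyStr && strstr(replyStr, "is locked"))
               islocked = true;
           VIR_FREE(replyStr);   /* this free was previously missing */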
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
    • qemu cgroups: move new threads to new cgroup after cpuset is set up · 90b721e4
      Authored by Henning Schild
      Moving a task into a cgroup implies a sched_setaffinity() call; changing
      the cpus in a cpuset implies the same for all tasks in the group.
      The old code put the thread into the cpuset inherited from the
      machine cgroup, which allowed it to run outside of its vcpupin for a
      short while.
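      
      Ordering matters: the new cgroup's cpuset must be configured before the
      task is attached, otherwise the thread briefly runs with the parent's
      wider affinity. A minimal sketch against raw cgroupfs (paths and the
      helper are illustrative, not libvirt's virCgroup API):
      
           #include <stdio.h>
      
           /* Illustrative helper: write one value to a cgroup control file. */
           static int cg_write(const char *path, const char *val)
           {
               FILE *f = fopen(path, "w");
               int rc;
               if (!f)
                   return -1;
               rc = fputs(val, f) >= 0 ? 0 : -1;
               if (fclose(f) != 0)
                   rc = -1;
               return rc;
           }
      
           /* 1. Restrict the new cgroup's cpuset first
            *    (a real cpuset also needs cpuset.mems set) ... */
           cg_write("/sys/fs/cgroup/cpuset/machine/vcpu0/cpuset.cpus", "2-3");
           /* 2. ... and only then move the vcpu thread into it. */
           cg_write("/sys/fs/cgroup/cpuset/machine/vcpu0/tasks", "12345");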
      Signed-off-by: Henning Schild <henning.schild@siemens.com>
    • qemu: do not put a task into machine cgroup · a41c00b4
      Authored by Henning Schild
      The machine cgroup is a superset, the parent of the emulator and vcpuX
      cgroups. The parent cgroup should never have any tasks directly in it.
      In fact, the parent cpuset might contain far more cpus than the sum of
      emulatorpin and vcpupins, so putting tasks in the superset will allow
      them to run outside of <cputune>.
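      
      Schematically (an illustrative cgroup layout):
      
           machine/       <- superset; should hold no tasks directly
             emulator/    <- emulator threads, pinned per <emulatorpin>
             vcpu0/       <- vcpu thread, pinned per <vcpupin>
             vcpu1/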
      Signed-off-by: Henning Schild <henning.schild@siemens.com>
    • util: cgroups do not implicitly add task to new machine cgroup · 71ce4759
      Authored by Henning Schild
      virCgroupNewMachine used to add the pidleader to the newly created
      machine cgroup. Do not do this implicitly anymore.
      Signed-off-by: Henning Schild <henning.schild@siemens.com>
  2. 14 Dec 2015, 1 commit
  3. 11 Dec 2015, 3 commits
  4. 09 Dec 2015, 21 commits
  5. 08 Dec 2015, 2 commits
    • logging: change log protocol to be more reusable · 50896b28
      Authored by Daniel P. Berrange
      The current virtlogd RPC protocol provides the ability to
      handle log files associated with QEMU stdout/err. The log
      protocol messages take the virt driver and domain name and
      use those to form a log file path. This is quite restrictive,
      as it prevents us from reusing the same RPC protocol messages
      for logging to char device backends, where the filename can
      be arbitrarily user specified. It is also bad because it means
      there are two separate locations which have to decide on the
      logfile name.
      
      This change alters the RPC protocol so that we pass the
      desired log file path along when opening the log file
      initially. Now the virt driver is exclusively in charge
      of deciding the log filename.
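      
      Roughly, the change means the open-file arguments grow a path field
      (a hypothetical C rendering; the real XDR protocol definitions may
      differ):
      
           /* Hypothetical sketch of the new open-logfile arguments. */
           struct LogFileOpenArgs {
               char *driver;    /* virt driver name, e.g. "qemu" */
               char *dom_name;  /* domain name */
               char *path;      /* new: full log file path chosen by the virt driver */
           };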
      Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
    • qemu: fix memory leak in opening log file · 0eafe995
      Authored by Daniel P. Berrange
      The qemuDomainLogContextNew method leaks the "logfile" path
      on the non-virtlogd code path.
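      
      The usual shape of such a fix (a sketch, assuming the structure of
      qemuDomainLogContextNew; virAsprintf and VIR_FREE are existing libvirt
      helpers):
      
           char *logfile = NULL;
      
           if (virAsprintf(&logfile, "%s/%s.log", cfg->logDir, vm->def->name) < 0)
               goto error;
      
           /* ... both the virtlogd and non-virtlogd branches continue ... */
      
           VIR_FREE(logfile);   /* must run on every exit path, including non-virtlogd */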
      Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
  6. 05 Dec 2015, 2 commits
  7. 04 Dec 2015, 1 commit
  8. 01 Dec 2015, 1 commit
    • qemu_agent: fix deadlock in qemuProcessHandleAgentEOF · fe51174f
      Authored by Wang Yufei
      If VM A is shut down by the qemu agent at approximately the same time
      that an agent EOF for VM A arrives, there is a chance that a deadlock
      may occur:
      
      qemuProcessHandleAgentEOF in main thread
      A)  priv->agent = NULL; //A happened before B
      
          //deadlock when we try to get the agent lock, which is held by the worker thread
          qemuAgentClose(agent);
      
      qemuDomainObjExitAgent called by qemuDomainShutdownFlags in worker thread
      B)  hasRefs = virObjectUnref(priv->agent); // priv->agent is NULL,
                                                 // return false
          if (hasRefs)
              virObjectUnlock(priv->agent); //agent lock will not be released here
      
      To resolve this, during EOF close the agent first, then set priv->agent
      to NULL, which fixes the deadlock.
      
      This essentially reverts commit id '1020a504'. It is also of note that
      commit id '362d0477' notes a possible/rare deadlock similar to what was
      seen in the monitor in commit id '25f582e3'. However, it seems intervening
      changes, including commit id 'd960d06f', should remove the deadlock issue.
      
      With this change, if EOF is called:
      
          Get VM lock
          Check if !priv->agent || priv->beingDestroyed, then unlock VM
          Call qemuAgentClose
          Unlock VM
      
      When qemuAgentClose is called
          Get Agent lock
          If Agent->fd open, close it
          Unlock Agent
          Unref Agent
      
      qemuDomainObjEnterAgent
          Enter with VM lock
          Get Agent lock
          Increase Agent refcnt
          Unlock VM
      
      After running agent command, calling qemuDomainObjExitAgent
          Enter with Agent lock
          Unref Agent
          If not last reference, unlock Agent
          Get VM lock
      
      If we were in the middle of an EnterAgent, Agent command, ExitAgent
      sequence and the EOF code is triggered, then the EOF code can get the
      VM lock, make its checks against !priv->agent || priv->beingDestroyed,
      and call qemuAgentClose. qemuAgentClose would then wait to get the
      Agent lock. The other thread will eventually call ExitAgent, release
      the Agent lock, and unref the Agent. Once ExitAgent releases the Agent
      lock, qemuAgentClose will get the Agent lock, close the fd, unlock the
      Agent, and unref the Agent. The final unref causes deletion of the
      agent.
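      
      In code form, the fixed EOF handler ordering looks roughly like this
      (a sketch of the sequence above, not the verbatim libvirt function):
      
           /* qemuProcessHandleAgentEOF, sketched */
           virObjectLock(vm);
           priv = vm->privateData;
           if (!priv->agent || priv->beingDestroyed) {
               virObjectUnlock(vm);
               return;
           }
           agent = priv->agent;
           qemuAgentClose(agent);   /* close first ... */
           priv->agent = NULL;      /* ... then clear the pointer */
           virObjectUnlock(vm);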
      Signed-off-by: Wang Yufei <james.wangyufei@huawei.com>
      Reviewed-by: Ren Guannan <renguannan@huawei.com>
  9. 30 Nov 2015, 3 commits