1. 19 February 2016, 22 commits
    • C
      qemu: parse: drop redundant AddImplicitControllers · e6ad2b69
      Authored by Cole Robinson
      PostParse handles it for us now.
      
      This causes some test suite churn; qemu's custom PostParse code is
      now invoked before the generic AddImplicitControllers, so PCI
      controllers end up sequentially in the XML before the generically
      added IDE controllers. So it's just some XML reordering.
      e6ad2b69
    • C
      qemu: parse: rename qemuCaps->caps · 378a9dc6
      Authored by Cole Robinson
      Everywhere else in qemu driver code 'qemuCaps' is a virQEMUCapsPtr,
      and virCapsPtr is generally named just 'caps'. Rename the offenders.
      378a9dc6
    • C
      domain: add implicit controllers from post parse · 4066c734
      Authored by Cole Robinson
      Seems like the natural fit, since we are already adding other XML bits
      in the PostParse routine.
      
      Previously AddImplicitControllers was only called at the end of XML
      parsing, meaning code that builds a DomainDef by hand had to manually
      call it. Now those PostParse callers get it for free.
      
      There's some test churn here; xen xm and sexpr test suite bits weren't
      calling this before, but now they are, so you'll see new IDE controllers.
      I don't think this will cause problems in practice, since the code already
      needs to handle these implicit controllers like in the case when a user
      defines their own XML.
      4066c734
    • J
      Check for active domain in virDomainObjWait · 5591ca50
      Authored by Jiri Denemark
      virDomainObjWait is designed to be called in a loop. Make sure we break
      the loop in case the domain dies, to avoid waiting for an event which
      will never happen.
      Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
      5591ca50
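      The liveness re-check described above can be sketched as follows. This is a minimal illustrative model, not libvirt's actual virDomainObjWait; all names (fakeDomain, fakeDomainWait) are hypothetical.

```c
#include <assert.h>
#include <stdbool.h>

struct fakeDomain {
    bool active;          /* is the domain's process still running? */
    bool condition_met;   /* the event the caller is waiting for */
};

/* Returns 0 once the condition is met, -1 if the domain died first. */
static int
fakeDomainWait(struct fakeDomain *dom)
{
    while (!dom->condition_met) {
        if (!dom->active)
            return -1;    /* domain is gone: break out instead of waiting
                           * for an event that will never be signalled */
        /* real code would block on a condition variable here; for the
         * sketch we simulate a wakeup that satisfies the condition */
        dom->condition_met = true;
    }
    return 0;
}
```

      The key point is that the liveness check happens on every pass through the loop, not only before the first wait.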
    • J
      qemu: Avoid calling qemuProcessStop without a job · 81f50cb9
      Authored by Jiri Denemark
      Calling qemuProcessStop without a job opens a way to race conditions
      with qemuDomainObjExitMonitor called in another thread. A real world
      example of such a race condition:
      
        - migration thread (A) calls qemuMigrationWaitForSpice
        - another thread (B) starts processing qemuDomainAbortJob API
        - thread B signals thread A via qemuDomainObjAbortAsyncJob
        - thread B enters monitor (qemuDomainObjEnterMonitor)
        - thread B calls qemuMonitorSend
        - thread A awakens and calls qemuProcessStop
        - thread A calls qemuMonitorClose and sets priv->mon to NULL
        - thread B calls qemuDomainObjExitMonitor with priv->mon == NULL
        => monitor stays ref'ed and locked
      
      Depending on how lucky we are, the race may result in a memory leak or
      it can even deadlock libvirtd's event loop if it tries to lock the
      monitor to process an event received before qemuMonitorClose was called.
      Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
      81f50cb9
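      The refcount hazard in the race above can be modeled in a few lines. This is a hypothetical sketch, not libvirt's real monitor API: if the monitor pointer is cleared by another thread before the exit path runs, the balancing unref is skipped and the object stays referenced (and, in the real bug, locked).

```c
#include <assert.h>
#include <stddef.h>

struct fakeMonitor { int refs; };

/* Entering the monitor takes a reference. */
static void
fakeEnterMonitor(struct fakeMonitor *mon)
{
    mon->refs++;
}

/* Exiting releases it -- unless the pointer was already NULLed by the
 * thread that stopped the domain, in which case the unref is skipped. */
static int
fakeExitMonitor(struct fakeMonitor **monp)
{
    if (*monp == NULL)
        return -1;   /* priv->mon cleared under us: reference leaks */
    (*monp)->refs--;
    return 0;
}
```

      Holding a job around qemuProcessStop prevents the two threads from interleaving like this in the first place.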
    • J
      6f08cbb8
    • J
      qemu: Process monitor EOF in a job · 8c9ff996
      Authored by Jiri Denemark
      Stopping a domain without a job risks a race condition with another
      thread which started a job and does not expect anyone else to be
      messing around with the same domain object.
      Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
      8c9ff996
    • J
      qemu: Start an async job for processGuestPanicEvent · 1894112b
      Authored by Jiri Denemark
      Only a small portion of processGuestPanicEvent was enclosed within a
      job; let's make sure we use the job for all operations to avoid race
      conditions.
      Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
      1894112b
    • J
      qemu: Start job in qemuDomainDestroyFlags early · 26edd68c
      Authored by Jiri Denemark
      Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
      26edd68c
    • J
      qemu: Introduce qemuProcessBeginStopJob · 4d0c535a
      Authored by Jiri Denemark
      When destroying a domain we need to make sure we will be able to start a
      job no matter what other operations are running or even stuck in a job.
      This is done by killing the domain before starting the destroy job.
      
      Let's introduce qemuProcessBeginStopJob which combines killing a domain
      and starting a job in a single API which can be called everywhere we
      need a job to stop a domain.
      Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
      4d0c535a
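      The kill-before-job pattern described above can be sketched like this. All names here are illustrative stand-ins, not libvirt's real qemuProcessBeginStopJob: killing the process first makes any thread stuck in a monitor call fail and release the job, so the destroy job can always be acquired afterwards.

```c
#include <assert.h>
#include <stdbool.h>

struct fakeVM {
    bool alive;
    bool job_held;   /* some other thread is stuck holding the job */
};

/* Killing the process makes any stuck monitor call fail, so whichever
 * thread holds the job will give it up. */
static void
fakeKillProcess(struct fakeVM *vm)
{
    vm->alive = false;
    vm->job_held = false;
}

/* Combined "begin stop job" helper: kill first, then start the job.
 * Returns 0 once the job is acquired. */
static int
fakeBeginStopJob(struct fakeVM *vm)
{
    fakeKillProcess(vm);
    if (vm->job_held)
        return -1;
    return 0;
}
```

      Bundling the two steps into one helper ensures no caller can start the destroy job without having killed the domain first.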
    • J
      qemu: Pass async job to qemuProcessInit · b7a948be
      Authored by Jiri Denemark
      Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
      b7a948be
    • J
      qemu: End nested jobs properly · bf657dff
      Authored by Jiri Denemark
      Ending a nested job is no different from ending any other (non-async)
      job; after all, the code in qemuDomainBeginJobInternal does not handle
      them differently either. Thus we should call qemuDomainObjEndJob to stop
      nested jobs.
      Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
      bf657dff
    • J
      qemu: Export qemuDomainObjBeginNestedJob · 17c4312c
      Authored by Jiri Denemark
      Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
      17c4312c
    • P
      773f3bd3
    • P
      qemu: qemuDomainGetStatsVcpu: Fix output for possible sparse vCPU settings · 783584b5
      Authored by Peter Krempa
      qemuDomainHelperGetVcpus would correctly return an array of
      virVcpuInfoPtr structs for online vcpus even for sparse topologies, but
      the loop that fills the returned typed parameters would number the vcpus
      incorrectly. Fortunately sparse topologies aren't supported yet.
      783584b5
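      The numbering fix amounts to keying the output by each vCPU's real id rather than its position in the returned array. The sketch below uses hypothetical types (fakeVcpuInfo, fakeFillVcpuParams), not libvirt's actual virVcpuInfo structures; the bug being fixed was equivalent to writing the loop index instead of the reported id, which only agree for dense 0..n-1 topologies.

```c
#include <assert.h>
#include <stddef.h>

struct fakeVcpuInfo { unsigned int number; };  /* the vcpu's real id */

/* Copy the real vcpu ids into the output parameters, preserving
 * sparse ids instead of renumbering them 0..n-1. */
static void
fakeFillVcpuParams(const struct fakeVcpuInfo *info, size_t ninfo,
                   unsigned int *out)
{
    for (size_t i = 0; i < ninfo; i++)
        out[i] = info[i].number;   /* correct: use the id, not i */
}
```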
    • P
      qemu: vcpupin: Always set affinity even when cgroups are supported · 9958422d
      Authored by Peter Krempa
      VM startup and CPU hotplug always set the affinity regardless of cgroups
      support. Use the same approach for the pinning API.
      9958422d
    • P
      qemu: vcpupin: Don't overwrite errors from functions setting pinning · 47174130
      Authored by Peter Krempa
      Both errors from the cgroups code and from the affinity code would be
      overwritten by the API. Report the more specific error.
      47174130
    • P
      util: Use virBitmapIsBitSet in freebsd impl of virProcessSetAffinity · 9268b9ad
      Authored by Peter Krempa
      Use the helper that does not return errors to fix a spurious-looking
      dead return of -1.
      9268b9ad
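      The style of helper described here can be sketched as follows. This is a hypothetical miniature, not libvirt's actual virBitmap implementation: an out-of-range bit simply reads as "not set", so callers have no error path to (needlessly) handle.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct fakeBitmap {
    size_t nbits;
    unsigned long bits;   /* one word is enough for a sketch */
};

/* Test a bit without an error return: out-of-range is just false. */
static bool
fakeBitmapIsBitSet(const struct fakeBitmap *map, size_t b)
{
    if (b >= map->nbits)
        return false;
    return (map->bits >> b) & 1UL;
}
```

      Because the helper cannot fail, the caller's dead "return -1" path disappears entirely.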
    • P
      virsh: cmdVcpuPin: Simplify handling of API flags · a7bc9841
      Authored by Peter Krempa
      Rather than setting flags to -1 if none were specified, move the logic
      to use the old API to the place where we need to decide. It simplifies
      the logic a bit.
      a7bc9841
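      The simplification described here, deciding between old and new API at the call site instead of encoding "no flags given" as -1, can be sketched as below. The function names are hypothetical stand-ins, not virsh's real pinning calls.

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-ins for the two generations of a pinning API. */
static int fakePinVcpu(int vcpu) { return vcpu; }
static int fakePinVcpuFlags(int vcpu, unsigned int flags)
{
    return vcpu + (int)flags;
}

/* Choose the API where the call is made, instead of smuggling
 * "no flags were given" through a -1 sentinel value. */
static int
fakeDoPin(int vcpu, unsigned int flags, bool flags_given)
{
    if (!flags_given)
        return fakePinVcpu(vcpu);          /* old flag-less API */
    return fakePinVcpuFlags(vcpu, flags);  /* new API with flags */
}
```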
    • A
      test: qemuxml2argv: Drop QEMU_CAPS_DEVICE uses · b6c40bd5
      Authored by Andrea Bolognani
      Since commit 51045df0, the QEMU_CAPS_DEVICE capability is enabled
      automatically and shouldn't be passed as an argument to DO_TEST();
      however, commit 998a936c accidentally introduced a few such uses.
      b6c40bd5
    • E
      admin: Fix memory leak in remoteAdminConnectClose · e9e85655
      Authored by Erik Skultety
      When virt-admin is run with valgrind, this kind of output can be obtained:
      
      HEAP SUMMARY:
        in use at exit: 134,589 bytes in 1,031 blocks
        total heap usage: 2,667 allocs, 1,636 frees, 496,755 bytes allocated
      
      88 bytes in 1 blocks are definitely lost in loss record 82 of 128
       at 0x4C2A9C7: calloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
       by 0x52F6D1F: virAllocVar (viralloc.c:560)
       by 0x5350268: virObjectNew (virobject.c:193)
       by 0x53503E0: virObjectLockableNew (virobject.c:219)
       by 0x4E3BBCB: virAdmConnectNew (datatypes.c:832)
       by 0x4E38495: virAdmConnectOpen (libvirt-admin.c:209)
       by 0x10C541: vshAdmConnect (virt-admin.c:107)
       by 0x10C7B2: vshAdmReconnect (virt-admin.c:163)
       by 0x10CC7C: cmdConnect (virt-admin.c:298)
       by 0x110838: vshCommandRun (vsh.c:1224)
       by 0x10DFD8: main (virt-admin.c:862)
      
       LEAK SUMMARY:
          definitely lost: 88 bytes in 1 blocks
          indirectly lost: 0 bytes in 0 blocks
          possibly lost: 0 bytes in 0 blocks
          still reachable: 134,501 bytes in 1,030 blocks
          suppressed: 0 bytes in 0 blocks
      
      This is because virNetClientSetCloseCallback was being reinitialized
      incorrectly. Resetting the callbacks properly fixes the leak.
      e9e85655
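      The general shape of a proper close-callback reset can be sketched as below. This is purely illustrative (fakeClient and friends are invented names, not the virNetClient API): on close, fire the callback once and then clear both the function pointer and its opaque data, so nothing stale keeps a reference alive across a reconnect.

```c
#include <assert.h>
#include <stddef.h>

struct fakeClient {
    void (*close_cb)(void *opaque);
    void *opaque;
};

static int fakeCloseCount;   /* counts callback invocations for the sketch */

static void
fakeOnClose(void *opaque)
{
    (void)opaque;
    fakeCloseCount++;
}

static void
fakeClientSetCloseCallback(struct fakeClient *c,
                           void (*cb)(void *opaque), void *opaque)
{
    c->close_cb = cb;
    c->opaque = opaque;
}

/* Fire the callback once, then reset both fields to a clean state. */
static void
fakeClientClose(struct fakeClient *c)
{
    if (c->close_cb)
        c->close_cb(c->opaque);
    fakeClientSetCloseCallback(c, NULL, NULL);
}
```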
    • M
      esx: Avoid using vSphere SessionIsActive function · 647ac97a
      Authored by Matthias Bolte
      A login session with the vSphere API might expire after some idle time.
      The esxVI_EnsureSession function uses the SessionIsActive function to
      check if the current session has expired and a relogin needs to be done.
      
      But the SessionIsActive function needs the Sessions.ValidateSession
      privilege, which is considered an admin-level privilege.
      
      Only vCenter actually provides the SessionIsActive function. This results
      in requiring an admin level privilege even for read-only operations on
      a vCenter server.
      
      ESX and VMware Server don't provide the SessionIsActive function and
      the code already works around that. Use the same workaround for vCenter
      again.
      
      This basically reverts commit 5699034b.
      647ac97a
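      One possible shape of such a workaround, re-login when a session looks expired rather than asking the server whether it is still active, can be sketched as below. All names are hypothetical; this does not claim to be the actual esxVI_EnsureSession logic.

```c
#include <assert.h>
#include <stdbool.h>

struct fakeSession {
    bool logged_in;   /* false models an expired session */
    int logins;
};

static void
fakeLogin(struct fakeSession *s)
{
    s->logged_in = true;
    s->logins++;
}

/* Ensure a usable session without a privileged "is it active?" query:
 * if the session looks expired, just log in again. */
static int
fakeEnsureSession(struct fakeSession *s)
{
    if (!s->logged_in)
        fakeLogin(s);
    return s->logged_in ? 0 : -1;
}
```

      The advantage is that only ordinary login privileges are needed, so read-only clients no longer require Sessions.ValidateSession.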
  2. 18 February 2016, 18 commits