1. 20 Feb 2015, 1 commit
    • virsh-edit: Make force editing usable · 1bb1de83
      Martin Kletzander authored
      When a domain edited with 'virsh edit' fails validation, the usual
      message pops up:
      
        Failed. Try again? [y,n,f,?]:
      
      Turning off validation can be useful, mainly for testing (but for
      other purposes too), so this patch adds support for relaxing the
      definition validation in virsh-edit and makes 'virsh edit <domain>'
      more usable.
      Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
  2. 19 Feb 2015, 9 commits
    • parallels: Set the first HDD from XML as bootable · 675fa6b3
      Mikhail Feoktistov authored
      1. Delete all boot devices for VM instance
      2. Find the first HDD from XML and set it as bootable
      
      Now we support only one boot device, and it should be an HDD.
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
    • 6783cf63
    • parallels: code alignment · 0268eabd
      Mikhail Feoktistov authored
    • Search for schemas and cpu_map.xml in source tree · bc6e2063
      Jiri Denemark authored
      Not all files we want to find using virFileFindResource{,Full} are
      generated when libvirt is built; some of them (such as RNG schemas)
      are distributed with the sources. The current API was not able to
      find source files when libvirt was built in a VPATH.
      
      Both RNG schemas and cpu_map.xml are distributed in the source
      tarball.
      Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
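      The fallback order this implies can be sketched as follows. This is a
      minimal illustration, not libvirt's actual virFileFindResource
      implementation; the function name, signature, and use of access() are
      assumptions: try the generated location in the build tree first, then
      fall back to the source tree for files that are only distributed,
      like the RNG schemas.

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Hypothetical sketch of a build-dir-then-source-dir lookup: generated
 * files live in the build tree, while distributed-only files (RNG
 * schemas, cpu_map.xml) must be found in the source tree when building
 * in VPATH.  Returns buf on success, NULL if neither copy exists. */
char *
find_resource(const char *builddir, const char *srcdir,
              const char *filename, char *buf, size_t buflen)
{
    /* Generated files, if any, take precedence. */
    snprintf(buf, buflen, "%s/%s", builddir, filename);
    if (access(buf, R_OK) == 0)
        return buf;

    /* Fall back to the copy shipped in the source tree. */
    snprintf(buf, buflen, "%s/%s", srcdir, filename);
    if (access(buf, R_OK) == 0)
        return buf;

    return NULL;
}
```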
    • qemuMigrationDriveMirror: Listen to events · 80c5f10e
      Michal Privoznik authored
      https://bugzilla.redhat.com/show_bug.cgi?id=1179678
      
      When migrating with storage, libvirt iterates over the domain's disks
      and instructs qemu to migrate the ones we are interested in (shared,
      read-only and source-less disks are skipped). The disks are migrated
      in series; no new disk transfer starts until the previous one has
      been quiesced. This is checked on the qemu monitor via the
      'query-block-jobs' command. Once a disk has been quiesced, it has
      effectively moved from copying its content to a mirroring state,
      where all disk writes are mirrored to the other side of the migration
      too. Having said that, there's one inherent flaw in the design: the
      monitor command we use reports only active jobs, so if a job fails
      for whatever reason, we will no longer see it in the command output.
      And this can happen fairly easily: just try to migrate a domain with
      storage. If the storage migration fails (e.g. due to ENOSPC on the
      destination), we resume the guest on the destination and let it run
      on a partly copied disk.
      
      The proper fix is what even the comment in the code says: listen for
      qemu events instead of polling. When storage migration changes state,
      an event is emitted and we can act accordingly: either consider the
      disk copied and continue the process, or consider the disk mangled
      and abort the migration.
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
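      The continue-or-abort decision described above can be sketched like
      this. All names are illustrative, not libvirt's actual event
      callback; the point is the logic the commit argues for: react to the
      event itself instead of polling a job list that failed jobs have
      already left.

```c
#include <string.h>

/* Hypothetical sketch: a job that vanishes from polling output is
 * indistinguishable from one that quiesced, but the BLOCK_JOB_* event
 * carries the answer directly. */
typedef enum {
    MIGRATE_DISK_CONTINUE,  /* disk copied, move on to the next one */
    MIGRATE_DISK_ABORT      /* job failed, abort the whole migration */
} migrate_disk_action;

migrate_disk_action
handle_block_job_event(const char *event, const char *error)
{
    /* BLOCK_JOB_READY: the mirror reached steady state, all writes are
     * now replicated to the destination, so this disk is done. */
    if (strcmp(event, "BLOCK_JOB_READY") == 0)
        return MIGRATE_DISK_CONTINUE;

    /* A completion event carrying an error string means the copy broke
     * (e.g. ENOSPC on the destination): abort instead of resuming the
     * guest on a half-copied disk. */
    if (strcmp(event, "BLOCK_JOB_COMPLETED") == 0 && error != NULL)
        return MIGRATE_DISK_ABORT;

    return MIGRATE_DISK_CONTINUE;
}
```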
    • qemuProcessHandleBlockJob: Take status into account · 76c61cdc
      Michal Privoznik authored
      Upon BLOCK_JOB_COMPLETED event delivery, we check whether the job has
      completed (in qemuMonitorJSONHandleBlockJobImpl()). To paint a better
      picture, the event looks something like this:
      
      "timestamp": {"seconds": 1423582694, "microseconds": 372666}, "event":
      "BLOCK_JOB_COMPLETED", "data": {"device": "drive-virtio-disk0", "len":
      8412790784, "offset": 409993216, "speed": 8796093022207, "type":
      "mirror", "error": "No space left on device"}}
      
      If "len" does not equal "offset" it's considered an error, and we can
      clearly see "error" field filled in. However, later in the event
      processing this case was handled no differently to case of job being
      aborted via separate API. It's time that we start differentiate these
      two because of the future work.
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
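      A minimal sketch of that completion check, using the fields from the
      event quoted above (the helper name is an assumption, not libvirt's
      actual code):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* QEMU only copied the whole device if "offset" caught up with "len";
 * a short job carrying an "error" string (like the ENOSPC case above)
 * failed rather than completed. */
bool
block_job_completed_ok(uint64_t len, uint64_t offset, const char *error)
{
    return error == NULL && len == offset;
}
```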
    • qemuProcessHandleBlockJob: Set disk->mirrorState more often · c37943a0
      Michal Privoznik authored
      Currently, upon a BLOCK_JOB_* event, disk->mirrorState is not updated
      every time. The callback code handling the events checks whether a
      blockjob was started via our public APIs prior to setting the
      mirrorState. However, some block jobs may be started internally (e.g.
      during storage migration), in which case we don't bother setting
      disk->mirror (there's nothing we can set it to anyway) or other
      fields. But it will come in handy if we update the mirrorState in
      these cases too. The event wasn't delivered just for fun - we started
      the job, after all.
      
      So, in this commit, the mirrorState is set to whatever job status
      we've obtained. Of course, there are some actions we want to perform
      on certain statuses. But instead of an if {} else if {} else {}
      enumeration, let's move to a switch().
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
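      The switch()-based restructuring can be sketched as follows. The
      status and state names are illustrative stand-ins, not libvirt's
      exact enums; the shape of the mapping is what matters.

```c
/* Hypothetical sketch: map every delivered job status to a mirror
 * state, instead of only handling the statuses of jobs we started via
 * the public APIs. */
typedef enum { JOB_COMPLETED, JOB_FAILED, JOB_CANCELED, JOB_READY } job_status;
typedef enum { MIRROR_NONE, MIRROR_READY, MIRROR_ABORT } mirror_state;

mirror_state
mirror_state_for_status(job_status status)
{
    switch (status) {
    case JOB_READY:
        return MIRROR_READY;    /* mirroring reached steady state */
    case JOB_FAILED:
    case JOB_CANCELED:
        return MIRROR_ABORT;    /* the job is gone; mirroring must stop */
    case JOB_COMPLETED:
    default:
        return MIRROR_NONE;     /* no active mirror any more */
    }
}
```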
    • qemu: Exit job on error path of qemuDomainSetVcpusFlags() · 0df2f040
      Peter Krempa authored
      Commit e105dc98 moved some code but
      didn't adjust the jump labels, so the job was not terminated on the
      error path.
    • daemon: Fix segfault by reloading daemon right after start · 5c756e58
      Pavel Hrdina authored
      Libvirt could crash with a segfault if a user issues "service reload"
      right after "service start". One possible way to crash libvirt is to
      run reload during initialization of the QEMU driver.
      
      This can happen when the qemu driver has initialized qemu_driver_lock
      but hasn't yet set its "config" when the SIGHUP arrives. The reload
      handler then fetches qemu_drv->config during "virStorageAutostart"
      and dereferences it, which ends with a segfault.
      
      Let's ignore all reload requests until all drivers are initialized.
      In addition, clear driversInitialized before we enter virStateCleanup
      so that reload requests are also ignored while we are shutting down.
      
      Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1179981
      Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
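      A minimal sketch of the guard-flag approach, assuming a heavily
      simplified daemon (the helper names around the driversInitialized
      flag are illustrative, not libvirtd's actual functions):

```c
#include <signal.h>
#include <stdbool.h>

/* SIGHUP can arrive at any moment, so the flag must be safe to read
 * from a signal handler; sig_atomic_t guarantees that. */
static volatile sig_atomic_t driversInitialized = 0;

/* A reload while drivers are still initializing would dereference
 * half-constructed driver state (e.g. qemu_drv->config == NULL), so
 * refuse it until initialization has fully finished. */
static bool
daemon_reload_allowed(void)
{
    return driversInitialized != 0;
}

/* Called once every driver's state-init callback has returned. */
static void
daemon_state_init_done(void)
{
    driversInitialized = 1;
}

/* Called right before virStateCleanup-style teardown begins, so
 * reloads are also refused while shutting down. */
static void
daemon_state_cleanup_begin(void)
{
    driversInitialized = 0;
}
```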
  3. 18 Feb 2015, 1 commit
  4. 17 Feb 2015, 9 commits
  5. 16 Feb 2015, 1 commit
  6. 14 Feb 2015, 5 commits
    • libxl: Resolve Coverity CHECKED_RETURN · 4438646c
      John Ferlan authored
      Periodically my Coverity scan returns a CHECKED_RETURN failure for
      libxlDomainShutdownThread's call to libxlDomainStart. Follow the
      libxlAutostartDomain example in order to check the status, emit a
      message, and continue on.
    • security: Resolve Coverity RESOURCE_LEAK · 5a36cdbc
      John Ferlan authored
      Introduced by commit id 'c3d9d3bb' - the return path from
      virSecurityManagerCheckModel wasn't VIR_FREE()'ing the memory
      allocated by virSecurityManagerGetNested.
    • lxc: Fix container cleanup for LXCProcessStart · 8e6492f2
      Luyao Huang authored
      Jumping to the cleanup label prior to starting the container failed
      to properly clean up everything handled by virLXCProcessCleanup,
      which is called via virLXCProcessStop when a failure occurs after the
      container has properly started. Most importantly, prior to this patch
      none of the stop/release hooks, host device reattachment, or network
      cleanup (the reverse of virLXCProcessSetupInterfaces) was performed.
      Signed-off-by: Luyao Huang <lhuang@redhat.com>
    • lxc: Modify/add some debug messages · 2b8e018a
      John Ferlan authored
      Modify the VIR_DEBUG message in virLXCProcessCleanup to make it
      clearer about the path. Also add some more VIR_DEBUG messages in
      virLXCProcessStart to help debug the error flow.
    • lxc: Move console checks in LXCProcessStart · 72129907
      Luyao Huang authored
      https://bugzilla.redhat.com/show_bug.cgi?id=1176503
      
      Move the two console checks - one for zero nconsoles present and the
      other for an invalid console type - earlier in the processing, rather
      than performing them after setup that has to be undone for what
      amounts to an invalid configuration.
      
      This resolves the above bug, since it's no longer possible to have
      changed the security labels by the time the configuration check
      fails.
  7. 13 Feb 2015, 10 commits
  8. 12 Feb 2015, 4 commits
    • Allow shrinking of file based volumes · aa9aa6a9
      Daniel P. Berrange authored
      While the main storage driver code allows the flag
      VIR_STORAGE_VOL_RESIZE_SHRINK to be set, none of the backend
      drivers support it. At the very least this can work
      for plain file based volumes, since we just ftruncate() them
      to the new size. It does not work with qcow2 volumes, but we
      can arguably delegate error reporting for that to qemu-img
      instead of second-guessing it ourselves:
      
      $ virsh vol-resize --shrink /home/berrange/VirtualMachines/demo.qcow2 2G
      error: Failed to change size of volume 'demo.qcow2' to 2G
      
      error: internal error: Child process (/usr/bin/qemu-img resize /home/berrange/VirtualMachines/demo.qcow2 2147483648) unexpected exit status 1: qemu-img: qcow2 doesn't support shrinking images yet
      qemu-img: This image does not support resize
      
      See also https://bugzilla.redhat.com/show_bug.cgi?id=1021802
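      For plain file volumes the shrink really is just a truncation; a
      minimal sketch follows (an illustrative helper, not the storage
      backend's actual function - ftruncate() both grows and shrinks a
      regular file):

```c
#include <fcntl.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

/* Shrink (or grow) a raw file-based volume to the requested capacity.
 * Returns 0 on success, -1 on failure. */
int
shrink_raw_volume(const char *path, off_t capacity)
{
    int fd = open(path, O_WRONLY);
    if (fd < 0)
        return -1;

    /* ftruncate() discards data past `capacity` for a shrink, or
     * extends the file with a hole for a grow. */
    int ret = ftruncate(fd, capacity);
    close(fd);
    return ret;
}
```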
    • qemu: do upfront check for vcpupids being null when querying pinning · 9358b63a
      Daniel P. Berrange authored
      The qemuDomainHelperGetVcpus attempted to report an error when the
      vcpupids info was NULL. Unfortunately earlier code would clamp the
      value of 'maxinfo' to 0 when nvcpupids was 0, so the error reporting
      would end up being skipped.
      
      This led to 'virsh vcpuinfo <dom>' just returning an empty list
      instead of giving the user a clear error.
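      The ordering bug can be illustrated with a simplified helper (names
      and signature are assumptions, not libvirt's actual code): check for
      the missing vcpupids *before* clamping maxinfo, so the error path
      cannot be skipped when nvcpupids is 0.

```c
#include <stddef.h>

/* Hypothetical sketch: return how many vcpu entries can be reported,
 * or -1 if pinning info is unavailable.  The original bug clamped
 * maxinfo to nvcpupids (0) first, so the NULL check never fired and
 * the caller silently got an empty list. */
int
get_vcpu_count(const int *vcpupids, int nvcpupids, int maxinfo)
{
    if (vcpupids == NULL)
        return -1;  /* report the error up front, before any clamping */

    return maxinfo < nvcpupids ? maxinfo : nvcpupids;
}
```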
    • qemu: fix setting of VM CPU affinity with TCG · a103bb10
      Daniel P. Berrange authored
      In a previous commit I fixed the incorrect handling of vcpu pids
      for TCG mode QEMU:
      
        commit b07f3d82
        Author: Daniel P. Berrange <berrange@redhat.com>
        Date:   Thu Dec 18 16:34:39 2014 +0000
      
          Don't setup fake CPU pids for old QEMU
      
          The code assumes that def->vcpus == nvcpupids, so when we setup
          fake CPU pids for old QEMU with nvcpupids == 1, we cause the
          later code to read off the end of the array. This has fun results
          like sched_setaffinity(0, ...) which changes libvirtd's own CPU
          affinity, or even better sched_setaffinity($RANDOM, ...) which
          changes the affinity of a random OS process.
      
      The intent was that this would merely disable the ability to set
      per-vCPU affinity. It should still have been possible to set VM
      level host CPU affinity.
      
      Unfortunately, when you set <vcpu cpuset='0-1'>4</vcpu>, the XML
      parser will internally take this and initialize an entry in the
      def->cputune.vcpupin array for every vCPU. IOW this is implicitly
      being treated as
      
        <cputune>
          <vcpupin cpuset='0-1' vcpu='0'/>
          <vcpupin cpuset='0-1' vcpu='1'/>
          <vcpupin cpuset='0-1' vcpu='2'/>
          <vcpupin cpuset='0-1' vcpu='3'/>
        </cputune>
      
      Even more fun, the faked cputune elements are hidden from view when
      querying the live XML, because their cpuset mask is the same as the
      VM default cpumask.
      
      The upshot was that it was impossible to set VM level CPU affinity.
      
      To fix this we must update qemuProcessSetVcpuAffinities so that it
      only reports a fatal error if the per-VCPU cpu mask is different
      from the VM level cpu mask.
      Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
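      The resulting rule can be sketched as a simple mask comparison
      (fixed-size byte masks and all names here are illustrative, not
      libvirt's qemuProcessSetVcpuAffinities code): per-vCPU pinning is
      fatal only when some vCPU actually requests a mask different from
      the VM-level one.

```c
#include <stdbool.h>
#include <string.h>

#define MASK_BYTES 16  /* illustrative fixed-size CPU mask */

/* Without per-vCPU pids we can only honor pinning that matches the
 * VM-level mask.  Return true if any vCPU requests real per-vCPU
 * pinning (a fatal error on TCG), false if every mask equals the VM
 * mask (implicit entries the parser faked; safe to ignore). */
bool
vcpu_pinning_needs_percpu_support(const unsigned char *vm_mask,
                                  unsigned char vcpu_masks[][MASK_BYTES],
                                  int nvcpus)
{
    for (int i = 0; i < nvcpus; i++) {
        if (memcmp(vm_mask, vcpu_masks[i], MASK_BYTES) != 0)
            return true;
    }
    return false;
}
```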