1. 14 February 2014 (2 commits)
  2. 11 February 2014 (1 commit)
  3. 10 February 2014 (1 commit)
  4. 07 February 2014 (1 commit)
  5. 06 February 2014 (1 commit)
    • qemu: blockjob: Print correct file name in error message · 5d2691cc
      Committed by Peter Krempa
      When attempting a blockcommit from the top layer, the base argument
      passed is NULL. This will be dereferenced when attempting a commit with
      an empty image chain. Output the real volume path instead:
      
      virsh blockcommit --verbose --path vda --domain DOMNAME --wait
      error: invalid argument: top '/path/somefile' in chain for 'vda' has no backing file
      
      instead of:
      
      error: invalid argument: top '(null)' in chain for 'vda' has no backing file
      5d2691cc
  6. 05 February 2014 (1 commit)
    • event: move event filtering to daemon (regression fix) · 11f20e43
      Committed by Eric Blake
      https://bugzilla.redhat.com/show_bug.cgi?id=1058839
      
      Commit f9f56340 for CVE-2014-0028 almost had the right idea - we
      need to check the ACL rules to filter which events to send.  But
      it overlooked one thing: the event dispatch queue is running in
      the main loop thread, and therefore does not normally have a
      current virIdentityPtr.  But filter checks can be based on current
      identity, so when libvirtd.conf contains access_drivers=["polkit"],
      we ended up rejecting access for EVERY event due to failure to
      look up the current identity, even if it should have been allowed.
      
      Furthermore, even for events that are triggered by API calls, it
      is important to remember that the point of events is that they can
      be copied across multiple connections, which may have separate
      identities and permissions.  So even if events were dispatched
      from a context where we have an identity, we must change to the
      correct identity of the connection that will be receiving the
      event, rather than basing a decision on the context that triggered
      the event, when deciding whether to filter an event to a
      particular connection.
      
      If there were an easy way to get from virConnectPtr to the
      appropriate virIdentityPtr, then object_event.c could adjust the
      identity prior to checking whether to dispatch an event.  But
      setting up that back-reference is a bit invasive.  Instead, it
      is easier to delay the filtering check until lower down the
      stack, at the point where we have direct access to the RPC
      client object that owns an identity.  As such, this patch ends
      up reverting a large portion of the framework of commit f9f56340.
      We also have to teach 'make check' to special-case the fact that
      the event registration filtering is done at the point of dispatch,
      rather than the point of registration.  Note that even though we
      don't actually use virConnectDomainEventRegisterCheckACL (because
      the RegisterAny variant is sufficient), we still generate the
      function for the purposes of documenting that the filtering
      takes place.
      
      Also note that I did not entirely delete the notion of a filter
      from object_event.c; I still plan on using that for my upcoming
      patch series for qemu monitor events in libvirt-qemu.so.  In
      other words, while this patch changes ACL filtering to live in
      remote.c and therefore we have no current client of the filtering
      in object_event.c, the notion of filtering in object_event.c is
      still useful down the road.
      
      * src/check-aclrules.pl: Exempt event registration from having to
      pass checkACL filter down call stack.
      * daemon/remote.c (remoteRelayDomainEventCheckACL)
      (remoteRelayNetworkEventCheckACL): New functions.
      (remoteRelay*Event*): Use new functions.
      * src/conf/domain_event.h (virDomainEventStateRegister)
      (virDomainEventStateRegisterID): Drop unused parameter.
      * src/conf/network_event.h (virNetworkEventStateRegisterID):
      Likewise.
      * src/conf/domain_event.c (virDomainEventFilter): Delete unused
      function.
      * src/conf/network_event.c (virNetworkEventFilter): Likewise.
      * src/libxl/libxl_driver.c: Adjust caller.
      * src/lxc/lxc_driver.c: Likewise.
      * src/network/bridge_driver.c: Likewise.
      * src/qemu/qemu_driver.c: Likewise.
      * src/remote/remote_driver.c: Likewise.
      * src/test/test_driver.c: Likewise.
      * src/uml/uml_driver.c: Likewise.
      * src/vbox/vbox_tmpl.c: Likewise.
      * src/xen/xen_driver.c: Likewise.
      Signed-off-by: Eric Blake <eblake@redhat.com>
      11f20e43
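      A rough sketch of the dispatch-time check described above, as it might look in
      daemon/remote.c; the helper name mirrors the one listed in the commit, but the
      body here is simplified and illustrative, not the actual implementation:

        /* Illustrative only: decide, at dispatch time, whether the client
         * that will receive the event may see the domain it refers to. */
        static bool
        remoteRelayDomainEventCheckACL(virNetServerClientPtr client,
                                       virConnectPtr conn,
                                       virDomainPtr dom)
        {
            virDomainDef def;
            virIdentityPtr identity = NULL;
            bool allowed = false;

            /* Build a minimal definition carrying just name and UUID. */
            memset(&def, 0, sizeof(def));
            def.name = dom->name;
            memcpy(def.uuid, dom->uuid, VIR_UUID_BUFLEN);

            /* The event loop thread has no identity of its own, so adopt
             * the identity of the receiving client for the ACL check. */
            if (!(identity = virNetServerClientGetIdentity(client)))
                goto cleanup;
            if (virIdentitySetCurrent(identity) < 0)
                goto cleanup;

            /* Generated ACL helper (domain:getattr). */
            allowed = virConnectDomainEventRegisterAnyCheckACL(conn, &def);

         cleanup:
            ignore_value(virIdentitySetCurrent(NULL));
            virObjectUnref(identity);
            return allowed;
        }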
  7. 31 January 2014 (1 commit)
    • Push nwfilter update locking up to top level · 6e5c79a1
      Committed by Daniel P. Berrange
      The NWFilter code has a deadlock race condition between the
      virNWFilter{Define,Undefine} APIs and the starting of guest
      VMs, due to mismatched lock ordering.
      
      In the virNWFilter{Define,Undefine} codepaths the lock ordering
      is
      
        1. nwfilter driver lock
        2. virt driver lock
        3. nwfilter update lock
        4. domain object lock
      
      In the VM guest startup paths the lock ordering is
      
        1. virt driver lock
        2. domain object lock
        3. nwfilter update lock
      
      As can be seen the domain object and nwfilter update locks are
      not acquired in a consistent order.
      
      The fix used is to push the nwfilter update lock up to the top
      level, resulting in a lock ordering for virNWFilter{Define,Undefine}
      of
      
        1. nwfilter driver lock
        2. nwfilter update lock
        3. virt driver lock
        4. domain object lock
      
      and VM start using
      
        1. nwfilter update lock
        2. virt driver lock
        3. domain object lock
      
      This has the effect of serializing VM startup once again, even if
      no nwfilters are applied to the guest. There is also the possibility
      of deadlock due to a call graph loop via virNWFilterInstantiate
      and virNWFilterInstantiateFilterLate.
      
      These two problems mean the lock must be turned into a read/write
      lock instead of a plain mutex at the same time. The lock is used to
      serialize changes to the "driver->nwfilters" hash, so the write lock
      only needs to be held by the define/undefine methods. All other
      methods can rely on a read lock which allows good concurrency.
      Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
      6e5c79a1
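      A minimal sketch of the read/write locking pattern described above, using plain
      pthreads for illustration (libvirt's own locking primitives and function names
      differ):

        #include <pthread.h>

        /* Illustrative only: a read/write lock serializing changes to the
         * driver->nwfilters hash.  Define/undefine take the write lock;
         * VM startup / filter instantiation take the read lock, so guests
         * without filters no longer serialize against each other. */
        static pthread_rwlock_t updateLock = PTHREAD_RWLOCK_INITIALIZER;

        /* virNWFilterDefine / virNWFilterUndefine paths */
        void nwfilterUpdateLockWrite(void) { pthread_rwlock_wrlock(&updateLock); }

        /* VM start / filter instantiation paths */
        void nwfilterUpdateLockRead(void)  { pthread_rwlock_rdlock(&updateLock); }

        void nwfilterUpdateUnlock(void)    { pthread_rwlock_unlock(&updateLock); }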
  8. 29 January 2014 (1 commit)
    • snapshot: Add support for specifying snapshot disk backing type · 7076b4b7
      Committed by Peter Krempa
      Add support for specifying various disk types when doing snapshots. This
      will later allow taking snapshots of network-backed volumes. Disks of type
      'volume' are not supported by snapshots (yet).
      
      Also amend the test suite to check parsing of the various new disk
      types that can now be specified.
      7076b4b7
  9. 25 January 2014 (1 commit)
    • Block info query: Add check for transient domain · 46a0737e
      Committed by John Ferlan
      Currently qemuDomainGetBlockInfo will return allocation == physical
      for most backing stores. For a qcow2 block-backed device of an active
      guest it is possible to return the highest LV extent allocated, as
      reported by qemu; that is a value where allocation != physical and,
      one would hope, smaller.
      However, if the guest is not running, the code falls back to returning
      allocation == physical. This turns out to be problematic for rhev, which
      monitors the size of the backing store. During a migration, before the
      VM has been started on the target and while it is deemed inactive on the
      source, there is a small window of time where the allocation is returned
      as physical, triggering the code to extend the file unnecessarily.
      
      Since rhev uses transient domains and this is an edge condition for a
      transient domain, rather than returning good status and allocation ==
      physical when this "window of opportunity" exists, this patch checks for
      a transient (non-persistent) domain and returns a failure to the caller
      rather than the defaults. For a persistent domain, the defaults are still
      returned. The description of virDomainGetBlockInfo has been updated to
      describe this behavior.
      46a0737e
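      A small sketch of the transient-domain check this commit describes, assuming
      simplified field names and error wording in the libvirt style:

        /* Illustrative fragment: if the domain is not running and is also
         * transient (not persistent), report an error instead of falling
         * back to allocation == physical. */
        if (!virDomainObjIsActive(vm)) {
            if (!vm->persistent) {
                virReportError(VIR_ERR_OPERATION_INVALID, "%s",
                               _("domain is transient and not running"));
                goto cleanup;
            }
            /* Persistent but inactive: keep the historical
             * allocation == physical fallback. */
        }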
  10. 24 January 2014 (1 commit)
  11. 23 January 2014 (1 commit)
    • api: require write permission for guest agent interaction · 7f2d27d1
      Committed by Eric Blake
      I noticed that we allow virDomainGetVcpusFlags even for read-only
      connections, but that with a flag, it can require guest agent
      interaction.  It is feasible that a malicious guest could
      intentionally abuse the replies it sends over the guest agent
      connection to possibly trigger a bug in libvirt's JSON parser,
      or withhold an answer so as to prevent the use of the agent
      in a later command such as a shutdown request.  Although we
      don't know of any such exploits now (and therefore don't mind
      posting this patch publicly without trying to get a CVE assigned),
      it is better to err on the side of caution and explicitly require
      full access to any domain where the API requires guest interaction
      to operate correctly.
      
      I audited all commands that are marked as conditionally using a
      guest agent.  Note that at least virDomainFSTrim is documented
      as needing a guest agent, but that such use is unconditional
      depending on the hypervisor (so the existing domain:fs_trim ACL
      should be sufficient there, rather than also requiring domain:write).
      But when designing future APIs, such as the plans for obtaining
      a domain's IP addresses, we should copy the approach of this patch
      in making interaction with the guest be specified via a flag, and
      use that flag to also require stricter access checks.
      
      * src/libvirt.c (virDomainGetVcpusFlags): Forbid guest interaction
      on read-only connection.
      (virDomainShutdownFlags, virDomainReboot): Improve docs on agent
      interaction.
      * src/remote/remote_protocol.x
      (REMOTE_PROC_DOMAIN_SNAPSHOT_CREATE_XML)
      (REMOTE_PROC_DOMAIN_SET_VCPUS_FLAGS)
      (REMOTE_PROC_DOMAIN_GET_VCPUS_FLAGS, REMOTE_PROC_DOMAIN_REBOOT)
      (REMOTE_PROC_DOMAIN_SHUTDOWN_FLAGS): Require domain:write for any
      conditional use of a guest agent.
      * src/xen/xen_driver.c: Fix clients.
      * src/libxl/libxl_driver.c: Likewise.
      * src/uml/uml_driver.c: Likewise.
      * src/qemu/qemu_driver.c: Likewise.
      * src/lxc/lxc_driver.c: Likewise.
      Signed-off-by: Eric Blake <eblake@redhat.com>
      7f2d27d1
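      A minimal sketch of the read-only connection check described for
      virDomainGetVcpusFlags, assuming the usual libvirt flag and error names; the
      exact check used in src/libvirt.c may differ:

        /* Illustrative fragment for src/libvirt.c: any flag that triggers
         * guest agent interaction is refused on a read-only connection. */
        if ((flags & VIR_DOMAIN_VCPU_GUEST) &&
            (domain->conn->flags & VIR_CONNECT_RO)) {
            virReportError(VIR_ERR_OPERATION_DENIED, "%s",
                           _("connection is read-only"));
            goto error;
        }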
  12. 22 January 2014 (1 commit)
  13. 21 January 2014 (1 commit)
  14. 20 January 2014 (2 commits)
  15. 17 January 2014 (1 commit)
    • maint: avoid nested use of virConnect{Ref,Close} · 25221a1b
      Committed by Eric Blake
      The public virConnectRef and virConnectClose APIs are just thin
      wrappers around virObjectRef/virObjectUnref, with added object
      validation and an error reset.  Within our backend drivers, use
      of the object validation is just an inefficiency since we always
      pass valid objects.  More important to think about is what
      happens with the error reset; our uses of virConnectRef happened
      to be safe (since we hadn't encountered any earlier errors), but
      in several cases the use of virConnectClose could lose a real
      error.
      
      Ideally, we should also avoid calling virConnectOpen() from
      within backend drivers - but that is a known situation that
      needs much more design work.
      
      * src/qemu/qemu_process.c (qemuProcessReconnectHelper)
      (qemuProcessReconnect): Avoid nested public API call.
      * src/qemu/qemu_driver.c (qemuAutostartDomains)
      (qemuStateInitialize, qemuStateStop): Likewise.
      * src/qemu/qemu_migration.c (doPeer2PeerMigrate): Likewise.
      * src/storage/storage_driver.c (storageDriverAutostart):
      Likewise.
      * src/uml/uml_driver.c (umlAutostartConfigs): Likewise.
      * src/lxc/lxc_process.c (virLXCProcessAutostartAll): Likewise.
      (virLXCProcessReboot): Likewise, and avoid leaking conn on error.
      Signed-off-by: Eric Blake <eblake@redhat.com>
      25221a1b
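      A short sketch of the substitution described above, with hypothetical wrapper
      names used only for illustration:

        /* Illustrative only: within a backend driver the connection is known
         * to be valid, so take/drop the reference on the object directly.
         * The public wrappers also reset the thread-local error, which on the
         * close path could silently discard a real error. */
        static void
        driverHoldConn(virConnectPtr conn)
        {
            virObjectRef(conn);      /* instead of virConnectRef(conn) */
        }

        static void
        driverReleaseConn(virConnectPtr conn)
        {
            virObjectUnref(conn);    /* instead of virConnectClose(conn) */
        }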
  16. 16 January 2014 (1 commit)
    • event: filter global events by domain:getattr ACL [CVE-2014-0028] · f9f56340
      Committed by Eric Blake
      Ever since ACL filtering was added in commit 76397360 (v1.1.1), a
      user could still use event registration to obtain access to a
      domain that they could not normally access via virDomainLookup*
      or virConnectListAllDomains and friends.  We already have the
      framework in the RPC generator for creating the filter, and
      previous cleanup patches got us to the point that we can now
      wire the filter through the entire object event stack.
      
      Furthermore, whether or not domain:getattr is honored, use of
      global events is a form of obtaining a list of domains, which
      is covered by connect:search_domains added in a93cd08f (v1.1.0).
      Ideally, we'd have a way to enforce connect:search_domains when
      doing global registrations while omitting that check on a
      per-domain registration.  But this patch just unconditionally
      requires connect:search_domains, even when no list could be
      obtained, based on the following observations:
      1. Administrators are unlikely to grant domain:getattr for one
      or all domains while still denying connect:search_domains - a
      user that is able to manage domains will want to be able to
      manage them efficiently, but efficient management includes being
      able to list the domains they can access.  The idea of denying
      connect:search_domains while still granting access to individual
      domains is therefore not adding any real security, but just
      serves as a layer of obscurity to annoy the end user.
      2. In the current implementation, domain events are filtered
      on the client; the server has no idea if a domain filter was
      requested, and must therefore assume that all domain event
      requests are global.  Even if we fix the RPC protocol to
      allow for server-side filtering for newer client/server combos,
      making the connect:search_domains ACL check conditional on
      whether the domain argument was NULL won't benefit older clients.
      Therefore, we choose to document that connect:search_domains
      is a pre-requisite to any domain event management.
      
      Network events need the same treatment, with the obvious
      change of using connect:search_networks and network:getattr.
      
      * src/access/viraccessperm.h
      (VIR_ACCESS_PERM_CONNECT_SEARCH_DOMAINS)
      (VIR_ACCESS_PERM_CONNECT_SEARCH_NETWORKS): Document additional
      effect of the permission.
      * src/conf/domain_event.h (virDomainEventStateRegister)
      (virDomainEventStateRegisterID): Add new parameter.
      * src/conf/network_event.h (virNetworkEventStateRegisterID):
      Likewise.
      * src/conf/object_event_private.h (virObjectEventStateRegisterID):
      Likewise.
      * src/conf/object_event.c (_virObjectEventCallback): Track a filter.
      (virObjectEventDispatchMatchCallback): Use filter.
      (virObjectEventCallbackListAddID): Register filter.
      * src/conf/domain_event.c (virDomainEventFilter): New function.
      (virDomainEventStateRegister, virDomainEventStateRegisterID):
      Adjust callers.
      * src/conf/network_event.c (virNetworkEventFilter): New function.
      (virNetworkEventStateRegisterID): Adjust caller.
      * src/remote/remote_protocol.x
      (REMOTE_PROC_CONNECT_DOMAIN_EVENT_REGISTER)
      (REMOTE_PROC_CONNECT_DOMAIN_EVENT_REGISTER_ANY)
      (REMOTE_PROC_CONNECT_NETWORK_EVENT_REGISTER_ANY): Generate a
      filter, and require connect:search_domains instead of weaker
      connect:read.
      * src/test/test_driver.c (testConnectDomainEventRegister)
      (testConnectDomainEventRegisterAny)
      (testConnectNetworkEventRegisterAny): Update callers.
      * src/remote/remote_driver.c (remoteConnectDomainEventRegister)
      (remoteConnectDomainEventRegisterAny): Likewise.
      * src/xen/xen_driver.c (xenUnifiedConnectDomainEventRegister)
      (xenUnifiedConnectDomainEventRegisterAny): Likewise.
      * src/vbox/vbox_tmpl.c (vboxDomainGetXMLDesc): Likewise.
      * src/libxl/libxl_driver.c (libxlConnectDomainEventRegister)
      (libxlConnectDomainEventRegisterAny): Likewise.
      * src/qemu/qemu_driver.c (qemuConnectDomainEventRegister)
      (qemuConnectDomainEventRegisterAny): Likewise.
      * src/uml/uml_driver.c (umlConnectDomainEventRegister)
      (umlConnectDomainEventRegisterAny): Likewise.
      * src/network/bridge_driver.c
      (networkConnectNetworkEventRegisterAny): Likewise.
      * src/lxc/lxc_driver.c (lxcConnectDomainEventRegister)
      (lxcConnectDomainEventRegisterAny): Likewise.
      Signed-off-by: Eric Blake <eblake@redhat.com>
      f9f56340
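      A conceptual sketch of the registration-time filter this commit wires through
      the event stack; only the function names come from the commit message, the
      signature shown is an assumption:

        /* Conceptual sketch of the registration-time filter (later reverted
         * by commit 11f20e43): each callback carries a filter consulted
         * before an event is dispatched to it.  Signature is assumed. */
        typedef bool (*virObjectEventCallbackFilter)(virConnectPtr conn,
                                                     virDomainDefPtr def);

        static bool
        virDomainEventFilter(virConnectPtr conn, virDomainDefPtr def)
        {
            /* Generated ACL helper: may this connection see events about
             * the domain described by 'def' (domain:getattr)? */
            return virConnectDomainEventRegisterAnyCheckACL(conn, def);
        }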
  17. 07 January 2014 (6 commits)
    • qemu: Fix job usage in virDomainGetBlockIoTune · 3b564259
      Committed by Jiri Denemark
      CVE-2013-6458
      
      Every API that is going to begin a job should do that before fetching
      data from vm->def.
      3b564259
    • qemu: Fix job usage in qemuDomainBlockCopy · ff5f30b6
      Committed by Jiri Denemark
      Every API that is going to begin a job should do that before fetching
      data from vm->def.
      ff5f30b6
    • qemu: Fix job usage in qemuDomainBlockJobImpl · f93d2caa
      Committed by Jiri Denemark
      CVE-2013-6458
      
      Every API that is going to begin a job should do that before fetching
      data from vm->def.
      f93d2caa
    • qemu: Avoid using stale data in virDomainGetBlockInfo · b7992595
      Committed by Jiri Denemark
      CVE-2013-6458
      
      Generally, every API that is going to begin a job should do that before
      fetching data from vm->def. However, qemuDomainGetBlockInfo does not
      know whether it will have to start a job or not before checking vm->def.
      To avoid using a disk alias that might have been freed while we were
      waiting for a job, we use a copy of it. In case the disk was removed in
      the meantime, we will fail with a "cannot find statistics for device '...'"
      error message.
      b7992595
    • qemu: Do not access stale data in virDomainBlockStats · db86da5c
      Committed by Jiri Denemark
      CVE-2013-6458
      https://bugzilla.redhat.com/show_bug.cgi?id=1043069
      
      When virDomainDetachDeviceFlags is called concurrently with
      virDomainBlockStats, libvirtd may crash because qemuDomainBlockStats
      finds a disk in vm->def before getting a job on a domain and uses the
      disk pointer after getting the job. However, the domain in unlocked
      while waiting on a job condition and thus data behind the disk pointer
      may disappear. This happens when thread 1 runs
      virDomainDetachDeviceFlags and enters monitor to actually remove the
      disk. Then another thread starts running virDomainBlockStats, finds the
      disk in vm->def, and while it's waiting on the job condition (owned by
      the first thread), the first thread finishes the disk removal. When the
      second thread gets the job, the memory pointed to by the disk pointer is
      already gone.
      
      That said, every API that is going to begin a job should do that before
      fetching data from vm->def.
      db86da5c
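      A sketch of the ordering rule shared by these fixes, in qemu-driver style; the
      disk lookup shown is a simplified placeholder, the point is only that the job
      is acquired before vm->def is consulted:

        /* Illustrative ordering (qemu driver style): acquire the job before
         * consulting vm->def, because the domain lock is dropped while
         * waiting for the job and the disk may be detached meanwhile. */
        if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_QUERY) < 0)
            goto cleanup;

        if ((idx = virDomainDiskIndexByName(vm->def, path, false)) < 0) {
            virReportError(VIR_ERR_INVALID_ARG,
                           _("invalid path: %s"), path);
            goto endjob;
        }
        disk = vm->def->disks[idx];

        /* ... use 'disk' only while the job is held ... */

     endjob:
        if (!qemuDomainObjEndJob(driver, vm))
            vm = NULL;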
    • maint: fix comment typos in qemu numa code · 599ef94d
      Committed by Eric Blake
      Introduced in commit 81fae6b9.
      
      * src/qemu/qemu_driver.c (qemuDomainSetNumaParamsLive): Fix typos.
      Signed-off-by: Eric Blake <eblake@redhat.com>
      599ef94d
  18. 06 January 2014 (2 commits)
    • qemu: range check numa memory placement mode · 6e7490c7
      Committed by Peter Krempa
      https://bugzilla.redhat.com/show_bug.cgi?id=1047234
      
      Add a range check for supported numa memory placement modes provided by
      the user before setting them in the domain definition. Without the check
      the user is able to provide a (yet) unknown mode which is then stored in
      the domain definition. This potentially causes a NULL dereference when
      the definition is formatted into the XML.
      
      To reproduce run:
       virsh numatune DOMNAME --mode 6 --nodeset 0
      
      The XML will then contain:
        <numatune>
            <memory mode='(null)' nodeset='0'/>
        </numatune>
      
      With this fix, the command fails:
       error: Unable to change numa parameters
       error: invalid argument: unsupported numa_mode: '6'
      6e7490c7
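      A minimal sketch of the range check described above, assuming the usual
      numatune enum sentinel; the error wording follows the message quoted above:

        /* Illustrative range check before storing the user-provided mode in
         * the definition, so formatting never hits an unknown enum value. */
        if (mode < 0 || mode >= VIR_DOMAIN_NUMATUNE_MEM_LAST) {
            virReportError(VIR_ERR_INVALID_ARG,
                           _("unsupported numa_mode: '%d'"), mode);
            goto cleanup;
        }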
    • qemu: Clean up qemuDomainSetNumaParameters · 8b573a6b
      Committed by Peter Krempa
      Add whitespace to separate logical code blocks, reformat error messages
      and clean up code flow.
      
      This patch changes error handling in some cases: where the loop would
      previously continue, the code now jumps to cleanup and errors out rather
      than modifying the domain any further.
      8b573a6b
  19. 12 December 2013 (2 commits)
  20. 10 December 2013 (6 commits)
  21. 06 December 2013 (1 commit)
  22. 04 December 2013 (1 commit)
  23. 03 December 2013 (2 commits)
    • qemu: default to vfio for nodedev-detach · 47b9aae0
      Committed by Laine Stump
      This patch resolves:
      
        https://bugzilla.redhat.com/show_bug.cgi?id=1035188
      
      Commit f094aaac changed the PCI device assignment in qemu domains
      to default to using VFIO rather than legacy KVM device assignment
      (when VFIO is available). It didn't change which driver was used by
      default for virNodeDeviceDetachFlags(), though, so that API (and the
      virsh nodedev-detach command) was still binding to the pci-stub
      driver, used by legacy KVM assignment, by default.
      
      This patch publicizes (only within the qemu module, so no additions to
      the symbol exports are needed) the functions that check for the presence
      of KVM and VFIO device assignment, then uses those functions to decide
      what to do when no driver is specified for virNodeDeviceDetachFlags():
      if the vfio driver is loaded, the device will be bound to vfio-pci;
      otherwise, if legacy KVM assignment is supported on this system, the
      device will be bound to pci-stub; if neither method is available, the
      detach will fail.
      47b9aae0
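      A sketch of the driver-selection logic described above; the two capability
      checks are placeholders standing in for the qemu-internal helpers the commit
      message refers to:

        /* Illustrative decision logic; the two capability checks below are
         * placeholder names, not real libvirt functions. */
        const char *driverName;

        if (qemuHostSupportsVfioAssignment())            /* placeholder name */
            driverName = "vfio-pci";
        else if (qemuHostSupportsLegacyKvmAssignment())  /* placeholder name */
            driverName = "pci-stub";
        else {
            virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
                           _("neither VFIO nor KVM device assignment is "
                             "available on this host"));
            return -1;
        }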
    • qemu: snapshots: Declare supported and unsupported snapshot configs · 26fb96d8
      Committed by Peter Krempa
      Currently the snapshot code does not check whether it actually supports
      snapshots on the various disk backends a domain may use. To avoid future
      problems, add checkers that whitelist the supported configurations.
      26fb96d8
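      A rough sketch of such a whitelist checker, assuming the disk-type enum names
      of that era; treat it as illustrative only:

        /* Illustrative whitelist for snapshot support, using simplified
         * disk-type enum names.  Unsupported source types are rejected
         * rather than silently attempted. */
        switch ((enum virDomainDiskType) disk->type) {
        case VIR_DOMAIN_DISK_TYPE_FILE:
        case VIR_DOMAIN_DISK_TYPE_BLOCK:
            break;                     /* supported */

        case VIR_DOMAIN_DISK_TYPE_NETWORK:
        case VIR_DOMAIN_DISK_TYPE_VOLUME:
        default:
            virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
                           _("snapshots are not supported on disk '%s'"),
                           disk->dst);
            return -1;
        }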
  24. 02 December 2013 (2 commits)