1. 02 October 2014 (1 commit)
  2. 18 September 2014 (1 commit)
  3. 03 July 2014 (1 commit)
    • qemu: copy: Accept 'format' parameter when copying to a non-existing img · a73122a4
      Committed by Peter Krempa
      We have the following matrix of possible arguments handled by the logic
      statement touched by this patch:
             | flags & _REUSE_EXT | !(flags & _REUSE_EXT)
      -------+--------------------+----------------------
       format| (1)                | (2)
      -------+--------------------+----------------------
      !format| (3)                | (4)
      -------+--------------------+----------------------
      
      In cases 1 and 2 the user provided a format; in cases 3 and 4 they did
      not. The user requests the use of a pre-existing image in cases 1 and 3,
      while libvirt creates a new image in cases 2 and 4.
      
      The difference between cases 3 and 4 is that for 3 the format is probed
      from the user-provided image, whereas in 4 we just use the existing disk
      format.
      
      The current code treats cases 1, 3 and 4 correctly, but in case 2 the
      format provided by the user is ignored.
      
      This particular piece of code was broken in commit 35c7701c, but since it
      had been introduced only a few commits earlier, it was never released in a
      working state.
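
      To make the four cases concrete, here is a minimal, self-contained C
      sketch of the intended selection logic; the names below are illustrative
      and are not taken from qemu_driver.c:

        #include <stdbool.h>
        #include <stdio.h>

        /* user_format is NULL when the user supplied no format (cases 3/4);
         * reuse_ext corresponds to flags & _REUSE_EXT (cases 1/3). */
        static const char *
        pick_format(const char *user_format, bool reuse_ext,
                    const char *probed_format, const char *current_disk_format)
        {
            if (user_format)
                return user_format;          /* cases 1 and 2: honor the user */
            if (reuse_ext)
                return probed_format;        /* case 3: probe the existing image */
            return current_disk_format;      /* case 4: keep the disk's format */
        }

        int main(void)
        {
            /* Case 2 (new image, format given) must not fall back to "raw". */
            printf("%s\n", pick_format("qcow2", false, "raw", "raw"));
            return 0;
        }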
      
      (cherry picked from commit 42619ed0)
      Signed-off-by: Eric Blake <eblake@redhat.com>
      
      Conflicts:
      	src/qemu/qemu_driver.c - no refactoring of commits 7b7bf001, 4f202266
  4. 27 June 2014 (2 commits)
    • docs: publish correct enum values · b814222d
      Committed by Eric Blake
      We publish libvirt-api.xml for others to use, and in fact, the
      libvirt-python bindings use it to generate python constants that
      correspond to our enum values.  However, we had an off-by-one bug: any
      enum that relied on C's rule of implicit initialization of the first
      enum member to 0 got listed in the XML as having a value of 1 (and all
      later members of the enum were equally botched).
      
      The fix is simple - since we add one to the previous value when
      encountering an enum without an initializer, the previous value
      must start at -1 so that the first enum member is assigned 0.
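
      As a self-contained illustration of C's rule, and of why a generator that
      tracks a running "previous value" has to seed it with -1 (standard C only,
      no libvirt code involved):

        #include <stdio.h>

        enum demo {
            DEMO_FIRST,    /* implicitly 0 */
            DEMO_SECOND,   /* implicitly 1 */
            DEMO_THIRD     /* implicitly 2 */
        };

        int main(void)
        {
            int prev = -1;                          /* generator's seed value */
            printf("%d %d\n", DEMO_FIRST, ++prev);  /* prints "0 0" */
            return 0;
        }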
      
      The python generator code has had the off-by-one ever since DV
      first wrote it years ago, but most of our public enums were immune
      because they had an explicit = 0 initializer.  The only affected
      enums are:
      - virDomainEventGraphicsAddressType (such as
      VIR_DOMAIN_EVENT_GRAPHICS_ADDRESS_IPV4), since commit 987e31ed
      (libvirt v0.8.0)
      - virDomainCoreDumpFormat (such as VIR_DOMAIN_CORE_DUMP_FORMAT_RAW),
      since commit 9fbaff00 (libvirt v1.2.3)
      - virIPAddrType (such as VIR_IP_ADDR_TYPE_IPV4), since commit
      03e0e79e (not yet released)
      
      Thanks to Nehal J Wani for reporting the problem on IRC, and
      for helping me zero in on the culprit function.
      
      * docs/apibuild.py (CParser.parseEnumBlock): Fix implicit enum
      values.
      Signed-off-by: Eric Blake <eblake@redhat.com>
      (cherry picked from commit 9b291bbe)
    • qemu: blockcopy: Don't remove existing disk mirror info · 60e54a50
      Committed by Peter Krempa
      When creating a new disk mirror, the new struct is stored in a separate
      variable until everything has succeeded. The removed hunk would actually
      discard existing mirror information, for example when the API was invoked
      while a mirror still existed.
      
      (cherry picked from commit 02b364e1)
      
      This fixes a regression introduced in commit ff5f30b6.
      Signed-off-by: Eric Blake <eblake@redhat.com>
      
      Conflicts:
      	src/qemu/qemu_driver.c - no refactoring of commits 7b7bf001, 4f202266
  5. 06 May 2014 (1 commit)
  6. 01 May 2014 (1 commit)
  7. 10 April 2014 (3 commits)
  8. 20 March 2014 (1 commit)
    • virNetClientSetTLSSession: Restore original signal mask · 93394f56
      Committed by Michal Privoznik
      Currently, we use pthread_sigmask(SIG_BLOCK, ...) prior to calling
      poll(). This is okay, as we don't want poll() to be interrupted.
      However, immediately after we return from poll(), we try to restore the
      original sigmask - again using SIG_BLOCK. But as the man page says,
      SIG_BLOCK adds signals to the signal mask:
      
      SIG_BLOCK
            The set of blocked signals is the union of the current set and the set argument.
      
      Therefore, when restoring the original mask, we need to completely
      overwrite the one we set earlier and hence we should be using:
      
      SIG_SETMASK
            The set of blocked signals is set to the argument set.
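
      A minimal standalone sketch of the intended pattern (not the actual
      virNetClientSetTLSSession code):

        #include <poll.h>
        #include <pthread.h>
        #include <signal.h>

        static int wait_readable(int fd)
        {
            sigset_t blocked, oldmask;
            struct pollfd pfd = { .fd = fd, .events = POLLIN };
            int ret;

            sigfillset(&blocked);
            /* Block everything so poll() cannot be interrupted; the previous
             * mask is saved in oldmask. */
            pthread_sigmask(SIG_BLOCK, &blocked, &oldmask);

            ret = poll(&pfd, 1, -1);

            /* Restore the saved mask verbatim.  Using SIG_BLOCK here would
             * only union oldmask into the already-blocked set and leave every
             * signal blocked. */
            pthread_sigmask(SIG_SETMASK, &oldmask, NULL);

            return ret;
        }
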
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
      (cherry picked from commit 3d4b4f5a)
  9. 10 March 2014 (1 commit)
    • Add a mutex to serialize updates to firewall · d5a14a1a
      Committed by Daniel P. Berrange
      The nwfilter conf update mutex previously serialized
      updates to the internal data structures for firewall
      rules, and updates to the firewall itself. The latter
      was recently turned into a read/write lock, and filter
      instantiation allowed to proceed in parallel. It was
      believed that this was ok, since each filter is created
      on a separate iptables/ebtables chain.
      
      It turns out that there is a subtle lock ordering problem
      on virNWFilterObjPtr instances. __virNWFilterInstantiateFilter
      will hold a lock on the virNWFilterObjPtr it is instantiating.
      This in turn invokes virNWFilterInstantiate which then invokes
      virNWFilterDetermineMissingVarsRec which then invokes
      virNWFilterObjFindByName. This iterates over every single
      virNWFilterObjPtr in the list, locking them and checking their
      name. So if two or more threads try to instantiate a filter in
      parallel, each holds one lock at the top level in the
      __virNWFilterInstantiateFilter method, which causes the other threads
      to deadlock in virNWFilterObjFindByName.
      
      The fix is to add an exclusive mutex to serialize the
      execution of __virNWFilterInstantiateFilter.
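
      A self-contained sketch of that kind of serialization; the names here are
      hypothetical, not the actual nwfilter code:

        #include <pthread.h>

        static pthread_mutex_t instantiate_lock = PTHREAD_MUTEX_INITIALIZER;

        static int do_instantiate(const char *filter_name)
        {
            (void)filter_name;   /* stand-in for work that locks filter objects */
            return 0;
        }

        int instantiate_filter(const char *filter_name)
        {
            int ret;

            /* Only one thread may instantiate at a time, so the per-filter
             * locks taken inside can no longer deadlock against each other. */
            pthread_mutex_lock(&instantiate_lock);
            ret = do_instantiate(filter_name);
            pthread_mutex_unlock(&instantiate_lock);
            return ret;
        }
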
      Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
      (cherry picked from commit 925de19e)
  10. 18 February 2014 (14 commits)
  11. 05 February 2014 (1 commit)
    • event: move event filtering to daemon (regression fix) · e48e414e
      Committed by Eric Blake
      https://bugzilla.redhat.com/show_bug.cgi?id=1058839
      
      Commit f9f56340 for CVE-2014-0028 almost had the right idea - we
      need to check the ACL rules to filter which events to send.  But
      it overlooked one thing: the event dispatch queue is running in
      the main loop thread, and therefore does not normally have a
      current virIdentityPtr.  But filter checks can be based on current
      identity, so when libvirtd.conf contains access_drivers=["polkit"],
      we ended up rejecting access for EVERY event due to failure to
      look up the current identity, even if it should have been allowed.
      
      Furthermore, even for events that are triggered by API calls, it
      is important to remember that the point of events is that they can
      be copied across multiple connections, which may have separate
      identities and permissions.  So even if events were dispatched
      from a context where we have an identity, we must change to the
      correct identity of the connection that will be receiving the
      event, rather than basing a decision on the context that triggered
      the event, when deciding whether to filter an event to a
      particular connection.
      
      If there were an easy way to get from virConnectPtr to the
      appropriate virIdentityPtr, then object_event.c could adjust the
      identity prior to checking whether to dispatch an event.  But
      setting up that back-reference is a bit invasive.  Instead, it
      is easier to delay the filtering check until lower down the
      stack, at the point where we have direct access to the RPC
      client object that owns an identity.  As such, this patch ends
      up reverting a large portion of the framework of commit f9f56340.
      We also have to teach 'make check' to special-case the fact that
      the event registration filtering is done at the point of dispatch,
      rather than the point of registration.  Note that even though we
      don't actually use virConnectDomainEventRegisterCheckACL (because
      the RegisterAny variant is sufficient), we still generate the
      function for the purposes of documenting that the filtering
      takes place.
      
      Also note that I did not entirely delete the notion of a filter
      from object_event.c; I still plan on using that for my upcoming
      patch series for qemu monitor events in libvirt-qemu.so.  In
      other words, while this patch changes ACL filtering to live in
      remote.c and therefore we have no current client of the filtering
      in object_event.c, the notion of filtering in object_event.c is
      still useful down the road.
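
      As a purely illustrative, self-contained sketch of "check at dispatch
      time, with the receiving connection's identity"; none of these names are
      real daemon/remote.c symbols:

        #include <stdbool.h>

        struct client { int identity; };
        struct domain { int owner; };

        /* ACL stand-in: may the identity bound to this connection see events
         * about this domain? */
        static bool check_domain_getattr(int identity, const struct domain *dom)
        {
            return identity == dom->owner;
        }

        /* Called once per receiving connection when an event is relayed,
         * instead of once in the main loop thread at queue time. */
        static bool relay_event_allowed(const struct client *c,
                                        const struct domain *d)
        {
            return check_domain_getattr(c->identity, d);
        }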
      
      * src/check-aclrules.pl: Exempt event registration from having to
      pass checkACL filter down call stack.
      * daemon/remote.c (remoteRelayDomainEventCheckACL)
      (remoteRelayNetworkEventCheckACL): New functions.
      (remoteRelay*Event*): Use new functions.
      * src/conf/domain_event.h (virDomainEventStateRegister)
      (virDomainEventStateRegisterID): Drop unused parameter.
      * src/conf/network_event.h (virNetworkEventStateRegisterID):
      Likewise.
      * src/conf/domain_event.c (virDomainEventFilter): Delete unused
      function.
      * src/conf/network_event.c (virNetworkEventFilter): Likewise.
      * src/libxl/libxl_driver.c: Adjust caller.
      * src/lxc/lxc_driver.c: Likewise.
      * src/network/bridge_driver.c: Likewise.
      * src/qemu/qemu_driver.c: Likewise.
      * src/remote/remote_driver.c: Likewise.
      * src/test/test_driver.c: Likewise.
      * src/uml/uml_driver.c: Likewise.
      * src/vbox/vbox_tmpl.c: Likewise.
      * src/xen/xen_driver.c: Likewise.
      Signed-off-by: Eric Blake <eblake@redhat.com>
      (cherry picked from commit 11f20e43)
  12. 04 February 2014 (2 commits)
    • Push nwfilter update locking up to top level · c5d10b7a
      Committed by Daniel P. Berrange
      The NWFilter code has as a deadlock race condition between
      the virNWFilter{Define,Undefine} APIs and starting of guest
      VMs due to mis-matched lock ordering.
      
      In the virNWFilter{Define,Undefine} codepaths the lock ordering
      is
      
        1. nwfilter driver lock
        2. virt driver lock
        3. nwfilter update lock
        4. domain object lock
      
      In the VM guest startup paths the lock ordering is
      
        1. virt driver lock
        2. domain object lock
        3. nwfilter update lock
      
      As can be seen the domain object and nwfilter update locks are
      not acquired in a consistent order.
      
      The fix used is to push the nwfilter update lock up to the top
      level, resulting in a lock ordering for virNWFilter{Define,Undefine}
      of
      
        1. nwfilter driver lock
        2. nwfilter update lock
        3. virt driver lock
        4. domain object lock
      
      and VM start using
      
        1. nwfilter update lock
        2. virt driver lock
        3. domain object lock
      
      This has the effect of serializing VM startup once again, even if
      no nwfilters are applied to the guest. There is also the possibility
      of deadlock due to a call graph loop via virNWFilterInstantiate
      and virNWFilterInstantiateFilterLate.
      
      These two problems mean the lock must be turned into a read/write
      lock instead of a plain mutex at the same time. The lock is used to
      serialize changes to the "driver->nwfilters" hash, so the write lock
      only needs to be held by the define/undefine methods. All other
      methods can rely on a read lock which allows good concurrency.
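
      A minimal sketch of that usage pattern with a plain POSIX rwlock; the
      function names are illustrative, not the real driver entry points:

        #include <pthread.h>

        static pthread_rwlock_t nwfilters_lock = PTHREAD_RWLOCK_INITIALIZER;

        void define_filter(void)
        {
            pthread_rwlock_wrlock(&nwfilters_lock);   /* exclusive: hash changes */
            /* ... add or replace an entry in driver->nwfilters ... */
            pthread_rwlock_unlock(&nwfilters_lock);
        }

        void start_vm(void)
        {
            pthread_rwlock_rdlock(&nwfilters_lock);   /* shared: read-only use */
            /* ... instantiate the filters referenced by the guest ... */
            pthread_rwlock_unlock(&nwfilters_lock);
        }
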
      Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
      (cherry picked from commit 6e5c79a1)
    • Add a read/write lock implementation · 822d25b2
      Committed by Daniel P. Berrange
      Add virRWLock, backed by a POSIX rwlock primitive.
      Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
      (cherry picked from commit c065984b)
  13. 16 January 2014 (9 commits)
    • Release of libvirt-1.2.1 · 7b84b167
      Committed by Daniel Veillard
      * docs/news.html.in libvirt.spec.in: updated for the release
      * po/*.po*: updated localization from transifex and regenerated
    • event: filter global events by domain:getattr ACL [CVE-2014-0028] · f9f56340
      Committed by Eric Blake
      Ever since ACL filtering was added in commit 76397360 (v1.1.1), a
      user could still use event registration to obtain access to a
      domain that they could not normally access via virDomainLookup*
      or virConnectListAllDomains and friends.  We already have the
      framework in the RPC generator for creating the filter, and
      previous cleanup patches got us to the point that we can now
      wire the filter through the entire object event stack.
      
      Furthermore, whether or not domain:getattr is honored, use of
      global events is a form of obtaining a list of domains, which
      is covered by connect:search_domains added in a93cd08f (v1.1.0).
      Ideally, we'd have a way to enforce connect:search_domains when
      doing global registrations while omitting that check on a
      per-domain registration.  But this patch just unconditionally
      requires connect:search_domains, even when no list could be
      obtained, based on the following observations:
      1. Administrators are unlikely to grant domain:getattr for one
      or all domains while still denying connect:search_domains - a
      user that is able to manage domains will want to be able to
      manage them efficiently, but efficient management includes being
      able to list the domains they can access.  The idea of denying
      connect:search_domains while still granting access to individual
      domains is therefore not adding any real security, but just
      serves as a layer of obscurity to annoy the end user.
      2. In the current implementation, domain events are filtered
      on the client; the server has no idea if a domain filter was
      requested, and must therefore assume that all domain event
      requests are global.  Even if we fix the RPC protocol to
      allow for server-side filtering for newer client/server combos,
      making the connect:search_domains ACL check conditional on
      whether the domain argument was NULL won't benefit older clients.
      Therefore, we choose to document that connect:search_domains
      is a pre-requisite to any domain event management.
      
      Network events need the same treatment, with the obvious
      change of using connect:search_networks and network:getattr.
      
      * src/access/viraccessperm.h
      (VIR_ACCESS_PERM_CONNECT_SEARCH_DOMAINS)
      (VIR_ACCESS_PERM_CONNECT_SEARCH_NETWORKS): Document additional
      effect of the permission.
      * src/conf/domain_event.h (virDomainEventStateRegister)
      (virDomainEventStateRegisterID): Add new parameter.
      * src/conf/network_event.h (virNetworkEventStateRegisterID):
      Likewise.
      * src/conf/object_event_private.h (virObjectEventStateRegisterID):
      Likewise.
      * src/conf/object_event.c (_virObjectEventCallback): Track a filter.
      (virObjectEventDispatchMatchCallback): Use filter.
      (virObjectEventCallbackListAddID): Register filter.
      * src/conf/domain_event.c (virDomainEventFilter): New function.
      (virDomainEventStateRegister, virDomainEventStateRegisterID):
      Adjust callers.
      * src/conf/network_event.c (virNetworkEventFilter): New function.
      (virNetworkEventStateRegisterID): Adjust caller.
      * src/remote/remote_protocol.x
      (REMOTE_PROC_CONNECT_DOMAIN_EVENT_REGISTER)
      (REMOTE_PROC_CONNECT_DOMAIN_EVENT_REGISTER_ANY)
      (REMOTE_PROC_CONNECT_NETWORK_EVENT_REGISTER_ANY): Generate a
      filter, and require connect:search_domains instead of weaker
      connect:read.
      * src/test/test_driver.c (testConnectDomainEventRegister)
      (testConnectDomainEventRegisterAny)
      (testConnectNetworkEventRegisterAny): Update callers.
      * src/remote/remote_driver.c (remoteConnectDomainEventRegister)
      (remoteConnectDomainEventRegisterAny): Likewise.
      * src/xen/xen_driver.c (xenUnifiedConnectDomainEventRegister)
      (xenUnifiedConnectDomainEventRegisterAny): Likewise.
      * src/vbox/vbox_tmpl.c (vboxDomainGetXMLDesc): Likewise.
      * src/libxl/libxl_driver.c (libxlConnectDomainEventRegister)
      (libxlConnectDomainEventRegisterAny): Likewise.
      * src/qemu/qemu_driver.c (qemuConnectDomainEventRegister)
      (qemuConnectDomainEventRegisterAny): Likewise.
      * src/uml/uml_driver.c (umlConnectDomainEventRegister)
      (umlConnectDomainEventRegisterAny): Likewise.
      * src/network/bridge_driver.c
      (networkConnectNetworkEventRegisterAny): Likewise.
      * src/lxc/lxc_driver.c (lxcConnectDomainEventRegister)
      (lxcConnectDomainEventRegisterAny): Likewise.
      Signed-off-by: Eric Blake <eblake@redhat.com>
    • event: wire up RPC for server-side network event filtering · 8d9d098b
      Committed by Eric Blake
      We haven't had a release with network events yet, so we are free
      to fix the RPC so that it actually does what we want.  Doing
      client-side filtering of per-network events is inefficient if a
      connection is only interested in events on a single network out
      of hundreds available on the server.  But to do server-side
      per-network filtering, the server needs to know which network
      to filter on - so we need to pass an optional network over on
      registration.  Furthermore, it is possible to have a client with
      both a global and per-network filter; in the existing code, the
      server sends only one event and the client replicates to both
      callbacks.  But with server-side filtering, the server will send
      the event twice, so we need a way for the client to know which
      callbackID is sending an event, to ensure that the client can
      filter out events from a registration that does not match the
      callbackID from the server.  Likewise, the existing style of
      deregistering by eventID alone is fine; but in the new style,
      we have to remember which callbackID to delete.
      
      This patch fixes the RPC wire definition to contain all the
      needed pieces of information, and hooks into the server and
      client side improvements of the previous patches, in order to
      switch over to full server-side filtering of network events.
      Also, since we fixed this in time, all released versions of
      libvirtd that support network events also support per-network
      filtering, so we can hard-code that assumption into
      network_event.c.
      
      Converting domain events to server-side filtering will require
      the introduction of new RPC numbers, as well as a server
      feature bit that the client can use to tell whether to use
      old-style (server only supports global events) or new-style
      (server supports filtered events), so that is deferred to a
      later set of patches.
      
      * src/conf/network_event.c (virNetworkEventStateRegisterClient):
      Assume server-side filtering.
      * src/remote/remote_protocol.x
      (remote_connect_network_event_register_any_args): Add network
      argument.
      (remote_connect_network_event_register_any_ret): Return callbackID
      instead of count.
      (remote_connect_network_event_deregister_any_args): Pass
      callbackID instead of eventID.
      (remote_connect_network_event_deregister_any_ret): Drop unused
      type.
      (remote_network_event_lifecycle_msg): Add callbackID.
      * daemon/remote.c
      (remoteDispatchConnectNetworkEventDeregisterAny): Drop unused arg,
      and deal with callbackID from client.
      (remoteRelayNetworkEventLifecycle): Pass callbackID.
      (remoteDispatchConnectNetworkEventRegisterAny): Likewise, and
      recognize non-NULL network.
      * src/remote/remote_driver.c
      (remoteConnectNetworkEventRegisterAny): Pass network, and track
      server side id.
      (remoteConnectNetworkEventDeregisterAny): Deregister by callback id.
      (remoteNetworkBuildEventLifecycle): Pass remote id to event queue.
      * src/remote_protocol-structs: Regenerate.
      Signed-off-by: Eric Blake <eblake@redhat.com>
    • event: add notion of remoteID for filtering client network events · a59097e5
      Committed by Eric Blake
      In order to mirror a server with per-object filtering, the client
      needs to track which server callbackID is servicing the client
      callback.  This patch introduces the notion of a serverID, as
      well as the plumbing to use it for network events, although the
      actual complexity of using per-object filtering in the remote
      driver is deferred to a later patch.
      
      * src/conf/object_event.h (virObjectEventStateEventID): Add parameter.
      (virObjectEventStateQueueRemote, virObjectEventStateSetRemote):
      New prototypes.
      (virObjectEventStateRegisterID): Move...
      * src/conf/object_event_private.h: ...here, and add parameter.
      (_virObjectEvent): Add field.
      * src/conf/network_event.h (virNetworkEventStateRegisterClient): New
      prototype.
      * src/conf/object_event.c (_virObjectEventCallback): Add field.
      (virObjectEventStateSetRemote): New function.
      (virObjectEventStateQueue): Make wrapper around...
      (virObjectEventStateQueueRemote): New function.
      (virObjectEventCallbackListCount): Tweak return count when remote
      id matching is used.
      (virObjectEventCallbackLookup, virObjectEventStateRegisterID):
      Tweak registration when remote id matching will be used.
      (virObjectEventNew): Default to no remote id.
      (virObjectEventCallbackListAddID): Likewise, but set remote id
      when one is available.
      (virObjectEventCallbackListRemoveID)
      (virObjectEventCallbackListMarkDeleteID): Adjust return value when
      remote id was set.
      (virObjectEventStateEventID): Query existing id.
      (virObjectEventDispatchMatchCallback): Require matching event id.
      (virObjectEventStateCallbackID): Adjust caller.
      * src/conf/network_event.c (virNetworkEventStateRegisterClient): New
      function.
      (virNetworkEventStateRegisterID): Update caller.
      * src/conf/domain_event.c (virDomainEventStateRegister)
      (virDomainEventStateRegisterID): Update callers.
      * src/remote/remote_driver.c
      (remoteConnectNetworkEventRegisterAny)
      (remoteConnectNetworkEventDeregisterAny)
      (remoteConnectDomainEventDeregisterAny): Likewise.
      (remoteEventQueue): Hoist earlier to avoid forward declaration,
      and add parameter.  Adjust all callers.
      * src/libvirt_private.syms (conf/object_event.h): Drop function.
      Signed-off-by: Eric Blake <eblake@redhat.com>
    • event: track callbackID on daemon side of RPC · b9d14ef0
      Committed by Eric Blake
      Right now, the daemon side of RPC events is hard-coded to at most
      one callback per eventID.  But when there are hundreds of domains
      or networks coupled with multiple connections, then sending every
      event to every connection that wants an event, even for the
      connections that only care about events for a particular object,
      is inefficient.  In order to track more than one callback in the
      server, we need to store callbacks by more than just their
      eventID.  This patch rearranges the daemon side to store network
      callbacks in a dynamic array, which can eventually be used for
      multiple callbacks of the same eventID, although actual behavior
      is unchanged without further patches to the RPC protocol.  For
      ease of review, domain events are saved for a later patch, as
      they touch more code.
      
      While at it, fix a bug where a malicious client could send a
      negative eventID to cause network event registration to access
      outside of array bounds (thankfully not a CVE, since domain
      events were already doing the bounds check, and since network
      events have not been released).
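
      A self-contained sketch of that kind of bounds check; the constant and
      names are illustrative, not the exact daemon code:

        #include <stdio.h>

        #define NETWORK_EVENT_ID_LAST 1   /* assumed number of defined event IDs */

        static int register_network_event(int eventID)
        {
            /* Validate the client-supplied ID before it is ever used as an
             * array index. */
            if (eventID < 0 || eventID >= NETWORK_EVENT_ID_LAST) {
                fprintf(stderr, "unsupported network event ID %d\n", eventID);
                return -1;
            }
            return 0;
        }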
      
      * daemon/libvirtd.h (daemonClientPrivate): Alter the tracking of
      network events.
      * daemon/remote.c (daemonClientEventCallback): New struct.
      (remoteEventCallbackFree): New function.
      (remoteClientInitHook, remoteRelayNetworkEventLifecycle)
      (remoteClientFreeFunc)
      (remoteDispatchConnectNetworkEventRegisterAny): Track network
      callbacks differently.
      (remoteDispatchConnectNetworkEventDeregisterAny): Enforce bounds.
      Signed-off-by: Eric Blake <eblake@redhat.com>
    • qemu: Avoid operations on NULL monitor if VM fails early · b952cbbc
      Committed by Peter Krempa
      https://bugzilla.redhat.com/show_bug.cgi?id=1047659
      
      If a VM dies very early during an attempted connect to the guest agent
      while the locks are down, the domain monitor object will be freed. The
      object is then accessed later as any failure during guest agent startup
      isn't considered fatal.
      
      In the current upstream version this doesn't lead to a crash as
      virObjectLock called when entering the monitor in
      qemuProcessDetectVcpuPIDs checks the pointer before attempting to
      dereference (lock) it. The NULL pointer is then caught in the monitor
      helper code.
      
      Before the introduction of virObjectLockable - observed on 0.10.2 - the
      pointer is locked directly via virMutexLock leading to a crash.
      
      To avoid this problem we need to differentiate between the guest agent
      not being present and the VM quitting when the locks were down. The fix
      reorganizes the code in qemuConnectAgent to add the check and then adds
      special handling to the callers.
    • tests: be more explicit on qcow2 versions in virstoragetest · 974e5914
      Committed by Eric Blake
      While working on v1.0.5-maint (the branch in use on Fedora 19)
      with the host at Fedora 20, I got a failure in virstoragetest.
      I traced it to the fact that we were using qemu-img to create a
      qcow2 file, but qemu-img changed from creating v2 files by
      default in F19 to creating v3 files in F20.  Rather than leaving
      it up to qemu-img, it is better to write the test to force
      testing of BOTH file formats (better code coverage and all).
      
      This patch alone does not fix all the failures in v1.0.5-maint;
      for that, we must decide to either teach the older branch to
      understand v3 files, or to reject them outright as unsupported.
      But for upstream, making the test less dependent on changing
      qemu-img defaults is always a good thing.
      
      * tests/virstoragetest.c (testPrepImages): Simplify creation of
      raw file; check if qemu supports compat and if so use it.
      Signed-off-by: Eric Blake <eblake@redhat.com>
    • docs: mention maintenance branches · 908903b3
      Committed by Eric Blake
      Mitre tried to assign us two separate CVEs for the fix for
      https://bugzilla.redhat.com/show_bug.cgi?id=1047577, on the
      grounds that the fixes were separated by more than an hour
      and thus triggered different hourly snapshots.  But we
      explicitly do NOT want to treat transient security bugs as
      CVEs if they can only be triggered by patches in libvirt.git
      but where the problem is cleaned up before a formal release.
      
      Meanwhile, I noticed that while our wiki mentioned maintenance
      branches and releases, our formal documentation did not.
      
      * docs/downloads.html.in: Contrast hourly snapshots with
      maintenance branches.
      Signed-off-by: Eric Blake <eblake@redhat.com>
    • Fix docs for PMWakeup/PMSuspend callback types · e8eb8d84
      Committed by Claudio Bley
      s/is waken up/is woken up/
      
      A registered PMSuspendCallback is called when the domain is suspended, not
      when it is woken up.
      e8eb8d84
  14. 15 January 2014 (2 commits)