1. 12 Dec 2018 — 1 commit
  2. 09 Dec 2018 — 2 commits
    • lxc: don't forbid <interface type='direct'> · c55ff370
      Laine Stump committed
      Commit 017dfa27 changed a few switch statements in the LXC code to
      have all possible enum values, and in the process changed the switch
      statement in virLXCControllerGetNICIndexes() to return an error status
      for unsupported interface types, but it erroneously put type='direct'
      on the list of unsupported types.
      
      type='direct' (implemented with a macvlan interface) is supported on
      LXC, but its interface shouldn't be placed on the list of interfaces
      given to CreateMachineWithNetwork() because the interface is put
      inside the container, while CreateMachineWithNetwork() only wants to
      know about the parent veths of veth pairs (the parent veth remains on
      the host side, while the child veth is put into the container).
      
      Resolves: https://bugzilla.redhat.com/1656463
      Signed-off-by: Laine Stump <laine@laine.org>
      Reviewed-by: Ján Tomko <jtomko@redhat.com>
      c55ff370
    • lxc: check actual type of interface not config type · 59603b62
      Laine Stump committed
      virLXCControllerGetNICIndexes() was deciding whether or not to add the
      ifindex for an interface's ifname to the list of ifindexes sent to
      CreateMachineWithNetwork based on the interface type stored in the
      config. This would be incorrect in the case of <interface
      type='network'> where the network was giving out macvlan interfaces
      tied to a physical device (i.e. when the actual interface type was
      "direct").
      
      Instead of checking the setting of "net->type", we should be checking
      the setting of virDomainNetGetActualType(net).
      
      I don't think this caused any actual misbehavior; it was just
      technically wrong.
      Signed-off-by: Laine Stump <laine@laine.org>
      Reviewed-by: Ján Tomko <jtomko@redhat.com>
      59603b62
  3. 05 Dec 2018 — 1 commit
  4. 16 Nov 2018 — 1 commit
  5. 15 Nov 2018 — 1 commit
  6. 08 Nov 2018 — 1 commit
  7. 02 Oct 2018 — 1 commit
  8. 25 Sep 2018 — 2 commits
    • lxc_monitor: Avoid AB / BA lock race · 7882c6ec
      Mark Asselstine committed
      A deadlock can occur when autostarting an LXC domain guest, because
      two threads each attempt to take a lock while holding the lock the
      other needs (an AB/BA problem). Thread A takes and holds the 'vm'
      lock while attempting to take the 'client' lock; meanwhile, thread B
      takes and holds the 'client' lock while attempting to take the 'vm'
      lock.
      
      The potential for this can be seen as follows:
      
      Thread A:
      virLXCProcessAutostartDomain (takes vm lock)
       --> virLXCProcessStart
        --> virLXCProcessConnectMonitor
         --> virLXCMonitorNew
          --> virNetClientSetCloseCallback (wants client lock)
      
      Thread B:
      virNetClientIncomingEvent (takes client lock)
       --> virNetClientIOHandleInput
        --> virNetClientCallDispatch
         --> virNetClientCallDispatchMessage
          --> virNetClientProgramDispatch
           --> virLXCMonitorHandleEventInit
            --> virLXCProcessMonitorInitNotify (wants vm lock)
      
      Since these threads are scheduled independently and are preemptible,
      the deadlock can occur when each thread acquires its first lock but
      then fails to get its second lock, leaving both spinning forever.
      You get something like:
      
      virLXCProcessAutostartDomain (takes vm lock)
       --> virLXCProcessStart
        --> virLXCProcessConnectMonitor
         --> virLXCMonitorNew
      <...>
      virNetClientIncomingEvent (takes client lock)
       --> virNetClientIOHandleInput
        --> virNetClientCallDispatch
         --> virNetClientCallDispatchMessage
          --> virNetClientProgramDispatch
           --> virLXCMonitorHandleEventInit
            --> virLXCProcessMonitorInitNotify (wants vm lock but spins)
      <...>
          --> virNetClientSetCloseCallback (wants client lock but spins)
      
      Neither thread ever gets the lock it needs to be able to continue
      while holding the lock that the other thread needs.
      
      The actual window for preemption which can cause this deadlock is
      rather small, between the calls to virNetClientProgramNew() and
      execution of virNetClientSetCloseCallback(), both in
      virLXCMonitorNew(). But it can be seen in real world use that this
      small window is enough.
      
      By moving the call to virNetClientSetCloseCallback() ahead of
      virNetClientProgramNew() we close any possible chance of the
      deadlock taking place. There should be no other implications to the
      move, since the close callback (in the unlikely event it is called)
      will spin on the vm lock. The remaining work that takes place between
      the old call location of virNetClientSetCloseCallback() and the new
      location is unaffected by the move.
      Signed-off-by: Mark Asselstine <mark.asselstine@windriver.com>
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
      7882c6ec
    • vircgroup: rename virCgroupAdd.*Task to virCgroupAdd.*Process · 0772c346
      Pavel Hrdina committed
      In cgroup v2 we need to handle processes and threads differently;
      the following patch will introduce virCgroupAddThread.
      Reviewed-by: Fabiano Fidêncio <fidencio@redhat.com>
      Reviewed-by: Ján Tomko <jtomko@redhat.com>
      Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
      0772c346
  9. 24 Sep 2018 — 2 commits
  10. 20 Sep 2018 — 2 commits
  11. 18 Sep 2018 — 1 commit
    • security_manager: Load lock plugin on init · 3e26b476
      Michal Privoznik committed
      Now that we know which metadata lock manager the user wishes to use,
      we can load it when initializing the security driver. This is
      achieved by adding a new argument to virSecurityManagerNewDriver()
      and subsequently to all functions that end up calling it.
      
      The cfg.mk change is needed in order to allow lock_manager.h
      inclusion in the security driver without 'syntax-check' complaining.
      This is a safe thing to do, as the locking APIs will always exist
      (it's only the backend implementation that changes). However, instead
      of allowing the include for all other drivers (like cpu, network,
      and so on), allow it only for the security driver. This will still
      trigger the error if including from other drivers.
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
      Reviewed-by: John Ferlan <jferlan@redhat.com>
      3e26b476
  12. 13 Aug 2018 — 1 commit
  13. 30 Jul 2018 — 1 commit
  14. 27 Jul 2018 — 4 commits
  15. 26 Jul 2018 — 5 commits
  16. 23 Jul 2018 — 1 commit
    • src: Make virStr*cpy*() functions return an int · 6c0d0210
      Andrea Bolognani committed
      Currently, the functions return a pointer to the
      destination buffer on success or NULL on failure.
      
      Not only does this kind of error handling look quite
      alien in the context of libvirt, where most functions
      return zero on success and a negative int on failure,
      but it's also somewhat pointless because unless there's
      been a failure the returned pointer will be the same
      one passed in by the user, thus offering no additional
      value.
      
      Change the functions so that they return an int
      instead.
      Signed-off-by: Andrea Bolognani <abologna@redhat.com>
      6c0d0210
  17. 03 Jul 2018 — 3 commits
  18. 27 Jun 2018 — 4 commits
  19. 12 Jun 2018 — 1 commit
  20. 06 Jun 2018 — 1 commit
  21. 05 Jun 2018 — 1 commit
  22. 14 May 2018 — 1 commit
  23. 04 May 2018 — 2 commits
    • conf: Clean up object referencing for Add and Remove · b04629b6
      John Ferlan committed
      When adding a new object to the domain object list, there should
      have been two virObjectRef calls, one for each list into which the
      object was placed, to match the two virObjectUnref calls that occur
      during Remove as part of virHashRemoveEntry, when
      virObjectFreeHashData is called as the element is removed from the
      hash table set up in virDomainObjListNew.
      
      Some drivers (libxl, lxc, qemu, and vz) handled this inconsistency
      by calling virObjectRef upon successful return from virDomainObjListAdd
      in order to use virDomainObjEndAPI when done with the returned @vm,
      while others (bhyve, openvz, test, and vmware) handled it by only
      calling virObjectUnlock upon successful return from virDomainObjListAdd.
      
      This patch will "unify" the approach to use virDomainObjEndAPI
      for any @vm successfully returned from virDomainObjListAdd.
      
      Because list removal is so tightly coupled with list addition,
      this patch fixes the list removal algorithm to return the object
      as entered - "locked and reffed".  This way, the callers can then
      decide how to uniformly handle add/remove success and failure.
      This removes the onus on the caller to "specially handle" the
      @vm during removal processing.
      
      The Add/Remove logic allows for some simplification, such as in
      libxl, where we can Remove the @vm directly rather than needing to
      set a @remove_dom boolean and remove it after libxlDomainObjEndJob
      completes, since the @vm is locked/reffed.
      Signed-off-by: John Ferlan <jferlan@redhat.com>
      Reviewed-by: Erik Skultety <eskultet@redhat.com>
      b04629b6