1. 23 August 2019 — 2 commits
    • storage: Drop and reacquire pool obj lock in some backends · 985f035f
      Michal Privoznik authored
      https://bugzilla.redhat.com/show_bug.cgi?id=1711789
      
      Starting up or building some types of pools may take a very long
      time (e.g. with a misconfigured NFS mount). Holding the pool
      object lock for that whole time hurts concurrency, e.g. when
      another thread is trying to list all the pools.
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
      Reviewed-by: Ján Tomko <jtomko@redhat.com>
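      The locking pattern described here looks roughly like the sketch
      below. It is a minimal standalone illustration using plain
      pthreads; the struct and function names are invented for the
      example and are not libvirt's actual symbols.

        #include <pthread.h>

        struct pool {
            pthread_mutex_t lock;
            int active;
        };

        /* Called with p->lock held; returns with it held again. */
        static int
        start_pool_locked(struct pool *p, int (*start_cb)(struct pool *))
        {
            int ret;

            pthread_mutex_unlock(&p->lock); /* start_cb may block for a long time */
            ret = start_cb(p);              /* e.g. mounting a misconfigured NFS export */
            pthread_mutex_lock(&p->lock);   /* reacquire before touching pool state */

            if (ret == 0)
                p->active = 1;
            return ret;
        }

      While the lock is dropped, another thread is free to lock the pool
      object, which is exactly what the follow-up commit below guards
      against.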
    • storage_driver: Protect pool def during startup and build · 13284a6b
      Michal Privoznik authored
      In the near future the storage pool object lock will be released
      during the startPool and buildPool callbacks (in some backends).
      But this means that another thread may acquire the pool object
      lock and change the pool's definition, leaving the former thread
      not only with a stale definition but potentially accessing freed
      memory (virStoragePoolObjAssignDef() frees the old definition
      when setting a new one).
      
      One way out of this would be to have the pool appear as active,
      because our code deals with obj->def and obj->newdef just fine.
      But we can't declare a pool active if it's not started or is
      still building up. Therefore, introduce a boolean flag that
      behaves very similarly and forces virStoragePoolObjAssignDef()
      to store the new definition in obj->newdef even for an inactive
      pool. In turn, we have to move the definition to the correct
      place when unsetting the flag. But that's as easy as calling
      virStoragePoolUpdateInactive().
      
      Technically speaking, the change made to
      storageDriverAutostartCallback() is not needed, because until
      the storage driver is initialized no storage API can run, so
      nothing can be changing the pool's definition yet. But I'm
      making the change there anyway, for consistency.
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
      Reviewed-by: Ján Tomko <jtomko@redhat.com>
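      The flag-based protection can be pictured with the following
      sketch. The names are invented for illustration and are not the
      exact libvirt symbols; the point is only that while the pool is
      starting, a new definition is parked in newdef instead of
      replacing (and freeing) the one another thread may still be
      reading.

        /* Illustrative only, not libvirt's actual implementation. */
        struct pool_obj {
            int active;    /* pool is running */
            int starting;  /* set around the startPool/buildPool callbacks */
            void *def;     /* current definition */
            void *newdef;  /* pending definition, applied once the pool settles */
        };

        static void
        pool_assign_def(struct pool_obj *obj, void *def,
                        void (*free_def)(void *))
        {
            if (obj->active || obj->starting) {
                /* keep obj->def alive for the thread still using it */
                free_def(obj->newdef);
                obj->newdef = def;
            } else {
                free_def(obj->def);
                obj->def = def;
            }
        }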
  2. 22 August 2019 — 6 commits
  3. 21 August 2019 — 5 commits
  4. 09 August 2019 — 2 commits
  5. 12 July 2019 — 3 commits
  6. 11 July 2019 — 1 commit
    • storage: acquire a pidfile in the driver root directory · 44a5ba2a
      Daniel P. Berrangé authored
      When we allow multiple instances of the driver for the same user
      account, using a separate root directory, we need to ensure mutual
      exclusion. Use a pidfile to guarantee this.
      
      In privileged libvirtd this ends up locking
      
         /var/run/libvirt/storage/driver.pid
      
      In unprivileged libvirtd this ends up locking
      
        /run/user/$UID/libvirt/storage/run/driver.pid
      
      NB, the latter can vary depending on $XDG_RUNTIME_DIR
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
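      The mutual exclusion described here is the classic pidfile
      technique. A generic standalone sketch is below; libvirt has its
      own virPidFile helpers, so this is only an illustration of the
      idea, not the code the commit adds.

        #include <fcntl.h>
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>

        /* Take an exclusive, non-blocking lock on the pidfile and write
         * our PID into it. Returns the open fd (kept for the process
         * lifetime to hold the lock), or -1 if another instance runs. */
        static int
        acquire_pidfile(const char *path)
        {
            char buf[32];
            int fd = open(path, O_WRONLY | O_CREAT, 0644);
            struct flock fl = {
                .l_type = F_WRLCK,    /* exclusive write lock */
                .l_whence = SEEK_SET, /* from the start of the file... */
                .l_len = 0,           /* ...covering the whole file */
            };

            if (fd < 0)
                return -1;
            if (fcntl(fd, F_SETLK, &fl) < 0) {
                close(fd);            /* somebody else holds the lock */
                return -1;
            }
            snprintf(buf, sizeof(buf), "%lld\n", (long long)getpid());
            if (ftruncate(fd, 0) < 0 || write(fd, buf, strlen(buf)) < 0) {
                close(fd);
                return -1;
            }
            return fd;
        }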
  7. 25 June 2019 — 1 commit
  8. 19 June 2019 — 1 commit
  9. 18 June 2019 — 1 commit
    • storage: escape ipv6 for ceph mon hosts to librados · cdd362e0
      Yi Li authored
      Hosts for rbd are ceph monitor daemons. These have fixed IP addresses,
      so they are often referenced by IP rather than hostname for
      convenience, or to avoid relying on DNS. Using IPv4 addresses as the
      host name works already, but IPv6 addresses require rbd-specific
      escaping because the colon is used as an option separator in the
      string passed to librados.
      
      Escape these colons, and enclose the IPv6 address in square brackets
      so it is distinguished from the port, which is currently mandatory.
      Signed-off-by: Yi Li <yili@winhong.com>
      Reviewed-by: Ján Tomko <jtomko@redhat.com>
      Signed-off-by: Ján Tomko <jtomko@redhat.com>
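      The escaping itself can be sketched as follows. The function name
      is invented for the example; it only demonstrates the transform
      the commit message describes, i.e. "fd00::1" becoming
      "[fd00\:\:1]" in the string handed to librados.

        #include <stdlib.h>
        #include <string.h>

        /* Escape colons and bracket an IPv6 address for librados, which
         * treats ':' as its option separator. Caller frees the result. */
        static char *
        rbd_escape_ipv6(const char *addr)
        {
            /* worst case: every char escaped, plus brackets and NUL */
            char *out = malloc(strlen(addr) * 2 + 3);
            char *p = out;

            if (!out)
                return NULL;
            *p++ = '[';
            for (; *addr; addr++) {
                if (*addr == ':')
                    *p++ = '\\';  /* emit "\:" */
                *p++ = *addr;
            }
            *p++ = ']';
            *p = '\0';
            return out;
        }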
  10. 17 May 2019 — 1 commit
    • src: don't statically link code that's already in libvirt.so · e5df4ede
      Daniel P. Berrangé authored
      Various binaries statically link to libvirt_util.la and other
      intermediate libraries we build. These intermediate libs all get
      built into the main libvirt.so shared library eventually, so we
      can dynamically link to that instead and reduce the on-disk
      footprint.
      
      In libvirt-daemon RPM:
      
                  virtlockd: 1.6 MB -> 153 KB
                   virtlogd: 1.6 MB -> 157 KB
           libvirt_iohelper: 937 KB -> 23 KB
      
      In libvirt-daemon-driver-network RPM:
      
       libvirt_leaseshelper: 940 KB -> 26 KB
      
      In libvirt-daemon-driver-storage-core RPM:
      
         libvirt_parthelper: 926 KB -> 21 KB
      
      IOW, about 5.6 MB of total space saved in a build done on Fedora
      30 for the x86_64 architecture.
      Reviewed-by: Ján Tomko <jtomko@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  11. 12 April 2019 — 2 commits
  12. 10 April 2019 — 2 commits
  13. 03 April 2019 — 2 commits
  14. 19 March 2019 — 2 commits
  15. 18 March 2019 — 4 commits
  16. 16 March 2019 — 5 commits