1. 24 May 2016, 1 commit
  2. 23 May 2016, 1 commit
    • storage: do not clear vols before volume upload · 21fdb4fe
      Authored by Ján Tomko
      Commit 5e54361c added virStoragePoolObjClearVols before refreshPool
      to prevent duplicate volume entries.
      
      However it is not needed here because we're not refreshing the pool yet,
      just checking for the existence of the refresh callback.
      
      The actual refresh is done via virStorageVolFDStreamCloseCb
      in virStorageVolPoolRefreshThread, which already calls
      virStoragePoolObjClearVols.
  3. 20 May 2016, 1 commit
  4. 11 May 2016, 2 commits
    • storage: Need to clear pool prior to calling the refreshPool · 5e54361c
      Authored by John Ferlan
      Prior to calling the 'refreshPool' during CreatePool or UploadPool
      operations, we need to clear the pool; otherwise, the pool will
      have duplicated entries.
    • storage: Fix regression cloning volume into a logical pool · 2c52ec43
      Authored by John Ferlan
      https://bugzilla.redhat.com/show_bug.cgi?id=1318993
      
      Commit id 'dd519a29' caused a regression when cloning a volume into
      a logical pool by removing just the 'allocation' adjustment during
      storageVolCreateXMLFrom. Combined with the change that no longer
      requires the new volume's input XML to list a capacity (commit id
      'e3f1d2a8'), this left the possibility that a zero allocation value
      (i.e., not provided) would create a thin/sparse logical volume.
      When a thin lv becomes fully populated, LVM sets the partition
      'inactive' and the subsequent fdatasync() fails.
      
      Add a new 'has_allocation' flag to be set at XML parse time to indicate
      that allocation was provided. This is done so that if it's not provided
      the create-from code uses the capacity value since we document that if
      omitted, the volume will be fully allocated at time of creation.
      
      For a logical backend, that creation time is 'createVol', while for a
      file backend, creation doesn't set the size, but the 'createRaw' called
      during buildVolFrom will decide whether the file is sparse or not based
      on the provided capacity and allocation value.
      
      For volume clones that provide different allocation and capacity values
      to allow for sparse files, there is no change.
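The fallback this commit describes (fall back to the capacity when no allocation was provided, so the volume is fully allocated) can be sketched in isolation. This is an illustrative simplification, not libvirt's actual code; the helper name and parameters are hypothetical:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical sketch: choose the allocation for a cloned volume.
 * If the input XML did not provide an allocation (has_allocation is
 * false), fall back to the capacity so the volume is fully allocated,
 * as the documentation promises for an omitted allocation. */
static uint64_t
pick_allocation(uint64_t allocation, bool has_allocation, uint64_t capacity)
{
    if (!has_allocation)
        return capacity;    /* fully allocate: avoids a thin/sparse LV */
    return allocation;      /* caller explicitly asked for this value */
}
```

With this shape, a clone whose XML omits allocation never silently becomes a thin LV on the logical backend.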
  5. 30 Apr 2016, 1 commit
  6. 15 Apr 2016, 1 commit
  7. 26 Feb 2016, 1 commit
    • storage: Fix error path in storagePoolDefineXML · ee67069c
      Authored by John Ferlan
      Found by inspection - after calling virStoragePoolObjAssignDef the
      pool is part of the driver->pools.objs list, and the failure path
      for virStoragePoolObjSaveDef uses virStoragePoolObjRemove to remove
      the pool from the objs list, which unlocks and frees the pool
      pointer (as pools->objs[i] during the loop). Since the call doesn't
      clear the caller's pool pointer, we need to set it to NULL
      ourselves; otherwise, the virStoragePoolObjUnlock in the cleanup:
      code will fail miserably.
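The underlying pattern here is general C hygiene rather than anything libvirt-specific: when a callee frees an object but cannot clear the caller's pointer, the caller must NULL it so shared cleanup code can test it safely. A minimal hypothetical sketch:

```c
#include <stdlib.h>

struct pool { int locked; };

/* Frees the pool; the caller's pointer is now dangling. */
static void pool_remove(struct pool *p) { free(p); }

static int define_pool(void)
{
    struct pool *pool = calloc(1, sizeof(*pool));
    if (!pool)
        return -1;

    if (/* pretend saving the definition failed */ 1) {
        pool_remove(pool);
        pool = NULL;        /* the fix: clear the dangling pointer */
        goto cleanup;
    }

cleanup:
    if (pool)               /* safe only because pool was NULLed above */
        pool->locked = 0;   /* stands in for virStoragePoolObjUnlock */
    return pool ? 0 : -1;
}
```

Without the `pool = NULL;` line, the cleanup block would dereference freed memory, which is exactly the crash the commit describes.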
  8. 17 Feb 2016, 1 commit
  9. 12 Feb 2016, 2 commits
  10. 10 Feb 2016, 1 commit
  11. 05 Jan 2016, 1 commit
  12. 04 Jan 2016, 1 commit
    • storage: do not leak storage pool XML filename · c494db8f
      Authored by Michael Chapman
      Valgrind complained:
      
      ==28277== 38 bytes in 1 blocks are definitely lost in loss record 298 of 957
      ==28277==    at 0x4A06A2E: malloc (vg_replace_malloc.c:270)
      ==28277==    by 0x82D7F57: __vasprintf_chk (in /lib64/libc-2.12.so)
      ==28277==    by 0x52EF16A: virVasprintfInternal (stdio2.h:199)
      ==28277==    by 0x52EF25C: virAsprintfInternal (virstring.c:514)
      ==28277==    by 0x52B1FA9: virFileBuildPath (virfile.c:2831)
      ==28277==    by 0x19B1947C: storageDriverAutostart (storage_driver.c:191)
      ==28277==    by 0x19B196A7: storageStateAutoStart (storage_driver.c:307)
      ==28277==    by 0x538527E: virStateInitialize (libvirt.c:793)
      ==28277==    by 0x11D7CF: daemonRunStateInit (libvirtd.c:947)
      ==28277==    by 0x52F4694: virThreadHelper (virthread.c:206)
      ==28277==    by 0x6E08A50: start_thread (in /lib64/libpthread-2.12.so)
      ==28277==    by 0x82BE93C: clone (in /lib64/libc-2.12.so)
      Signed-off-by: Michael Chapman <mike@very.puzzling.org>
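The leak itself follows a common shape: a heap-allocated path built by a helper is never freed on one exit path. A self-contained sketch of the fixed pattern, where build_path merely stands in for virFileBuildPath:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Stand-in for virFileBuildPath: heap-allocates "dir/name<ext>". */
static char *build_path(const char *dir, const char *name, const char *ext)
{
    size_t len = strlen(dir) + strlen(name) + strlen(ext) + 2;
    char *path = malloc(len);
    if (!path)
        return NULL;
    snprintf(path, len, "%s/%s%s", dir, name, ext);
    return path;
}

/* The fixed pattern: the built path is freed on every exit path. */
static int autostart_one(const char *dir, const char *name)
{
    char *xml = build_path(dir, name, ".xml");
    if (!xml)
        return -1;
    /* ... load and autostart the pool from 'xml' ... */
    free(xml);   /* without this, valgrind reports the block as lost */
    return 0;
}
```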
  13. 18 Dec 2015, 1 commit
    • storage: Add flags to allow building pool during create processing · aeb1078a
      Authored by John Ferlan
      https://bugzilla.redhat.com/show_bug.cgi?id=830056
      
      Add flags handling to the virStoragePoolCreate and
      virStoragePoolCreateXML APIs, allowing the caller to request that
      the storage pool create APIs also perform a pool build during
      creation rather than requiring a separate buildPool step. This will
      allow transient pools to be defined, built, and started.
      
      The new flags are:
      
          * VIR_STORAGE_POOL_CREATE_WITH_BUILD
            Perform buildPool without any flags passed.
      
          * VIR_STORAGE_POOL_CREATE_WITH_BUILD_OVERWRITE
            Perform buildPool using VIR_STORAGE_POOL_BUILD_OVERWRITE flag.
      
          * VIR_STORAGE_POOL_CREATE_WITH_BUILD_NO_OVERWRITE
            Perform buildPool using VIR_STORAGE_POOL_BUILD_NO_OVERWRITE flag.
      
      It is up to the backend to handle the processing of build flags. The
      overwrite and no-overwrite flags are mutually exclusive.
      
      NB:
      This patch is loosely based upon code originally authored by Osier
      Yang that was never reviewed and pushed; see:
      
      https://www.redhat.com/archives/libvir-list/2012-July/msg01328.html
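A typical way to enforce the mutual exclusion mentioned above is an up-front flags check. The sketch below reuses the flag names from the commit, but the numeric values and the helper are illustrative, not taken from libvirt-storage.h:

```c
#include <stdbool.h>

/* Flag values are illustrative; the real ones live in libvirt's
 * public headers. */
enum {
    VIR_STORAGE_POOL_CREATE_WITH_BUILD              = 1 << 0,
    VIR_STORAGE_POOL_CREATE_WITH_BUILD_OVERWRITE    = 1 << 1,
    VIR_STORAGE_POOL_CREATE_WITH_BUILD_NO_OVERWRITE = 1 << 2,
};

/* Reject the overwrite/no-overwrite combination, which is ambiguous. */
static bool
build_flags_valid(unsigned int flags)
{
    return !((flags & VIR_STORAGE_POOL_CREATE_WITH_BUILD_OVERWRITE) &&
             (flags & VIR_STORAGE_POOL_CREATE_WITH_BUILD_NO_OVERWRITE));
}
```

Checking this before touching any state lets the create API fail cleanly instead of handing contradictory build flags to a backend.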
  14. 17 Dec 2015, 1 commit
  15. 12 Nov 2015, 2 commits
  16. 04 Nov 2015, 3 commits
    • storage: On 'buildVol' failure don't delete the volume · 4cd7d220
      Authored by John Ferlan
      https://bugzilla.redhat.com/show_bug.cgi?id=1233003
      
      Commit id 'fdda3760' only managed a symptom where it was possible to
      create a file in a pool without libvirt's knowledge, so it was reverted.
      
      The real fix is to have all the createVol APIs which actually
      create a volume (disk, logical, zfs) and the buildVol APIs which
      handle the real creation of some volume file (fs, rbd, sheepdog)
      manage deleting any volume which they create when there is some
      sort of error in processing the volume.
      
      This way the onus isn't left up to the storage_driver to determine
      whether the buildVol failure was due to some failure as a result of
      adjustments made to the volume after creation (such as getting
      sizes, changing ownership, or changing volume protections) or
      simply a failure in creation.
      
      Without needing to consider that the volume has to be removed, the
      buildVol failure path only needs to remove the volume from the pool.
      This way if a creation failed due to duplicate name, libvirt wouldn't
      remove a volume that it didn't create in the pool target.
    • Revert "storage: Prior to creating a volume, refresh the pool" · 0a6e709c
      Authored by John Ferlan
      This reverts commit fdda3760.
      
      This commit only managed a symptom: a buildVol failure for a volume
      that was not listed in the pool because someone had created it
      outside of libvirt in the pool being managed by libvirt.
    • storage: Pull volume removal from pool in storageVolDeleteInternal · a1703557
      Authored by John Ferlan
      Create a helper function to remove a volume from the pool.
  17. 14 Oct 2015, 1 commit
  18. 12 Oct 2015, 1 commit
  19. 05 Oct 2015, 1 commit
    • storage: Prior to creating a volume, refresh the pool · fdda3760
      Authored by John Ferlan
      https://bugzilla.redhat.com/show_bug.cgi?id=1233003
      
      Although perhaps bordering on a 'don't do that' type of scenario:
      if someone creates a volume in a pool outside of libvirt, then uses
      that same name to create a volume in the pool via libvirt, the
      creation will fail and in some cases cause the same-named volume to
      be deleted.

      This patch refreshes the pool just prior to checking whether the
      named volume exists before creating the volume in the pool. While
      there is still a timing window in which a file could be created
      after the check - at least we tried. At that point, someone is
      being malicious.
  20. 29 Sep 2015, 2 commits
  21. 02 Sep 2015, 1 commit
    • storage: Handle failure from refreshVol · db9277a3
      Authored by John Ferlan
      Commit id '155ca616' added the 'refreshVol' API. In an NFS
      root-squash environment it was possible that if the just-created
      volume from XML wasn't properly created with the right uid/gid
      and/or mode, the follow-up refreshVol would fail to open the volume
      in order to get the allocation/capacity values. This would leave
      the volume on the server and cause a libvirtd crash, because
      'voldef' would be in the pool list but the cleanup code would free
      it.
  22. 10 Jul 2015, 1 commit
  23. 09 Jul 2015, 1 commit
  24. 08 Jul 2015, 1 commit
    • storage: Fix regression in storagePoolUpdateAllState · f92f3121
      Authored by Erik Skultety
      Commit 2a31c5f0 introduced support for storage pool state XMLs;
      however, it also introduced a regression:
      
      if (!virStoragePoolObjIsActive(pool)) {
          virStoragePoolObjUnlock(pool);
          continue;
      }
      
      The idea behind this was that since we've got state XMLs, and the
      pool wasn't marked as active by the autostart routine (if the
      autostart flag had been set earlier), the pool is inactive and we
      can leave it be and continue with other pools. However,
      filesystem-type pools like fs, dir, and possibly netfs are supposed
      to be active if the filesystem is mounted on the host. And this is
      exactly where the regression occurs: e.g. a pool of type 'dir'
      which has been previously destroyed and marked as !autostart gets
      filtered out by the condition above.
      The resolution is simply to remove the condition completely; all
      pools will get their 'active' flag updated by the check callback,
      and if they do not support such a callback, the logic doesn't
      change and such pools will be inactive by default (e.g. RBD, even
      if a state XML exists).
      
      Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1238610
  25. 30 Jun 2015, 1 commit
  26. 15 Jun 2015, 1 commit
    • storage: Need to set secrettype for direct iscsi disk volume · 1feaccf0
      Authored by John Ferlan
      https://bugzilla.redhat.com/show_bug.cgi?id=1200206
      
      Commit id '1b4eaa61' added the ability to have a mode='direct' for
      an iscsi disk volume. It relied on virStorageTranslateDiskSourcePool
      to copy any disk source pool authentication information to the
      direct disk volume, but it neglected to also copy the 'secrettype'
      field, which ends up being used in the domain volume formatting
      code. Adding a secrettype for this case will allow for proper
      formatting later and allow disk snapshotting to work properly.

      Additionally, libvirtd restart processing would fail to find the
      domain, since the translation processing code is run after domain
      XML processing. So handle the case where the authdef could have an
      empty secrettype field when processing the auth, and additionally
      skip the actual and expected auth secret type checks for a
      DISK_VOLUME, since that data will be reassembled later during
      translation processing of the running domain.
  27. 02 Jun 2015, 1 commit
  28. 29 May 2015, 2 commits
    • storage: Don't adjust pool alloc/avail values for disk backend · 48809204
      Authored by John Ferlan
      Commit id '2ac0e647' for
      https://bugzilla.redhat.com/show_bug.cgi?id=1206521 was meant to be
      a generic check for the CreateVol, CreateVolFrom, and DeleteVol
      paths to see whether the storage backend changed the pool's view of
      allocation or available values.

      Unfortunately, as it turns out, this caused a side effect: when the
      disk backend created an extended partition, no actual storage was
      removed from the pool, so the checks would find no change in
      allocation or available and would incorrectly update the pool
      values using the size of the extended partition. A subsequent
      refresh of the pool would reset the values appropriately.
      
      This patch modifies those checks so that, for the disk backend
      only, the pool allocation and available values are not updated,
      rather than relying on generic before-and-after checks.
    • Revert "storage: Don't duplicate efforts of backend driver" · 6727bfd7
      Authored by John Ferlan
      This reverts commit 2ac0e647.
  29. 28 May 2015, 2 commits
    • Fix shrinking volumes with the delta flag · 8b316fe5
      Authored by Ján Tomko
      This never worked.
      
      In 0.9.10 when this API was introduced, it was intended that
      the SHRINK flag combined with DELTA would shrink the volume by
      the specified capacity (to avoid passing negative numbers).
      See commit 055bbf45.
      
      When the SHRINK flag was finally implemented for the first backend
      in 1.2.13 (commit aa9aa6a9), it was only implemented for absolute
      values; with the delta flag the volume was always extended,
      regardless of the SHRINK flag.
      
      Treat the SHRINK flag as a minus sign when used together with DELTA,
      to allow shrinking volumes as was documented in the API since 0.9.10.
      
      https://bugzilla.redhat.com/show_bug.cgi?id=1220213
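The 'minus sign' semantics described in this commit can be sketched as a pure capacity computation. This is a simplification with hypothetical names; the real code also validates the result against the current allocation and pool limits:

```c
#include <stdbool.h>
#include <stdint.h>

/* Compute the requested new capacity.  With DELTA, 'value' is relative
 * to the current capacity; the SHRINK flag then acts as a minus sign,
 * so callers never have to pass negative numbers. */
static uint64_t
new_capacity(uint64_t current, uint64_t value, bool delta, bool shrink)
{
    if (!delta)
        return value;             /* absolute resize */
    if (shrink)
        return current - value;   /* SHRINK + DELTA: shrink by 'value' */
    return current + value;       /* DELTA alone: grow by 'value' */
}
```

For example, a delta resize of 30 with SHRINK on a 100-unit volume yields 70, matching the behavior documented for the API since 0.9.10.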
    • Simplify allocation check in storageVolResize · 7211f66a
      Authored by Ján Tomko
      Since shrinking a volume below its existing allocation is not
      allowed, it is not possible for a successful resize with
      VOL_RESIZE_ALLOCATE to increase the pool's available value.

      Even with the SHRINK flag it is possible to extend the current
      allocation or even the capacity. Remove the overflow when computing
      the delta with this flag, and perform the check even when the flag
      was specified.
      
      https://bugzilla.redhat.com/show_bug.cgi?id=1073305
  30. 28 Apr 2015, 3 commits