1. 12 Nov 2015 (2 commits)
  2. 04 Nov 2015 (3 commits)
    • storage: On 'buildVol' failure don't delete the volume · 4cd7d220
      John Ferlan committed
      https://bugzilla.redhat.com/show_bug.cgi?id=1233003
      
      Commit id 'fdda3760' only managed a symptom where it was possible to
      create a file in a pool without libvirt's knowledge, so it was reverted.
      
      The real fix is to have all the createVol APIs which actually create
      a volume (disk, logical, zfs) and the buildVol APIs which handle the
      real creation of some volume file (fs, rbd, sheepdog) manage deleting
      any volume they create when there is some sort of error in
      processing the volume.
      
      This way the onus isn't left up to the storage_driver to determine whether
      the buildVol failure was due to some failure as a result of adjustments
      made to the volume after creation (such as getting sizes, changing
      ownership, changing volume protections, etc.) or simply a failure in creation.
      
      Without needing to consider that the volume has to be removed, the
      buildVol failure path only needs to remove the volume from the pool.
      This way if a creation failed due to a duplicate name, libvirt wouldn't
      remove a volume that it didn't create in the pool target.
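      A minimal self-contained sketch of the division of labor described
      above, with hypothetical names (create_vol/build_vol stand in for the
      backend createVol/buildVol callbacks; nothing here is libvirt's exact
      internal API): the backend cleans up anything it created, and the
      driver only unlinks the volume from the pool's in-memory list.
      
      #include <stdio.h>
      
      typedef struct { const char *name; int listed; } Vol;
      
      /* Hypothetical backend callbacks standing in for createVol/buildVol. */
      static int create_vol(Vol *v) { v->listed = 1; return 0; }
      static int build_vol(Vol *v) { (void)v; return -1; /* simulate failure */ }
      
      static int vol_create(Vol *v)
      {
          if (create_vol(v) < 0)
              return -1;                /* nothing created, nothing to undo */
          if (build_vol(v) < 0) {
              /* The backend already deleted whatever it created; the
               * driver only drops the volume from the pool list, so a
               * pre-existing file with the same name is never removed. */
              v->listed = 0;
              return -1;
          }
          return 0;
      }
      
      int main(void)
      {
          Vol v = { "sde1", 0 };
          if (vol_create(&v) < 0)
              printf("%s: build failed, still listed: %s\n",
                     v.name, v.listed ? "yes" : "no");
          return 0;
      }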
    • Revert "storage: Prior to creating a volume, refresh the pool" · 0a6e709c
      John Ferlan committed
      This reverts commit fdda3760.
      
      This commit only managed a symptom: a buildRet failure where a volume
      was not listed in the pool because someone had created the volume
      outside of libvirt in the pool being managed by libvirt.
    • storage: Pull volume removal from pool in storageVolDeleteInternal · a1703557
      John Ferlan committed
      Create a helper function to remove a volume from the pool.
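      A sketch of the kind of helper this refactoring extracts; the types
      and macros approximate 2015-era libvirt internals and are not
      guaranteed to match the commit's literal code:
      
      static void
      storageVolRemoveFromPool(virStoragePoolObjPtr pool,
                               virStorageVolDefPtr voldef)
      {
          size_t i;
      
          for (i = 0; i < pool->volumes.count; i++) {
              if (pool->volumes.objs[i] == voldef) {
                  /* Unlink the volume from the pool's in-memory list;
                   * the definition itself is freed by the caller. */
                  VIR_DELETE_ELEMENT(pool->volumes.objs, i,
                                     pool->volumes.count);
                  break;
              }
          }
      }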
  3. 14 Oct 2015 (1 commit)
  4. 12 Oct 2015 (1 commit)
  5. 05 Oct 2015 (1 commit)
    • storage: Prior to creating a volume, refresh the pool · fdda3760
      John Ferlan committed
      https://bugzilla.redhat.com/show_bug.cgi?id=1233003
      
      Although perhaps bordering on a "don't do that" type of scenario: if
      someone creates a volume in a pool outside of libvirt, then uses that
      same name to create a volume in the pool via libvirt, the creation
      will fail and in some cases cause the same-named volume to be deleted.
      
      This patch refreshes the pool just prior to checking whether the
      named volume exists before creating the volume in the pool. While
      there is still a timing window in which a file could be created after
      the check, at least we tried. At that point, someone is being malicious.
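      The same defensive pattern can be approximated from the client side
      with the public libvirt API; a sketch (connection URI, pool name,
      volume name, and XML are made up for illustration, and error handling
      is trimmed):
      
      #include <stdio.h>
      #include <libvirt/libvirt.h>
      
      int main(void)
      {
          virConnectPtr conn = virConnectOpen("qemu:///system");
          virStoragePoolPtr pool = virStoragePoolLookupByName(conn, "default");
      
          /* Re-scan the pool so files created behind libvirt's back
           * are visible before checking for a name collision. */
          virStoragePoolRefresh(pool, 0);
      
          virStorageVolPtr vol = virStorageVolLookupByName(pool, "guest.img");
          if (vol) {
              fprintf(stderr, "volume already exists, not creating\n");
              virStorageVolFree(vol);
          } else {
              const char *xml =
                  "<volume><name>guest.img</name>"
                  "<capacity unit='G'>1</capacity></volume>";
              if ((vol = virStorageVolCreateXML(pool, xml, 0)))
                  virStorageVolFree(vol);
          }
      
          virStoragePoolFree(pool);
          virConnectClose(conn);
          return 0;
      }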
  6. 29 Sep 2015 (2 commits)
  7. 02 Sep 2015 (1 commit)
    • storage: Handle failure from refreshVol · db9277a3
      John Ferlan committed
      Commit id '155ca616' added the 'refreshVol' API. In an NFS root-squash
      environment it was possible that, if the volume just created from XML
      wasn't properly created with the right uid/gid and/or mode, the
      follow-up refreshVol would fail to open the volume in order to get the
      allocation/capacity values. This would leave the volume on the server
      and cause a libvirtd crash, because 'voldef' would be in the pool list
      but the cleanup code would free it.
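      The shape of the fix, sketched with illustrative names (reusing the
      storageVolRemoveFromPool helper sketched earlier; this is not the
      commit's literal code): once 'voldef' is linked into the pool's list,
      the error path must unlink it before the cleanup code frees it, so
      the list never holds a dangling pointer.
      
      if (backend->refreshVol &&
          backend->refreshVol(conn, pool, voldef) < 0) {
          /* voldef is already linked into the pool's list: unlink it
           * first so that when the cleanup path frees it, the list is
           * not left holding a dangling pointer. */
          storageVolRemoveFromPool(pool, voldef);
          goto cleanup;
      }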
  8. 10 Jul 2015 (1 commit)
  9. 09 Jul 2015 (1 commit)
  10. 08 Jul 2015 (1 commit)
    • storage: Fix regression in storagePoolUpdateAllState · f92f3121
      Erik Skultety committed
      Commit 2a31c5f0 introduced support for storage pool state XMLs, however
      it also introduced a regression:
      
      if (!virStoragePoolObjIsActive(pool)) {
          virStoragePoolObjUnlock(pool);
          continue;
      }
      
      The idea behind this was that since we've got state XMLs and the pool
      wasn't marked as active by the autostart routine (if the autostart
      flag had been set earlier), the pool is inactive and we can leave it
      be and continue with other pools. However, filesystem-type pools like
      fs, dir, and possibly netfs are supposed to be active if the
      filesystem is mounted on the host. And this is exactly where the
      regression occurs: e.g. a pool of type 'dir' which has been previously
      destroyed and marked as !autostart gets filtered out by the condition
      above.
      The resolution is simply to remove the condition completely; all pools
      will get their 'active' flag updated by the check callback, and if
      they do not support such a callback, the logic doesn't change and such
      pools will be inactive by default (e.g. RBD, even if a state XML exists).
      
      Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1238610
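      With the condition removed, the update loop reduces to roughly the
      following shape (illustrative, not the literal patch; in particular
      storagePoolUpdateState is a stand-in name for the per-pool update
      step):
      
      for (i = 0; i < driver->pools.count; i++) {
          virStoragePoolObjPtr pool = driver->pools.objs[i];
      
          virStoragePoolObjLock(pool);
          /* No virStoragePoolObjIsActive() filtering here: the backend's
           * check callback decides the 'active' flag, and backends
           * without one leave the pool inactive by default. */
          storagePoolUpdateState(pool);
          virStoragePoolObjUnlock(pool);
      }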
  11. 30 Jun 2015 (1 commit)
  12. 15 Jun 2015 (1 commit)
    • storage: Need to set secrettype for direct iscsi disk volume · 1feaccf0
      John Ferlan committed
      https://bugzilla.redhat.com/show_bug.cgi?id=1200206
      
      Commit id '1b4eaa61' added the ability to have mode='direct' for
      an iscsi disk volume. It relied on virStorageTranslateDiskSourcePool
      to copy any disk source pool authentication information to the direct
      disk volume, but it neglected to also copy the 'secrettype' field,
      which ends up being used in the domain volume formatting code.
      Adding a secrettype for this case allows proper formatting later
      and allows disk snapshotting to work properly.
      
      Additionally, libvirtd restart processing would fail to find the
      domain, since the translation processing code runs after domain XML
      processing; so handle the case where the authdef could have an empty
      secrettype field when processing the auth, and skip the actual and
      expected auth secret type checks for a DISK_VOLUME, since that data
      will be reassembled later during translation processing of the
      running domain.
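      A hedged sketch of the first half of the fix; VIR_STRDUP,
      virSecretUsageTypeToString, and VIR_SECRET_USAGE_TYPE_ISCSI are real
      2015-era libvirt symbols, but the surrounding field access is
      illustrative rather than the literal patch:
      
      /* Illustrative: translating a mode='direct' iSCSI volume should
       * also carry the secret usage type, not just the copied auth block. */
      if (VIR_STRDUP(def->src->auth->secrettype,
                     virSecretUsageTypeToString(VIR_SECRET_USAGE_TYPE_ISCSI)) < 0)
          goto cleanup;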
  13. 02 Jun 2015 (1 commit)
  14. 29 May 2015 (2 commits)
    • storage: Don't adjust pool alloc/avail values for disk backend · 48809204
      John Ferlan committed
      Commit id '2ac0e647' for https://bugzilla.redhat.com/show_bug.cgi?id=1206521
      was meant to be a generic check for the CreateVol, CreateVolFrom, and
      DeleteVol paths to check if the storage backend's changed the pool's view
      of allocation or available values.
      
      Unfortunately, as it turns out, this caused a side effect: when the
      disk backend created an extended partition, no actual storage was
      removed from the pool, so the checks would not find any change in
      allocation or available and would incorrectly update the pool values
      using the size of the extended partition. A subsequent refresh of the
      pool would reset the values appropriately.
      
      This patch modifies those checks in order to specifically not update the
      pool allocation and available for only the disk backend rather than be
      generic before and after checks.
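      The shape of the change, sketched with illustrative names: instead of
      a generic before/after adjustment, the driver-side accounting is
      skipped for the disk backend, which refreshes partition state itself.
      
      /* Illustrative: skip driver-side accounting for the disk backend. */
      if (pool->def->type != VIR_STORAGE_POOL_DISK) {
          pool->def->allocation += voldef->target.allocation;
          pool->def->available -= voldef->target.allocation;
      }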
    • Revert "storage: Don't duplicate efforts of backend driver" · 6727bfd7
      John Ferlan committed
      This reverts commit 2ac0e647.
  15. 28 May 2015 (2 commits)
    • Fix shrinking volumes with the delta flag · 8b316fe5
      Ján Tomko committed
      This never worked.
      
      In 0.9.10 when this API was introduced, it was intended that
      the SHRINK flag combined with DELTA would shrink the volume by
      the specified capacity (to avoid passing negative numbers).
      See commit 055bbf45.
      
      When the SHRINK flag was finally implemented for the first backend
      in 1.2.13 (commit aa9aa6a9), it was only implemented for absolute
      values, and with the delta flag the volume was always extended,
      regardless of the SHRINK flag.
      
      Treat the SHRINK flag as a minus sign when used together with DELTA,
      to allow shrinking volumes as was documented in the API since 0.9.10.
      
      https://bugzilla.redhat.com/show_bug.cgi?id=1220213
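      The public flags involved are real (VIR_STORAGE_VOL_RESIZE_DELTA and
      VIR_STORAGE_VOL_RESIZE_SHRINK); a sketch of the sign handling this
      commit introduces (illustrative fragment, not the literal patch; MIN
      clamps the delta so the subtraction cannot underflow):
      
      if (flags & VIR_STORAGE_VOL_RESIZE_DELTA) {
          if (flags & VIR_STORAGE_VOL_RESIZE_SHRINK)
              abs_capacity = vol->target.capacity
                             - MIN(capacity, vol->target.capacity);
          else
              abs_capacity = vol->target.capacity + capacity;
      }
      
      From virsh, the equivalent request would be along the lines of
      'virsh vol-resize guest.img 1G --pool default --delta --shrink',
      which after this fix shrinks the volume by 1G instead of growing it.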
    • Simplify allocation check in storageVolResize · 7211f66a
      Ján Tomko committed
      Since shrinking a volume below the existing allocation is not allowed,
      it is not possible for a successful resize with VOL_RESIZE_ALLOCATE
      to increase the pool's available value.
      
      Even with the SHRINK flag it is possible to extend the current
      allocation or even the capacity. Remove the overflow when
      computing the delta with this flag, and do the check even if the
      flag was specified.
      
      https://bugzilla.redhat.com/show_bug.cgi?id=1073305
  16. 28 Apr 2015 (4 commits)
  17. 10 Apr 2015 (2 commits)
    • storage: Don't duplicate efforts of backend driver · 2ac0e647
      John Ferlan committed
      https://bugzilla.redhat.com/show_bug.cgi?id=1206521
      
      If the backend driver updates the pool available and/or allocation values,
      then the storage_driver VolCreateXML, VolCreateXMLFrom, and VolDelete APIs
      should not change the value; otherwise, it will appear as if the values
      were "doubled" for each change.  Additionally since unsigned arithmetic will
      be used depending on the size and operation, either or both values could be
      appear to be much larger than they should be (in the EiB range).
      
      Currently only the disk pool updates the values, but other pools could.
      Assume a "fresh" disk pool of 500 MiB using /dev/sde:
      
      $ virsh pool-info disk-pool
      ...
      Capacity:       509.88 MiB
      Allocation:     0.00 B
      Available:      509.84 MiB
      
      $ virsh vol-create-as disk-pool sde1 --capacity 300M
      
      $ virsh pool-info disk-pool
      ...
      Capacity:       509.88 MiB
      Allocation:     600.47 MiB
      Available:      16.00 EiB
      
      The following assumes the disk backend has been updated to refresh the
      disk pool at deletion of a primary partition as well as an extended
      partition:
      
      $ virsh vol-delete --pool disk-pool sde1
      Vol sde1 deleted
      
      $ virsh pool-info disk-pool
      ...
      Capacity:       509.88 MiB
      Allocation:     9.73 EiB
      Available:      6.27 EiB
      
      This patch will check if the backend updated the pool values and honor that
      update.
    • storage: Fix issues in storageVolResize · 1095230d
      John Ferlan committed
      https://bugzilla.redhat.com/show_bug.cgi?id=1073305
      
      When creating a volume in a pool, the creation allows the 'capacity'
      value to be larger than the available space in the pool. As long as
      the 'allocation' value will fit in the space, the volume will be created.
      
      However, the resize checks compared the new absolute capacity value
      against the existing capacity plus the available space, without
      regard for whether the new absolute capacity was actually allocating
      space or not. For example, in a pool with 75G of available space,
      creating a volume with a capacity of 100G and an allocation of 10G
      will succeed; however, if the volume had been created with a capacity
      of 10G and was then resized to a capacity of 100G, the code would not
      even allow the backend to try the resize.
      
      Furthermore, when updating the pool "available" and "allocation"
      values, the resize code would just "blindly" adjust them regardless of
      whether space was being "allocated" or just the "capacity" was being
      adjusted. This left a scenario whereby a resize to 100G would fail,
      yet a resize to 50G followed by one to 100G would both succeed, even
      though neither adjusted the allocation value, just the "capacity" value.
      
      This patch adds more logic to the resize code to understand whether
      the new capacity value is actually "allocating" space as well, and
      whether it is shrinking or expanding. Since unsigned arithmetic is
      involved, it would otherwise be quite possible to adjust the pool
      size values incorrectly.
      
      This patch also ensures that updates to the pool values only occur if
      we actually performed the allocation.
      
      NB: storageVolDelete, storageVolCreateXML, and storageVolCreateXMLFrom
      each update the pool allocation/availability values only by the
      target volume's allocation value.
  18. 07 Apr 2015 (3 commits)
    • storage: Introduce storagePoolUpdateAllState function · 2a31c5f0
      Erik Skultety committed
      The 'checkPool' callback was originally part of the
      storageDriverAutostart function, but the pools need to be checked
      earlier, during the initialization phase; otherwise we can't start a
      domain which mounts a volume after the libvirtd daemon has restarted.
      This is because qemuProcessReconnect is called earlier than
      storageDriverAutostart. Therefore the 'checkPool' logic has been moved
      to storagePoolUpdateAllState, which is called inside
      storageDriverInitialize.
      
      We also need a valid 'conn' reference to be able to execute
      'refreshPool' during the initialization phase. Though it isn't
      available until storageDriverAutostart, all of our storage backends
      ignore the 'conn' pointer except for RBD, and RBD doesn't support the
      'checkPool' callback, so it's safe to pass conn = NULL in this case.
      
      Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1177733
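      The shape of the new per-pool initialization step, sketched with
      illustrative names (the checkPool/refreshPool callback signatures are
      approximated, not quoted): each pool's 'active' flag is decided by
      checkPool, and conn = NULL is passed for the reason given above.
      
      bool active = false;
      
      /* Illustrative: runs during storageStateInitialize, before autostart. */
      if (backend->checkPool &&
          backend->checkPool(NULL, pool, &active) < 0)
          return;                  /* check failed, leave the pool inactive */
      
      if (active) {
          virStoragePoolObjClearVols(pool);
          if (backend->refreshPool(NULL, pool) < 0)
              active = false;
      }
      pool->active = active;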
    • conf: Introduce virStoragePoolLoadAllState && virStoragePoolLoadState · a9700771
      Erik Skultety committed
      These functions operate exactly the same way as their network
      equivalents, virNetworkLoadAllState and virNetworkLoadState.
    • storage: Add support for storage pool state XML · 723143a1
      Erik Skultety committed
      This patch introduces a new virStorageDriverState element, stateDir.
      It also adds the necessary changes to storageStateInitialize so that
      directory initialization becomes more generic.
  19. 02 Apr 2015 (1 commit)
  20. 02 Mar 2015 (3 commits)
  21. 31 Jan 2015 (1 commit)
    • storage: Need to clear pool prior to refreshPool during Autostart · 1d2e4d8c
      John Ferlan committed
      https://bugzilla.redhat.com/show_bug.cgi?id=1176510
      
      When storageDriverAutostart is called on the virStateReload path via a
      'service libvirtd reload', then because the volume list in the pool
      wasn't cleared prior to the call, each volume would be listed multiple
      times (as many times as we reload). I believe the issue was introduced
      by commit id '9e093f0b', at least for the libvirtd reload path,
      although I suppose the introduction of virStateReload (commit id
      '70da0494') could be a different cause.
      
      Thus, like other places prior to calling refreshPool, we need to call
      virStoragePoolObjClearVols.
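      The fix itself is small; in the autostart path it amounts to the
      following shape (virStoragePoolObjClearVols is the real helper named
      above, while the backend callback usage is an illustrative fragment):
      
      /* Drop the stale in-memory volume list before re-reading it, so a
       * 'service libvirtd reload' does not accumulate duplicate entries. */
      virStoragePoolObjClearVols(pool);
      if (backend->refreshPool(conn, pool) < 0) {
          if (backend->stopPool)
              backend->stopPool(conn, pool);
          continue;
      }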
  22. 27 Jan 2015 (2 commits)
    • storage: add a flag to clone files on btrfs · 95da1913
      Chen Hanxiao committed
      When creating a RAW file, we don't take advantage of btrfs's
      clone support.
      
      Add a VIR_STORAGE_VOL_CREATE_REFLINK flag to request
      a reflink copy.
      Signed-off-by: Chen Hanxiao <chenhanxiao@cn.fujitsu.com>
      Signed-off-by: Ján Tomko <jtomko@redhat.com>
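      Hypothetical usage through the public API (the flag is the one added
      here; the pool, clone XML, and source volume are assumed to exist):
      
      /* Ask for a reflink copy instead of a byte-for-byte copy. */
      virStorageVolPtr clone =
          virStorageVolCreateXMLFrom(pool, clone_xml, srcvol,
                                     VIR_STORAGE_VOL_CREATE_REFLINK);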
    • Removing probing of secondary drivers · 55ea7be7
      Daniel P. Berrange committed
      For stateless, client side drivers, it is never correct to
      probe for secondary drivers. It is only ever appropriate to
      use the secondary driver that is associated with the
      hypervisor in question. As a result the ESX & HyperV drivers
      have both been forced to do hacks where they register no-op
      drivers for the ones they don't implement.
      
      For stateful, server side drivers, we always just want to
      use the same built-in shared driver. The exception is
      virtualbox which is really a stateless driver and so wants
      to use its own server side secondary drivers. To deal with
      this virtualbox has to be built as 3 separate loadable
      modules to allow registration to work in the right order.
      
      This can all be simplified by introducing a new struct
      recording the precise set of secondary drivers each
      hypervisor driver wants:
      
      struct _virConnectDriver {
          virHypervisorDriverPtr hypervisorDriver;
          virInterfaceDriverPtr interfaceDriver;
          virNetworkDriverPtr networkDriver;
          virNodeDeviceDriverPtr nodeDeviceDriver;
          virNWFilterDriverPtr nwfilterDriver;
          virSecretDriverPtr secretDriver;
          virStorageDriverPtr storageDriver;
      };
      
      Instead of registering the hypervisor driver, we now
      register a virConnectDriver. This allows
      us to remove all probing of secondary drivers. Once we
      have chosen the primary driver, we immediately know the
      correct secondary drivers to use.
      Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
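      Registration then looks roughly like this for a driver bundling its
      own secondaries (a sketch built from the struct above; the vbox
      driver-table names and the second argument of
      virRegisterConnectDriver are assumptions, not quoted code):
      
      static virConnectDriver vboxConnectDriver = {
          .hypervisorDriver = &vboxHypervisorDriver,
          .networkDriver = &vboxNetworkDriver,
          .storageDriver = &vboxStorageDriver,
      };
      
      /* false: don't fill in the shared server-side secondary drivers,
       * since virtualbox supplies its own. */
      if (virRegisterConnectDriver(&vboxConnectDriver, false) < 0)
          return -1;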
  23. 08 Dec 2014 (1 commit)
  24. 04 Dec 2014 (1 commit)
    • storage: fix crash caused by no check return before set close · 87b9437f
      Luyao Huang committed
      https://bugzilla.redhat.com/show_bug.cgi?id=1087104#c5
      
      When trying to use an invalid offset with virStorageVolUpload(),
      libvirt fails in virFDStreamOpenFileInternal(); however,
      storageVolUpload() does not check that return value and calls
      virFDStreamSetInternalCloseCb() right after. The stream doesn't have
      privateData yet (it is NULL), and the daemon then crashes:
      
      0  0x00007f09429a9c10 in pthread_mutex_lock () from /lib64/libpthread.so.0
      1  0x00007f094514dbf5 in virMutexLock (m=<optimized out>) at util/virthread.c:88
      2  0x00007f09451cb211 in virFDStreamSetInternalCloseCb at fdstream.c:795
      3  0x00007f092ff2c9eb in storageVolUpload at storage/storage_driver.c:2098
      4  0x00007f09451f46e0 in virStorageVolUpload at libvirt.c:14000
      5  0x00007f0945c78fa1 in remoteDispatchStorageVolUpload at remote_dispatch.h:14339
      6  remoteDispatchStorageVolUploadHelper at remote_dispatch.h:14309
      7  0x00007f094524a192 in virNetServerProgramDispatchCall at rpc/virnetserverprogram.c:437
      Signed-off-by: Luyao Huang <lhuang@redhat.com>
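      The shape of the fix (illustrative: the callback and opaque-data names
      are stand-ins): check the result of setting up the stream before
      installing the close callback that needs the stream's privateData.
      
      if (backend->uploadVol(conn, pool, vol, stream, offset, length) < 0)
          goto cleanup;            /* stream->privateData was never set up */
      
      /* Only now is it safe to register the close callback. */
      virFDStreamSetInternalCloseCb(stream, virStorageVolFDStreamCloseCb,
                                    cbdata, NULL);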
  25. 03 Dec 2014 (1 commit)