- 24 May 2016, 1 commit

Committed by Jovanka Gulicoska
Replace VIR_ERROR with virReportError and virReportSystemError
-
- 23 May 2016, 1 commit

Committed by Ján Tomko
Commit 5e54361c added virStoragePoolObjClearVols before refreshPool to prevent duplicate volume entries. However, it is not needed here because we're not refreshing the pool yet, just checking for the existence of the refresh callback. The actual refresh is done via virStorageVolFDStreamCloseCb in virStorageVolPoolRefreshThread, which already calls virStoragePoolObjClearVols.
-
- 20 May 2016, 1 commit

Committed by Jovanka Gulicoska
Convert to virGetLastErrorMessage() in the rest of the code
-
- 11 May 2016, 2 commits

Committed by John Ferlan
Prior to calling the 'refreshPool' callback during CreatePool or UploadPool operations, we need to clear the pool; otherwise, the pool will have duplicated entries.
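A minimal sketch of that ordering, reusing the internal helper names that appear elsewhere in this log (the surrounding control flow is an assumption):

    /* Clear the in-memory volume list before the backend re-enumerates
     * the pool, so each volume ends up listed exactly once. */
    virStoragePoolObjClearVols(pool);
    if (backend->refreshPool(conn, pool) < 0)
        goto cleanup;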
-
Committed by John Ferlan
https://bugzilla.redhat.com/show_bug.cgi?id=1318993

Commit id 'dd519a29' caused a regression cloning a volume into a logical pool by removing just the 'allocation' adjustment during storageVolCreateXMLFrom. Combined with the change to not require the new volume input XML to have a capacity listed (commit id 'e3f1d2a8'), this left the possibility that a zero allocation value (e.g., not provided) would create a thin/sparse logical volume. When a thin lv becomes fully populated, LVM sets the partition 'inactive' and the subsequent fdatasync() fails.

Add a new 'has_allocation' flag to be set at XML parse time to indicate that allocation was provided. This is done so that if it's not provided, the create-from code uses the capacity value, since we document that if omitted, the volume will be fully allocated at time of creation. For a logical backend, that creation time is 'createVol', while for a file backend creation doesn't set the size; instead, the 'createRaw' called during buildVolFrom decides whether the file is sparse or not based on the provided capacity and allocation values. For volume clones that provide different allocation and capacity values to allow for sparse files, there is no change.
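A hedged sketch of the parse-time flag and the create-from fallback it enables; 'has_allocation' is named in the commit, while the XPath expression and field layout here are assumptions:

    /* Parse time: remember whether <allocation> was actually given. */
    if (virXPathULongLong("string(./allocation)", ctxt,
                          &vol->target.allocation) == 0)
        vol->target.has_allocation = true;

    /* Create-from path: with no allocation provided, fall back to the
     * capacity so the logical volume is fully allocated, as documented. */
    if (!newvol->target.has_allocation)
        newvol->target.allocation = newvol->target.capacity;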
-
- 30 April 2016, 1 commit

Committed by Yuri Chornoivan
Signed-off-by: Yuri Chornoivan <yurchor@ukr.net>
-
- 15 April 2016, 1 commit

Committed by Olga Krishtal
In the case of a ploop volume, the target path of the volume is the path to the directory that contains the image file named root.hds and DiskDescriptor.xml. When using the uploadVol and downloadVol callbacks we need to open root.hds itself. Upload or download operations with a ploop volume are only allowed when the image does not have snapshots; otherwise the operation fails.

Signed-off-by: Olga Krishtal <okrishtal@virtuozzo.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
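A sketch of the path handling this implies, assuming virAsprintf and the usual vol->target.path field (the exact code is not shown in this log):

    /* The target path is a directory for ploop; open the image file
     * inside it, not the directory itself. */
    int fd = -1;
    char *path = NULL;

    if (virAsprintf(&path, "%s/root.hds", vol->target.path) < 0)
        goto cleanup;
    if ((fd = open(path, O_RDONLY)) < 0)
        goto cleanup;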
-
- 26 February 2016, 1 commit

Committed by John Ferlan
Found by inspection - after calling virStoragePoolObjAssignDef the pool is part of the driver->pools.objs list, and the failure path for virStoragePoolObjSaveDef uses virStoragePoolObjRemove to remove the pool from the objs list, which unlocks and frees the pool pointer (as pools->objs[i] during the loop). Since the callee doesn't clear the pool address in the caller, we need to set it to NULL; otherwise, the virStoragePoolObjUnlock in the cleanup: code will fail miserably.
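The shape of the fix as a condensed sketch (variable names assumed from the description above):

    if (virStoragePoolObjSaveDef(driver, pool, def) < 0) {
        /* Remove unlocks and frees the object as a side effect ... */
        virStoragePoolObjRemove(&driver->pools, pool);
        /* ... so drop our pointer to keep cleanup: from unlocking
         * freed memory. */
        pool = NULL;
        goto cleanup;
    }

 cleanup:
    if (pool)
        virStoragePoolObjUnlock(pool);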
-
- 17 February 2016, 1 commit
-
- 12 February 2016, 2 commits

Committed by Michal Privoznik
It is highly unlikely that a backend will know how to create a volume from a different volume (buildVolFrom) and not know how to create an empty volume (createVol). But: 1) we call the function without any prior check, so if that were the case we would SIGSEGV immediately; 2) it's better to be safe than sorry.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
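A sketch of the guard; virReportError with VIR_ERR_NO_SUPPORT is the usual libvirt idiom, though the exact message text is an assumption:

    if (!backend->buildVolFrom) {
        virReportError(VIR_ERR_NO_SUPPORT, "%s",
                       _("storage pool does not support building a volume"
                         " from another volume"));
        goto cleanup;
    }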
-
Committed by Michal Privoznik
First, we realloc the internal list to hold a new item (= the volume that will potentially be created) and only then do we check whether we actually know how to create it. If we don't, we consume more memory than we really need for no good reason.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
-
- 10 February 2016, 1 commit

Committed by Michal Privoznik
The virStringListLength function does not ever modify the passed string list; it merely counts the items in it. Make sure that we reflect this in the function header.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
(crobinso: fix up spacing and squash in sheepdog bit suggested by Andrea)
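The change amounts to const-correctness in the prototype; a before/after sketch:

    /* before: callers holding const lists had to cast */
    size_t virStringListLength(char **strings);

    /* after: the list is only read, never modified */
    size_t virStringListLength(const char * const *strings);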
-
- 5 January 2016, 1 commit

Committed by John Ferlan
Commit id 'aeb1078a' added a buildPool option and a failure path which calls virStoragePoolObjRemove; that unlocks the pool, clears the 'pool' variable, and does a goto cleanup. However, at cleanup, virStoragePoolObjUnlock is called without checking whether pool is non-NULL.
-
- 4 January 2016, 1 commit

Committed by Michael Chapman
Valgrind complained:

    ==28277== 38 bytes in 1 blocks are definitely lost in loss record 298 of 957
    ==28277==    at 0x4A06A2E: malloc (vg_replace_malloc.c:270)
    ==28277==    by 0x82D7F57: __vasprintf_chk (in /lib64/libc-2.12.so)
    ==28277==    by 0x52EF16A: virVasprintfInternal (stdio2.h:199)
    ==28277==    by 0x52EF25C: virAsprintfInternal (virstring.c:514)
    ==28277==    by 0x52B1FA9: virFileBuildPath (virfile.c:2831)
    ==28277==    by 0x19B1947C: storageDriverAutostart (storage_driver.c:191)
    ==28277==    by 0x19B196A7: storageStateAutoStart (storage_driver.c:307)
    ==28277==    by 0x538527E: virStateInitialize (libvirt.c:793)
    ==28277==    by 0x11D7CF: daemonRunStateInit (libvirtd.c:947)
    ==28277==    by 0x52F4694: virThreadHelper (virthread.c:206)
    ==28277==    by 0x6E08A50: start_thread (in /lib64/libpthread-2.12.so)
    ==28277==    by 0x82BE93C: clone (in /lib64/libc-2.12.so)

Signed-off-by: Michael Chapman <mike@very.puzzling.org>
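A plausible shape of the fix, given that the trace points at the path built by virFileBuildPath in storageDriverAutostart; the variable and field names here are assumptions:

    char *stateFile = NULL;

    if (!(stateFile = virFileBuildPath(driver->stateDir,
                                       pool->def->name, ".xml")))
        goto cleanup;
    /* ... use stateFile ... */

 cleanup:
    VIR_FREE(stateFile);  /* previously leaked on some paths */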
-
- 18 December 2015, 1 commit

Committed by John Ferlan
https://bugzilla.redhat.com/show_bug.cgi?id=830056

Add flags handling to the virStoragePoolCreate and virStoragePoolCreateXML APIs which will allow the caller to have the storage pool create APIs also perform a pool build during creation rather than requiring the additional buildPool step. This will allow transient pools to be defined, built, and started. The new flags are:

* VIR_STORAGE_POOL_CREATE_WITH_BUILD - perform buildPool without any flags passed.
* VIR_STORAGE_POOL_CREATE_WITH_BUILD_OVERWRITE - perform buildPool using the VIR_STORAGE_POOL_BUILD_OVERWRITE flag.
* VIR_STORAGE_POOL_CREATE_WITH_BUILD_NO_OVERWRITE - perform buildPool using the VIR_STORAGE_POOL_BUILD_NO_OVERWRITE flag.

It is up to the backend to handle the processing of the build flags. The overwrite and no-overwrite flags are mutually exclusive.

NB: This patch is loosely based upon code originally authored by Osier Yang that was not reviewed and pushed; see: https://www.redhat.com/archives/libvir-list/2012-July/msg01328.html
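A usage sketch of the new flags from the public API side; the connection URI and the poolXML variable are placeholders:

    #include <libvirt/libvirt.h>

    virConnectPtr conn = virConnectOpen("qemu:///system");
    /* Define, build (overwriting any existing data), and start the
     * transient pool in a single call. */
    virStoragePoolPtr pool =
        virStoragePoolCreateXML(conn, poolXML,
                                VIR_STORAGE_POOL_CREATE_WITH_BUILD_OVERWRITE);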
-
- 17 December 2015, 1 commit

Committed by John Ferlan
https://bugzilla.redhat.com/show_bug.cgi?id=1270709

When a volume wipe is successful, perform a volume refresh afterwards to update any volume data that may be used in future volume commands, such as volume resize. For a raw file volume, a wipe could truncate the file, and a follow-up volume resize of the capacity may fail because the volume target allocation wasn't updated to reflect the wipe activity.
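A sketch of the ordering, assuming the backend callbacks named elsewhere in this log (wipeVol, refreshVol):

    if (backend->wipeVol(conn, pool, vol, algorithm, flags) < 0)
        goto cleanup;

    /* Re-read allocation/capacity so a follow-up resize sees the
     * truncated target rather than stale pre-wipe values. */
    if (backend->refreshVol &&
        backend->refreshVol(conn, pool, vol) < 0)
        goto cleanup;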
-
- 12 November 2015, 2 commits

Committed by John Ferlan
Add a new API to search the currently defined pool list for a pool with a matching UUID and return the locked pool object pointer.
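A plausible shape for such a lookup; the list layout and locking helpers follow the conventions described elsewhere in this log, but the exact implementation is an assumption:

    virStoragePoolObjPtr
    virStoragePoolObjFindByUUID(virStoragePoolObjListPtr pools,
                                const unsigned char *uuid)
    {
        size_t i;

        for (i = 0; i < pools->count; i++) {
            virStoragePoolObjLock(pools->objs[i]);
            if (memcmp(pools->objs[i]->def->uuid, uuid, VIR_UUID_BUFLEN) == 0)
                return pools->objs[i];  /* returned locked to the caller */
            virStoragePoolObjUnlock(pools->objs[i]);
        }
        return NULL;
    }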
-
Committed by John Ferlan
Since we treat it like a boolean, let's store it that way. At least one path had already treated it as true/false anyway.
-
- 4 November 2015, 3 commits

Committed by John Ferlan
https://bugzilla.redhat.com/show_bug.cgi?id=1233003

Commit id 'fdda3760' only managed a symptom where it was possible to create a file in a pool without libvirt's knowledge, so it was reverted. The real fix is to have all the createVol APIs which actually create a volume (disk, logical, zfs) and the buildVol APIs which handle the real creation of some volume file (fs, rbd, sheepdog) manage deleting any volume which they create when there is some sort of error in processing the volume. This way the onus isn't left up to the storage_driver to determine whether the buildVol failure was due to some failure as a result of adjustments made to the volume after creation, such as getting sizes, changing ownership, or changing volume protections, or simply a failure in creation. Without needing to consider that the volume has to be removed, the buildVol failure path only needs to remove the volume from the pool. This way, if a creation failed due to a duplicate name, libvirt wouldn't remove a volume in the pool target that it didn't create.
-
Committed by John Ferlan
This reverts commit fdda3760. That commit only managed a symptom of finding a buildRet failure where a volume was not listed in the pool, but someone had created the volume outside of libvirt in the pool being managed by libvirt.
-
Committed by John Ferlan
Create a helper function to remove a volume from the pool.
-
- 14 October 2015, 1 commit

Committed by John Ferlan
Commit id '1b5685da' refactored the code to move buildvoldef inside the buildVol conditional; however, the VIR_FREE of the memory was done only when 'buildret' failed, thus we're leaking memory.

Signed-off-by: John Ferlan <jferlan@redhat.com>
-
- 12 October 2015, 1 commit

Committed by John Ferlan
https://bugzilla.redhat.com/show_bug.cgi?id=1256999 After creating a copy of the 'authdef' in a pool -> disk translation, unconditionally clear the 'authType' in the resulting disk auth def structure since that's used for a storage pool and not a disk. This ensures virStorageAuthDefFormat will properly format the <auth> XML for a <disk> (e.g. it won't have a <auth type='%s'.../>).
-
- 5 October 2015, 1 commit

Committed by John Ferlan
https://bugzilla.redhat.com/show_bug.cgi?id=1233003

Although perhaps bordering on a 'don't do that' type of scenario: if someone creates a volume in a pool outside of libvirt and then uses that same name to create a volume in the pool via libvirt, the creation will fail and in some cases cause the same-named volume to be deleted. This patch refreshes the pool just prior to checking whether the named volume exists before creating the volume in the pool. While there is still a timing window in which a file could be created after the check, at least we tried; at that point, someone is being malicious.
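A sketch of the ordering described above (the error message text is an assumption):

    /* Narrow the race window: rescan the backing storage first. */
    if (backend->refreshPool(conn, pool) < 0)
        goto cleanup;

    if (virStorageVolDefFindByName(pool, voldef->name)) {
        virReportError(VIR_ERR_STORAGE_VOL_EXIST,
                       _("storage volume name '%s' already in use"),
                       voldef->name);
        goto cleanup;
    }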
-
- 29 September 2015, 2 commits

Committed by Ján Tomko
Since the previous commit, the shallow copy is only used inside the if (backend->buildVol) block.
-
Committed by Ján Tomko
Since commit e0139e30, we update the pool allocation with the user-provided allocation values. For qcow2, the allocation is ignored for volume building, but we still subtracted it from the pool's allocation. This can result in interesting values if the user-provided allocation is large enough:

    Capacity:   104.71 GiB
    Allocation: 109.13 GiB
    Available:   16.00 EiB

We already do a VolRefresh on volume creation. Also refresh the volume after creation and use the new value to update the pool.

https://bugzilla.redhat.com/show_bug.cgi?id=1163091
-
- 2 September 2015, 1 commit

Committed by John Ferlan
Commit id '155ca616' added the 'refreshVol' API. In an NFS root-squash environment it was possible that if the volume just created from XML wasn't properly created with the right uid/gid and/or mode, the follow-up refreshVol would fail to open the volume in order to get the allocation/capacity values. This would leave the volume still on the server and cause a libvirtd crash, because 'voldef' would be in the pool list but the cleanup code would free it.
-
- 10 July 2015, 1 commit

Committed by Prerna Saxena
When virsh vol-clone is attempted on a raw file where capacity > allocation, the resulting cloned volume has a size that matches the virtual size of the parent, instead of matching its actual disk size. This patch fixes the cloned disk to have the same _allocated_size_ as the parent file from which it was cloned.

Ref: http://www.redhat.com/archives/libvir-list/2015-May/msg00050.html
Also fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1130739

Signed-off-by: Prerna Saxena <prerna@linux.vnet.ibm.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
-
- 9 July 2015, 1 commit

Committed by Erik Skultety
This patch reverts commit 4749d82a, which tried to tweak the logic in volume creation. We did the realloc and updated our object list before we executed volume building within a specific storage backend. If that failed, we had to update (again) our object list to the original state as it was before the build and delete the volume from the pool (even though it didn't exist - this truly depends on the backend). I misunderstood the base idea, which is to be able to poll the status of the volume creation using vol-info. After commit 4749d82a this wasn't possible anymore, although no BZ has been reported yet. Commit 4749d82a also claimed to fix https://bugzilla.redhat.com/show_bug.cgi?id=1223177, but commit c8be606b of the same series (which was more of a refactor than a fix) fixes the same issue, so the revert should be pretty straightforward. Furthermore, BZ https://bugzilla.redhat.com/show_bug.cgi?id=1241454 can be fixed with this revert.
-
- 8 July 2015, 1 commit

Committed by Erik Skultety
Commit 2a31c5f0 introduced support for storage pool state XMLs; however, it also introduced a regression:

    if (!virStoragePoolObjIsActive(pool)) {
        virStoragePoolObjUnlock(pool);
        continue;
    }

The idea behind this was that since we've got state XMLs, and the pool wasn't marked as active by the autostart routine (if the autostart flag had been set earlier), the pool is inactive and we can leave it be and continue with other pools. However, filesystem-type pools like fs, dir, and possibly netfs are supposed to be active if the filesystem is mounted on the host. And this is exactly where the regression occurs: e.g. a pool of type 'dir' which has been previously destroyed and marked as !autostart gets filtered out by the condition above. The resolution is simply to remove the condition completely: all pools will get their 'active' flag updated by the check callback, and if they do not support such a callback, the logic doesn't change and such pools will be inactive by default (e.g. RBD, even if a state XML exists).

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1238610
-
- 30 June 2015, 1 commit

Committed by Prerna Saxena
Libvirt periodically refreshes all volumes in a storage pool, including the volumes being cloned. While cloning a storage volume from a parent, we drop pool locks. A subsequent volume refresh sometimes changes allocation for an ongoing copy and leads to corrupt images.

Fix: Introduce a shadow volume that isolates the volume object under refresh from the base which has a copy ongoing.

Signed-off-by: Prerna Saxena <prerna@linux.vnet.ibm.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
-
- 15 June 2015, 1 commit

Committed by John Ferlan
https://bugzilla.redhat.com/show_bug.cgi?id=1200206

Commit id '1b4eaa61' added the ability to have mode='direct' for an iscsi disk volume. It relied on virStorageTranslateDiskSourcePool to copy any disk source pool authentication information to the direct disk volume, but it neglected to also copy the 'secrettype' field, which ends up being used in the domain volume formatting code. Adding a secrettype for this case allows for proper formatting later and allows disk snapshotting to work properly.

Additionally, libvirtd restart processing would fail to find the domain since the translation processing code is run after domain XML processing, so handle the case where the authdef could have an empty secrettype field when processing the auth, and additionally skip the actual and expected auth secret type checks for a DISK_VOLUME, since that data will be reassembled later during translation processing of the running domain.
-
- 2 June 2015, 1 commit

Committed by Erik Skultety
We update the pool volume object list before we actually create any volume. If buildVol fails, we then try to delete the volume both in the storage and in our structures. The problem is that any backend which supports both buildVol and deleteVol would fail in this case, which is completely unnecessary. This patch makes the update take place after we know a volume has been created successfully, so no removal is necessary in case of a buildVol failure.

https://bugzilla.redhat.com/show_bug.cgi?id=1223177
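A sketch of the reordering; the list-append macro is libvirt's usual VIR_APPEND_ELEMENT_COPY idiom, though the surrounding details are assumptions:

    /* Build first; publish the volume in the in-memory list only on
     * success, so a failed build needs no deleteVol/rollback. */
    if (backend->buildVol &&
        backend->buildVol(conn, pool, voldef, flags) < 0)
        goto cleanup;

    if (VIR_APPEND_ELEMENT_COPY(pool->volumes.objs,
                                pool->volumes.count, voldef) < 0)
        goto cleanup;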
-
- 29 May 2015, 2 commits

Committed by John Ferlan
Commit id '2ac0e647' for https://bugzilla.redhat.com/show_bug.cgi?id=1206521 was meant to be a generic check for the CreateVol, CreateVolFrom, and DeleteVol paths to see whether the storage backend changed the pool's view of allocation or available values. Unfortunately, as it turns out, this caused a side effect: when the disk backend created an extended partition, there would be no actual storage removed from the pool, thus the checks would not find any change in allocation or available and would incorrectly update the pool values using the size of the extended partition. A subsequent refresh of the pool would reset the values appropriately.

This patch modifies those checks so that the pool allocation and available values are specifically not updated for the disk backend only, rather than relying on generic before-and-after checks.
-
Committed by John Ferlan
This reverts commit 2ac0e647.
-
- 28 May 2015, 2 commits

Committed by Ján Tomko
This never worked. In 0.9.10, when this API was introduced, it was intended that the SHRINK flag combined with DELTA would shrink the volume by the specified capacity (to avoid passing negative numbers). See commit 055bbf45. When the SHRINK flag was finally implemented for the first backend in 1.2.13 (commit aa9aa6a9), it was only implemented for absolute values, and with the delta flag the volume was always extended regardless of the SHRINK flag. Treat the SHRINK flag as a minus sign when used together with DELTA, to allow shrinking volumes as has been documented in the API since 0.9.10.

https://bugzilla.redhat.com/show_bug.cgi?id=1220213
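A usage sketch from the public API side (vol is assumed to be a valid virStorageVolPtr):

    /* Shrink the volume by 1 GiB: combined with DELTA, the SHRINK
     * flag now acts as a minus sign on the given amount. */
    if (virStorageVolResize(vol, 1024ULL * 1024 * 1024,
                            VIR_STORAGE_VOL_RESIZE_DELTA |
                            VIR_STORAGE_VOL_RESIZE_SHRINK) < 0)
        fprintf(stderr, "resize failed\n");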
-
Committed by Ján Tomko
Since shrinking a volume below existing allocation is not allowed, it is not possible for a successful resize with VOL_RESIZE_ALLOCATE to increase the pool's available value. Even with the SHRINK flag it is possible to extend the current allocation or even the capacity. Remove the overflow when computing delta with this flag and do the check even if the flag was specified. https://bugzilla.redhat.com/show_bug.cgi?id=1073305
-
- 28 April 2015, 3 commits

Committed by Cole Robinson
If you end up with a state file for a pool that no longer starts up or refreshes correctly, the state file is never removed and adds noise to the logs every time libvirtd is started. If the initial state syncing fails, delete the state file.
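A sketch of the cleanup; the update helper and state-file variable are assumptions standing in for the driver's actual startup code:

    /* If the initial state sync fails, the saved state no longer
     * reflects reality; remove it so it stops polluting the logs. */
    if (storagePoolUpdateState(pool) < 0)
        unlink(stateFile);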
-
Committed by Cole Robinson
Will simplify a future patch
-
Committed by Cole Robinson
After pool startup we call refreshPool(). If that fails, we leave a stale pool state file hanging around. Hit this trying to create a pool with qemu:///session containing root-owned files.
-