- 12 November 2015 (2 commits)

Committed by John Ferlan
Add a new API to search the currently defined pool list for a pool with a matching UUID and return the locked pool object pointer.
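
A minimal sketch of such a lookup, with hypothetical names and a plain pthread mutex standing in for libvirt's object lock:

    #include <pthread.h>
    #include <stddef.h>
    #include <string.h>

    #define UUID_LEN 16

    typedef struct {
        unsigned char uuid[UUID_LEN];
        pthread_mutex_t lock;
        /* ... pool definition, state ... */
    } PoolObj;

    typedef struct {
        size_t count;
        PoolObj **objs;
    } PoolObjList;

    /* Returns the matching pool with its lock held, or NULL if absent.
     * The caller must unlock the returned object when finished. */
    static PoolObj *
    find_pool_by_uuid(PoolObjList *pools, const unsigned char *uuid)
    {
        for (size_t i = 0; i < pools->count; i++) {
            pthread_mutex_lock(&pools->objs[i]->lock);
            if (memcmp(pools->objs[i]->uuid, uuid, UUID_LEN) == 0)
                return pools->objs[i];   /* still locked on return */
            pthread_mutex_unlock(&pools->objs[i]->lock);
        }
        return NULL;
    }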

Committed by John Ferlan
Since we treat it like a boolean, let's store it that way. At least one path had already treated it as true/false anyway.

- 04 November 2015 (3 commits)

Committed by John Ferlan
https://bugzilla.redhat.com/show_bug.cgi?id=1233003 Commit id 'fdda3760' only managed a symptom, where it was possible to create a file in a pool without libvirt's knowledge, so it was reverted. The real fix is to have all the createVol APIs which actually create a volume (disk, logical, zfs) and the buildVol APIs which handle the real creation of some volume file (fs, rbd, sheepdog) delete any volume which they created when there is some sort of error in processing it. This way the onus isn't left on the storage_driver to determine whether a buildVol failure was due to adjustments made to the volume after creation (getting sizes, changing ownership, changing volume protections, etc.) or simply a failure in creation. Without needing to consider that the volume has to be removed, the buildVol failure path only needs to remove the volume from the pool. This way, if creation failed due to a duplicate name, libvirt won't remove a volume in the pool target that it didn't create.
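
A sketch of the resulting division of responsibility, under hypothetical names; each real backend does the equivalent for its own volume type:

    #include <fcntl.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* Hypothetical stand-ins for the two phases of volume creation. */
    static int create_file(const char *path)
    {
        int fd = open(path, O_CREAT | O_EXCL | O_WRONLY, 0600);
        if (fd < 0)
            return -1;      /* e.g. duplicate name: we created nothing */
        return close(fd);
    }

    static int adjust_perms(const char *path)
    {
        return chmod(path, 0644);
    }

    /* The backend cleans up anything it created itself, so a failure
     * can never delete a pre-existing file that shares the name. */
    static int build_vol(const char *path)
    {
        if (create_file(path) < 0)
            return -1;
        if (adjust_perms(path) < 0) {
            unlink(path);   /* undo only what we just created */
            return -1;
        }
        return 0;
    }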

Committed by John Ferlan
This reverts commit fdda3760. That commit only managed a symptom: a buildRet failure reported when a volume was not listed in the pool because someone had created it outside of libvirt in the pool being managed by libvirt.

Committed by John Ferlan
Create a helper function to remove a volume from the pool.

- 14 October 2015 (1 commit)

Committed by John Ferlan
Commit id '1b5685da' refactored the code to move buildvoldef inside the buildVol conditional; however, the VIR_FREE of the memory was done only when 'buildret' failed, so we're leaking memory on the success path. Signed-off-by: John Ferlan <jferlan@redhat.com>
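
The leak in miniature, with hypothetical names; before the fix, the success path returned without the final free:

    #include <stdlib.h>
    #include <string.h>

    static int do_build(const char *def) { return def ? 0 : -1; } /* stub */

    static int create_vol(const char *xml)
    {
        char *buildvoldef = strdup(xml);   /* the copied definition */
        if (!buildvoldef)
            return -1;
        if (do_build(buildvoldef) < 0) {
            free(buildvoldef);             /* freed on failure ... */
            return -1;
        }
        free(buildvoldef);   /* ... and, after the fix, on success too */
        return 0;
    }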

- 12 October 2015 (1 commit)

Committed by John Ferlan
https://bugzilla.redhat.com/show_bug.cgi?id=1256999 After creating a copy of the 'authdef' in a pool -> disk translation, unconditionally clear the 'authType' in the resulting disk auth def structure since that's used for a storage pool and not a disk. This ensures virStorageAuthDefFormat will properly format the <auth> XML for a <disk> (e.g. it won't have a <auth type='%s'.../>).
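
A rough self-contained sketch of the idea, with hypothetical types standing in for virStorageAuthDef:

    #include <stdlib.h>
    #include <string.h>

    enum { AUTH_TYPE_NONE = 0, AUTH_TYPE_CHAP, AUTH_TYPE_CEPHX };

    typedef struct {
        int authType;    /* meaningful for a pool <auth type='...'/> only */
        char *username;
    } AuthDef;

    static AuthDef *auth_copy_for_disk(const AuthDef *pool_auth)
    {
        AuthDef *copy = calloc(1, sizeof(*copy));
        if (!copy)
            return NULL;
        copy->username = strdup(pool_auth->username);
        if (!copy->username) {
            free(copy);
            return NULL;
        }
        /* A disk <auth> element has no type attribute, so reset the
         * pool-only discriminator after copying. */
        copy->authType = AUTH_TYPE_NONE;
        return copy;
    }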

- 05 October 2015 (1 commit)

Committed by John Ferlan
https://bugzilla.redhat.com/show_bug.cgi?id=1233003 Although perhaps bordering on a "don't do that" type of scenario: if someone creates a volume in a pool outside of libvirt and then uses that same name to create a volume in the pool via libvirt, the creation will fail and, in some cases, cause the same-named volume to be deleted. This patch refreshes the pool just before checking whether the named volume already exists, prior to creating it in the pool. While it's still possible to have a timing window where a file is created after the check, at least we tried. At that point, someone is being malicious.
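
A sketch of the refresh-then-check idea for a directory-backed pool, with hypothetical names; as the message concedes, a TOCTOU window remains after the check:

    #include <dirent.h>
    #include <string.h>

    /* Returns 1 if a volume file with this name exists, 0 if not,
     * -1 on error. Re-reading the directory is the "refresh". */
    static int vol_name_taken(const char *pool_path, const char *name)
    {
        DIR *dir = opendir(pool_path);
        struct dirent *ent;
        int taken = 0;

        if (!dir)
            return -1;
        while ((ent = readdir(dir)) != NULL) {
            if (strcmp(ent->d_name, name) == 0) {
                taken = 1;
                break;
            }
        }
        closedir(dir);
        return taken;
    }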

- 29 September 2015 (2 commits)

Committed by Ján Tomko
Since the previous commit, the shallow copy is only used inside the if (backend->buildVol) block.

Committed by Ján Tomko
Since commit e0139e30, we update the pool allocation with the user-provided allocation values. For qcow2, the allocation is ignored for volume building, but we still subtracted it from the pool's allocation. This can result in interesting values if the user-provided allocation is large enough:

    Capacity:   104.71 GiB
    Allocation: 109.13 GiB
    Available:  16.00 EiB

We already do a VolRefresh on volume creation. Also refresh the volume after creating it and use the new value to update the pool. https://bugzilla.redhat.com/show_bug.cgi?id=1163091
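
A sketch of the accounting fix, with hypothetical names: charge the pool with what the volume actually occupies after a refresh, not with the user-provided allocation:

    #include <sys/stat.h>

    /* What a volume actually occupies on disk, per stat(2). */
    static unsigned long long real_allocation(const char *path)
    {
        struct stat sb;
        if (stat(path, &sb) < 0)
            return 0;
        return (unsigned long long)sb.st_blocks * 512;
    }

    static void charge_pool(unsigned long long *pool_alloc,
                            const char *vol_path,
                            unsigned long long user_alloc)
    {
        (void)user_alloc;   /* ignored by qcow2 volume building */
        *pool_alloc += real_allocation(vol_path);  /* refreshed value */
    }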

- 02 September 2015 (1 commit)

Committed by John Ferlan
Commit id '155ca616' added the 'refreshVol' API. In an NFS root-squash environment it was possible that, if the volume just created from XML wasn't properly created with the right uid/gid and/or mode, the follow-up refreshVol would fail to open the volume in order to get the allocation/capacity values. This would leave the volume on the server and cause a libvirtd crash, because 'voldef' would be in the pool list but the cleanup code would free it.

- 10 July 2015 (1 commit)

Committed by Prerna Saxena
When virsh vol-clone is attempted on a raw file where capacity > allocation, the resulting cloned volume has a size that matches the virtual size of the parent, instead of matching its actual disk size. This patch fixes the cloned disk to have the same _allocated_ size as the parent file from which it was cloned. Ref: http://www.redhat.com/archives/libvir-list/2015-May/msg00050.html Also fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1130739 Signed-off-by: Prerna Saxena <prerna@linux.vnet.ibm.com> Signed-off-by: Ján Tomko <jtomko@redhat.com>

- 09 July 2015 (1 commit)

Committed by Erik Skultety
This patch reverts commit 4749d82a, which tried to tweak the logic in volume creation. We did realloc and update our object list before we executed volume building within a specific storage backend. If that failed, we had to update (again) our object list to the original state as it was before the build and delete the volume from the pool (even though it didn't exist - this truly depends on the backend). I misunderstood the base idea, which is to be able to poll the status of the volume creation using vol-info. After commit 4749d82a this wasn't possible anymore, although no BZ has been reported yet. Commit 4749d82a also claimed to fix https://bugzilla.redhat.com/show_bug.cgi?id=1223177, but commit c8be606b of the same series (which was more of a refactor than a fix) fixes the same issue, so the revert should be pretty straightforward. Furthermore, BZ https://bugzilla.redhat.com/show_bug.cgi?id=1241454 can be fixed with this revert.

- 08 July 2015 (1 commit)

Committed by Erik Skultety
Commit 2a31c5f0 introduced support for storage pool state XMLs; however, it also introduced a regression:

    if (!virStoragePoolObjIsActive(pool)) {
        virStoragePoolObjUnlock(pool);
        continue;
    }

The idea behind this was that since we've got state XMLs and the pool wasn't marked as active by the autostart routine (if the autostart flag had been set earlier), the pool is inactive and we can leave it be and continue with other pools. However, filesystem-type pools like fs, dir, and possibly netfs are supposed to be active if the filesystem is mounted on the host. And this is exactly where the regression occurs: e.g. a pool of type 'dir' which has been previously destroyed and marked as !autostart gets filtered out by the condition above. The resolution is simply to remove the condition completely; all pools will get their 'active' flag updated by the check callback, and if they do not support such a callback, the logic doesn't change and such pools will be inactive by default (e.g. RBD, even if a state XML exists). Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1238610

- 30 June 2015 (1 commit)

Committed by Prerna Saxena
Libvirt periodically refreshes all volumes in a storage pool, including the volumes being cloned. While cloning a storage volume from a parent, we drop pool locks. A subsequent volume refresh sometimes changes the allocation for an ongoing copy and leads to corrupt images. Fix: introduce a shadow volume that isolates the volume object under refresh from the base which has a copy ongoing. Signed-off-by: Prerna Saxena <prerna@linux.vnet.ibm.com> Signed-off-by: Ján Tomko <jtomko@redhat.com>
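
A rough approximation of the shadow-volume idea, with hypothetical names; the refresh works on a copy so the in-flight volume's allocation is never rewritten underneath the clone job:

    #include <sys/stat.h>

    typedef struct {
        const char *path;
        unsigned long long capacity;
        unsigned long long allocation;
        int building;         /* a clone is writing into this volume */
    } Vol;

    static void refresh_from_disk(Vol *v)
    {
        struct stat sb;
        if (stat(v->path, &sb) == 0) {
            v->capacity = (unsigned long long)sb.st_size;
            v->allocation = (unsigned long long)sb.st_blocks * 512;
        }
    }

    static void pool_refresh_volume(Vol *v)
    {
        if (v->building) {
            Vol shadow = *v;              /* isolate the live object */
            refresh_from_disk(&shadow);
            v->capacity = shadow.capacity; /* allocation left untouched */
            return;
        }
        refresh_from_disk(v);
    }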

- 15 June 2015 (1 commit)

Committed by John Ferlan
https://bugzilla.redhat.com/show_bug.cgi?id=1200206 Commit id '1b4eaa61' added the ability to have a mode='direct' for an iscsi disk volume. It relied on virStorageTranslateDiskSourcePool in order to copy any disk source pool authentication information to the direct disk volume, but it neglected to also copy the 'secrettype' field, which ends up being used in the domain volume formatting code. Adding a secrettype for this case will allow proper formatting later and allow disk snapshotting to work properly. Additionally, libvirtd restart processing would fail to find the domain since the translation processing code is run after domain XML processing, so handle the case where the authdef could have an empty secrettype field when processing the auth, and additionally skip the actual vs. expected auth secret type checks for a DISK_VOLUME, since that data will be reassembled later during translation processing of the running domain.

- 02 June 2015 (1 commit)

Committed by Erik Skultety
We update the pool volume object list before we actually create any volume. If buildVol fails, we then try to delete the volume in the storage as well as remove it from our structures. The problem is that any backend which supports both buildVol and deleteVol would fail in this case, which is completely unnecessary. This patch causes the update to take place after we know a volume has been created successfully, thus no removal is necessary in case of a buildVol failure. https://bugzilla.redhat.com/show_bug.cgi?id=1223177

- 29 May 2015 (2 commits)

Committed by John Ferlan
Commit id '2ac0e647' for https://bugzilla.redhat.com/show_bug.cgi?id=1206521 was meant to be a generic check for the CreateVol, CreateVolFrom, and DeleteVol paths to check whether the storage backend changed the pool's view of allocation or available values. Unfortunately, as it turns out, this caused a side effect: when the disk backend created an extended partition, there would be no actual storage removed from the pool, thus the checks would not find any change in allocation or available and would incorrectly update the pool values using the size of the extended partition. A subsequent refresh of the pool would reset the values appropriately. This patch modifies those checks to specifically not update the pool allocation and available for the disk backend only, rather than being generic before-and-after checks.

Committed by John Ferlan
This reverts commit 2ac0e647.

- 28 May 2015 (2 commits)

Committed by Ján Tomko
This never worked. In 0.9.10 when this API was introduced, it was intended that the SHRINK flag combined with DELTA would shrink the volume by the specified capacity (to avoid passing negative numbers). See commit 055bbf45. When the SHRINK flag was finally implemented for the first backend in 1.2.13 (commit aa9aa6a9), it was only implemented for the absolute values and with the delta flag the volume is always extended, regardless of the SHRINK flag. Treat the SHRINK flag as a minus sign when used together with DELTA, to allow shrinking volumes as was documented in the API since 0.9.10. https://bugzilla.redhat.com/show_bug.cgi?id=1220213
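
A sketch of the flag handling, assuming the caller has already validated that a shrink does not underflow the current capacity:

    #include <stdbool.h>

    /* Compute the new absolute capacity from the flags: with DELTA set,
     * SHRINK flips the delta's sign, as the API documented all along. */
    static unsigned long long
    resolve_capacity(unsigned long long current, unsigned long long value,
                     bool delta, bool shrink)
    {
        if (!delta)
            return value;   /* absolute request */
        return shrink ? current - value : current + value;
    }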

Committed by Ján Tomko
Since shrinking a volume below existing allocation is not allowed, it is not possible for a successful resize with VOL_RESIZE_ALLOCATE to increase the pool's available value. Even with the SHRINK flag it is possible to extend the current allocation or even the capacity. Remove the overflow when computing delta with this flag and do the check even if the flag was specified. https://bugzilla.redhat.com/show_bug.cgi?id=1073305

- 28 April 2015 (4 commits)

Committed by Cole Robinson
If you end up with a state file for a pool that no longer starts up or refreshes correctly, the state file is never removed and adds noise to the logs every time libvirtd is started. If the initial state syncing fails, delete the statefile.
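
A sketch of the cleanup, with hypothetical names:

    #include <unistd.h>

    /* Hypothetical refresh step; pretend it failed. */
    static int refresh_pool(const char *pool) { (void)pool; return -1; }

    static void sync_pool_state(const char *pool, const char *statefile)
    {
        if (refresh_pool(pool) < 0) {
            /* A stale state file would otherwise be reloaded (and
             * logged about) on every libvirtd start; drop it instead. */
            unlink(statefile);
        }
    }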

Committed by Cole Robinson
This will simplify a future patch.

Committed by Cole Robinson
After pool startup we call refreshPool(). If that fails, we leave a stale pool state file hanging around. Hit this trying to create a pool with qemu:///session containing root owned files.

Committed by Cole Robinson

- 10 April 2015 (2 commits)

Committed by John Ferlan
https://bugzilla.redhat.com/show_bug.cgi?id=1206521 If the backend driver updates the pool available and/or allocation values, then the storage_driver VolCreateXML, VolCreateXMLFrom, and VolDelete APIs should not change the values; otherwise, it will appear as if the values were "doubled" for each change. Additionally, since unsigned arithmetic will be used depending on the size and operation, either or both values could appear to be much larger than they should be (in the EiB range). Currently only the disk pool updates the values, but other pools could. Assume a "fresh" disk pool of 500 MiB using /dev/sde:

    $ virsh pool-info disk-pool
    ...
    Capacity:   509.88 MiB
    Allocation: 0.00 B
    Available:  509.84 MiB

    $ virsh vol-create-as disk-pool sde1 --capacity 300M

    $ virsh pool-info disk-pool
    ...
    Capacity:   509.88 MiB
    Allocation: 600.47 MiB
    Available:  16.00 EiB

The following assumes the disk backend was updated to refresh the disk pool at deletion of a primary partition as well as an extended partition:

    $ virsh vol-delete --pool disk-pool sde1
    Vol sde1 deleted

    $ virsh pool-info disk-pool
    ...
    Capacity:   509.88 MiB
    Allocation: 9.73 EiB
    Available:  6.27 EiB

This patch will check whether the backend updated the pool values and honor that update.

Committed by John Ferlan
https://bugzilla.redhat.com/show_bug.cgi?id=1073305 When creating a volume in a pool, the creation allows the 'capacity' value to be larger than the available space in the pool. As long as the 'allocation' value will fit in the space, the volume will be created. However, the resize checks compared the new absolute capacity value against existing capacity + the available space, without regard for whether the new absolute capacity was actually allocating space or not. For example, in a pool with 75G of available space, creating a volume with a capacity of 100G and an allocation of 10G will succeed; however, if the volume was created with a capacity of 10G instead and then resized to a capacity of 100G, the code would refuse to let the backend even attempt the resize. Furthermore, when updating the pool "available" and "allocation" values, the resize code would just "blindly" adjust them regardless of whether space was "allocated" or just "capacity" was being adjusted. This left a scenario whereby a resize to 100G would fail, yet a resize to 50G followed by one to 100G would both succeed; again, neither was adjusting the allocation value, just the "capacity" value. This patch adds more logic to the resize code to understand whether the new capacity value is actually "allocating" space as well, and whether it is shrinking or expanding. Since unsigned arithmetic is involved, the possibility that we adjust the pool size values incorrectly is probable. This patch also ensures that updates to the pool values only occur if we actually performed the allocation. NB: storageVolDelete, storageVolCreateXML, and storageVolCreateXMLFrom each update the pool allocation/availability values only by the target volume's allocation value.
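
A sketch of the corrected accounting, with hypothetical names: only a resize that actually allocates is charged against the pool, and the availability check uses the allocation delta rather than the raw capacity figure:

    #include <stdbool.h>

    struct pool { unsigned long long allocation, available; };
    struct vol  { unsigned long long capacity, allocation; };

    static int vol_resize(struct pool *p, struct vol *v,
                          unsigned long long newcap, bool allocate)
    {
        if (!allocate) {        /* capacity-only change consumes nothing */
            v->capacity = newcap;
            return 0;
        }
        unsigned long long delta =
            newcap > v->allocation ? newcap - v->allocation : 0;
        if (delta > p->available)   /* check real growth only */
            return -1;
        v->capacity = newcap;
        v->allocation = newcap;
        p->available -= delta;
        p->allocation += delta;
        return 0;
    }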

- 07 April 2015 (3 commits)

Committed by Erik Skultety
The 'checkPool' callback was originally part of the storageDriverAutostart function, but the pools need to be checked earlier, during the initialization phase; otherwise we can't start a domain which mounts a volume after the libvirtd daemon restarted. This is because qemuProcessReconnect is called earlier than storageDriverAutostart. Therefore the 'checkPool' logic has been moved to storagePoolUpdateAllState, which is called inside storageDriverInitialize. We also need a valid 'conn' reference to be able to execute 'refreshPool' during the initialization phase. Though it isn't available until storageDriverAutostart, all of our storage backends ignore the 'conn' pointer anyway; the exception is RBD, but RBD doesn't support the 'checkPool' callback, so it's safe to pass conn = NULL in this case. Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1177733

Committed by Erik Skultety
These functions operate exactly the same as their network equivalents virNetworkLoadAllState, virNetworkLoadState.

Committed by Erik Skultety
This patch introduces a new virStorageDriverState element, stateDir. It also adds the necessary changes to storageStateInitialize so that directory initialization becomes more generic.

- 02 April 2015 (1 commit)

Committed by Erik Skultety
In order to be able to use 'checkPool' inside functions which do not have any connection reference, the 'conn' attribute needs to be discarded from checkPool's signature, since it is not used by any storage backend anyway.
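
Illustrated as a before/after of the callback signature (stand-in typedefs, not libvirt's actual headers):

    #include <stdbool.h>

    typedef struct _virStoragePoolObj *virStoragePoolObjPtr;  /* stand-in */

    /* Before (sketch): the unused connection forced every caller to
     * own one:
     *
     *   typedef int (*checkPoolFn)(virConnectPtr conn,
     *                              virStoragePoolObjPtr pool,
     *                              bool *isActive);
     *
     * After: callable from initialization code with no connection yet. */
    typedef int (*checkPoolFn)(virStoragePoolObjPtr pool, bool *isActive);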

- 02 March 2015 (3 commits)

Committed by Ján Tomko
The tool creating the image can get the capacity from the backing storage. Just refresh the volume afterwards. https://bugzilla.redhat.com/show_bug.cgi?id=958510

Committed by Ján Tomko
In virStorageVolCreateXML, add VIR_VOL_XML_PARSE_NO_CAPACITY to the call parsing the XML of the new volume to make the capacity optional. If the capacity is omitted, use the capacity of the old volume. We already do that for values that are less than the original volume capacity.

Committed by Ján Tomko
Allow the callers to pass down libvirt-internal flags.

- 31 January 2015 (1 commit)

Committed by John Ferlan
https://bugzilla.redhat.com/show_bug.cgi?id=1176510 When storageDriverAutostart is called via virStateReload (the path taken by a 'service libvirtd reload'), the volume list in the pool wasn't cleared prior to the call, so each volume would be listed multiple times (as many times as we reload). I believe the issue was introduced by commit id '9e093f0b', at least for the libvirtd reload path, although I suppose the introduction of virStateReload (commit id '70da0494') could be a different cause. Thus, like other places prior to calling refreshPool, we need to call virStoragePoolObjClearVols.

- 27 January 2015 (2 commits)

Committed by Chen Hanxiao
When creating a RAW file, we don't take advantage of btrfs's cloning (reflink) support. Add a VIR_STORAGE_VOL_CREATE_REFLINK flag to request a reflink copy. Signed-off-by: Chen Hanxiao <chenhanxiao@cn.fujitsu.com> Signed-off-by: Ján Tomko <jtomko@redhat.com>
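
Independently of libvirt's flag plumbing, the underlying btrfs operation is a clone ioctl; a standalone sketch using FICLONE (the modern spelling of BTRFS_IOC_CLONE):

    #include <fcntl.h>
    #include <linux/fs.h>     /* FICLONE */
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s SRC DST\n", argv[0]);
            return 1;
        }
        int src = open(argv[1], O_RDONLY);
        int dst = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (src < 0 || dst < 0 || ioctl(dst, FICLONE, src) < 0) {
            perror("reflink");  /* fails on filesystems without reflinks */
            return 1;
        }
        close(src);
        close(dst);
        return 0;
    }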

Committed by Daniel P. Berrange
For stateless, client-side drivers, it is never correct to probe for secondary drivers. It is only ever appropriate to use the secondary driver that is associated with the hypervisor in question. As a result, the ESX & HyperV drivers have both been forced to do hacks where they register no-op drivers for the ones they don't implement. For stateful, server-side drivers, we always just want to use the same built-in shared driver. The exception is virtualbox, which is really a stateless driver and so wants to use its own server-side secondary drivers. To deal with this, virtualbox has to be built as 3 separate loadable modules to allow registration to work in the right order. This can all be simplified by introducing a new struct recording the precise set of secondary drivers each hypervisor driver wants:

    struct _virConnectDriver {
        virHypervisorDriverPtr hypervisorDriver;
        virInterfaceDriverPtr interfaceDriver;
        virNetworkDriverPtr networkDriver;
        virNodeDeviceDriverPtr nodeDeviceDriver;
        virNWFilterDriverPtr nwfilterDriver;
        virSecretDriverPtr secretDriver;
        virStorageDriverPtr storageDriver;
    };

Instead of registering the hypervisor driver, we now just register a virConnectDriver instead. This allows us to remove all probing of secondary drivers. Once we have chosen the primary driver, we immediately know the correct secondary drivers to use. Signed-off-by: Daniel P. Berrange <berrange@redhat.com>

- 08 December 2014 (1 commit)

Committed by Peter Krempa
Other parts of libvirt use "%u" for formatting uid/gid and typecast to unsigned int; the storage driver used the signed variant.
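
For illustration, the cast-plus-%u pattern:

    #include <stdio.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void)
    {
        uid_t uid = getuid();
        gid_t gid = getgid();
        /* uid_t/gid_t width and signedness vary by platform; casting
         * to unsigned int and using %u keeps formatting consistent. */
        printf("owner %u:%u\n", (unsigned int)uid, (unsigned int)gid);
        return 0;
    }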

- 04 December 2014 (1 commit)

Committed by Luyao Huang
https://bugzilla.redhat.com/show_bug.cgi?id=1087104#c5 When trying to use an invalid offset with virStorageVolUpload(), libvirt fails in virFDStreamOpenFileInternal(); however, storageVolUpload() does not check the return value and calls virFDStreamSetInternalCloseCb() right after. The stream doesn't have privateData yet (it is NULL), and the daemon crashes:

    0 0x00007f09429a9c10 in pthread_mutex_lock () from /lib64/libpthread.so.0
    1 0x00007f094514dbf5 in virMutexLock (m=<optimized out>) at util/virthread.c:88
    2 0x00007f09451cb211 in virFDStreamSetInternalCloseCb at fdstream.c:795
    3 0x00007f092ff2c9eb in storageVolUpload at storage/storage_driver.c:2098
    4 0x00007f09451f46e0 in virStorageVolUpload at libvirt.c:14000
    5 0x00007f0945c78fa1 in remoteDispatchStorageVolUpload at remote_dispatch.h:14339
    6 remoteDispatchStorageVolUploadHelper at remote_dispatch.h:14309
    7 0x00007f094524a192 in virNetServerProgramDispatchCall at rpc/virnetserverprogram.c:437

Signed-off-by: Luyao Huang <lhuang@redhat.com>
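
The shape of the bug and the fix, reduced to a self-contained sketch with hypothetical stand-ins for the fdstream helpers:

    #include <stdlib.h>

    typedef struct { void *privateData; } Stream;

    static int stream_open(Stream *st, long long offset)
    {
        if (offset < 0)
            return -1;       /* invalid offset: privateData stays NULL */
        st->privateData = malloc(1);
        return st->privateData ? 0 : -1;
    }

    static void stream_set_close_cb(Stream *st)
    {
        /* The real registration locks a mutex inside privateData; with
         * privateData == NULL, that lock attempt is the crash above. */
        if (!st->privateData)
            abort();
    }

    static int vol_upload(Stream *st, long long offset)
    {
        if (stream_open(st, offset) < 0)
            return -1;       /* the fix: check before registering */
        stream_set_close_cb(st);
        return 0;
    }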

- 03 December 2014 (1 commit)

Committed by Michal Privoznik
While this could be exposed as a public API, it's not done yet as there's no demand for it yet. Anyway, this just prepares the environment for easier volume creation on the destination. Signed-off-by: Michal Privoznik <mprivozn@redhat.com>