- 23 August 2019, 1 commit

Committed by Michal Privoznik
https://bugzilla.redhat.com/show_bug.cgi?id=1711789

Starting up or building some types of pools may take a very long time (e.g. a misconfigured NFS). Holding the pool object locked throughout the whole operation hurts concurrency, e.g. when another thread is listing all the pools.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
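The pattern being described is to mark the pool object busy, drop its lock for the slow start/build step, and re-acquire it afterwards. Below is a minimal standalone sketch of that pattern with POSIX threads; the struct and function names are illustrative stand-ins, not libvirt's actual API.

```c
#include <pthread.h>
#include <stdbool.h>

/* Illustrative pool object, not libvirt's virStoragePoolObj. */
typedef struct {
    pthread_mutex_t lock;
    bool starting;   /* a slow start/build is in flight */
    bool active;
} pool_obj;

/* Stand-in for the slow backend work (e.g. mounting a hung NFS export). */
static int slow_backend_start(pool_obj *pool) { (void)pool; return 0; }

static int pool_start(pool_obj *pool)
{
    pthread_mutex_lock(&pool->lock);
    if (pool->starting || pool->active) {
        pthread_mutex_unlock(&pool->lock);
        return -1;                       /* someone else got there first */
    }
    pool->starting = true;               /* reserve the pool ... */
    pthread_mutex_unlock(&pool->lock);   /* ... then drop the lock */

    /* Other threads (e.g. one listing all pools) can lock the object here. */
    int rc = slow_backend_start(pool);

    pthread_mutex_lock(&pool->lock);
    pool->starting = false;
    pool->active = (rc == 0);
    pthread_mutex_unlock(&pool->lock);
    return rc;
}
```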
- 04 March 2019, 1 commit

Committed by Peter Krempa
Use of VIR_AUTOPTR with virString is confusing, as it is a list and not a single pointer. Replace it with VIR_AUTOSTRINGLIST, since string lists are basically the only sane NULL-terminated list we can have.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
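For context, a sketch of how VIR_AUTOSTRINGLIST reads at a call site. This assumes libvirt's internal virstring.h/viralloc.h helpers of that era and is not standalone code; the count_fields() helper is made up for illustration.

```c
#include "viralloc.h"
#include "virstring.h"

/* The string list is freed automatically on every return path,
 * so no explicit virStringListFree() or cleanup label is needed. */
static int
count_fields(const char *line)
{
    VIR_AUTOSTRINGLIST fields = NULL;

    if (!(fields = virStringSplit(line, " ", 0)))
        return -1;

    return virStringListLength((const char * const *)fields);
}
```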
- 13 February 2019, 1 commit

Committed by John Ferlan
Let's make use of the auto __cleanup capabilities. This also allows for the cleanup of some goto paths.

Signed-off-by: John Ferlan <jferlan@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
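Under the hood this relies on the GCC/Clang cleanup attribute. A minimal standalone illustration of the mechanism follows; the AUTO_FREE macro is a stand-in for libvirt's VIR_AUTOFREE-style helpers, not the real macro.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Runs automatically when the annotated variable goes out of scope. */
static void free_charp(char **ptr)
{
    free(*ptr);
}

#define AUTO_FREE __attribute__((cleanup(free_charp)))

int main(void)
{
    AUTO_FREE char *buf = strdup("no explicit free and no cleanup label needed");

    if (!buf)
        return 1;           /* free(NULL) in free_charp is harmless */

    puts(buf);
    return 0;               /* free_charp(&buf) fires here */
}
```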
- 12 February 2019, 2 commits

Committed by John Ferlan
Let's make use of the auto __cleanup capabilities, cleaning up any now-unnecessary goto paths.

Signed-off-by: John Ferlan <jferlan@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Committed by John Ferlan
Let's make use of the auto __cleanup capabilities, cleaning up any now-unnecessary goto paths.

Signed-off-by: John Ferlan <jferlan@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
- 01 February 2018, 1 commit

Committed by Daniel P. Berrangé
Now that we can open connections to the secondary drivers on demand, there is no need to pass a virConnectPtr into all the backend functions.

Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
- 08 November 2017, 1 commit

Committed by John Ferlan
In preparation for privatizing the object, use the accessor.
- 19 September 2017, 1 commit

Committed by John Ferlan
Create/use virStoragePoolObjAddVol in order to add volumes onto the list. Create/use virStoragePoolObjRemoveVol in order to remove volumes from the list. Create/use virStoragePoolObjGetVolumesCount to get the count of volumes on the list.

For the storage driver, the logic changes so that the volumes.objs list only grows after we have fetched the volobj. This is an optimization of sorts, but it also avoids needlessly growing the volumes.objs list and then just decrementing the count if virGetStorageVol fails.

Signed-off-by: John Ferlan <jferlan@redhat.com>
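A fragment-style sketch of how a createVol-type path might use these accessors. These are libvirt-internal APIs; the exact signatures are assumed from the virstorageobj.h of that era, and the do_backend_create() helper is hypothetical.

```c
static int
example_create_vol(virStoragePoolObjPtr pool,
                   virStorageVolDefPtr voldef)
{
    /* Put the definition on the pool's volume list ... */
    if (virStoragePoolObjAddVol(pool, voldef) < 0)
        return -1;

    if (do_backend_create(pool, voldef) < 0) {
        /* ... and take it back off if the backend fails. */
        virStoragePoolObjRemoveVol(pool, voldef);
        return -1;
    }

    VIR_DEBUG("pool now holds %zu volumes",
              virStoragePoolObjGetVolumesCount(pool));
    return 0;
}
```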
- 22 July 2017, 1 commit

Committed by John Ferlan
Use < 0 rather than == -1 (consistently) for virAsprintf errors.

Signed-off-by: John Ferlan <jferlan@redhat.com>
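For illustration, the preferred form looks like the fragment below. virAsprintf is libvirt's internal asprintf wrapper, which returns a negative value on failure; the surrounding variables (pool_dir, vol_name) are placeholders.

```c
char *path = NULL;

/* Preferred: treat any negative return as an error ... */
if (virAsprintf(&path, "%s/%s", pool_dir, vol_name) < 0)
    goto cleanup;

/* ... rather than the less consistent literal comparison:
 * if (virAsprintf(&path, "%s/%s", pool_dir, vol_name) == -1) */
```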
- 21 February 2017, 1 commit

Committed by Peter Krempa
Add APIs that allow driver backends to be registered dynamically, so that the list of available drivers does not need to be known at compile time. This will allow us to modularize the storage driver at runtime.
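A sketch of what the registration pattern looks like for the ZFS backend. This uses libvirt-internal APIs; the function and table names are assumed from the storage driver headers of that era rather than quoted from the commit.

```c
#include "storage_backend.h"
#include "storage_backend_zfs.h"

/* Called at runtime (e.g. when the storage driver or a backend module is
 * loaded) instead of the backend being listed in a compile-time array. */
int
virStorageBackendZFSRegister(void)
{
    return virStorageBackendRegister(&virStorageBackendZFS);
}
```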
- 19 January 2017, 1 commit

Committed by Peter Krempa
The file became a garbage dump for all kinds of utility functions over time. Move them to a separate file so that the files can become a clean interface for the storage backends.
- 25 November 2016, 1 commit

Committed by Michal Privoznik
We have a couple of functions that operate over NULL-terminated lists of strings. However, our naming sucks:

  virStringJoin
  virStringFreeList
  virStringFreeListCount
  virStringArrayHasString
  virStringGetFirstWithPrefix

We can do better:

  virStringListJoin
  virStringListFree
  virStringListFreeCount
  virStringListHasString
  virStringListGetFirstWithPrefix

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
- 17 April 2016, 1 commit

Committed by Richard Laager
By default, `zfs create -V ...` reserves space for the entire volsize, plus some extra (which attempts to account for overhead). If `zfs create -s -V ...` is used instead, zvols are (fully) sparse. A middle ground (partial allocation) can be achieved with `zfs create -s -o refreservation=... -V ...`. Both libvirt and ZFS support this approach, so the ZFS storage backend should support it.

Signed-off-by: Richard Laager <rlaager@wiktel.com>
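A fragment-style sketch of how a backend could assemble such a command with libvirt's virCommand helpers. The sparse/partial decision and the allocation_kib/capacity_kib/zpool_name/vol_name variables are placeholders for illustration, not the backend's actual logic.

```c
/* Target command line:
 *   zfs create [-s [-o refreservation=<alloc>K]] -V <capacity>K <pool>/<vol> */
virCommandPtr cmd = virCommandNew("zfs");

virCommandAddArg(cmd, "create");

if (sparse || partial)
    virCommandAddArg(cmd, "-s");          /* don't reserve the full volsize */

if (partial) {                            /* middle ground: reserve a part */
    virCommandAddArg(cmd, "-o");
    virCommandAddArgFormat(cmd, "refreservation=%lluK", allocation_kib);
}

virCommandAddArg(cmd, "-V");
virCommandAddArgFormat(cmd, "%lluK", capacity_kib);
virCommandAddArgFormat(cmd, "%s/%s", zpool_name, vol_name);

if (virCommandRun(cmd, NULL) < 0)
    goto cleanup;
```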
- 14 April 2016, 1 commit

Committed by Martin Kletzander
I tried compiling libvirt with an older gcc and, probably because I used different configure options, I got some shadowed declarations.

Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
- 27 March 2016, 1 commit

Committed by Roman Bogorodskiy
This reverts commit bb5f2dc9, which added the "if (vol->target.format != VIR_STORAGE_FILE_RAW)" check to the createVol backend. This check is bogus because virStorageVolDefParseXML() in conf/storage_conf.c sets target.format only if volOptions in virStoragePoolTypeInfo has formatFromString set, and that's not the case for the zfs backend. So the check always fails and breaks volume creation.
- 21 March 2016, 3 commits

Committed by Richard Laager
Committed by Richard Laager
Committed by Richard Laager
This improves the code consistency around freeing vol->target.path in createVol implementations.
- 26 February 2016, 2 commits

Committed by John Ferlan
This generates a false positive for Coverity, but it turns out there's no need to check ret == -1, since if VIR_APPEND_ELEMENT is successful, the local vol pointer is cleared anyway.

Signed-off-by: John Ferlan <jferlan@redhat.com>
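For context, VIR_APPEND_ELEMENT (libvirt's viralloc.h macro) steals the element on success: it appends it to the array, bumps the count, and zeroes the caller's local variable. A fragment-style sketch, with the pool structure fields taken from the code of that era and the build_new_vol_def() helper being hypothetical:

```c
virStorageVolDefPtr vol = build_new_vol_def();   /* hypothetical helper */

if (VIR_APPEND_ELEMENT(pool->volumes.objs, pool->volumes.count, vol) < 0)
    goto cleanup;          /* failure: vol is untouched, caller frees it */

/* Success: ownership moved to the list and vol == NULL here,
 * so re-checking it (as Coverity suggested) is redundant. */
```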
Committed by John Ferlan
Found by my Coverity checker - the virCheckFlags call could return -1 without virCommandFree(destroy_cmd) being called.

Signed-off-by: John Ferlan <jferlan@redhat.com>
- 04 February 2016, 1 commit

Committed by Roman Bogorodskiy
There are slight differences between the various ZFS implementations. Specifically, ZFS on FreeBSD requires the 'volmode' option to be set to 'dev' to expose volumes as raw disk devices (which is what we need) rather than geom providers, for example. With ZFS on Linux, however, such an option is not available, and volumes are exposed the way we need by default.

To make our implementation more flexible, only pass 'volmode' when it's supported. Support is checked by parsing the usage information of the 'zfs get' command.
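A sketch of that kind of capability probe using libvirt's virCommand helpers. This is illustrative rather than the backend's exact code; it assumes the 'volmode' property shows up in the usage text that `zfs get` prints when run without arguments, and that the text may land on either output stream.

```c
static bool
zfs_supports_volmode(void)
{
    virCommandPtr cmd = virCommandNewArgList("zfs", "get", NULL);
    char *out = NULL;
    char *err = NULL;
    int status;
    bool supported = false;

    virCommandSetOutputBuffer(cmd, &out);
    virCommandSetErrorBuffer(cmd, &err);

    /* "zfs get" with no arguments exits non-zero; pass &status so that
     * is not treated as an error, then just scan the captured usage text. */
    if (virCommandRun(cmd, &status) < 0)
        goto cleanup;

    supported = (out && strstr(out, "volmode")) ||
                (err && strstr(err, "volmode"));

 cleanup:
    VIR_FREE(out);
    VIR_FREE(err);
    virCommandFree(cmd);
    return supported;
}
```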
- 02 April 2015, 1 commit

Committed by Erik Skultety
In order to be able to use 'checkPool' inside functions which do not have any connection reference, the 'conn' attribute needs to be discarded from checkPool's signature, since it's not used by any storage backend anyway.
- 18 September 2014, 1 commit

Committed by Roman Bogorodskiy
- Provide an implementation of the buildPool and deletePool operations for the ZFS storage backend.
- Add the VIR_STORAGE_POOL_SOURCE_DEVICE flag to the ZFS pool poolOptions, as we can now specify devices to build the pool from.
- storagepool.rng: add an optional 'sourceinfodev' to 'sourcezfs' and an optional 'target' to the 'poolzfs' entity.
- Add a couple of tests to storagepoolxml2xmltest.
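For illustration, a ZFS pool definition that asks libvirt to build the pool from a device might look like the snippet below; the device path and names are placeholders, and the layout follows the storagepool.rng additions described above.

```xml
<pool type='zfs'>
  <name>myzfspool</name>
  <source>
    <name>actualpoolname</name>
    <device path='/dev/ada1'/>
  </source>
</pool>
```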
- 30 August 2014, 1 commit

Committed by Roman Bogorodskiy
Currently, after calling the commands to create a new volume, virStorageBackendZFSCreateVol calls virStorageBackendZFSFindVols, which calls virStorageBackendZFSParseVol.

virStorageBackendZFSParseVol checks if a volume already exists by trying to look it up with virStorageVolDefFindByName. For a just-created volume this returns NULL, so the volume is reported as new and appended to pool->volumes. This causes the volume to be listed twice, as storageVolCreateXML appends this new volume to the list as well.

Fix that by passing the new volume definition to virStorageBackendZFSParseVol so it can determine whether it needs to add this volume to the list.
- 25 August 2014, 1 commit

Committed by Roman Bogorodskiy
Add an implementation of uploadVol and downloadVol using virStorageBackendVolUploadLocal and virStorageBackendVolDownloadLocal respectively.
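In the backend's driver table this amounts to pointing the two callbacks at libvirt's generic local-file helpers. An abridged sketch follows; the other callback names are assumed for illustration and most fields are omitted.

```c
virStorageBackend virStorageBackendZFS = {
    .type = VIR_STORAGE_POOL_ZFS,

    .refreshPool = virStorageBackendZFSRefreshPool,
    .createVol = virStorageBackendZFSCreateVol,
    .deleteVol = virStorageBackendZFSDeleteVol,

    /* Generic helpers that stream data to/from the volume's local path. */
    .uploadVol = virStorageBackendVolUploadLocal,
    .downloadVol = virStorageBackendVolDownloadLocal,
};
```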
- 12 August 2014, 1 commit

Committed by Roman Bogorodskiy
Implement a ZFS storage backend driver. Currently it is supported only on FreeBSD because of ZFS limitations on Linux.

Features supported:

- pool-start, pool-stop
- pool-info
- vol-list
- vol-create / vol-delete

A pool definition looks like this:

  <pool type='zfs'>
    <name>myzfspool</name>
    <source>
      <name>actualpoolname</name>
    </source>
  </pool>

The 'actualpoolname' value is the name of the pool on the system, as shown by the 'zpool list' command. A target makes no sense here because the volume path is always /dev/zvol/$poolname/$volname.

The user has to create the pool on their own; this driver doesn't currently support pool creation.

A volume can be used with QEMU by adding an entry like this:

  <disk type='volume' device='disk'>
    <driver name='qemu' type='raw'/>
    <source pool='myzfspool' volume='vol5'/>
    <target dev='hdc' bus='ide'/>
  </disk>