- 13 February 2016, 3 commits

By Wido den Hollander
Since Ceph version Infernalis (9.2.0) the new fast-diff mechanism of RBD allows for querying actual volume usage. Prior to this version there was no easy and fast way to query how much allocation an RBD volume had inside a Ceph cluster. The fast-diff feature has to be enabled per RBD image and is only supported by Ceph clusters running version Infernalis (9.2.0) or newer. Without the fast-diff feature enabled libvirt will report an allocation identical to the image capacity, which is how libvirt behaved until now. 'virsh vol-info rbd/image2' might, for example, output:

    Name:         image2
    Type:         network
    Capacity:     1,00 GiB
    Allocation:   124,00 MiB

Newly created volumes will have the fast-diff feature enabled if the backing Ceph cluster supports it. Signed-off-by: Wido den Hollander <wido@widodh.nl>
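
Below is a minimal sketch of how such an allocation query can be done with the librbd C API by summing the extents that actually exist; the helper name and the fallback behaviour are illustrative, not the exact libvirt code.

    #include <rbd/librbd.h>
    #include <stdint.h>
    #include <stdlib.h>

    /* Callback invoked per extent; add up the ones that are allocated. */
    static int
    count_extents(uint64_t offset, size_t len, int exists, void *arg)
    {
        uint64_t *used = arg;
        if (exists)
            *used += len;
        return 0;
    }

    /* 'image' is an already opened rbd_image_t, 'capacity' its size in
     * bytes. With fast-diff enabled this walk is a cheap object-map scan;
     * without it, librbd has to inspect the objects themselves. */
    static uint64_t
    volume_allocation(rbd_image_t image, uint64_t capacity)
    {
        uint64_t used = 0;

        if (rbd_diff_iterate2(image, NULL, 0, capacity, 0, 1,
                              count_extents, &used) < 0)
            return capacity;  /* fall back to reporting full allocation */
        return used;
    }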

By Wido den Hollander
In commit 0b15f920 there is an #ifdef which requires LIBRBD_VERSION_CODE 266 or newer for rbd_diff_iterate2(). rbd_diff_iterate2() is available since 266, so the check should accept anything newer than 265. Signed-off-by: Wido den Hollander <wido@widodh.nl>
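
A sketch of such a version guard, assuming a caller supplies the diff callback and image handle; LIBRBD_VERSION_CODE is the macro exported by librbd.h:

    #include <rbd/librbd.h>
    #include <stddef.h>
    #include <stdint.h>

    /* rbd_diff_iterate2() first appeared with LIBRBD_VERSION_CODE 266,
     * so the guard has to accept anything newer than 265. */
    static int
    diff_whole_image(rbd_image_t image, uint64_t size,
                     int (*cb)(uint64_t, size_t, int, void *), void *arg)
    {
    #if LIBRBD_VERSION_CODE > 265
        return rbd_diff_iterate2(image, NULL, 0, size, 0, 1, cb, arg);
    #else
        return rbd_diff_iterate(image, NULL, 0, size, cb, arg);
    #endif
    }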

By Wido den Hollander
As more and more features are added to RBD volumes we will need to call this method more often. By moving it into an internal function we can re-use the code inside the storage backend. Signed-off-by: Wido den Hollander <wido@widodh.nl>

- 06 February 2016, 2 commits

By Wido den Hollander
This was only used in debugging messages and not in any real code. Ceph/RBD uses uint64_t for sizes internally and they can be printed with %zu without any need for casting. Signed-off-by: Wido den Hollander <wido@widodh.nl>

By Wido den Hollander
Through the years the RBD storage pool code hasn't maintained the same or correct coding standard that applies to libvirt. This patch doesn't change any logic in the code; it only applies the proper coding standards where possible without making large changes. This way the code style used in this storage pool is consistent throughout the whole file. Signed-off-by: Wido den Hollander <wido@widodh.nl>

- 30 January 2016, 4 commits

By Wido den Hollander
By opening an RBD volume read-only we do not register a watcher on the header object inside the Ceph cluster. Refreshing a volume only calls rbd_stat(), which is an operation that does not write to an RBD image. This allows us to use a cephx user which has no write permissions if we want to use the libvirt storage pool for informational purposes only. It also saves us a write into the Ceph cluster, which should speed up refreshing an RBD pool. rbd_open_read_only() is available in all librbd versions which also support rbd_open(). Signed-off-by: Wido den Hollander <wido@widodh.nl>
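
A sketch of refreshing a single volume this way, assuming an already connected rados_ioctx_t; the helper name is a placeholder:

    #include <rbd/librbd.h>
    #include <stdio.h>

    /* Read volume information without registering a watcher on the
     * image header, so a read-only cephx user is sufficient. */
    static int
    refresh_volume(rados_ioctx_t ioctx, const char *name)
    {
        rbd_image_t image = NULL;
        rbd_image_info_t info;
        int r;

        if ((r = rbd_open_read_only(ioctx, name, &image, NULL)) < 0)
            return r;

        if ((r = rbd_stat(image, &info, sizeof(info))) == 0)
            printf("capacity of '%s': %llu bytes\n",
                   name, (unsigned long long)info.size);

        rbd_close(image);
        return r;
    }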

By Wido den Hollander
RBD supports cloning by creating a snapshot, protecting it, and then creating a child image based on that snapshot. The RBD storage driver will try to find a snapshot with zero deltas between the current state of the original volume and the snapshot. If such a snapshot is found, a clone/child image will be created using the rbd_clone2() function from librbd. rbd_clone2() is available in librbd since Ceph version Dumpling (0.67), which dates back to August 2013. It will use the same features, stripe size and stripe count as the parent image. This implementation will only create a single snapshot on the parent image if it never changes. This reduces the amount of snapshots created for that RBD image, which benefits the performance of the Ceph cluster. During build the decision will be made to use either rbd_diff_iterate() or rbd_diff_iterate2(). The latter is faster, but only available on Ceph versions after 0.94 (Hammer). Cloning is only supported if RBD format 2 is used. All images created by libvirt are already format 2. If an RBD format 1 image is used as the original volume the backend will report a VIR_ERR_OPERATION_UNSUPPORTED error. Signed-off-by: Wido den Hollander <wido@widodh.nl>
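
The snapshot/protect/clone sequence looks roughly like the sketch below; the snapshot name is a placeholder and error unwinding is abbreviated, so this is not the exact driver code:

    #include <rbd/librbd.h>
    #include <stdint.h>

    /* Create a copy-on-write child of 'parent' in the same pool;
     * requires the parent to be an RBD format 2 image. */
    static int
    clone_volume(rados_ioctx_t ioctx, const char *parent, const char *child)
    {
        rbd_image_t image = NULL;
        uint64_t features = 0;
        int order = 0;
        int r;

        if ((r = rbd_open(ioctx, parent, &image, NULL)) < 0)
            return r;

        if ((r = rbd_snap_create(image, "libvirt-clone-base")) < 0)
            goto out;
        if ((r = rbd_snap_protect(image, "libvirt-clone-base")) < 0)
            goto out;
        if ((r = rbd_get_features(image, &features)) < 0)
            goto out;

        /* Child inherits the parent's feature set; order 0 and stripe
         * values of 0 let librbd pick its defaults. */
        r = rbd_clone2(ioctx, parent, "libvirt-clone-base",
                       ioctx, child, features, &order, 0, 0);
     out:
        rbd_close(image);
        return r;
    }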

By Wido den Hollander
Using VIR_STORAGE_VOL_WIPE_ALG_TRIM an RBD volume can be trimmed down to 0 bytes using rbd_discard(). Effectively all the data on the volume will be lost/gone, but the volume remains available for use afterwards. Starting at offset 0 the storage pool will call rbd_discard() in increments of stripe size * stripe count, which is usually 4 MB (a stripe size of 4 MB and a count of 1). rbd_discard() is available since Ceph version Dumpling (0.67), which dates back to August 2013. Signed-off-by: Wido den Hollander <wido@widodh.nl>
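
A sketch of such a discard loop; the fixed 4 MiB chunk mirrors the usual stripe size * stripe count mentioned above and is an assumption, not the exact libvirt increment logic:

    #include <rbd/librbd.h>
    #include <stdint.h>

    /* Discard an entire image in fixed-size chunks. The data is gone
     * afterwards, but the image itself stays usable. */
    static int
    trim_volume(rbd_image_t image, uint64_t capacity)
    {
        const uint64_t chunk = 4 * 1024 * 1024;
        uint64_t offset = 0;

        while (offset < capacity) {
            uint64_t len = capacity - offset < chunk ? capacity - offset : chunk;
            int r = rbd_discard(image, offset, len);
            if (r < 0)
                return r;
            offset += len;
        }
        return 0;
    }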

By Wido den Hollander
This new algorithm adds support for wiping volumes using TRIM. It does not overwrite all the data in a volume; instead it tells the backing storage pool/driver that all bytes in the volume can be discarded. How this is handled depends on the backing storage pool. A SCSI backend might send UNMAP commands to remove all data present on a LUN. A Ceph backend might use rbd_discard() to instruct the Ceph cluster that all data on that RBD volume can be discarded. Signed-off-by: Wido den Hollander <wido@widodh.nl>

- 29 January 2016, 1 commit

By Wido den Hollander
When wiping, the RBD image will be filled with zeros starting at offset 0 and continuing until the end of the volume. This will result in the RBD volume growing to its full allocation on the Ceph cluster. All data on the volume will be overwritten, however, making it unavailable. It does NOT take any RBD snapshots into account: the original data might still be present in a snapshot of that RBD volume. Signed-off-by: Wido den Hollander <wido@widodh.nl>
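
A zero-fill sketch using rbd_write(); the 4 MiB buffer size is an arbitrary choice for illustration, not necessarily what the backend uses:

    #include <rbd/librbd.h>
    #include <stdint.h>
    #include <sys/types.h>

    /* Overwrite a whole image with zeros. Note that this makes the image
     * grow to its full allocation on the Ceph cluster. */
    static int
    zero_volume(rbd_image_t image, uint64_t capacity)
    {
        static char zeros[4 * 1024 * 1024];   /* zero-initialized */
        uint64_t offset = 0;

        while (offset < capacity) {
            size_t len = capacity - offset < sizeof(zeros)
                         ? capacity - offset : sizeof(zeros);
            ssize_t r = rbd_write(image, offset, len, zeros);
            if (r < 0)
                return (int)r;
            offset += (uint64_t)r;
        }
        return 0;
    }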

- 18 January 2016, 1 commit

By Wido den Hollander
This was reported in bug #1298024, where 'r' would be filled with the return code of rbd_open(). Should rbd_snap_unprotect() fail for any reason, the virReportSystemError call would report 'Success' since rbd_open() had succeeded. https://bugzilla.redhat.com/show_bug.cgi?id=1298024 Signed-off-by: Wido den Hollander <wido@widodh.nl>
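
A simplified sketch of the corrected pattern, with each librbd call storing its own result so the reported error matches the call that actually failed; names are placeholders and plain fprintf stands in for libvirt's error reporting:

    #include <rbd/librbd.h>
    #include <stdio.h>
    #include <string.h>

    static int
    unprotect_snapshot(rados_ioctx_t ioctx, const char *name, const char *snap)
    {
        rbd_image_t image = NULL;
        int r;

        if ((r = rbd_open(ioctx, name, &image, NULL)) < 0)
            return r;

        /* Do not reuse the value returned by rbd_open() here. */
        if ((r = rbd_snap_unprotect(image, snap)) < 0)
            fprintf(stderr, "failed to unprotect snapshot '%s': %s\n",
                    snap, strerror(-r));

        rbd_close(image);
        return r;
    }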

- 06 January 2016, 3 commits

By Wido den Hollander
If no port number was provided for a storage pool, libvirt defaults to port 6789; however, librbd/librados already defaults to 6789 when no port number is provided. In the future Ceph will switch to a new port for the Ceph monitors, since port 6789 is already assigned to a different application by IANA. Port 6789 is assigned to SMC-HTTPS and Ceph now has port 3300 assigned as the 'Ceph monitor' port. In this case the best solution is to not hardcode any port number into libvirt and let librados handle the connection. Only if a user specifies a port number do we pass it down to librados; otherwise we leave it blank. Signed-off-by: Wido den Hollander <wido@widodh.nl>
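
A sketch of passing only what the user supplied to librados; treating a port value of 0 as "not specified" is an assumption made for this example:

    #include <rados/librados.h>
    #include <stdio.h>

    /* Build the mon_host option; when no port is given, librados and the
     * monitors decide which port to use. */
    static int
    set_mon_host(rados_t cluster, const char *host, int port)
    {
        char mon_host[256];

        if (port > 0)
            snprintf(mon_host, sizeof(mon_host), "%s:%d", host, port);
        else
            snprintf(mon_host, sizeof(mon_host), "%s", host);

        return rados_conf_set(cluster, "mon_host", mon_host);
    }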

By Wido den Hollander
It could happen that rbd_list() returns X names, but that while refreshing the pool one of those RBD images is removed from Ceph through a different route than libvirt. We do not need to error out in such a case; we can simply ignore the volume and continue.

    error : volStorageBackendRBDRefreshVolInfo:289 : failed to open the RBD image 'vol-998': No such file or directory

It could also be that one or more Placement Groups (PGs) inside Ceph are inactive due to a system failure. If that happens it could be that some RBD images can not be refreshed and a timeout will be raised by librados.

    error : volStorageBackendRBDRefreshVolInfo:289 : failed to open the RBD image 'vol-893': Connection timed out

Ignore the error and continue to refresh the rest of the pool's contents. Signed-off-by: Wido den Hollander <wido@widodh.nl>
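
The skip-and-continue behaviour described above could look roughly like this; rbd_list() fills a buffer of NUL-separated names, and the helper name is a placeholder:

    #include <rbd/librbd.h>
    #include <stdio.h>
    #include <string.h>

    /* Refresh each image in the pool but tolerate volumes that vanished
     * (-ENOENT) or time out (-ETIMEDOUT) instead of failing the refresh. */
    static void
    refresh_pool(rados_ioctx_t ioctx, const char *names, size_t names_len)
    {
        const char *name;

        for (name = names; name < names + names_len && *name != '\0';
             name += strlen(name) + 1) {
            rbd_image_t image = NULL;

            if (rbd_open_read_only(ioctx, name, &image, NULL) < 0) {
                fprintf(stderr, "ignoring RBD image '%s'\n", name);
                continue;
            }
            /* ... rbd_stat() and record the volume here ... */
            rbd_close(image);
        }
    }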

By Wido den Hollander
It could be that we error out while the RBD image has not been opened yet. This would cause us to call rbd_close() on a pointer which has not been initialized. Set it to NULL by default and only close the image if it is not NULL. Signed-off-by: Wido den Hollander <wido@widodh.nl>
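
A sketch of that cleanup pattern; the helper name is illustrative:

    #include <rbd/librbd.h>

    static int
    volume_info(rados_ioctx_t ioctx, const char *name)
    {
        rbd_image_t image = NULL;   /* never close an uninitialized handle */
        rbd_image_info_t info;
        int r;

        if ((r = rbd_open_read_only(ioctx, name, &image, NULL)) < 0)
            goto cleanup;           /* image is still NULL here */

        r = rbd_stat(image, &info, sizeof(info));

     cleanup:
        if (image != NULL)
            rbd_close(image);
        return r;
    }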

- 04 January 2016, 1 commit

By Wido den Hollander
This used to return 'unknown' and that was not correct. A vol-dumpxml now returns:

    <volume type='network'>
      <name>image3</name>
      <key>libvirt/image3</key>
      <source>
      </source>
      <capacity unit='bytes'>10737418240</capacity>
      <allocation unit='bytes'>10737418240</allocation>
      <target>
        <path>libvirt/image3</path>
        <format type='raw'/>
      </target>
    </volume>

The RBD driver will now error out if a format other than raw is provided when creating a volume. Signed-off-by: Wido den Hollander <wido@widodh.nl>

- 18 December 2015, 1 commit

By John Ferlan
The initial commit '74951ead' did not include the proper check for whether any flags are supported by the driver. Even though the driver doesn't support VIR_STORAGE_VOL_DELETE_ZEROED, it still checks the flag and allows processing to continue. Also add the new VIR_STORAGE_VOL_DELETE_WITH_SNAPSHOTS flag since it is handled as of commit id '3c7590e0'.

- 28 October 2015, 1 commit

By Wido den Hollander
When an RBD volume has snapshots it cannot be removed. This patch introduces a new flag to force volume removal, VIR_STORAGE_VOL_DELETE_WITH_SNAPSHOTS. With this flag any existing snapshots will be removed prior to removing the volume. No existing mechanism in libvirt allowed us to pass such information, which is why a new flag was introduced. Signed-off-by: Wido den Hollander <wido@widodh.nl>
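
Roughly, the forced removal purges all snapshots before deleting the image; a sketch with abbreviated error handling and a fixed-size snapshot array (a real implementation would retry with a larger array on -ERANGE):

    #include <rbd/librbd.h>

    static int
    delete_with_snapshots(rados_ioctx_t ioctx, const char *name)
    {
        rbd_image_t image = NULL;
        rbd_snap_info_t snaps[16];
        int max = 16;
        int r, i;

        if ((r = rbd_open(ioctx, name, &image, NULL)) < 0)
            return r;

        if ((r = rbd_snap_list(image, snaps, &max)) >= 0) {
            for (i = 0; i < r; i++) {
                int is_protected = 0;

                /* Protected snapshots must be unprotected first. */
                if (rbd_snap_is_protected(image, snaps[i].name,
                                          &is_protected) == 0 && is_protected)
                    rbd_snap_unprotect(image, snaps[i].name);
                rbd_snap_remove(image, snaps[i].name);
            }
            rbd_snap_list_end(snaps);
        }

        rbd_close(image);
        return rbd_remove(ioctx, name);
    }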

- 14 October 2015, 1 commit

By John Ferlan
As of commit id '155ca616' a 'refreshVol' is called after the buildVol succeeds in storageVolCreateXML, thus the volStorageBackendRBDRefreshVolInfo call in virStorageBackendRBDBuildVol is no longer necessary. Signed-off-by: John Ferlan <jferlan@redhat.com>

- 17 July 2015, 2 commits

By John Ferlan
Resolving an error reporting bug introduced by commit id '761491eb', which just took the return of virStorageBackendRBDCreateImage and used it as the basis for the message generated. This would generate EPERM regardless of the error actually seen.

By Wido den Hollander
We used to look at the librbd code version and, depending on that, invoke rbd_create3() or rbd_create(). Since librbd version 0.67.9 we can however tell RBD that it should create RBD format 2 images even if we invoke rbd_create(). The fewer options we pass to librbd, the more we can lean on the sane defaults it uses. For rbd_create3() we had things like the stripe count and unit hardcoded in libvirt, and that might cause problems down the road. Signed-off-by: Wido den Hollander <wido@widodh.nl>
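
A sketch of requesting format 2 through configuration rather than hardcoding striping parameters; the helper name and the order value of 0 (librbd's default object size) are illustrative:

    #include <rados/librados.h>
    #include <rbd/librbd.h>
    #include <stdint.h>

    static int
    create_volume(rados_t cluster, rados_ioctx_t ioctx,
                  const char *name, uint64_t capacity)
    {
        int order = 0;   /* 0 lets librbd pick its default object size */
        int r;

        /* Since librbd 0.67.9 this makes plain rbd_create() produce
         * format 2 images. */
        if ((r = rados_conf_set(cluster, "rbd_default_format", "2")) < 0)
            return r;

        return rbd_create(ioctx, name, capacity, &order);
    }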

- 02 June 2015, 1 commit

By Erik Skultety
The RBD API returns the negative value of errno; in that case we can silently ignore an attempt to delete a non-existent volume, just like the FS backend does.
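
In practice that means treating -ENOENT from rbd_remove() as success; a minimal sketch:

    #include <rbd/librbd.h>
    #include <errno.h>

    /* Deleting a volume that is already gone is not an error. */
    static int
    delete_volume(rados_ioctx_t ioctx, const char *name)
    {
        int r = rbd_remove(ioctx, name);

        if (r == -ENOENT)   /* librbd returns negative errno values */
            return 0;
        return r;
    }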

- 02 March 2015, 1 commit

By Ján Tomko
The tool creating the image can get the capacity from the backing storage. Just refresh the volume afterwards. https://bugzilla.redhat.com/show_bug.cgi?id=958510

- 03 December 2014, 1 commit

By John Ferlan
Since virSecretFree will call virObjectUnref anyway, let's just use that directly so as to avoid the possibility that we inadvertently clear out a pending error message when using the public API.

- 15 November 2014, 1 commit

By Martin Kletzander
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>

- 04 July 2014, 1 commit

By John Ferlan
Replace the authType, chap, and cephx unions in virStoragePoolSource with a single pointer to a virStorageAuthDefPtr. Adjust all users of the previous chap/cephx and secret unions to use the source->auth data.

- 03 July 2014, 2 commits

By Ján Tomko
Replace:

    if (virBufferError(&buf)) {
        virBufferFreeAndReset(&buf);
        virReportOOMError();
        ...
    }

with:

    if (virBufferCheckError(&buf) < 0)
        ...

This should not be a functional change (unless some callers misused the virBuffer APIs - a different error would be reported then).

By Ján Tomko
Reindent nwfilter gentech driver and one block in rbd storage backend.

- 29 April 2014, 1 commit

By Steven McDonald
The stripe_unit and stripe_count arguments are passed to rbd_create3 in the wrong order, resulting in a stripe size of 1 byte with 4194304 stripes on newly created RBD volumes. https://bugzilla.redhat.com/show_bug.cgi?id=1092208 Signed-off-by: Steven McDonald <steven.mcdonald@anchor.net.au>
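
For reference, a sketch of rbd_create3() with the arguments in the correct order; the feature bits and the 4 MiB stripe unit are illustrative values:

    #include <rbd/librbd.h>
    #include <stdint.h>

    static int
    create_striped_volume(rados_ioctx_t ioctx, const char *name, uint64_t size)
    {
        uint64_t features = RBD_FEATURE_LAYERING;
        uint64_t stripe_unit = 4194304;    /* 4 MiB */
        uint64_t stripe_count = 1;
        int order = 0;

        /* stripe_unit comes before stripe_count; swapping them produces
         * 1-byte stripes. */
        return rbd_create3(ioctx, name, size, features, &order,
                           stripe_unit, stripe_count);
    }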

- 02 April 2014, 1 commit

By Eric Blake
One of the features of qcow2 is that a wrapper file can have more capacity than its backing file from the guest's perspective; what's more, sparse files make tracking allocation of both the active and backing file worthwhile. As such, it makes more sense to show allocation numbers for each file in a chain, and not just the top-level file. This sets up the fields for the tracking, although it does not modify XML to display any new information.

* src/util/virstoragefile.h (_virStorageSource): Add fields.
* src/conf/storage_conf.h (_virStorageVolDef): Drop redundant fields.
* src/storage/storage_backend.c (virStorageBackendCreateBlockFrom) (createRawFile, virStorageBackendCreateQemuImgCmd) (virStorageBackendCreateQcowCreate): Update clients.
* src/storage/storage_driver.c (storageVolDelete) (storageVolCreateXML, storageVolCreateXMLFrom, storageVolResize) (storageVolWipeInternal, storageVolGetInfo): Likewise.
* src/storage/storage_backend_fs.c (virStorageBackendProbeTarget) (virStorageBackendFileSystemRefresh) (virStorageBackendFileSystemVolResize) (virStorageBackendFileSystemVolRefresh): Likewise.
* src/storage/storage_backend_logical.c (virStorageBackendLogicalMakeVol) (virStorageBackendLogicalCreateVol): Likewise.
* src/storage/storage_backend_scsi.c (virStorageBackendSCSINewLun): Likewise.
* src/storage/storage_backend_mpath.c (virStorageBackendMpathNewVol): Likewise.
* src/storage/storage_backend_rbd.c (volStorageBackendRBDRefreshVolInfo) (virStorageBackendRBDCreateImage): Likewise.
* src/storage/storage_backend_disk.c (virStorageBackendDiskMakeDataVol) (virStorageBackendDiskCreateVol): Likewise.
* src/storage/storage_backend_sheepdog.c (virStorageBackendSheepdogBuildVol) (virStorageBackendSheepdogParseVdiList): Likewise.
* src/storage/storage_backend_gluster.c (virStorageBackendGlusterRefreshVol): Likewise.
* src/conf/storage_conf.c (virStorageVolDefFormat) (virStorageVolDefParseXML): Likewise.
* src/test/test_driver.c (testOpenVolumesForPool) (testStorageVolCreateXML, testStorageVolCreateXMLFrom) (testStorageVolDelete, testStorageVolGetInfo): Likewise.
* src/esx/esx_storage_backend_iscsi.c (esxStorageVolGetXMLDesc): Likewise.
* src/esx/esx_storage_backend_vmfs.c (esxStorageVolGetXMLDesc) (esxStorageVolCreateXML): Likewise.
* src/parallels/parallels_driver.c (parallelsAddHddByVolume): Likewise.
* src/parallels/parallels_storage.c (parallelsDiskDescParseNode) (parallelsStorageVolDefineXML, parallelsStorageVolCreateXMLFrom) (parallelsStorageVolDefRemove, parallelsStorageVolGetInfo): Likewise.
* src/vbox/vbox_tmpl.c (vboxStorageVolCreateXML) (vboxStorageVolGetXMLDesc): Likewise.
* tests/storagebackendsheepdogtest.c (test_vdi_list_parser): Likewise.
* src/phyp/phyp_driver.c (phypStorageVolCreateXML): Likewise.

- 25 March 2014, 1 commit

By Ján Tomko

- 18 March 2014, 1 commit

By Daniel P. Berrange
Any source file which calls the logging APIs now needs to have a VIR_LOG_INIT("source.name") declaration at the start of the file. This provides a static variable of the virLogSource type. Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
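
For the RBD storage backend such a declaration would look roughly like this; the exact source name string is an assumption based on the file's path:

    #include "virlog.h"

    /* Declares the static virLogSource used by the VIR_DEBUG/VIR_WARN
     * macros in this file. */
    VIR_LOG_INIT("storage.storage_backend_rbd");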

- 12 March 2014, 1 commit

By John Ferlan
From commit id 'd53bbfd1'. Found one core and one possible memory leak. The core was seen during a local virt-test/tp_libvirt run of the vol_create_from test; the memory leak was found by inspection during a review of all VIR_APPEND_ELEMENT changes. In storage_backend_disk/virStorageBackendDiskMakeDataVol(), the 'vol' needs to be kept around since it's used later, so use the _COPY macro. Not keeping it caused a segv in libvirtd:

    Program received signal SIGSEGV, Segmentation fault.
    [Switching to Thread 0x7fffe87c3700 (LWP 6919)]
    virStorageBackendDiskMakeDataVol (vol=0x0, groups=0x7fffc8000d70, pool=0x7fffc8002460)
        at storage/storage_backend_disk.c:66
    66          if (vol->target.path == NULL) {

In storage_backend_rbd/virStorageBackendRBDRefreshPool() there is a failure path where the 'vol' needs to go through virStorageVolDefFree() since it would not have been appended.

- 10 March 2014, 1 commit

By Michal Privoznik
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>

- 25 February 2014, 2 commits

By Wido den Hollander
These timeout values make librados/librbd return -ETIMEDOUT when an operation blocks due to a failing/unreachable Ceph cluster. By having the operations time out, libvirt will not block.
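
A sketch of setting such client-side timeouts with rados_conf_set(); the specific option names and the 30-second value shown here are assumptions for illustration:

    #include <rados/librados.h>

    static int
    set_cluster_timeouts(rados_t cluster)
    {
        int r;

        /* Make stuck monitor/OSD operations fail with -ETIMEDOUT instead
         * of hanging forever. */
        if ((r = rados_conf_set(cluster, "client_mount_timeout", "30")) < 0)
            return r;
        if ((r = rados_conf_set(cluster, "rados_mon_op_timeout", "30")) < 0)
            return r;
        return rados_conf_set(cluster, "rados_osd_op_timeout", "30");
    }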

By Wido den Hollander
With this information it's easier for the user to debug what is going wrong.

- 11 February 2014, 1 commit

By Wido den Hollander
This new RBD format supports snapshotting and cloning. By having libvirt create images in format 2, end-users of the created images can benefit from the new RBD format. Older versions of libvirt can work with this new RBD format as long as librbd supports format 2. RBD format 2 is supported by librbd since version 0.56 (Ceph Bobtail). Signed-off-by: Wido den Hollander <wido@widodh.nl>

- 16 January 2014, 1 commit

By Peter Krempa
Separate the steps to create libvirt's volume metadata from the actual volume building process.

- 11 December 2013, 1 commit

By Michael Chapman
This variable shadows the stat(2) function, which only became visible in this scope as of commit 9cac8639. Rename the variable so it doesn't conflict. Signed-off-by: Michael Chapman <mike@very.puzzling.org>

- 21 November 2013, 1 commit

By Eric Blake
Most of our code base uses space after comma but not before; fix the remaining uses before adding a syntax check.

* src/network/bridge_driver.c: Consistently use commas.
* src/node_device/node_device_hal.c: Likewise.
* src/node_device/node_device_udev.c: Likewise.
* src/storage/storage_backend_rbd.c: Likewise.

Signed-off-by: Eric Blake <eblake@redhat.com>

- 31 October 2013, 1 commit

By Eric Blake
Using size_t counts will let us use VIR_APPEND_ELEMENT and friends.

* src/conf/storage_conf.h (_virStoragePoolObjList) (_virStorageVolDefList): Track list sizes with size_t.
* src/storage/storage_backend_rbd.c (virStorageBackendRBDRefreshPool): Fix type fallout.

Signed-off-by: Eric Blake <eblake@redhat.com>