1. 18 September 2014, 1 commit
  • docs: update zfs documentation · 05d1dd6b
    Committed by Roman Bogorodskiy
 - docs/formatstorage.html.in: document the 'zfs' pool type, add it
   to the list of pool types that can use source physical devices
 - docs/storage.html.in: update the ZFS pool example XML with
   source physical devices (see the sketch below); mention that
   starting from 1.2.9 a pool can be created from these devices by
   libvirt, while in earlier versions the user still has to create
   the pool manually
       - docs/drvbhyve.html.in: add an example with ZFS pools
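
For illustration, a ZFS pool definition carrying source physical devices along those lines might look roughly like this (a sketch reusing the pool names from the ZFS support commit below; the device paths are made up):

    <pool type='zfs'>
      <name>myzfspool</name>
      <source>
        <name>actualpoolname</name>
        <device path='/dev/ada1'/>
        <device path='/dev/ada2'/>
      </source>
    </pool>
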
2. 12 August 2014, 1 commit
  • storage: ZFS support · 0257d06b
    Committed by Roman Bogorodskiy
      Implement ZFS storage backend driver. Currently supported
      only on FreeBSD because of ZFS limitations on Linux.
      
      Features supported:
      
       - pool-start, pool-stop
       - pool-info
       - vol-list
       - vol-create / vol-delete
      
A pool definition looks like this:
      
       <pool type='zfs'>
        <name>myzfspool</name>
        <source>
          <name>actualpoolname</name>
        </source>
       </pool>
      
The 'actualpoolname' value is the name of the pool on the system,
as shown by the 'zpool list' command. A target element makes no sense
here because the volume path is always /dev/zvol/$poolname/$volname.
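
A volume in such a pool might then be represented roughly like this (an illustrative sketch; the volume name matches the disk example further down, and the capacity is made up):

    <volume>
      <name>vol5</name>
      <capacity unit='G'>5</capacity>
      <target>
        <path>/dev/zvol/actualpoolname/vol5</path>
      </target>
    </volume>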
      
The user has to create the pool on their own; this driver doesn't
currently support pool creation.
      
A volume can be used with Qemu by adding an entry like this:
      
          <disk type='volume' device='disk'>
            <driver name='qemu' type='raw'/>
            <source pool='myzfspool' volume='vol5'/>
            <target dev='hdc' bus='ide'/>
          </disk>
3. 01 April 2014, 1 commit
  • gluster: Fix "key" attribute for gluster volumes · 0f6c50b9
    Committed by Peter Krempa
      According to our documentation the "key" value has the following
      meaning: "Providing an identifier for the volume which identifies a
      single volume." The currently used keys for gluster volumes consist of
      the gluster volume name and file path. This can't be considered unique
      as a different storage server can serve a volume with the same name.
      
      Unfortunately I wasn't able to figure out a way to retrieve the gluster
      volume UUID which would avoid the possibility of having two distinct
      keys identifying a single volume.
      
      Use the full URI as the key for the volume to avoid the more critical
      ambiguity problem and document the possible change to UUID.
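
For illustration, a volume key built as a full URI might then show up in vol-dumpxml output along these lines (a made-up example; the exact formatting is whatever the patch produces):

    <volume>
      <name>image.qcow2</name>
      <key>gluster://storage.example.org/volname/image.qcow2</key>
      ...
    </volume>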
4. 02 December 2013, 1 commit
5. 26 November 2013, 1 commit
  • storage: document gluster pool · ed5fa7f3
    Committed by Eric Blake
      Add support for a new <pool type='gluster'>, similar to
RBD and Sheepdog.  Terminology-wise, a gluster volume
forms a libvirt storage pool; within the gluster volume,
individual files are treated as libvirt storage volumes.
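
For illustration, such a pool definition might look roughly like this (a sketch based on the description above; the host and gluster volume names are made up):

    <pool type='gluster'>
      <name>myglusterpool</name>
      <source>
        <host name='storage.example.org'/>
        <dir path='/'/>
        <name>volname</name>
      </source>
    </pool>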
      
      * docs/schemas/storagepool.rng (poolgluster): New pool type.
      * docs/formatstorage.html.in: Document gluster.
      * docs/storage.html.in: Likewise, and contrast it with netfs.
      * tests/storagepoolxml2xmlin/pool-gluster.xml: New test.
      * tests/storagepoolxml2xmlout/pool-gluster.xml: Likewise.
      * tests/storagepoolxml2xmltest.c (mymain): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
6. 14 November 2013, 1 commit
  • storage: fix RNG validation of gluster via netfs · 887dd362
    Committed by Eric Blake
      While trying to compare netfs against my new gluster pool, I
      discovered two things:
      
      virt-xml-validate chokes on valid xml produced by 'virsh pool-dumpxml'
      [yet another reason that ALL patches that add new xml should be adding
      corresponding tests]
      
      When using glusterfs FUSE mounts, you cannot access a subdirectory
      of a gluster volume.  The recommended workaround in the gluster
      community is to mount the volume to an intermediate location, then
      bind-mount the desired subdirectory to the final location.  Maybe
      we should teach libvirt to do bind-mounting, but for now I chose to
      just document the limitation.
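
For comparison, a netfs pool using the glusterfs format, which this patch lets the schema validate, might look roughly like this (a sketch; the host, volume, and target path are made up, and the source dir points at the volume root per the limitation above):

    <pool type='netfs'>
      <name>netfs-gluster</name>
      <source>
        <host name='storage.example.org'/>
        <dir path='/volname'/>
        <format type='glusterfs'/>
      </source>
      <target>
        <path>/mnt/gluster</path>
      </target>
    </pool>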
      
      * docs/storage.html.in: Improve documentation.
      * docs/schemas/storagepool.rng (sourcefmtnetfs): Allow all
      formats, and drop redundant info-vendor.
      * tests/storagepoolxml2xmltest.c (mymain): New test.
      * tests/storagepoolxml2xmlin/pool-netfs-gluster.xml: New file.
      * tests/storagepoolxml2xmlout/pool-netfs-gluster.xml: Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
7. 03 May 2013, 1 commit
  • Fix multiple formatting problems in HTML docs · f2f9742d
    Committed by Daniel P. Berrange
The rule generating the HTML docs passes the --html flag
to xsltproc. This makes it use the legacy HTML parser, which
either ignores or tries to fix all sorts of broken XML tags.
      There's no reason why we should be writing broken XML in
      the first place, so removing --html and adding the XHTML
      doctype to all files forces us to create good XML.
      
      This adds the XHTML doc type and fixes many, many XML tag
      problems it exposes.
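
For reference, an XHTML doctype of the kind added here conventionally looks like this (a standard W3C declaration; the exact DTD variant used is whatever the patch picked):

    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
      "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
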
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
8. 20 March 2013, 1 commit
9. 27 February 2013, 1 commit
10. 19 July 2012, 1 commit
  • Add a sheepdog backend for the storage driver · 29bc4fe6
    Committed by Sebastian Wiedenroth
This patch brings support for managing sheepdog pools and volumes to libvirt.
It uses the "collie" command-line utility that comes with sheepdog for that.
      
      A sheepdog pool in libvirt maps to a sheepdog cluster.
      It needs a host and port to connect to, which in most cases
      is just going to be the default of localhost on port 7000.
      
      A sheepdog volume in libvirt maps to a sheepdog vdi.
      To create one specify the pool, a name and the capacity.
      Volumes can also be resized later.
      
In the volume XML the vdi name has to be put into the <target><path>.
To use the volume as a disk source for virtual machines, specify
the vdi name as the "name" attribute of the <source>.
The host and port information from the pool is specified inside the host tag.
      
        <disk type='network'>
          ...
          <source protocol="sheepdog" name="vdi_name">
            <host name="localhost" port="7000"/>
          </source>
        </disk>
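
For completeness, the pool and volume definitions referenced above might look roughly like this (a sketch; the pool name, vdi name, and capacity are made up):

    <pool type='sheepdog'>
      <name>mysheeppool</name>
      <source>
        <host name='localhost' port='7000'/>
      </source>
    </pool>

    <volume>
      <name>myvdi</name>
      <capacity unit='G'>10</capacity>
      <target>
        <path>myvdi</path>
      </target>
    </volume>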
      
To work correctly, this patch parses the output of collie,
so it relies on the raw output option. There was recently a bug which caused
size information to be reported incorrectly. This is already fixed upstream and
will be in the next release.
Signed-off-by: Sebastian Wiedenroth <wiedi@frubar.net>
11. 22 May 2012, 1 commit
  • storage backend: Add RBD (RADOS Block Device) support · 74951ead
    Committed by Wido den Hollander
This patch adds a new storage backend with RBD support.
      
      RBD is the RADOS Block Device and is part of the Ceph distributed storage
      system.
      
It comes in two flavours: Qemu-RBD and Kernel RBD. This storage backend
only supports Qemu-RBD, thus limiting the use of this storage driver to Qemu.

To function, this backend relies on librbd and librados being present on the
local system.

The backend also supports Cephx authentication for secure authentication with
the Ceph cluster.
      
      For storing credentials it uses the built-in secret mechanism of libvirt.
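
Putting that together, an RBD pool definition might look roughly like the following (a sketch; the Ceph pool, monitor host, and secret reference are made up, and the secret holding the Cephx key would be defined separately via libvirt's secret mechanism):

    <pool type='rbd'>
      <name>myrbdpool</name>
      <source>
        <name>rbd</name>
        <host name='monitor.example.org' port='6789'/>
        <auth username='admin' type='ceph'>
          <secret usage='ceph_cluster_admin'/>
        </auth>
      </source>
    </pool>
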
Signed-off-by: Wido den Hollander <wido@widodh.nl>
12. 03 February 2012, 1 commit
  • Add detail to documentation on storage pools and volumes. · e68f22ae
    Committed by Dave Allan
      The storage pools page contains details about the capabilities of the
      various pool types, but not an overview of how they are intended to be
      used.  This patch adds some explanation of what pools and volumes can
      be used for and why an administrator might want to use them.
13. 16 September 2010, 1 commit
14. 23 February 2010, 3 commits
15. 17 November 2009, 1 commit
16. 04 December 2008, 1 commit
17. 13 August 2008, 1 commit
18. 12 August 2008, 1 commit
19. 24 April 2008, 1 commit