1. 03 Aug 2017, 2 commits
  2. 24 Apr 2017, 1 commit
  3. 09 Mar 2017, 1 commit
  4. 26 Jan 2017, 1 commit
  5. 14 Nov 2016, 1 commit
  6. 27 Feb 2016, 1 commit
  7. 15 Feb 2016, 1 commit
  8. 06 Jan 2016, 1 commit
    • rbd: Do not append Ceph monitor port number 6789 if not provided · 6343018f
      Committed by Wido den Hollander
      If no port number was provided for a storage pool libvirt defaults to
      port 6789; however, librbd/librados already default to 6789 when no port
      number is provided.
      
      In the future Ceph will switch to a new port for the Ceph monitors since
      port 6789 is already assigned to a different application by IANA.
      
      Port 6789 is assigned to SMC-HTTPS and Ceph now has port 3300 assigned as
      the 'Ceph monitor' port.
      
      In this case the best solution is not to hardcode any port number into
      libvirt and to let librados handle the connection.
      
      Only if a user specifies a different port number do we pass it down to
      librados; otherwise we leave it blank.
      Signed-off-by: Wido den Hollander <wido@widodh.nl>
      
      6343018f
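      To illustrate the resulting behaviour, here is a minimal sketch of an RBD
      pool source (pool, monitor and names are hypothetical): with no port
      attribute on the monitor <host>, libvirt now leaves the port choice
      entirely to librados.

        <pool type='rbd'>
          <name>myrbdpool</name>
          <source>
            <name>rbd</name>
            <host name='mon1.example.org'/>
          </source>
        </pool>

      Only when an explicit port='...' attribute is given is the value passed
      down to librados.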
  9. 30 Jun 2015, 1 commit
    • mpath: Don't allow more than one mpath pool at a time · a77056bd
      Committed by John Ferlan
      https://bugzilla.redhat.com/show_bug.cgi?id=1232606
      
      Since an mpath pool contains all the Multipath devices on a host, allowing
      more than one to be defined on a host at a time should be disallowed under
      the policy of disallowing duplicate source pools for the host.
      
      Adjust the docs to clarify the Multipath target path value usage for both
      the storage driver (only 1 pool per host) and formatstorage references
      (ignore the target element in favor of the default target mapping of
      /dev/mapper).
      a77056bd
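      For reference, a minimal mpath pool definition looks roughly like the
      sketch below (the pool name is arbitrary); since such a pool always
      exposes every multipath device on the host under /dev/mapper, defining
      more than one per host adds nothing.

        <pool type='mpath'>
          <name>mpath</name>
          <target>
            <path>/dev/mapper</path>
          </target>
        </pool>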
  10. 16 Jun 2015, 1 commit
    • storage: Generate correct parameters for CIFS · 29230951
      Committed by John Ferlan
      https://bugzilla.redhat.com/show_bug.cgi?id=1186969
      
      When generating the path to the dir for a CIFS/Samba driver, the code
      would generate a source path for the mount using "%s:%s", while
      mount.cifs expects to see "//%s/%s". So check for the CIFS filesystem
      and format the source path appropriately.
      
      Additionally, since there is no means to authenticate, the mount
      needs a "-o guest" on the command line in order to anonymously mount
      the Samba directory.
      29230951
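      A sketch of a netfs pool using the cifs format (host name, share name and
      mount point are made up here; see formatstorage for the exact <dir> form
      expected for CIFS). From a definition like this the driver now builds a
      "//host/share" mount source and adds "-o guest" to the mount command.

        <pool type='netfs'>
          <name>samba_share</name>
          <source>
            <host name='samba.example.com'/>
            <dir path='/share1'/>
            <format type='cifs'/>
          </source>
          <target>
            <path>/mnt/cifs</path>
          </target>
        </pool>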
  11. 13 May 2015, 1 commit
    • conf: Remove source host name check for iSCSI · 4b2b53f6
      Committed by John Ferlan
      https://bugzilla.redhat.com/show_bug.cgi?id=1171984
      https://bugzilla.redhat.com/show_bug.cgi?id=1188463
      
      Remove the source host name check from iSCSI source XML processing, so
      that a proposed storage pool is declared a duplicate source when its
      source device path (and, if present, its initiator) matches an existing
      storage pool.
      
      The backend iSCSI storage driver uses 'iscsiadm --mode session' to query
      available iscsid target sessions. The output displayed is the IP address
      and the IQN (target path) of known targets. The displayed IP address
      is a resolved address based on the session --login. Additionally, iscsid
      keeps track of the various ways to define the host name (IPv4 Address,
      IPv6 Address, /etc/hosts, etc.) for that IQN (see output of an 'iscsiadm
      --mode node'). If an incoming IQN matches and the host name provided by
      libvirt is resolved to the existing IQN, then iscsid will "reuse" the
      session. Although libvirt could do the same name resolution, if there
      is a difference iscsid could still declare two seemingly different sources
      to be the same and not create a new session, which means libvirt would end
      up with two storage pools looking at the same source. Thus, to avoid any
      strange host name resolution issues, just rely on iscsid for that and do
      not allow multiple pools on the same host to use the same device path (IQN).
      4b2b53f6
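      For context, an iSCSI pool source looks roughly like this (host, IQNs and
      paths are placeholders). With this change, two pools sharing the same
      <device path> (and initiator, when present) are rejected as duplicates
      even if their <host> names differ.

        <pool type='iscsi'>
          <name>iscsipool</name>
          <source>
            <host name='iscsi.example.org'/>
            <device path='iqn.2013-06.com.example:iscsi-pool'/>
            <initiator>
              <iqn name='iqn.2013-06.com.example:initiator0'/>
            </initiator>
          </source>
          <target>
            <path>/dev/disk/by-path</path>
          </target>
        </pool>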
  12. 03 Mar 2015, 1 commit
    • disk: Provide a default storage source format type. · 832a9256
      Committed by John Ferlan
      https://bugzilla.redhat.com/show_bug.cgi?id=1181062
      
      According to the formatstorage.html description for <source> element
      and "format" attribute: "All drivers are required to have a default
      value for this, so it is optional."
      
      As it turns out the disk backend did not choose a default value, so I
      added a default of "msdos" if the source type is "unknown", and updated
      the storage.html backend disk volume driver documentation to indicate
      that the default format is dos.
      832a9256
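      A sketch of a disk pool with the format spelled out explicitly (the device
      path is a placeholder); omitting the <format> element now behaves the same
      as specifying type='dos'.

        <pool type='disk'>
          <name>sdb</name>
          <source>
            <device path='/dev/sdb'/>
            <format type='dos'/>
          </source>
          <target>
            <path>/dev</path>
          </target>
        </pool>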
  13. 04 Dec 2014, 1 commit
  14. 29 Sep 2014, 1 commit
  15. 18 Sep 2014, 1 commit
    • docs: update zfs documentation · 05d1dd6b
      Committed by Roman Bogorodskiy
       - docs/formatstorage.html.in: document the 'zfs' pool type, add it
         to the list of pool types that can use source physical devices
       - docs/storage.html.in: update the ZFS pool example XML with
         source physical devices, mention that starting from 1.2.9 a
         pool can be created from these devices by libvirt, while in earlier
         versions the user still has to create the pool manually
       - docs/drvbhyve.html.in: add an example with ZFS pools
      05d1dd6b
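      A sketch of the kind of ZFS pool XML the updated docs describe (pool and
      device names are placeholders); with source physical devices listed,
      libvirt 1.2.9 and newer can create the zpool itself.

        <pool type='zfs'>
          <name>myzfspool</name>
          <source>
            <name>zpooltank</name>
            <device path='/dev/ada1'/>
            <device path='/dev/ada2'/>
          </source>
        </pool>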
  16. 12 Aug 2014, 1 commit
    • storage: ZFS support · 0257d06b
      Committed by Roman Bogorodskiy
      Implement a ZFS storage backend driver. It is currently supported
      only on FreeBSD because of ZFS limitations on Linux.
      
      Features supported:
      
       - pool-start, pool-stop
       - pool-info
       - vol-list
       - vol-create / vol-delete
      
      A pool definition looks like this:
      
       <pool type='zfs'>
        <name>myzfspool</name>
        <source>
          <name>actualpoolname</name>
        </source>
       </pool>
      
      The 'actualpoolname' value is the name of the pool on the system,
      as shown by the 'zpool list' command. A target makes no sense
      here because the volume path is always /dev/zvol/$poolname/$volname.
      
      The user has to create the pool on their own; this driver doesn't
      currently support pool creation.
      
      A volume can be used with Qemu by adding an entry like this:
      
          <disk type='volume' device='disk'>
            <driver name='qemu' type='raw'/>
            <source pool='myzfspool' volume='vol5'/>
            <target dev='hdc' bus='ide'/>
          </disk>
      0257d06b
  17. 01 Apr 2014, 1 commit
    • gluster: Fix "key" attribute for gluster volumes · 0f6c50b9
      Committed by Peter Krempa
      According to our documentation the "key" value has the following
      meaning: "Providing an identifier for the volume which identifies a
      single volume." The currently used keys for gluster volumes consist of
      the gluster volume name and file path. This can't be considered unique
      as a different storage server can serve a volume with the same name.
      
      Unfortunately I wasn't able to figure out a way to retrieve the gluster
      volume UUID which would avoid the possibility of having two distinct
      keys identifying a single volume.
      
      Use the full URI as the key for the volume to avoid the more critical
      ambiguity problem and document the possible change to UUID.
      0f6c50b9
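      As a rough illustration (server, volume and file names are invented), the
      key reported by 'virsh vol-dumpxml' for a gluster volume is now the full
      URI rather than just the volume name and path:

        <volume>
          <name>image.qcow2</name>
          <key>gluster://gluster.example.com/volname/image.qcow2</key>
          ...
        </volume>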
  18. 02 Dec 2013, 1 commit
  19. 26 Nov 2013, 1 commit
    • storage: document gluster pool · ed5fa7f3
      Committed by Eric Blake
      Add support for a new <pool type='gluster'>, similar to
      RBD and Sheepdog.  Terminology-wise, a gluster volume
      forms a libvirt storage pool; within the gluster volume,
      individual files are treated as libvirt storage volumes.
      
      * docs/schemas/storagepool.rng (poolgluster): New pool type.
      * docs/formatstorage.html.in: Document gluster.
      * docs/storage.html.in: Likewise, and contrast it with netfs.
      * tests/storagepoolxml2xmlin/pool-gluster.xml: New test.
      * tests/storagepoolxml2xmlout/pool-gluster.xml: Likewise.
      * tests/storagepoolxml2xmltest.c (mymain): Likewise.
      Signed-off-by: Eric Blake <eblake@redhat.com>
      ed5fa7f3
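      A sketch of the new pool type (server and volume names are hypothetical);
      the <name> under <source> is the gluster volume, and <dir> selects a path
      within it:

        <pool type='gluster'>
          <name>myglusterpool</name>
          <source>
            <host name='gluster.example.com'/>
            <dir path='/'/>
            <name>volname</name>
          </source>
        </pool>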
  20. 14 Nov 2013, 1 commit
    • storage: fix RNG validation of gluster via netfs · 887dd362
      Committed by Eric Blake
      While trying to compare netfs against my new gluster pool, I
      discovered two things:
      
      virt-xml-validate chokes on valid xml produced by 'virsh pool-dumpxml'
      [yet another reason that ALL patches adding new xml should also add
      corresponding tests]
      
      When using glusterfs FUSE mounts, you cannot access a subdirectory
      of a gluster volume.  The recommended workaround in the gluster
      community is to mount the volume to an intermediate location, then
      bind-mount the desired subdirectory to the final location.  Maybe
      we should teach libvirt to do bind-mounting, but for now I chose to
      just document the limitation.
      
      * docs/storage.html.in: Improve documentation.
      * docs/schemas/storagepool.rng (sourcefmtnetfs): Allow all
      formats, and drop redundant info-vendor.
      * tests/storagepoolxml2xmltest.c (mymain): New test.
      * tests/storagepoolxml2xmlin/pool-netfs-gluster.xml: New file.
      * tests/storagepoolxml2xmlout/pool-netfs-gluster.xml: Likewise.
      Signed-off-by: Eric Blake <eblake@redhat.com>
      887dd362
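      For comparison, a netfs pool backed by the same gluster volume might look
      like the sketch below (host and volume names are hypothetical); because of
      the FUSE limitation described above, <dir> points at the top of the
      gluster volume rather than at a subdirectory.

        <pool type='netfs'>
          <name>netfs-gluster</name>
          <source>
            <host name='gluster.example.com'/>
            <dir path='/volname'/>
            <format type='glusterfs'/>
          </source>
          <target>
            <path>/mnt/gluster</path>
          </target>
        </pool>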
  21. 03 May 2013, 1 commit
    • Fix multiple formatting problems in HTML docs · f2f9742d
      Committed by Daniel P. Berrange
      The rule generating the HTML docs passes the --html flag to xsltproc.
      This makes it use the legacy HTML parser, which either ignores or tries
      to fix all sorts of broken XML tags.
      There's no reason why we should be writing broken XML in
      the first place, so removing --html and adding the XHTML
      doctype to all files forces us to create good XML.
      
      This adds the XHTML doc type and fixes many, many XML tag
      problems it exposes.
      Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
      f2f9742d
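      For reference, the kind of header this implies for each generated page
      (a generic XHTML 1.0 Strict doctype is shown as an illustration, not the
      exact text of the patch):

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
          "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
          ...
        </html>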
  22. 20 Mar 2013, 1 commit
  23. 27 Feb 2013, 1 commit
  24. 19 Jul 2012, 1 commit
    • Add a sheepdog backend for the storage driver · 29bc4fe6
      Committed by Sebastian Wiedenroth
      This patch brings support for managing sheepdog pools and volumes to libvirt.
      It uses the "collie" command-line utility that comes with sheepdog for that.
      
      A sheepdog pool in libvirt maps to a sheepdog cluster.
      It needs a host and port to connect to, which in most cases
      is just going to be the default of localhost on port 7000.
      
      A sheepdog volume in libvirt maps to a sheepdog vdi.
      To create one specify the pool, a name and the capacity.
      Volumes can also be resized later.
      
      In the volume XML the vdi name has to be put into the <target><path>.
      To use the volume as a disk source for virtual machines specify
      the vdi name as "name" attribute of the <source>.
      The host and port information from the pool are specified inside the host tag.
      
        <disk type='network'>
          ...
          <source protocol="sheepdog" name="vdi_name">
            <host name="localhost" port="7000"/>
          </source>
        </disk>
      
      To work correctly this patch parses the output of collie,
      so it relies on the raw output option. There was recently a bug which caused
      size information to be reported incorrectly. This is already fixed upstream
      and will be in the next release.
      Signed-off-by: Sebastian Wiedenroth <wiedi@frubar.net>
      29bc4fe6
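      To complement the disk XML above, a sketch of the corresponding pool
      definition (names and the address are placeholders):

        <pool type='sheepdog'>
          <name>mysheeppool</name>
          <source>
            <name>mysheeppool</name>
            <host name='localhost' port='7000'/>
          </source>
        </pool>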
  25. 22 May 2012, 1 commit
    • storage backend: Add RBD (RADOS Block Device) support · 74951ead
      Committed by Wido den Hollander
      This patch adds support for a new storage backend with RBD support.
      
      RBD is the RADOS Block Device and is part of the Ceph distributed storage
      system.
      
      It comes in two flavours: Qemu-RBD and Kernel RBD. This storage backend only
      supports Qemu-RBD, thus limiting the use of this storage driver to Qemu.
      
      To function, this backend relies on librbd and librados being present on
      the local system.
      
      The backend also supports Cephx authentication for safe authentication with
      the Ceph cluster.
      
      For storing credentials it uses the built-in secret mechanism of libvirt.
      Signed-off-by: Wido den Hollander <wido@widodh.nl>
      74951ead
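      A sketch of an RBD pool using Cephx authentication via a libvirt secret
      (pool name, monitor addresses and the secret UUID are placeholders):

        <pool type='rbd'>
          <name>myrbdpool</name>
          <source>
            <name>rbd</name>
            <host name='mon1.example.org' port='6789'/>
            <host name='mon2.example.org' port='6789'/>
            <auth username='admin' type='ceph'>
              <secret uuid='2ec115d7-3a88-3ceb-bc12-0ac909a6fd87'/>
            </auth>
          </source>
        </pool>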
  26. 03 Feb 2012, 1 commit
    • Add detail to documentation on storage pools and volumes. · e68f22ae
      Committed by Dave Allan
      The storage pools page contains details about the capabilities of the
      various pool types, but not an overview of how they are intended to be
      used.  This patch adds some explanation of what pools and volumes can
      be used for and why an administrator might want to use them.
      e68f22ae
  27. 16 Sep 2010, 1 commit
  28. 23 Feb 2010, 3 commits
  29. 17 Nov 2009, 1 commit
  30. 04 Dec 2008, 1 commit
  31. 13 Aug 2008, 1 commit
  32. 12 Aug 2008, 1 commit
  33. 24 Apr 2008, 1 commit