[libvirt] [PATCH] Changes to support Veritas HyperScale (VxHS) block device protocol with qemu-kvm

John Ferlan jferlan at redhat.com
Wed Jan 4 15:00:33 UTC 2017


[...]

>>> We don't anticipate a need for this.
>>>
>>>>  4. There's no VxHS Storage Pool support in this patch (OK, actually an
>>>> additional set of patches would be needed to support it). That would be
>>>> expected, especially for a networked storage environment. You'll note there are
>>>> src/storage/storage_backend_{iscsi|gluster|rbd}.{c|h} files that manage
>>>> iSCSI, Gluster, and RBD protocol specific things. For starters, create,
>>>> delete, and refresh - especially things that a stat() wouldn't
>>>> necessarily return (capacity, allocation, type a/k/a target.format).
>>>> Perhaps even the ability to upload/download and wipe volumes in the
>>>> pool. Having a pool is a bit more work, but look at the genesis of
>>>> existing storage_backend_*.c files to get a sense for what needs to change.
>>>>
>>>
>>> VxHS does not need the Storage Pool functionality. Do we still need to
>>> implement this?
>>>
>>
>> It's something that's expected.  See my reasoning above.
>>
> 
> Some explanation is in order -
> 
> HyperScale is not designed to be used as stand-alone, independent
> storage. It is designed only to be used in an OpenStack environment
> with all the related Cinder/Nova changes in place. Therefore, we do
> not have a need for most of the above-mentioned functions from
> libvirt/qemu.
> 
> Even in the OpenStack environment, we do not support creating storage
> volumes independent of a guest VM. A VxHS storage volume can only be
> created for a particular guest VM. With this scheme, a user does not
> have to manage storage pools separately. VxHS automatically configures
> and consumes the direct-attached SSDs/HDDs on the OpenStack compute
> node when enabled. After this, all requests to add storage to guest VMs
> are forwarded by OpenStack directly to the HyperScale daemon on the
> compute node, which takes care of creating the underlying storage, etc.

And how does that volume creation occur? It would seem that there's some
command that does it. That's "independent" of libvirt, so in order for
a guest to use that storage it'd need to be created first anyway. I
then assume that as part of guest VM destruction you must somehow destroy
the volume too. Synchronizing creation and ensuring deletion would seem
to be fairly important tasks, and something you'd want more tightly
integrated with the environment.

So what about hotplug? Can someone add in VxHS storage to a guest after
it's started?
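
(Just to illustrate what I have in mind - this is only a sketch, and the
driver/target details as well as the address are example values - hotplug
of a network disk is normally done with something along the lines of:

  $ cat vxhs-disk.xml
  <disk type='network' device='disk'>
    <driver name='qemu' type='raw'/>
    <source protocol='vxhs' name='eb90327c-8302-4725-9e1b-4e85ed4dc251'>
      <host name='192.168.0.1' port='9999'/>
    </source>
    <target dev='vdb' bus='virtio'/>
  </disk>
  $ virsh attach-device $domain vxhs-disk.xml --live

so it would be good to know whether the qemu/VxHS side can cope with a
disk showing up after the guest has started.)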

And migration? Can the guest be migrated? I haven't crawled through
that code recently, but I know there'd be changes needed to either allow
or disallow it based on the storage type.

> 
> The role of libvirt is limited to opening the volume specified in the
> guest XML file. Volume creation, deletion, etc. is done by the VxHS
> daemon in response to messages from the OpenStack controller. Our
> OpenStack orchestration code (Nova) is responsible for updating the
> guest XML with the correct volume information for libvirt to use. A
> regular user (libvirt) is not expected to know what volume IDs
> exist on any given host. A regular user also does not have a volume
> device node on the local compute node to query. The only way to get to
> a volume is over the network using the server's IP address and port.
> 
> 

For me, having a storage pool is a more complete solution. You don't
have to support all the storage backend volume functions (build, create,
upload, download, wipe, resize, etc.), but knowing what volumes exist
and can be used for a domain is nice.  It's also nice to know how much
storage exists (allocation, capacity, physical). It also allows a single
point of authentication - the pool authenticates at startup (see the
iSCSI and RBD code) and then the domains can use storage from the pool.
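
Purely as a hypothetical sketch (there is no vxhs pool backend today, so
the pool type and element layout below are made up, modeled on the
existing rbd/iscsi pool XML), the host and auth details would then be
carried once by the pool:

  <pool type='vxhs'>
    <name>vxhs-pool</name>
    <source>
      <host name='192.168.0.1' port='9999'/>
      <auth username='user' type='vxhs'>
        <secret usage='somestring'/>
      </auth>
    </source>
  </pool>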

From a guest viewpoint, rather than having to provide:

  <source protocol='vxhs' name='eb90327c-8302-4725-9e1b-4e85ed4dc251'>
    <host name='192.168.0.1' port='9999'/>
  </source>
  <auth username='user'>
    <secret type='vxhs' usage='somestring'/>
  </auth>

you'd have:

  <source pool='pool-name' volume='vol-name'/>

The "host" and "auth" would be part of the <pool> definition. Having to
discover the available 'names' ends up being the difference it seems for
vxhs since your model would seem to be create storage, create/start
guest, destroy guest, destroy storage. I think there's value in being
able to use the libvirt storage API's especially w/r/t integrating the
volume and guest management.
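
(Again hypothetical, since a 'vxhs' pool type doesn't exist yet, but that
integration is what the standard pool/vol commands would give you:

  virsh pool-start vxhs-pool           # authenticate/connect once
  virsh vol-list vxhs-pool             # discover the available 'names'
  virsh vol-info --pool vxhs-pool $vol # capacity/allocation for a volume

i.e. discovery and accounting come along for free once a storage backend
exists.)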

John



