[libvirt] SolidICE integration

Daniel P. Berrange berrange at redhat.com
Thu Nov 20 11:30:11 UTC 2008


On Wed, Nov 19, 2008 at 01:50:09PM -0500, Shahar Frank wrote:
> Hi All, here is an initial version of the operations required for SolidICE
> and the proposed high level interface.
> 
> I would like to discuss how to map it to libvirt calls and what libvirt
> calls are missing.
> 
> Thanks,
> 
> Shahar Frank
> 
> HostLevel
> ---------
> enumerateLuns():
>         return:
>                 returns all LUNs visible to the host.
>                 In case multipathing is enabled then only the multipath
>                 device should be in the return list.

libvirt does not yet have support for SCSI Fibre Channel, but it will
be added in the release following the forthcoming one. libvirt's storage
model has two core concepts: a 'pool' object, which contains multiple
'volume' objects. There is a different pool type for each kind of
storage, so we intend to add a 'scsi' storage pool.

The way it will work is that each SCSI HBA will be seen as its own
storage pool. The LUNs for a HBA will then be volumes within the 
pool.

So for enumerateLuns, a combination of API calls is needed; a short
sketch follows below.


 - virConnectFindStoragePoolSources() with type='scsi'

will discover all available HBAs, and return XML describing
them.

For each HBA you care about, you would define a storage pool
and activate it using

  - virStoragePoolDefineXML()
  - virStoragePoolCreate()

Then, for each storage pool you can get the list of LUNs by asking
for its volumes with

 - virStoragePoolListVolumes()
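
Putting those steps together, here is a minimal C sketch of how this
should look once the 'scsi' pool backend exists (error handling is
trimmed; the pool XML with its 'hba0' name and 'host2' adapter is a
hypothetical example - in practice you would build it from the
discovery output):

  #include <stdio.h>
  #include <stdlib.h>
  #include <libvirt/libvirt.h>

  int main(void) {
      virConnectPtr conn;
      virStoragePoolPtr pool;
      char *srcs, *names[256];
      int i, nvols;

      if (!(conn = virConnectOpen(NULL)))
          return 1;

      /* 1. discover the HBAs; returns an XML <sources> document */
      if ((srcs = virConnectFindStoragePoolSources(conn, "scsi", NULL, 0))) {
          printf("%s\n", srcs);
          free(srcs);
      }

      /* 2. define & start a pool for one HBA, using XML derived
       *    from the discovery output (hypothetical example here) */
      pool = virStoragePoolDefineXML(conn,
          "<pool type='scsi'>"
          "<name>hba0</name>"
          "<source><adapter name='host2'/></source>"
          "<target><path>/dev/disk/by-id</path></target>"
          "</pool>", 0);
      if (pool && virStoragePoolCreate(pool, 0) == 0) {
          /* 3. the LUNs are simply the pool's volumes */
          nvols = virStoragePoolListVolumes(pool, names, 256);
          for (i = 0; i < nvols; i++) {
              printf("LUN: %s\n", names[i]);
              free(names[i]);
          }
      }

      if (pool)
          virStoragePoolFree(pool);
      virConnectClose(conn);
      return 0;
  }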

> enumerateISCSILuns(target):
>         params:
>                 target - ISCSI target
>         Description:
>                 enumerate all ISCSI luns on a specific target
>         return:
>                 list of

This will work in exactly the same way as for SCSI, except that instead
of calling virConnectFindStoragePoolSources with type='scsi', you'd 
call it with type='iscsi'. You'd also need to give a 'srcSpec' XML
document with details of the host providing the iSCSI server. With this
libvirt will connect to the iSCSI server, and ask it for all its exported
iSCSI targets. The returned XML will provide details on each target.

For each target you wish to use, you then call virStoragePoolDefineXML()
and virStoragePoolCreate(), as before.
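
For illustration, the srcSpec document is expected to just name the
server, along these lines (the hostname and target IQN below are
made-up examples):

  <source>
    <host name='iscsi.example.com'/>
  </source>

and each pool you then define wraps one discovered target:

  <pool type='iscsi'>
    <name>iscsipool</name>
    <source>
      <host name='iscsi.example.com'/>
      <device path='iqn.2008-11.com.example:target1'/>
    </source>
    <target>
      <path>/dev/disk/by-path</path>
    </target>
  </pool>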


> getBlockInfo(name or UUID):
>         return:
>                 at least uuid & size

Once you have a virStorageVolPtr object, you can use the API

  virStorageVolGetInfo()

which gives you the logical capacity and actual allocation - allocation
may be less than capacity when querying a file based volume such as a
sparse file, or qcow2.

To get a persistent name, akin to a UUID, you can use the API

  virStorageVolGetKey()

For SCSI LUNs, the volume key will be based on the UUID the device
makes available.
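
As a minimal sketch, given an existing virStorageVolPtr:

  #include <stdio.h>
  #include <libvirt/libvirt.h>

  /* print the size & stable key of an existing volume */
  static void print_vol(virStorageVolPtr vol) {
      virStorageVolInfo info;
      const char *key;

      if (virStorageVolGetInfo(vol, &info) == 0)
          printf("capacity=%llu allocation=%llu\n",
                 info.capacity, info.allocation);

      /* the key string is owned by 'vol' - do not free it */
      if ((key = virStorageVolGetKey(vol)))
          printf("key=%s\n", key);
  }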

> enumerateVGs():
>         return:
>                 VG UUID list

This is very similar to the other enumerate methods. You'd call

  virConnectFindStoragePoolSources() with type='logical'

This gives you back XML describing all available volume groups
on a machine. For each volume group you want to use, you can
then define a storage pool, and query volumes within it.

> open issues - should we assume ISCSI auto connect or FC login?

libvirt can take care of that when needed, which is why we have
a two step process for accessing a storage pool

 - virStoragePoolDefineXML() will write out a persistent configuration
   file with details of the pool.

 - virStoragePoolCreate() will actually log in to the iSCSI target,
   activate the LVM volume group, or perform the equivalent operation
   needed for the type of storage being accessed. NB 'Create' is
   probably better thought of as 'Start'.

You can later call virStoragePoolDestroy() to logout if desired.

If you want login to happen automatically when the machine boots
you can call virStoragePoolSetAutostart() to turn on the autostart
flag.
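
A minimal sketch of that two step process plus autostart (assuming
'conn' is an open virConnectPtr and 'xml' is an iSCSI pool document
like the one shown earlier):

  #include <libvirt/libvirt.h>

  static virStoragePoolPtr setup_pool(virConnectPtr conn, const char *xml) {
      virStoragePoolPtr pool;

      if (!(pool = virStoragePoolDefineXML(conn, xml, 0)))
          return NULL;                      /* persistent config written */
      if (virStoragePoolCreate(pool, 0) < 0     /* logs into the target */
          || virStoragePoolSetAutostart(pool, 1) < 0)   /* login at boot */
          goto error;
      return pool;

  error:
      virStoragePoolFree(pool);
      return NULL;
  }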


Authentication is an open, unresolved issue & we welcome suggestions
for how to deal with it.  Our original thoughts were to just
embed the iSCSI username/password in the XML configuration file
for the storage pool, but we've not implemented this in the backend
yet.

> StoragePool level
> -----------------
> createVolumeGroup(name, devices):
>         params:
>                 name = name
>                 devices - a list of UUIDs
>         description:
>                 Should create a volume group out of the UUID list
>                 In case multipathing is enabled then only the multipath
>                 device should be used.
>                 In case it is not a PV yet then a PV is to be created.
>         Return:
>                 UUID of the VG

This is accomplished with a combination of two methods. First you
define the configuration for the pool, which requires giving the
volume group name, and list of devices to become PVs in the VG.
This is done with virStoragePoolDefineXML().

Then you actually format the PVs & construct the VG by invoking
the virStoragePoolBuild() method. Once this is done, the 
virStoragePoolCreate() method will activate the VG.
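
For example (the device paths and the 'MyVG' name are hypothetical),
the pool XML would look like:

  <pool type='logical'>
    <name>MyVG</name>
    <source>
      <device path='/dev/sda3'/>
      <device path='/dev/sdb3'/>
    </source>
    <target>
      <path>/dev/MyVG</path>
    </target>
  </pool>

and a minimal C sketch of the three calls:

  #include <libvirt/libvirt.h>

  /* define, format & activate a new VG; 'conn' is already open */
  static int make_vg(virConnectPtr conn, const char *xml) {
      virStoragePoolPtr pool = virStoragePoolDefineXML(conn, xml, 0);
      if (!pool)
          return -1;
      if (virStoragePoolBuild(pool, 0) < 0 ||   /* pvcreate + vgcreate */
          virStoragePoolCreate(pool, 0) < 0) {  /* activate the VG */
          virStoragePoolFree(pool);
          return -1;
      }
      virStoragePoolFree(pool);
      return 0;
  }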

> 
> removeVolumeGroup(VG):
>         params:
>                 VG - uuid of the Volume group
>         Description:
>                 Remove all remains of the VolumeGroup (LVs, PVs ...)
>         Return:
>                 Success/failure

Given an existing virStoragePoolPtr object, you can blow away the
entire volume group using the API

   virStoragePoolDelete()
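
For instance, as a sketch (my assumption being that the VG should be
deactivated before deletion):

  virStoragePoolDestroy(pool);    /* deactivate the VG first */
  virStoragePoolDelete(pool, 0);  /* wipe the LVs, VG & PV labels */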
 
> mountNFS(server, remotePath, localPath, params)
>         params:
>                 server - the NFS server name or IP
>                 remotePath - remote server export path
>                 localPath - local path to mount on
>                 params - nfs mount params
>         description:
>                 mounts the remote export onto the local path using the
>                 mount params
>         return:
>                 success/failure

Each NFS mount is modelled as a storage pool. So as before, you'd
create an XML document providing the configuration for server,
remotePath, localPath, and then invoke virStoragePoolDefineXML().

Then you can use  virStoragePoolCreate() to mount the NFS
server. Once done, you can use virStoragePoolListVolumes() 
to query files within the mount.
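
A hypothetical netfs pool document (the server & paths are made-up
examples) would look like:

  <pool type='netfs'>
    <name>nfsimages</name>
    <source>
      <host name='nfs.example.com'/>
      <dir path='/exports/images'/>
    </source>
    <target>
      <path>/mnt/images</path>
    </target>
  </pool>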

> unmount(localPath, forceFlag)
>         params:
>                 localPath - the local path to unmount
>                 forceFlag - a flag indicating whether to force unmount
>         description:
>                 unmounts local path (trivial)
>         return:
>                 success/failure

The virStoragePoolDestroy() method will unmount the NFS volume.

If you wish to also remove the persistent libvirt config file
for the pool, then also issue virStoragePoolUndefine().

> Open issues:
> 1. NFS configuration persistency?
> 2. What happens in case the libvirt is restarted should it be
> reconfigured?

Most objects in the libvirt API have two modes of setup, one which is
persistent, and one which is transient.

The examples I've given earlier all describe persistent use. The
general pattern is

  - virStoragePoolDefineXML()  writes the persistent config
  - virStoragePoolCreate() activates the backend from persistent config
  - virStoragePoolDestroy() de-activates the backend
  - virStoragePoolUndefine()  deletes the persistent config

If you instead prefer to use transient objects, then you can
do

   - virStoragePoolCreateXML() - activates a backend directly from XML
   - virStoragePoolDestroy() - de-activates the backend

With this style, no persistent configuration file is written out
to disk, so once destroyed, libvirt loses all knowledge of it.

If libvirtd is restarted, all information about transient objects
is lost. Persistent objects will be identified when libvirtd is
started again. For persistent objects you can also set an autostart
flag, so that libvirt will immediately activate them, if not
already running.
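
A minimal sketch contrasting the two styles ('conn' and 'xml' as
before, error handling omitted):

  /* persistent: config survives libvirtd restarts */
  virStoragePoolPtr p1 = virStoragePoolDefineXML(conn, xml, 0);
  virStoragePoolCreate(p1, 0);
  /* ... use the pool ... */
  virStoragePoolDestroy(p1);     /* de-activated; config file remains */
  virStoragePoolUndefine(p1);    /* now the config is gone too */
  virStoragePoolFree(p1);

  /* transient: exists only while active */
  virStoragePoolPtr p2 = virStoragePoolCreateXML(conn, xml, 0);
  /* ... use the pool ... */
  virStoragePoolDestroy(p2);     /* libvirt forgets it entirely */
  virStoragePoolFree(p2);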

> VolumeGroup level
> -----------------
> listPV(VG):
>         params:
>                 VG - Volume group UUID
>         Description:
>                 Lists all PV in a specific volume group
>         Return:
>                 A list of PV UUIDs

The XML description for a virStoragePoolPtr object pointing
to a VG (as returned by virStoragePoolGetXMLDesc()) will include
information about the physical volumes it contains.

> infoPV(VG, PV)
>         params:
>                 VG - Volume group UUID
>                 PV - Physical volume UUID
>         description:
>                 trivial
>         return:
>                 returns the PV information (at least size & path)

Given a device path or device key (akin to UUID) you can use one
of the methods

 virStorageVolLookupByPath()
 virStorageVolLookupByKey()

to retrieve a virStorageVolPtr object for it. Once you have one
of these, you can call virStorageVolGetInfo() to find out
its capacity / allocation.
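
A minimal sketch (the device path shown is a hypothetical example):

  #include <libvirt/libvirt.h>

  /* resolve a virStorageVolPtr from a device path */
  static virStorageVolPtr find_pv(virConnectPtr conn) {
      virStorageVolPtr vol = virStorageVolLookupByPath(conn, "/dev/sda3");
      /* ...or: virStorageVolLookupByKey(conn, key); */
      return vol;   /* then use virStorageVolGetInfo() as shown earlier */
  }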

> extendVolumeGroup(VG, devices):
>         params:
>                 VG - Volume Group UUID
>                 devices - a list of UUIDs
>         description:
>                 This function adds Physical Volumes to the VolumeGroup.
>                 Each UUID device is checked. In case multipathing is
>                 enabled then only the multipath device should be used
>                 (PV created in case it's not) and added to the volume
>                 group.
>         return:
>                 success/failure
> 
> 
> removePV(VG, PV):
>         params:
>                 VG - Volume Group UUID
>                 PV - Physical volume's UUID to be removed
>         description:
>                 removes the physical volume
>         return:
>                 success/failure

These two are not currently implemented in libvirt. The intent, though,
is that you can re-define the XML for an existing virStoragePoolPtr
object, with PV devices added/removed from the XML. Then invoke the
virStoragePoolBuild() method to actually make the changes to the
underlying volume group.

> LV operations
> --------------
> listLVs(VG):
>         params:
>                 VG - Volume Group UUID
>         description:
>                 trivial
>         return:
>                 list of all LV UUIDs

This is again possible with the virStoragePoolListVolumes() method
once you have an active virStoragePoolPtr object for a volume
group.

> infoLV(VG, LV):
>         params:
>                 VG - Volume Group UUID
>                 LV - Logical Volume UUID
>         description:
>                 trivial
>         return:
>                 returns LV information (at least size and path)

Given a virStorageVolPtr object for an LV, you can call virStorageVolGetInfo()
to get size information, or virStorageVolGetXMLDesc() for more general
metadata.

> createLV(VG, LvName, size):
>         params:
>                 VG - Volume Group UUID
>                 LvName - name of the logical volume

Given a virStoragePoolPtr object referring to a volume group, you can
use virStorageVolCreateXML() to allocate a new logical volume within
it. Similarly, virStorageVolDelete() will remove a logical volume.
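
A minimal sketch, assuming 'pool' is the active VG pool (the 'applv'
name and the 10GB size are made-up examples):

  /* carve out a new LV; capacity is given in bytes */
  virStorageVolPtr vol = virStorageVolCreateXML(pool,
      "<volume>"
      "<name>applv</name>"
      "<capacity>10737418240</capacity>"
      "</volume>", 0);

  /* ...and remove it again */
  if (vol) {
      virStorageVolDelete(vol, 0);
      virStorageVolFree(vol);
  }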


All the method names I'm referring to here are the C APIs. If you just
want to experiment & play with the APIs to get a better understanding
of them, the 'virsh' command line tool exposes practically all of
the functionality in a fairly friendly manner.

For example, using the virConnectFindStoragePoolSources() method to
discover LVM volume groups is possible with:

# virsh find-storage-pool-sources-as logical
<sources>
  <source>
    <device path='/dev/sda2'/>
    <name>HostVG</name>
    <format type='lvm2'/>
  </source>
</sources>

You can then use virStoragePoolDefineXML() with this info by running the
'virsh pool-define' command, and so on - 'virsh help' will show you
all the possible commands.

Regards,
Daniel
-- 
|: Red Hat, Engineering, London   -o-   http://people.redhat.com/berrange/ :|
|: http://libvirt.org  -o-  http://virt-manager.org  -o-  http://ovirt.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: GnuPG: 7D3B9505  -o-  F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|



