[libvirt-users] Re: attaching storage pool error

Johan Kragsterman johan.kragsterman at capvert.se
Wed Aug 24 07:49:47 UTC 2016


Hi, and thanks for your important input, Dan!


> > 
> > 
> > System is CentOS 7, with the system default libvirt version.
> > 
> > I've succeeded in creating an npiv storage pool, which I could start
> > without problems. But I couldn't attach it to the vm; it threw errors
> > when I tried. I want to boot from it, so I need it working from the
> > start. I read one of Daniel Berrange's old (2010) blog posts about
> > attaching an iSCSI pool, and drew my conclusions from that. I haven't
> > found any other documentation. Can someone point me to more recent
> > documentation on this?
> > 
> > Are there other mailing lists in the libvirt/KVM communities that are
> > more focused on storage? If so, I'd like to know about them, since I'm
> > a storage guy and fiddle around a lot with these things...
> > 
> > There are quite a few things I'd like to know about that I doubt this
> > list cares about, or has knowledge about, like multipath devices/pools,
> > virtio-scsi in combination with an npiv storage pool, etc.
> > 
> > So anyone that can point me further....?
> 
> http://libvirt.org/formatstorage.html
> https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Deployment_and_Administration_Guide/sect-NPIV_storage.html
> 
> Hope this helps you get started with it.
> 
> 
> Unfortunately I have already gone through these documents, several times
> in fact, but they only cover the creation of storage pools, not how you
> attach them to the guest.

If the pool is ready, there are examples of this kind at http://libvirt.org/formatdomain.html#elementsDisks

You can use it in the guest like this:
    <disk type='volume' device='disk'>
      <driver name='qemu' type='raw'/>
      <source pool='iscsi-pool' volume='unit:0:0:1' mode='host'/>
      <auth username='myuser'>
        <secret type='iscsi' usage='libvirtiscsi'/>
      </auth>
      <target dev='vdb' bus='virtio'/>
    </disk>

As I described above, I created an npiv pool for my FC backend. I'd also like to get SCSI pass-through, which seems to be possible only if I use "device=lun". If I use an npiv storage pool, can I then NOT use "device=lun", and thus obviously NOT get SCSI pass-through? Is the only way to get SCSI pass-through to NOT use a storage pool, but instead use the host LUNs directly?
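
For comparison, my understanding of the host-LUN variant, without a pool, is something like this (the source path below is only a placeholder, not a real device of mine):

<disk type='block' device='lun'>
  <driver name='qemu' type='raw'/>
  <!-- placeholder path; a real stable path from /dev/disk/by-id/ goes here -->
  <source dev='/dev/disk/by-id/wwn-0xEXAMPLE'/>
  <target dev='sda' bus='scsi'/>
</disk>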


What do you think about this:

<disk type='volume' device='disk'>
  <driver name='qemu' type='raw'/>
  <source pool='vhbapool_host8' volume='unit:0:0:1'/>
  <target dev='hda' bus='ide'/>
</disk>


But I'd prefer to be able to use something like this instead:

<disk type='volume' device='lun'>
  <driver name='qemu' type='raw'/>
  <source pool='vhbapool_host8' volume='unit:0:0:1'/>
  <target dev='vda' bus='scsi'/>
</disk>

But that might not be possible...?



A couple of additional questions here:

* Since the target device is already defined in the pool, why does it need to be defined here as well, as in your example with the iSCSI pool?
* I'd like to use virtio-scsi combined with the pool; is that possible?
* If so, how do I define that? I understand I can define a controller separately, but how do I tell the guest to use that specific controller in combination with that pool? (A sketch of what I mean follows after this list.)
* Since the npiv pool is obviously based on an FC initiator, the FC target can/will provision more LUNs to that initiator. How will that affect the pool and the guest's access to the new LUNs? In this example the volume says 'unit:0:0:1', and I guess that will change if more LUNs show up in there? Or is that "volume unit" the "scsi target device", which can hold multiple LUNs?
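
To make the virtio-scsi questions concrete, my guess is that it would look roughly like this; the controller index and the drive address are assumptions on my part, not something I have tested:

<controller type='scsi' model='virtio-scsi' index='0'/>

<disk type='volume' device='lun'>
  <driver name='qemu' type='raw'/>
  <source pool='vhbapool_host8' volume='unit:0:0:1'/>
  <target dev='sda' bus='scsi'/>
  <!-- assumed: the drive address ties the disk to the controller with index='0' -->
  <address type='drive' controller='0' bus='0' target='0' unit='1'/>
</disk>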


Regards Johan

> 
> > 
> > Rgrds Johan