[rhos-list] Parallel Cinder access?

Lutz Christoph lchristoph at arago.de
Mon Sep 9 13:41:53 UTC 2013


Hi!

Thanks for the post. I had found your instructions while trying to find some that address the question of whether multiple Cinder Volume instances can access the same storage space in parallel.

Alas, you didn't address that. And it seems this isn't possible.

Incidentally:

I'm currently trying to lobby my local powers to use storage that avoids the additional network round trip incurred by anything that isn't integrated with both Cinder and libvirtd. Nexenta exports iSCSI volumes from the appliance itself rather than re-exporting them from the Cinder Volume host, so libvirtd can access the volumes directly rather than via an intermediate. Scality has code in libvirtd that talks to its API directly.
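
For illustration, using the Nexenta backend is essentially a driver
switch in cinder.conf. This is only a sketch with Grizzly-era option
names; the host, credentials and volume name are placeholders:

    # cinder.conf on the cinder-volume host (Grizzly-era option names;
    # appliance address, credentials and volume name are placeholders)
    [DEFAULT]
    volume_driver = cinder.volume.drivers.nexenta.volume.NexentaDriver
    nexenta_host = nexenta.example.com   # management address of the appliance
    nexenta_user = admin
    nexenta_password = secret
    nexenta_volume = cinder              # parent volume for zvols on the appliance

With that, the compute nodes log in to iSCSI targets exported by the
appliance itself, so the data path bypasses the cinder-volume host
entirely.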

Ceph would be nice, too, but it currently does not play well with RHEL6: there is no usable kernel RBD client there, so the volumes can only be consumed through libvirtd/qemu. Which may also be true for Scality, I'm still trying to find out.
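
To illustrate the libvirtd-only part: qemu reaches the Ceph cluster
through librbd, so the guest disk is defined as a network disk rather
than a local block device. A sketch; the pool, volume and monitor names
are placeholders, and cephx authentication is left out:

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw'/>
      <!-- qemu/librbd talks to the cluster directly; no kernel rbd
           module (which RHEL6 lacks) is involved -->
      <source protocol='rbd' name='volumes/volume-0001'>
        <host name='ceph-mon1' port='6789'/>
      </source>
      <target dev='vda' bus='virtio'/>
    </disk>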


Best regards
Lutz Christoph 

-- 

Lutz Christoph 

arago Institut für komplexes Datenmanagement AG 

Eschersheimer Landstraße 526 - 532 
60433 Frankfurt am Main 

Email: lchristoph at arago.de - www: http://www.arago.de
Tel: 0172/6301004
Mobile: 0172/6301004

 

-- 
Bank details: Frankfurter Sparkasse, sort code (BLZ): 500 502 01, account no.: 79343
Management board: Hans-Christian Boos, Martin Friedrich
Chairman of the supervisory board: Dr. Bernhard Walther
Registered office: Kronberg im Taunus - HRB 5731 - Register court: Königstein i.Ts
VAT ID: DE 178572359 - Tax number: 2603 003 228 43435
________________________________________
From: Giulio Fidente <gfidente at redhat.com>
Sent: Monday, 9 September 2013 10:56
To: Lutz Christoph
Cc: rhos-list at redhat.com
Subject: Re: [rhos-list] Parallel Cinder access?

On 09/03/2013 06:03 PM, Lutz Christoph wrote:
> Hi!
>
> I can't google up any good answer to the question of whether it is
> possible to have multiple Cinder Volume instances access the same
> underlying storage (plain old LVM is the especially interesting case).
>
> The idea is to run Cinder Volume on the compute nodes and eliminate one
> trip over the network for iSCSI storage. So the Cinder Volume instances
> need to see the same volumes, and some central Cinder service
> (scheduler? API?) has to know that each compute node has its own local
> Cinder Volume service. The next step would of course be to eliminate the
> iSCSI export/import of the volume as it can be accessed through /dev/mapper.
>
> Since I haven't found any reference to this kind of architecture, I
> presume that it isn't viable (yet?). But then I would very much
> appreciate a "No. Won't work." from this list to convince some people
> around here.

You can actually deploy multiple instances of the cinder-volume service
on different nodes, controlled by a single cinder-{api,scheduler}.

I wrote a small blog post[1] about how to set up such a topology. I
would avoid using the local compute disks for cinder-volume, though, as
that will consume your CPU for the disk I/O. You can attach some
external storage to each compute node instead (and in that case maybe
even use a specific Cinder driver rather than LVM).

1.
http://giuliofidente.com/2013/04/openstack-cinder-add-more-volume-nodes.html
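
As a minimal sketch of that topology (Grizzly-era option names; the
controller address, volume group and IP are placeholders), each volume
node only needs to share the database and message queue with the
controller while keeping its storage local:

    # /etc/cinder/cinder.conf on each volume node
    [DEFAULT]
    sql_connection = mysql://cinder:CINDER_PASS@controller/cinder
    rabbit_host = controller              # shared message queue
    volume_group = cinder-volumes         # LVM volume group local to this node
    iscsi_ip_address = 192.0.2.11         # this node's iSCSI target address

Each cinder-volume instance then registers as its own host (visible via
"cinder-manage host list"), and the single scheduler picks one of them
per request.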
--
Giulio Fidente
GPG KEY: 08D733BA | IRC: giulivo



