[rhos-list] RHOS and Ceph

Thomas Oulevey thomas.oulevey at cern.ch
Mon Apr 22 07:51:53 UTC 2013


Hi,

We evaluated GlusterFS for our OpenStack use case, as we did with
proprietary NAS, and we would like to do the same with Ceph.
I don't expect support from Red Hat, but I think it's wise to keep your
options open if you get a few customer requests.
I took a quick look at the qemu-kvm source, and with over 2850 patches
applied to the 0.12 sources I don't know how complex it would be to
backport rbd support.
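
A quick way to check whether a given qemu build already has the rbd
driver is to look at the formats qemu-img reports. Here is a minimal
sketch in Python, assuming qemu-img prints a "Supported formats:" line
in its help output (as the 0.12-1.x era builds do):

import subprocess

def qemu_supports_rbd(binary="qemu-img"):
    """Return True if the installed qemu-img build lists rbd support."""
    try:
        proc = subprocess.Popen([binary, "-h"],
                                stdout=subprocess.PIPE,
                                stderr=subprocess.STDOUT)
        out = proc.communicate()[0]
    except OSError:
        return False  # binary not installed at all
    for line in out.decode("utf-8", "replace").splitlines():
        # Old qemu-img builds end their help text with this line.
        if line.startswith("Supported formats:"):
            return "rbd" in line.split()
    return False

print("rbd support: %s" % qemu_supports_rbd())
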
After last week's OpenStack Summit we will probably get more
information on the future plans of all the vendors (Red Hat, Inktank).

Now, on RHS, one of the requirements that kills it for VM storage IMHO
(I/O aside, though I have high hopes for future versions and GlusterFS
3.4) is the requirement of RAID6 (+XFS). It means a lot of redundancy
(and high cost, not to mention hardware RAID card reliability) when you
want to run VMs on volumes with 3 replicas in a big cloud (think over
2000 hypervisors). Some translators are going in this direction, to get
a kind of networked RAID5, so let's see what will be integrated in the
next RHS.
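
To put numbers on that redundancy, here is a back-of-the-envelope
sketch in Python, where the 12-disk RAID6 group size and 2 TB disks are
just assumptions for illustration:

disks_per_group = 12   # disks in one hardware RAID6 group (assumption)
disk_tb = 2.0          # raw capacity per disk, in TB (assumption)
replicas = 3           # gluster replica count

raid6_efficiency = (disks_per_group - 2) / float(disks_per_group)
usable_fraction = raid6_efficiency / replicas

raw_tb = disks_per_group * disk_tb
print("usable fraction of raw disk: %.1f%%" % (100 * usable_fraction))
print("per RAID6 group: %.1f TB usable of %.0f TB raw"
      % (raw_tb * usable_fraction, raw_tb))

That comes out to roughly 28% of the raw capacity being usable, which
is the cost I am complaining about.
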

Btw, GlusterFS 3.4, when released, will have the same issue for block
storage testing: a newer version of qemu is needed (BZ 848070).
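
The native GlusterFS block driver landed upstream in qemu 1.3, while
RHEL 6 ships a patched qemu-kvm 0.12, which is what the BZ is about. A
minimal version gate, again just a sketch:

import re
import subprocess

MIN_QEMU = (1, 3)  # first upstream release with the gluster block driver

def qemu_version(binary="qemu-img"):
    """Parse the (major, minor) version out of qemu-img --version."""
    proc = subprocess.Popen([binary, "--version"], stdout=subprocess.PIPE)
    out = proc.communicate()[0]
    m = re.search(r"version (\d+)\.(\d+)", out.decode("utf-8", "replace"))
    return (int(m.group(1)), int(m.group(2))) if m else (0, 0)

print("gluster-capable qemu: %s" % (qemu_version() >= MIN_QEMU))
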

For the proprietary NAS/SAN solutions, keep in mind that nobody is
interested in vendor lock-in, especially at scale (cost, renewal of
contracts/providers, vendor-specific tools, etc.).

Finally, I completely understand that Red Hat's resources are not
unlimited, but with RHEL 7 coming it's a good opportunity to ask for
features. The more hardware/software support in the stock OS, the
happier we are :)

Thomas.

On 04/19/2013 11:46 PM, Steven Ellis wrote:
> Wow some great discussion.
>
> I'm with Paul. Let's look at some real SAN hardware for big I/O at the 
> moment. A lot of customers already have that for their existing VMware 
> / RHEV backends.
>
> Then RHS (Gluster) is a great fit for object and other lower I/O use 
> cases.
>
> After being at linux.conf.au back in January, I noticed a widespread 
> perception that Ceph is the default, or even required, for OpenStack, 
> and it can be quite a struggle to overcome that perception once it 
> takes hold.
>
> I'm open to other suggestions for positioning RHOS on different 
> storage backends.
>
> Steve
>
> On 04/20/2013 06:16 AM, Paul Robert Marino wrote:
>> Um hum
>> If you want high block-level I/O performance, why not use one of the 
>> many SAN or NAS drivers? Grizzly has quite a few of them, and honestly 
>> that's the only way you will get any real I/O performance.
>>
>>
>>
>> -- Sent from my HP Pre3
>>
>> ------------------------------------------------------------------------
>> On Apr 19, 2013 1:11 PM, Joey McDonald <joey at scare.org> wrote:
>>
>> Simply enabling support for it is not the same as supporting it. Ceph 
>> is already supported via the CephFS FUSE-based file system. I think 
>> the concepts are similar.
>>
>> Two things are needed: a kernel module for rbd, and Ceph hooks in KVM. 
>> Then let the Ceph community offer 'support'.
>>
>> Is this not what was done for Gluster before they were acquired? It 
>> is Linux after all... kumbaya.
>>
>>
>>
>> On Fri, Apr 19, 2013 at 10:36 AM, Pete Zaitcev <zaitcev at redhat.com> 
>> wrote:
>>
>>     On Fri, 19 Apr 2013 18:03:12 +1200
>>     Steven Ellis <sellis at redhat.com> wrote:
>>
>>     > One of their key questions is when (note when, not if) Red Hat 
>>     > will be shipping Ceph as part of their enterprise-supported 
>>     > OpenStack environment. From their perspective RHS isn't a 
>>     > suitable scalable backend for all their OpenStack use cases, in 
>>     > particular high-performance block I/O.
>>
>>     Okay, since you ask, here's my take, as an engineer.
>>
>>     Firstly, I would be interested in hearing more. If someone has 
>>     made up their mind in such terms, there's no dissuading them. But 
>>     if they have a rational basis for saying that "high performance 
>>     I/O block" in Gluster is somehow deficient, it would be very 
>>     interesting to learn the details.
>>
>>     My sense of this is that we're quite unlikely to offer support 
>>     for Ceph any time soon. First, nobody has so far presented a 
>>     credible case for it, as far as I know, and second, we don't have 
>>     the expertise.
>>
>>     I have seen cases like this before, where customers come to us 
>>     thinking they have all the answers and that we had better do as 
>>     we're told. This is difficult because on the one hand the customer 
>>     is always right, but on the other hand we always stand behind our 
>>     supported product. It happened with reiserfs and XFS: we refused 
>>     to support reiserfs, while we support XFS. The key difference is 
>>     that reiserfs was junk, and XFS is not.
>>
>>     That said, XFS took a very long time to establish -- years. We had 
>>     to hire Dave Chinner to take care of it. Even if the case for Ceph 
>>     gains traction, it takes time to establish the in-house expertise 
>>     that we can offer as a valuable service to customers. Until that 
>>     time, selling Ceph would be irresponsible.
>>
>>     The door is certainly open to it. Make a rational argument, be 
>>     patient, and see what comes out.
>>
>>     Note that a mere benchmark for "high performance I/O block" isn't 
>>     going to cut it. Reiser was beating our preferred solution, ext3, 
>>     but in the end we could not recommend a filesystem that ate 
>>     customer data, and we stuck with ext3 despite the lower 
>>     performance. I'm not saying Ceph is junk at all, but you need a 
>>     better argument against GlusterFS.
>>
>>     -- Pete
>>
>
>
> -- 
> Steven Ellis
> Solution Architect - Red Hat New Zealand <http://www.redhat.co.nz/>
> T: +64 9 927 8856
> M: +64 21 321 673
> E: sellis at redhat.com
>
>
