[Linux-cluster] GFS2 on RHEV managed guests

Steven Whitehouse swhiteho at redhat.com
Tue Apr 30 13:30:21 UTC 2013


Hi,

On Tue, 2013-04-30 at 08:44 -0400, rhurst at bidmc.harvard.edu wrote:
> A couple of years ago, I staged a test environment using RHEL 5u1 with
> a few KVM guests that were provisioned with a direct LUN for use with
> Cluster Suite and resilient storage (GFS2).  For whatever reason (on
> reflection, I may have overlooked the hypervisor’s virtio default
> setting for cache), the GFS2 filesystem would eventually “break” and
> leave it fencing guests.  We had always run our clusters
> (dev-test-prod) on physical hosts before and since, so cluster
> configuration and operational understanding is not any issue.
> 
>  
> 
> We now have RHEV-M in place to begin a whole new provisioning process
> on newer RHEL 6 hypervisors with RHEL 6 guests.  My question (or fear)
> before embarking into this space is how resilient is resilient storage
> (GFS2) on KVM guests now?  Are there any pitfalls to avoid out there?
> 
The issue is less likely to be related to KVM and more likely to be
related to the workload that you intend to run within the guests.
Provided you are able to use a supported fencing method, there should
be no real difference from running on bare metal in terms of what you
can expect from GFS2. The requirements are still the same: you'll need
a shared block device that can be accessed symmetrically from all
nodes, virtual or otherwise.
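
On the cache point raised above: with a direct LUN shared between
guests, the hypervisor's page cache must be bypassed so that every
node sees writes immediately, otherwise GFS2 will eventually see
stale or lost data and start fencing. A minimal sketch of the
libvirt disk stanza this corresponds to (RHEV-M configures this
through its own UI; the device path /dev/mapper/sharedlun and
target vdb here are hypothetical examples):

```xml
<disk type='block' device='disk'>
  <!-- cache='none' bypasses the host page cache; required for a
       block device shared between cluster nodes -->
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <!-- hypothetical multipath path for the shared direct LUN -->
  <source dev='/dev/mapper/sharedlun'/>
  <target dev='vdb' bus='virtio'/>
  <!-- mark the disk shareable so it can be attached to
       multiple guests concurrently -->
  <shareable/>
</disk>
```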

Steve.

>  
> 
>  
> 
> Robert Hurst, Caché Systems Manager
> Beth Israel Deaconess Medical Center
> 1135 Tremont Street, REN-7
> Boston, Massachusetts   02120-2140
> 617-754-8754 ∙ Fax: 617-754-8730 ∙ Cell: 401-787-3154
> Any technology distinguishable from magic is insufficiently advanced.
> 
>  
> 
> 
> -- 
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster




