[rhelv6-list] Home-brew SAN for virtualization

R P Herrold herrold at owlriver.com
Tue Feb 25 22:35:43 UTC 2014


On Mon, 24 Feb 2014, Chris Adams wrote:

> I have taken over a set of Xen servers somebody else built, and am now
> rebuilding the CentOS-based storage (Dell MD3000 SAS storage shelf
> connected to a couple of CentOS servers), and could use some advice.

> The Xen servers are just "plain" Xen, with no clustering, and right now
> all the VM images are local to each server 

 ... snip ... 

> The storage was previously set up running NFS to share out all the
> space, with some VMs running from raw file images over NFS.  That seems
> somewhat inefficient to me, so I was considering setting up the rebuilt
> storage with iSCSI for the VM storage, but then how do I manage it?  Do
> people create a new LUN for each VM?  We have around 75 VMs right now.

We run a mixture of both local-FS and NFS-mediated VMs:
several hundred live VMs at any point in time, with heavy
use of NFS and LVM2 in the storage fabric.  We went to a
custom build of XFS support on the NFS servers to get the
reliability we needed.
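A minimal sketch of that kind of LVM2-plus-NFS arrangement
(not from the original post; the volume group name
vg_storage, the export path /srv/vm, and the subnet are all
hypothetical) might look like:

```shell
# Carve one logical volume per VM out of a shared volume group,
# put an XFS filesystem on it, and export the mount point over
# NFS to the Xen hosts.
lvcreate --size 20G --name vm-web01 vg_storage
mkfs.xfs /dev/vg_storage/vm-web01
mkdir -p /srv/vm/vm-web01
mount /dev/vg_storage/vm-web01 /srv/vm/vm-web01

# /etc/exports entry: restrict to the Xen host subnet; 'sync'
# trades some throughput for crash safety, and no_root_squash
# lets root on the Xen hosts own the image files.
echo '/srv/vm/vm-web01 192.168.10.0/24(rw,sync,no_root_squash)' >> /etc/exports
exportfs -ra
```

One LV per VM keeps snapshots and resizes independent per
guest, at the cost of more mounts and exports to manage.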

The 'rap' [but not backed up by publishable testing] is that
iSCSI has inadequate throughput UNLESS one is using
InfiniBand, a fiber link, or possibly 10GbE links.  Rough
testing with those three options has indicated that kernels
built from the Red Hat SRPMs need additional tuning to get
usable throughput and to keep trellised links usable.  Link
dropout is a real 'bear' of an issue under non-static
networking / udev 'nailed up' configurations.
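The sort of additional tuning alluded to above is often
buffer and timeout work; this is an illustrative config
fragment, not figures from this thread, and the specific
values are examples only:

```shell
# Larger TCP buffers so a single iSCSI session can come closer
# to filling a 10GbE pipe; values here are illustrative.
cat >> /etc/sysctl.conf <<'EOF'
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
EOF
sysctl -p

# On the initiator side, a longer replacement timeout in
# /etc/iscsi/iscsid.conf helps ride out brief link dropouts
# rather than failing I/O immediately:
#   node.session.timeo.replacement_timeout = 120
```

Whether such settings are sufficient is exactly the sort of
question the 'real numbers' testing below would answer.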

I am working with an HPC client on a project at the moment
to put 'real numbers' on the 'rap' assertion of the prior
paragraph.

With best regards,
 
-- Russ herrold



