[rhelv6-list] Home-brew SAN for virtualization

Bryan J Smith b.j.smith at ieee.org
Tue Feb 25 22:58:30 UTC 2014


The oVirt managed KVM+libgfapi (Gluster API) combo is really the
"killer app" here.

You just set up your management node (oVirt) for the farm.

Then adding compute and/or storage is just a matter of adding another
KVM node (with the Gluster server/API and oVirt agents) to the farm.
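
To make that concrete, the storage half of "add another node" is
roughly the sketch below (Python driving the gluster CLI).  The
hostname, volume name and brick path are made up, and approving the
new host in the oVirt engine itself (vdsm agents plus approval) is a
separate step done from the engine UI or its Python SDK:

#!/usr/bin/env python
"""Rough sketch of growing the farm by one KVM + Gluster node.

Hostnames, brick paths and the volume name are placeholders;
adjust for your own layout.
"""
import subprocess

NEW_NODE = "kvm-node-04.example.com"   # hypothetical new host
VOLUME = "vmstore"                     # existing Gluster volume for VM images
BRICK = "%s:/bricks/vmstore/brick1" % NEW_NODE

def run(cmd):
    """Echo a command and fail loudly if it errors."""
    print("+ " + " ".join(cmd))
    subprocess.check_call(cmd)

# 1. Join the new node to the trusted storage pool (run on an existing peer).
run(["gluster", "peer", "probe", NEW_NODE])

# 2. Add its brick to the VM-image volume (here a plain distributed
#    volume, so no replica count change is needed).
run(["gluster", "volume", "add-brick", VOLUME, BRICK])

# 3. Spread existing data onto the new brick.
run(["gluster", "volume", "rebalance", VOLUME, "start"])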

-- bjs

--
Bryan J Smith - UCF '97 Engr - http://www.linkedin.com/in/bjsmith
-----------------------------------------------------------------
"In a way, Bortles is the personification of the UCF football
program.  Each has many of the elements that everyone claims to
want, and yet they are nobody's first choice.  Coming out of high
school, Bortles had the size and the arm to play at a more
prestigious program.  UCF likewise has the market size and the
talent base to play in a more prestigious conference than the
American Athletic.  But timing and circumstances conspired to put
both where they are now." -- Andy Staples, CNN-Sports Illustrated


On Tue, Feb 25, 2014 at 5:35 PM, R P Herrold <herrold at owlriver.com> wrote:
> On Mon, 24 Feb 2014, Chris Adams wrote:
>
>> I have taken over a set of Xen servers somebody else built, and am now
>> rebuilding the CentOS-based storage (Dell MD3000 SAS storage shelf
>> connected to a couple of CentOS servers), and could use some advice.
>
>> The Xen servers are just "plain" Xen, with no clustering, and right now
>> all the VM images are local to each server
>
>  ... snip ...
>
>> The storage was previously set up running NFS to share out all the
>> space, with some VMs running from raw file images over NFS.  That seems
>> somewhat inefficient to me, so I was considering setting up the rebuilt
>> storage with iSCSI for the VM storage, but then how do I manage it?  Do
>> people create a new LUN for each VM?  We have around 75 VMs right now.
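
A common alternative to one LUN per VM is a single large LUN (or a
few of them) with LVM on top and one logical volume per VM disk;
that is roughly the layout RHEV/oVirt uses for block storage
domains.  A very rough sketch, with the device name, VG name and
sizes invented purely for illustration:

#!/usr/bin/env python
"""Sketch: one big iSCSI LUN, LVM on top, one LV per VM disk.

/dev/sdb, the VG name and the VM list are placeholders.
"""
import subprocess

LUN = "/dev/sdb"          # the single large iSCSI LUN
VG = "vmstore"            # volume group carved into per-VM LVs
VMS = ["vm%02d" % n for n in range(1, 76)]   # ~75 VMs, per the thread

def run(cmd):
    print("+ " + " ".join(cmd))
    subprocess.check_call(cmd)

# Initialize the LUN once and build a volume group on it.
run(["pvcreate", LUN])
run(["vgcreate", VG, LUN])

# One logical volume per VM disk; the hypervisor then points at
# /dev/vmstore/<vm>-disk0 instead of a raw file over NFS.
for vm in VMS:
    run(["lvcreate", "-L", "20G", "-n", "%s-disk0" % vm, VG])

Note that with several hosts attached to the same VG you also need
clustered LVM (clvmd) or some other locking scheme so concurrent
metadata changes stay safe.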
>
> We run a mixture of both local-FS and NFS-mediated VMs:
> several hundred live VMs at any point in time, with heavy
> use of NFS and LVM2 in the storage fabric.  We went to a
> custom build of XFS support on the NFS servers to get the
> reliability we needed.
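
Purely to illustrate the NFS side of such a fabric: each host can
mount the export as a libvirt "netfs" storage pool, roughly as in
the sketch below.  The server name and paths are placeholders, not
anything from the setup described above:

#!/usr/bin/env python
"""Sketch: define an NFS-backed libvirt storage pool on a host."""
import libvirt

POOL_XML = """
<pool type='netfs'>
  <name>nfs-vmstore</name>
  <source>
    <host name='nfs01.example.com'/>
    <dir path='/export/vmstore'/>
    <format type='nfs'/>
  </source>
  <target>
    <path>/var/lib/libvirt/images/nfs-vmstore</path>
  </target>
</pool>
"""

conn = libvirt.open("qemu:///system")   # or the Xen URI on the Xen hosts
pool = conn.storagePoolDefineXML(POOL_XML, 0)
pool.setAutostart(1)   # mount it when libvirtd starts
pool.create()          # mount it now
conn.close()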
>
> The 'rap' [but not backed up by publishable testing] is that
> iSCSI has inadequate throughput UNLESS one is using
> InfiniBand, a fiber link, or possibly 10 GbE links.  Rough
> testing with those three options has indicated that kernels
> built from the Red Hat SRPMs need additional tuning to get
> adequate throughput for usable performance and to keep
> trellised links usable.  Link dropout is a real 'bear' of an
> issue under non-static networking / udev 'nailed up'
> configurations.
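
The specific tuning isn't published here, but the usual knobs are
things like jumbo frames, larger socket buffers and deeper
open-iscsi queues.  The sketch below only illustrates that kind of
change; the interface name, target IQN and values are assumptions,
not measured numbers from the testing mentioned above:

#!/usr/bin/env python
"""Sketch: generic iSCSI / 10 GbE tuning knobs (illustrative values only)."""
import subprocess

IFACE = "eth2"                                 # hypothetical storage NIC
TARGET = "iqn.2014-02.com.example:vmstore"     # hypothetical target IQN

def run(cmd):
    print("+ " + " ".join(cmd))
    subprocess.check_call(cmd)

# Jumbo frames on the storage network (switch and target must match).
run(["ip", "link", "set", IFACE, "mtu", "9000"])

# Larger socket buffers for sustained 10 GbE throughput.
for key, val in [("net.core.rmem_max", "16777216"),
                 ("net.core.wmem_max", "16777216")]:
    run(["sysctl", "-w", "%s=%s" % (key, val)])

# Deeper per-session queues in open-iscsi for this target.
for param, val in [("node.session.cmds_max", "1024"),
                   ("node.session.queue_depth", "128")]:
    run(["iscsiadm", "-m", "node", "-T", TARGET,
         "-o", "update", "-n", param, "-v", val])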
>
> I am working with an HPC client on a project at the moment
> to put 'real numbers' on the 'rap' assertion in the prior
> paragraph.
>
> With best regards,
>
> -- Russ herrold
>
> _______________________________________________
> rhelv6-list mailing list
> rhelv6-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rhelv6-list



