a couple questions from a cluster newbie

Sam Folk-Williams samfw at redhat.com
Fri Mar 27 12:40:34 UTC 2009


Just to clarify this - "Red Hat's strategic direction for the future 
development of its virtualization product portfolio is based on KVM, 
making Red Hat the only virtualization vendor leveraging technology that 
is developed as part of the Linux operating system. Existing Xen-based 
deployments will continue to be supported for the full lifetime of Red 
Hat Enterprise Linux 5, and Red Hat will provide a variety of tools and 
services to enable customers to migrate from their Red Hat Enterprise 
Linux 5 Xen deployment to KVM."

Details here:
http://www.redhat.com/virtualization-strategy/

-Sam

Colin van Niekerk wrote:
> 
> Hi there,
> 
> As far as the VM goes... I'd use KVM, mainly because RH is replacing Xen 
> with KVM at some point in the future (last time I checked, it was going 
> to be during the first half of 2009) :)
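> 
> If you do go the KVM route, a quick sanity check on the hardware is 
> something along these lines (just a sketch, run on the candidate host):
> 
>     # the CPU must expose hardware virtualization (Intel VT-x or AMD-V)
>     egrep -c '(vmx|svm)' /proc/cpuinfo
> 
>     # once the kvm packages are installed, the modules should show up
>     lsmod | grep kvm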
> 
> I will give the question of enabling users to launch processes on the 
> remote servers that house the data being processed a little more thought, 
> and get back to you with some info as soon as possible.
> 
> Regards,
> Colin
> 
> ________________________________________
> From: redhat-sysadmin-list-bounces at redhat.com 
> [redhat-sysadmin-list-bounces at redhat.com] On Behalf Of Laurent 
> Wandrebeck [lw at hygeos.com]
> Sent: 26 March 2009 12:49 PM
> To: redhat-sysadmin-list at redhat.com
> Subject: Re: a couple questions from a cluster newbie
> 
> On Monday, 23 March 2009 at 07:47, Colin van Niekerk wrote:
>>  Hi there,
> Hi Colin,
>>
>>  Apologies if anyone has answered this already and I have missed it. This
>>  post has been out for a while now.
> You're the first, kudos :)
>>
>>  I would configure three VMs on the failover box and add the ability to
>>  have each server fail over separately. This would involve having three
>>  load-balanced clusters, as in the attached (again, best viewed in a
>>  fixed-width font).
> Thanks for your ASCII art. Which VM technology would you advise? Xen, as 
> it is officially supported on RHEL, or KVM? Something else, maybe?
>>
>>  To replicate data between the virtual server and the physical server
>>  within each cluster I would use DRBD (RAID 1 at the network level); you
>>  can configure it so that the kernel only confirms a write once the data
>>  is committed to disk on both sides. This will present the system with a
>>  new block device, and data must only be read and written via this device.
>>  As long as your system is 'strong' enough and the link between the
>>  servers is fast enough (this depends on the rate of change of the data -
>>  how much data would need to be written to the block device on the other
>>  end of the network) it will be just like reading and writing to any other
>>  block device.
> Our network is gigabit, and the machines will be in the same rack, one hop
> away, so I guess synchronous replication will do the trick.
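> 
> For reference, I'm picturing something like the following in drbd.conf 
> (protocol C is the synchronous mode; hostnames, devices and addresses are 
> just placeholders):
> 
>     resource r0 {
>         protocol C;    # write acknowledged only once both disks have it
>         on node1.example.com {
>             device    /dev/drbd0;
>             disk      /dev/sdb1;
>             address   10.0.0.1:7788;
>             meta-disk internal;
>         }
>         on node2.example.com {
>             device    /dev/drbd0;
>             disk      /dev/sdb1;
>             address   10.0.0.2:7788;
>             meta-disk internal;
>         }
>     }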
>>
>>  For the backend you could use Conga with luci and ricci to manage the
>>  cluster (thinking about ways to avoid pain going forward) but I have not
>>  done this in a production environment so I'm not sure about the details.
> OK, I'll set up a couple of VMs soon to check the details.
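> 
> If I read the docs correctly, the setup is roughly as follows on RHEL 5 
> (to be verified when I test):
> 
>     # on the management station
>     yum install luci
>     luci_admin init          # set the luci admin password
>     service luci restart     # web UI then listens on https://<host>:8084
> 
>     # on each cluster node
>     yum install ricci
>     service ricci start
>     chkconfig ricci on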
>>
>>  I'm afraid I have worked very little with GFS as well, so I can't answer
>>  you on that side of things. Maybe GNBD would be better for the
>>  load-balanced server replication as well, but as far as I know the main
>>  reason you would use GNBD is that it exports the storage to many clients
>>  and handles locking between them better, which wouldn't help in the
>>  pg/ds/ap clusters. Can anyone confirm?
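>>
>>  For what it's worth, my understanding of the basic GNBD flow (untested on
>>  my side, so treat it as a rough sketch) is:
>>
>>      # on the server exporting the block device
>>      gnbd_serv
>>      gnbd_export -v -e shared_data -d /dev/sdb1
>>
>>      # on each client (the device then appears under /dev/gnbd/)
>>      modprobe gnbd
>>      gnbd_import -v -i storage-server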
>>
>>  Just so I'm clear on the backend side. It sounds like there is a level of
>>  interaction between users and the actual data on the backend servers. Do
>>  the users query a process on the storage/processing servers and then that
>>  process works on the data and gives the user a result? Or do the users
>>  interact with the data directly?
> Users interact directly with the data. The classic (and simplified) scheme
> is (shell script pseudo-code):
> 
>     for i in $files_to_be_processed; do
>         processing_program "$i" "$output_dir/$output_result"
>     done
> 
> Thx for helping,
> Regards,
> --
> Laurent Wandrebeck
> IT Manager / Directeur des systemes d'informations
> HYGEOS, Earth Observation Department / Observation de la Terre
> Euratechnologies
> 165 Avenue de Bretagne
> 59000 Lille, France
> tel: +33 3 20 08 24 98
> http://www.hygeos.com
> 
> Colin van Niekerk
> 
> Technical
> Mimecast South Africa
> 
> Phone 0861 114 063 • Mobile +2782 557 9081 • Fax 086 522 6377


-- 
Sam Folk-Williams
Knowledge Program Manager
Red Hat, Inc
(919) 754-4558



