From jt at camalyn.org Thu Mar 12 12:41:08 2009
From: jt at camalyn.org (jt at camalyn.org)
Date: Thu, 12 Mar 2009 12:41:08 +0000
Subject: JOB: *nix (pref CentOS or RH) Sysadmin with "good" MySQL database administration skills (Reading, UK)
Message-ID: <1236861668.3582.15.camel@linux-qtk6.site>

JOB:

Hi List Members ~

I am an open source recruiter (and experienced Linux user) working with an established "international" client in Reading (Berkshire, UK) that is looking to recruit a *nix systems administrator (preferably with CentOS or Red Hat experience) who has good MySQL database administration skills. Other database skills (e.g. Oracle) are not a substitute, as the focus is on someone who can improve the client's existing MySQL-related systems and potentially move from a sysadmin/DBA mindset towards that of a database architect. It would be beneficial if you have had experience of large-scale deployments (although not essential). The employer runs a mixture of MySQL v4.1 and v5.0. It's possible that as part of the job you will be tasked with finding new opportunities to exploit new features or make better use of existing ones - however, right now there are no immediate plans to upgrade to MySQL v5.1. In terms of the number of high-transactional servers, we are looking at high 30s. They do use MySQL replication but not clustering at this time. I would expect this role to pay between £40k-£55k (at least!).
Please contact me off list if you would like to discuss further, using james at camalyn.org

All the best, JAMES

>> to learn more about Camalyn please visit http://www.camalyn.org

From raaquini at gmail.com Fri Mar 13 03:25:41 2009
From: raaquini at gmail.com (Rafael Azenha Aquini)
Date: Fri, 13 Mar 2009 00:25:41 -0300
Subject: JOB: *nix (pref CentOS or RH) Sysadmin with "good" MySQL database administration skills (Reading, UK)
In-Reply-To: <1236861668.3582.15.camel@linux-qtk6.site>
References: <1236861668.3582.15.camel@linux-qtk6.site>
Message-ID: <1236914741.3147.95.camel@latitude.tchesoft.com>

Hello James,

Although I have some knowledge of MySQL administration, I really perform better at operating system administration, performance analysis and troubleshooting. Taking the Dreyfus Model as an example, I'd rank myself at the Proficient level in those listed skills.

I've been working professionally with Linux system administration since 2002, when I was on duty in the Brazilian Army. Nowadays I'm working at a Brazilian credit union called SICREDI, as an IT Analyst.

Despite the fact that I am working and living in Brazil at this moment, I'm open to change and to taking on new challenges in my career.

Forgive me if I bothered you or wasted your time with this mail.

Best regards.

--
Rafael Azenha Aquini

From robinprice at gmail.com Fri Mar 13 13:24:36 2009
From: robinprice at gmail.com (Robin Price II)
Date: Fri, 13 Mar 2009 09:24:36 -0400
Subject: JOB: *nix (pref CentOS or RH) Sysadmin with "good" MySQL database administration skills (Reading, UK)
In-Reply-To: <1236914741.3147.95.camel@latitude.tchesoft.com>
References: <1236861668.3582.15.camel@linux-qtk6.site> <1236914741.3147.95.camel@latitude.tchesoft.com>
Message-ID: 

This list is not for job offerings, last time I checked.
--
Robin

On Thu, Mar 12, 2009 at 11:25 PM, Rafael Azenha Aquini wrote:
> Hello James,
>
> Although I have some sort of knowledge in MySQL administration, I really
> perform better at operating system administration, performance analysis
> and troubleshooting. Taking the Dreyfus Model as example, I'd rank
> myself at Proficient level on those listed skills.
>
> I've been working professionally with Linux system administration since
> 2002, when I was on duty at Brazilian Army. Nowadays I'm working at a
> Brazilian Credit Union called SICREDI, as an IT Analyst.
>
> Despite the fact I am working and living in Brazil at this moment, I'm
> open to changes and also to take new challenges in my career.
>
> Forgive me if I bothered you or wasted your time with this mail, please.
>
> Best regards.
>
> --
> Rafael Azenha Aquini
>
> --
> redhat-sysadmin-list mailing list
> redhat-sysadmin-list at redhat.com
> https://www.redhat.com/mailman/listinfo/redhat-sysadmin-list

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From james at camalyn.org Fri Mar 13 13:41:22 2009
From: james at camalyn.org (james)
Date: Fri, 13 Mar 2009 13:41:22 +0000
Subject: JOB: *nix (pref CentOS or RH) Sysadmin with "good" MySQL database administration skills (Reading, UK)
In-Reply-To: 
References: <1236861668.3582.15.camel@linux-qtk6.site> <1236914741.3147.95.camel@latitude.tchesoft.com>
Message-ID: <1236951682.3660.54.camel@linux-qtk6.site>

On Fri, 2009-03-13 at 09:24 -0400, Robin Price II wrote:
> This list is not for job offerings last time I checked.

Robin ~ I "think" I obtained permission from the list admin before posting... / James

From lw at hygeos.com Fri Mar 20 16:12:56 2009
From: lw at hygeos.com (Laurent Wandrebeck)
Date: Fri, 20 Mar 2009 17:12:56 +0100
Subject: a couple questions from a cluster newbie
Message-ID: <200903201712.56309.lw@hygeos.com>

Hi list,

our server park is going to gain three new boxes, pushing storage size to 70TB.
I think it's time to get rid of NFS /net automounts, and to go for some kind of cluster.

Long story short: each typical server has local storage (1 to 8TB, up to 15 soon): SATA discs connected to a 3ware card, using hardware RAID 10 or 5. Each of these machines is aimed at processing data from a given satellite. There are also one pgsql server, one apache server, and one NIS/home (via NFS) server, each with a 3ware card and its discs. Btw, the NIS/NFS server is soon to be turned into a directory server. Gbps network, non-administrable switches, /24 network class.

Now, I'd like to transform that mess into:
1) one GFS volume for sat1...N data, so that, if needed, you can process whatever you want from whatever machine.
2) a failover machine that could automagically take the load for pg, apache and NFS/NIS (the soon-to-be directory server) if the dedicated box fails. That means efficient replication, so data are identical on the original pg/apache/etc machines and the failover one.
3) some kind of load balancing on sat1...N, that would put processes on a box where the processed data are local, without the user having to decide where to launch processes. Resulting data from processes would have to be written on the local storage of the box, so that sat1 data and sat1 processed data stay on the same physical volume. That way, if a box really badly crashes, we know which data were lost (we can't afford to backup 70TB).

Now, questions (thx for arriving down there :) ):
1) what I've read in the doc is that I should use GNBD. Am I on the right track? It's unclear to me if it is safe to use a machine both for serving and processing data.
2) failover should be possible, if I understood the doc correctly. Where I'm a bit stuck is the replication part. WAL shipping should do the trick for pg. The directory server has some kind of failover mechanism, afaik. About apache, I'm a bit in the dark. Could someone enlighten me?
3) is such a thing possible with cluster suite? at all?
Would there be any better way to solve the problem of the boxes configuration so our DC can continue to grow without becoming a nightmare for me and users ? 4) right now, user homes follow them to whatever box they log on. should /home be another gfs volume so that every server (potentially hidden by load balancing if i understood correctly) can continue to access these data (processing codes are often on /home). Any other solution ? You'll find attached some kind of ascii art trying to describe what i'd like to get :) (open it with fixed size font) Thanks a lot for helping. Best Regards, -- Laurent -------------- next part -------------- _____ |S1|----|G |----|U1|--------| |F | | |S2|----|S |----|U2|--------| | | | |S3|----|V |----|U3|--------| |O | | |S4|----|L |----|U4|--------| |U | | . |M | . | . |E | . | . | | . | |Sn|----| |----|Un|--------| |--home GFS volume accessible by every box ? | | | | | |---------------|Ds|--| | | | | |----|Pg|--|----------| | | | | |----|Ap|--| | | | | |----|Fo|--| |___| |Sx|: boxes with dedicated storage for satellite images processing. |Ux|: user boxes. |Ds|: Directory server (serves /home to user machines) |Pg|: PostgreSQL server |Ap|: Apache server |Fo|: Failover server (can take Pg, Ds, Ap load) From nitin.gizare at wipro.com Sat Mar 21 04:39:31 2009 From: nitin.gizare at wipro.com (nitin.gizare at wipro.com) Date: Sat, 21 Mar 2009 10:09:31 +0530 Subject: (no subject) Message-ID: help Please log your queries/concerns/requests through http://edasupport.wipro.com or send mail to eda.support at wipro.com for quicker resolutions. Rgds Nitin -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From nitin.gizare at wipro.com Sat Mar 21 16:31:07 2009
From: nitin.gizare at wipro.com (nitin.gizare at wipro.com)
Date: Sat, 21 Mar 2009 22:01:07 +0530
Subject: Hello Unable to post queries
Message-ID: 

HI

I am unable to post queries, pls check.

Rgds
Nitin

From nitin.gizare at wipro.com Sat Mar 21 16:33:09 2009
From: nitin.gizare at wipro.com (nitin.gizare at wipro.com)
Date: Sat, 21 Mar 2009 22:03:09 +0530
Subject: Configuring multiple Nics
Message-ID: 

HI

Pls let me know how to configure multiple N/W cards in Red Hat. I have eth0 configured during OS installation.

Rgds
Nitin

From lists at brimer.org Sat Mar 21 16:45:07 2009
From: lists at brimer.org (Barry Brimer)
Date: Sat, 21 Mar 2009 11:45:07 -0500 (CDT)
Subject: Configuring multiple Nics
In-Reply-To: 
References: 
Message-ID: 

On Sat, 21 Mar 2009 nitin.gizare at wipro.com wrote:
> HI
>
> Pls let me know how to configure multiple N/W card in red hat.
> I have eth0 configured during OS installation,

system-config-network-gui or system-config-network-tui

From nitin.gizare at wipro.com Sat Mar 21 17:57:50 2009
From: nitin.gizare at wipro.com (nitin.gizare at wipro.com)
Date: Sat, 21 Mar 2009 23:27:50 +0530
Subject: Configuring multiple Nics
In-Reply-To: 
References: 
Message-ID: 

Thanks

Rgds
Nitin

-----Original Message-----
From: redhat-sysadmin-list-bounces at redhat.com [mailto:redhat-sysadmin-list-bounces at redhat.com] On Behalf Of Barry Brimer
Sent: Saturday, March 21, 2009 10:15 PM
To: redhat-sysadmin-list at redhat.com
Subject: Re: Configuring multiple Nics

On Sat, 21 Mar 2009 nitin.gizare at wipro.com wrote:
> HI
>
> Pls let me know how to configure multiple N/W card in red hat.
> I have eth0 configured during OS installation,

system-config-network-gui or system-config-network-tui

--
redhat-sysadmin-list mailing list
redhat-sysadmin-list at redhat.com
https://www.redhat.com/mailman/listinfo/redhat-sysadmin-list

From Colin.vanNiekerk at mimecast.co.za Mon Mar 23 06:47:37 2009
From: Colin.vanNiekerk at mimecast.co.za (Colin van Niekerk)
Date: Mon, 23 Mar 2009 08:47:37 +0200
Subject: a couple questions from a cluster newbie
Message-ID: 

Hi there,

Apologies if anyone has answered this already and I have missed it. This post has been out for a while now.

I would configure three VMs on the failover box and add the ability to have each server fail over separately. This would involve having three load-balanced clusters as in the attached (again, fixed-size fonts).

To replicate data between the virtual server and the physical server within each cluster I would use DRBD (RAID 1 at the network level); you can configure this so that the kernel confirms the write only once the data is committed to disk on both sides. This will present the system with a new block device, and data must only be read and written via this device. As long as your system is 'strong' enough and the link between the servers is fast enough (this would depend on the amount of change to the data - how much data would need to be written to the block device on the other end of the network), it will be just like reading and writing to any other block device.

For the backend you could use Conga with luci and ricci to manage the cluster (thinking about ways to avoid pain going forward), but I have not done this in a production environment so I'm not sure about the details.

I'm afraid I have worked very little with GFS as well, so I can't answer you on that side of things.
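The behaviour Colin describes - the kernel confirms a write only once it is on disk on both sides - corresponds to DRBD's synchronous mode, protocol C. A minimal drbd.conf-style sketch of one such resource; the resource name, hostnames, backing devices and IP addresses below are made-up placeholders, not anything from the thread:

```
# /etc/drbd.conf -- sketch only; node names, disks and IPs are hypothetical
resource r0 {
  protocol C;                  # synchronous: a write completes only when it
                               # is on stable storage on BOTH nodes
  on pg-box {
    device    /dev/drbd0;
    disk      /dev/sdb1;       # backing partition on the physical server
    address   192.168.1.10:7788;
    meta-disk internal;
  }
  on failover-vm {
    device    /dev/drbd0;
    disk      /dev/vdb1;       # backing partition inside the failover VM
    address   192.168.1.20:7788;
    meta-disk internal;
  }
}
```

The services (PostgreSQL data directory, Apache document root, exported homes) would then read and write /dev/drbd0 on whichever node is primary, never the backing disk directly, exactly as described above.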
Maybe the GNBD would be better for the load balanced server replication as well, but as far as I know the main reason you would use GNBD is that it exports the file system to many users and manages locking better between the users which wouldn't help in the pg/ds/ap clusters. Can anyone confirm? Just so I'm clear on the backend side. It sounds like there is a level of interaction between users and the actual data on the backend servers. Do the users query a process on the storage/processing servers and then that process works on the data and gives the user a result? Or do the users interact with the data directly? Regards, Colin van Niekerk RHCE: 805008755334920 ________________________________________ From: redhat-sysadmin-list-bounces at redhat.com [redhat-sysadmin-list-bounces at redhat.com] On Behalf Of Laurent Wandrebeck [lw at hygeos.com] Sent: 20 March 2009 06:12 PM To: redhat-sysadmin-list at redhat.com Subject: a couple questions from a cluster newbie Hi list, our park is going to gain three new boxes, pushing storage size to 70TB. I think it's time to get rid of nfs /net automounts, and to go for some kind of a cluster. long story short: each typical server has a local storage (1 to 8TB, up to 15 soon), that are sata discs connected to a 3ware card, using hard raid 10 or 5. each of these machines is aimed at processing data from a given satellite. there are also one pgsql server, one apache server, one nis/home (via nfs) server each with a 3ware and its discs. brw, the nis/nfs server is soon to be turned into a directory server. gbps network, non administrable switches. /24 network class. now, I'd like to transform that mess into: 1) have one GFS volume for sat1...N data. So that, if needed, you can process whatever you want from whatever machine. 2) have a failover machine that could automagically take load for pg, apache and nfs/nis (the soon to be directory server) if the dedicated box fails. 
that means an efficient replication so data are identical on original pg/apache/etc machines and the failover one. 3) have some kind of load balancing on sat1...N, that would put processes on a box where processed data are local, without having the user to decide where to launch processes. resulting data from processes would have to be written on the local storage of the box. So that sat1 data and sat1 processed data stay on the same physical volume. That way, if a box really badly crashes, we know which data were lost (we can't afford to backup 70TB). now, questions (thx for arriving down there:) : 1) what i've read in doc is i should use gndb. am i on the right track ? It's unclear to me if it is safe to use a machine both for serving and processing data. 2) failover should be possible if i understood correctly doc. where i'm a bit stuck is the replication part part. wal shipping should do the trick for pg. directory server has some kind of failover mechanism afaik. about apache, i'm a bit in the dark. could someone enlighten me ? 3) is such a thing possible with cluster suite ? at all ? Would there be any better way to solve the problem of the boxes configuration so our DC can continue to grow without becoming a nightmare for me and users ? 4) right now, user homes follow them to whatever box they log on. should /home be another gfs volume so that every server (potentially hidden by load balancing if i understood correctly) can continue to access these data (processing codes are often on /home). Any other solution ? You'll find attached some kind of ascii art trying to describe what i'd like to get :) (open it with fixed size font) Thanks a lot for helping. Best Regards, -- Laurent -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... 
Name: fo.txt
URL:

From nitin.gizare at wipro.com Mon Mar 23 16:43:45 2009
From: nitin.gizare at wipro.com (nitin.gizare at wipro.com)
Date: Mon, 23 Mar 2009 22:13:45 +0530
Subject: rrd tool
Message-ID: 

Hello

I am interested in installing rrdtool on RHEL 4.0. Does anyone have the steps to install it?

Also, how can we install yum in RHEL 4.0?

Rgds
Nitin

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From sbathe at gmail.com Tue Mar 24 05:00:03 2009
From: sbathe at gmail.com (Saurabh Bathe)
Date: Tue, 24 Mar 2009 10:30:03 +0530
Subject: rrd tool
In-Reply-To: 
References: 
Message-ID: <49C868D3.1060509@gmail.com>

nitin.gizare at wipro.com wrote:
> Hello
>
> I am interested in installing the rrdtool in RHEL 4.0.
>
> Does any one has steps to install.

Dag's repository should have it.
http://dag.wieers.com/home-made/
http://dag.wieers.com/rpm/FAQ.php

> Also how can we install yum in rhel 4.0.?

You don't need to. You can configure yum repositories in up2date, so that up2date can download and install RPMs from those repositories.

--Saurabh

From lw at hygeos.com Thu Mar 26 10:49:29 2009
From: lw at hygeos.com (Laurent Wandrebeck)
Date: Thu, 26 Mar 2009 11:49:29 +0100
Subject: a couple questions from a cluster newbie
In-Reply-To: 
References: 
Message-ID: <200903261149.29168.lw@hygeos.com>

On Monday 23 March 2009 07:47, Colin van Niekerk wrote:
> Hi there,
Hi Colin,
>
> Apologies is anyone has answered this already and I have missed it. This
> post has been out for a while now.
You're the first, kudos :)
>
> I would configure three VM's on the Failover box and add the ability to
> have each server failover separately. This would involve having three load
> balanced clusters as in the attached, again fixed sized fonts.
Thanks for your ascii art. Which VM would you advise ?
> > To replicate data between the virtual server and the physical server within > each cluster I would use DRBD (RAID1 on a network level), you can configure > this so that only once the data is committed to disk on both sides does the > kernel confirm the write. This will present the system with a new block > device and data must only be read and written via this device. As long as > your system is 'strong' enough and the link between the servers is fast > enough (this would depend on the amount of changed to the data - how much > data would need to be written to the block device on the other end of the > network) it will be just like reading and writing to any other block > device. Our network is gbps, and machines will be in the same rack, one hop away. So I guess synchronous replication will do the trick. > > For the backend you could use Conga with luci and ricci to manage the > cluster (thinking about ways to avoid pain going forward) but I have not > done this in a production environment so I'm not sure about the details. OK, I'll set up a couple VM soon to check the details. > > I'm afriad I have worked very little GFS as well so I can't answer you on > that side of things. Maybe the GNBD would be better for the load balanced > server replication as well, but as far as I know the main reason you would > use GNBD is that it exports the file system to many users and manages > locking better between the users which wouldn't help in the pg/ds/ap > clusters. Can anyone confirm? > > Just so I'm clear on the backend side. It sounds like there is a level of > interaction between users and the actual data on the backend servers. Do > the users query a process on the storage/processing servers and then that > process works on the data and gives the user a result? Or do the users > interact with the data directly? Users interact directly with data. 
classic (and simplified) scheme is: (shell script pseudo code)

for i in files_to_be_processed do
    processing_program $i $output_dir/$output_result
done

Thx for helping,
Regards,
--
Laurent Wandrebeck
IT Manager / Directeur des systemes d'informations
HYGEOS, Earth Observation Department / Observation de la Terre
Euratechnologies
165 Avenue de Bretagne
59000 Lille, France
tel: +33 3 20 08 24 98
http://www.hygeos.com

From Colin.vanNiekerk at mimecast.co.za Thu Mar 26 20:10:01 2009
From: Colin.vanNiekerk at mimecast.co.za (Colin van Niekerk)
Date: Thu, 26 Mar 2009 22:10:01 +0200
Subject: a couple questions from a cluster newbie
In-Reply-To: <200903261149.29168.lw@hygeos.com>
References: , <200903261149.29168.lw@hygeos.com>
Message-ID: 

Hi there,

As far as the VM goes... I'd use KVM, mainly because RH is replacing Xen with KVM at some point in the future (last time I checked, it was going to be during the first half of 2009) :)

I will think about the question regarding enabling users to launch processes on the remote servers that house the data being processed, and get back to you with some info as soon as possible.

Regards,
Colin

________________________________________
From: redhat-sysadmin-list-bounces at redhat.com [redhat-sysadmin-list-bounces at redhat.com] On Behalf Of Laurent Wandrebeck [lw at hygeos.com]
Sent: 26 March 2009 12:49 PM
To: redhat-sysadmin-list at redhat.com
Subject: Re: a couple questions from a cluster newbie

On Monday 23 March 2009 07:47, Colin van Niekerk wrote:
> Hi there,
Hi Colin,
>
> Apologies is anyone has answered this already and I have missed it. This
> post has been out for a while now.
You're the first, kudos :)
>
> I would configure three VM's on the Failover box and add the ability to
> have each server failover separately. This would involve having three load
> balanced clusters as in the attached, again fixed sized fonts.
Thanks for your ascii art. Which VM would you advise ?
Xen as it is officially supported on rhel, or kvm ? something else maybe ? > > To replicate data between the virtual server and the physical server within > each cluster I would use DRBD (RAID1 on a network level), you can configure > this so that only once the data is committed to disk on both sides does the > kernel confirm the write. This will present the system with a new block > device and data must only be read and written via this device. As long as > your system is 'strong' enough and the link between the servers is fast > enough (this would depend on the amount of changed to the data - how much > data would need to be written to the block device on the other end of the > network) it will be just like reading and writing to any other block > device. Our network is gbps, and machines will be in the same rack, one hop away. So I guess synchronous replication will do the trick. > > For the backend you could use Conga with luci and ricci to manage the > cluster (thinking about ways to avoid pain going forward) but I have not > done this in a production environment so I'm not sure about the details. OK, I'll set up a couple VM soon to check the details. > > I'm afriad I have worked very little GFS as well so I can't answer you on > that side of things. Maybe the GNBD would be better for the load balanced > server replication as well, but as far as I know the main reason you would > use GNBD is that it exports the file system to many users and manages > locking better between the users which wouldn't help in the pg/ds/ap > clusters. Can anyone confirm? > > Just so I'm clear on the backend side. It sounds like there is a level of > interaction between users and the actual data on the backend servers. Do > the users query a process on the storage/processing servers and then that > process works on the data and gives the user a result? Or do the users > interact with the data directly? Users interact directly with data. 
classic (and simplified) scheme is: (shell script pseudo code)

for i in files_to_be_processed do
    processing_program $i $output_dir/$output_result
done

Thx for helping,
Regards,
--
Laurent Wandrebeck
IT Manager / Directeur des systemes d'informations
HYGEOS, Earth Observation Department / Observation de la Terre
Euratechnologies
165 Avenue de Bretagne
59000 Lille, France
tel: +33 3 20 08 24 98
http://www.hygeos.com

--
redhat-sysadmin-list mailing list
redhat-sysadmin-list at redhat.com
https://www.redhat.com/mailman/listinfo/redhat-sysadmin-list
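Laurent's pseudo-code above can be turned into a runnable sketch. processing_program here is a stand-in that just copies its input, since the real per-satellite processing tools are site-specific; the directory layout is also only an assumption:

```shell
#!/bin/sh
# Sketch of the per-box processing loop from the thread.
# processing_program is a placeholder for the real satellite tool.
processing_program() {
    cp "$1" "$2"    # stand-in: the real tool would transform the data
}

# process_dir IN_DIR OUT_DIR: run processing_program over every regular
# file in IN_DIR, writing results to OUT_DIR on the same local storage
# (so raw and processed sat data stay on one physical volume).
process_dir() {
    in_dir=$1
    out_dir=$2
    mkdir -p "$out_dir"
    for i in "$in_dir"/*; do
        [ -f "$i" ] || continue    # skip when the glob matches nothing
        processing_program "$i" "$out_dir/$(basename "$i").out"
    done
}
```

Usage would be e.g. `process_dir /data/sat1/raw /data/sat1/processed` (hypothetical paths).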

Colin van Niekerk
Phone 0861 114 063
Mobile +2782 557 9081
Fax 086 522 6377
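Before planning the failover box around KVM, it is worth checking that the hardware actually has the virtualisation extensions KVM requires: the vmx flag (Intel VT-x) or svm flag (AMD-V) in /proc/cpuinfo. A quick sketch - the helper function name is ours, not a standard tool:

```shell
#!/bin/sh
# KVM needs hardware virtualisation support (vmx = Intel VT-x,
# svm = AMD-V); Xen paravirtualisation does not.
cpu_supports_kvm() {
    # $1: path to a cpuinfo-style file (defaults to the live /proc/cpuinfo)
    grep -qE '^flags.*(vmx|svm)' "${1:-/proc/cpuinfo}" 2>/dev/null
}

if cpu_supports_kvm; then
    echo "CPU advertises vmx/svm: KVM can run hardware-accelerated guests"
else
    echo "no vmx/svm flag: KVM acceleration is unavailable on this box"
fi
```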
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From samfw at redhat.com Fri Mar 27 12:40:34 2009
From: samfw at redhat.com (Sam Folk-Williams)
Date: Fri, 27 Mar 2009 08:40:34 -0400
Subject: a couple questions from a cluster newbie
In-Reply-To: 
References: , <200903261149.29168.lw@hygeos.com>
Message-ID: <49CCC942.8050204@redhat.com>

Just to clarify this -

"Red Hat's strategic direction for the future development of its virtualization product portfolio is based on KVM, making Red Hat the only virtualization vendor leveraging technology that is developed as part of the Linux operating system. Existing Xen-based deployments will continue to be supported for the full lifetime of Red Hat Enterprise Linux 5, and Red Hat will provide a variety of tools and services to enable customers to migrate from their Red Hat Enterprise Linux 5 Xen deployment to KVM."

Details here: http://www.redhat.com/virtualization-strategy/

-Sam

Colin van Niekerk wrote:
>
> Hi there,
>
> As far as the VM goes... I'd use KVM, mainly because RH is replacing Xen
> with KVM at some point in the future (last time i checked, it was going
> to be during the first half of 2009) :)
>
> I will think about the question regarding enabling of users to launch
> processes on remote servers that house the data being processed a little
> and get back to you with some info as soon as possible.
> > Regards, > Colin > > ________________________________________ > From: redhat-sysadmin-list-bounces at redhat.com > > [redhat-sysadmin-list-bounces at redhat.com] On Behalf Of Laurent > Wandrebeck [lw at hygeos.com] > Sent: 26 March 2009 12:49 PM > To: redhat-sysadmin-list at redhat.com > > Subject: Re: a couple questions from a cluster newbie > > Le lundi 23 mars 2009 07:47, Colin van Niekerk a ?crit : >> Hi there, > Hi Colin, >> >> Apologies is anyone has answered this already and I have missed it. This >> post has been out for a while now. > You're the first, kudos :) >> >> I would configure three VM's on the Failover box and add the ability to >> have each server failover separately. This would involve having three load >> balanced clusters as in the attached, again fixed sized fonts. > Thanks for your ascii art. Which VM would you advice ? Xen as it is > officially > supported on rhel, or kvm ? something else maybe ? >> >> To replicate data between the virtual server and the physical server > within >> each cluster I would use DRBD (RAID1 on a network level), you can > configure >> this so that only once the data is committed to disk on both sides > does the >> kernel confirm the write. This will present the system with a new block >> device and data must only be read and written via this device. As long as >> your system is 'strong' enough and the link between the servers is fast >> enough (this would depend on the amount of changed to the data - how much >> data would need to be written to the block device on the other end of the >> network) it will be just like reading and writing to any other block >> device. > Our network is gbps, and machines will be in the same rack, one hop > away. So I > guess synchronous replication will do the trick. 
>> >> For the backend you could use Conga with luci and ricci to manage the >> cluster (thinking about ways to avoid pain going forward) but I have not >> done this in a production environment so I'm not sure about the details. > OK, I'll set up a couple VM soon to check the details. >> >> I'm afriad I have worked very little GFS as well so I can't answer you on >> that side of things. Maybe the GNBD would be better for the load balanced >> server replication as well, but as far as I know the main reason you would >> use GNBD is that it exports the file system to many users and manages >> locking better between the users which wouldn't help in the pg/ds/ap >> clusters. Can anyone confirm? >> >> Just so I'm clear on the backend side. It sounds like there is a level of >> interaction between users and the actual data on the backend servers. Do >> the users query a process on the storage/processing servers and then that >> process works on the data and gives the user a result? Or do the users >> interact with the data directly? > Users interact directly with data. 
classic (and simplified) scheme is:
> (shell script pseudo code)
> for i in files_to_be_processed do
> processing_program $i $output_dir/$output_result
> done
>
> Thx for helping,
> Regards,
> --
> Laurent Wandrebeck
> IT Manager / Directeur des systemes d'informations
> HYGEOS, Earth Observation Department / Observation de la Terre
> Euratechnologies
> 165 Avenue de Bretagne
> 59000 Lille, France
> tel: +33 3 20 08 24 98
> http://www.hygeos.com
>
> --
> redhat-sysadmin-list mailing list
> redhat-sysadmin-list at redhat.com
> https://www.redhat.com/mailman/listinfo/redhat-sysadmin-list
>
> Colin van Niekerk
> Technical
> Mimecast South Africa
> Phone 0861 114 063 - Mobile +2782 557 9081 - Fax 086 522 6377
>
> ------------------------------------------------------------------------
>
> --
> redhat-sysadmin-list mailing list
> redhat-sysadmin-list at redhat.com
> https://www.redhat.com/mailman/listinfo/redhat-sysadmin-list

--
Sam Folk-Williams
Knowledge Program Manager
Red Hat, Inc
(919) 754-4558

From nitin.gizare at wipro.com Sat Mar 28 07:57:15 2009
From: nitin.gizare at wipro.com (nitin.gizare at wipro.com)
Date: Sat, 28 Mar 2009 13:27:15 +0530
Subject: Console Messages
In-Reply-To: <49CCC942.8050204@redhat.com>
References: , <200903261149.29168.lw@hygeos.com> <49CCC942.8050204@redhat.com>
Message-ID: 

Hello

Sometimes we see machines which get a kernel panic, and the panic message is shown in the console window.
Is there a way to capture such issues in separate log files, so they can be sent to the support team for further investigation?

Rgds
Nitin

From sbathe at gmail.com Sat Mar 28 11:20:58 2009
From: sbathe at gmail.com (Saurabh Bathe)
Date: Sat, 28 Mar 2009 16:50:58 +0530
Subject: Console Messages
In-Reply-To: 
References: , <200903261149.29168.lw@hygeos.com> <49CCC942.8050204@redhat.com>
Message-ID: <49CE081A.108@gmail.com>

nitin.gizare at wipro.com wrote:
> Hello
>
> Some times we see some m/c which we have get kernel panic and panic
> message is
> Shown in console window. Is there way to get such issue in separate log
> files so that can be send to support team for further investigation.

A few different ways:
1. Serial console / terminal
2. netdump / diskdump
3. crash utility

kbase.redhat.com should have nice docs on all of these, though admittedly some of these may be too old and may not work in all situations (netdump for sure is). And in any case, if you have a RH support contract, the friendly support guys over there will help you with instructions to set up your systems to be able to gather all the data.

--Saurabh

From zhbmaillistonly at gmail.com Sat Mar 28 11:32:55 2009
From: zhbmaillistonly at gmail.com (Zhang Huangbin)
Date: Sat, 28 Mar 2009 19:32:55 +0800
Subject: Open Source Mail Server Solution for RHEL/CentOS 5.x
Message-ID: <49CE0AE7.4090500@gmail.com>

Hi, all.

I'd like to introduce the iRedMail open source mail server solution for RHEL/CentOS to you.

* iRedMail is:
- a mail server solution for Red Hat(R) Enterprise Linux and CentOS 5.x, supporting both i386 and x86_64.
- a shell script set, used to install and configure all mail server related software automatically.
- an open source project (GPL v2).

* iRedOS is:
- a customized CentOS 5.x, with unnecessary packages removed
- Ships iRedMail.
* Download:
- http://code.google.com/p/iredmail/downloads/list
- http://www.iredmail.org/iredos/
* Feature list: http://code.google.com/p/iredmail/wiki/Features
* Installation guide: http://code.google.com/p/iredmail/wiki/Installation
* Success Stories: http://code.google.com/p/iredmail/wiki/Success_Stories
* Group/Forum: http://groups.google.com/group/iredmail/

--
Best regards.
Zhang Huangbin
- Open Source Mail Server Solution for RHEL/CentOS 5.x: http://code.google.com/p/iredmail/