From veliogluh at itu.edu.tr Thu May 1 09:30:04 2008 From: veliogluh at itu.edu.tr (Hakan VELIOGLU) Date: Thu, 1 May 2008 12:30:04 +0300 Subject: ext3 to lvm convert without data loss In-Reply-To: References: Message-ID: <20080501123004.g3howyyklu80wgcc@webmail.itu.edu.tr> Hi, Is there a way to convert my large ext3 (storage) disk to lvm2 without data loss? hakan ---------------------------------------------------------------- This message was sent using IMP, the Internet Messaging Program. From jolt at ti.com Thu May 1 10:59:07 2008 From: jolt at ti.com (Olt, Joseph) Date: Thu, 1 May 2008 05:59:07 -0500 Subject: ext3 to lvm convert without data loss In-Reply-To: <20080501123004.g3howyyklu80wgcc@webmail.itu.edu.tr> Message-ID: <6B34B8A05FA7544BB7F013ACD452E02802A07B9B@dlee11.ent.ti.com> No. You will have to have some way to backup/restore or another disk. -----Original Message----- From: redhat-sysadmin-list-bounces at redhat.com [mailto:redhat-sysadmin-list-bounces at redhat.com] On Behalf Of Hakan VELIOGLU Sent: Thursday, May 01, 2008 5:30 AM To: redhat-sysadmin-list at redhat.com Subject: ext3 to lvm convert without data loss Hi, Is there a way to convert my large ext3 (storage) disk to lvm2 without data loss? hakan ---------------------------------------------------------------- This message was sent using IMP, the Internet Messaging Program. -- redhat-sysadmin-list mailing list redhat-sysadmin-list at redhat.com https://www.redhat.com/mailman/listinfo/redhat-sysadmin-list From Andrew.Elliott at istat.ca Thu May 1 18:15:01 2008 From: Andrew.Elliott at istat.ca (Andrew Elliott) Date: Thu, 1 May 2008 14:15:01 -0400 Subject: CUPS scheduler dies when adding smb-shared Windows printer Message-ID: <8DD445A3B735E5488E2B243258DB5BCC04E0035C@apccomail1.northamerica.intra.abbott.com> Trying to add a printer that's shared on a Windows workstation on a different subnet, error_log isn't very helpful, just shows: "Scheduler shutting down due to SIGTERM" I adjusted cupsd.conf to "Allow All" connections from remote networks. I'm using the following command to set up the printer: /usr/sbin/lpadmin -p printer1 -v smb://print:print at XX.XXX.XX.XXX/printshare Error: lpadmin: add-printer (set device) failed: server-error-service-unavailable - service dies and has to be restarted after that RHEL v.3.2.3-42 cups-libs-1.1.17-13.3.12 cups-1.1.17-13.3.45 I do get information when running "smbclient -L XX.XXX.XX.XXX -U print" but it also gives the following error: session request to 10.207.92.127 failed (Called name not present) session request to 10 failed (Called name not present) updating to the most recent RHN patches is NOT possible at this time, it's a 24x7 environment (uptime 280 days). Any ideas? The printer adds fine on my test server connecting to the local internal network (same package versions, os, etc.). Thanks in advance Andrew Elliott. -------------- next part -------------- An HTML attachment was scrubbed... URL: From chet.nichols at gmail.com Fri May 2 05:12:59 2008 From: chet.nichols at gmail.com (Chet Nichols III) Date: Fri, 2 May 2008 01:12:59 -0400 Subject: sar bug? In-Reply-To: References: Message-ID: Hey Tim- Sorry for the delay- thanks for that! I'll add a note to the bug with my experiences, and hopefully one of us can get an answer. I'm not sure where the bug in the kernel would be, since /proc seems to handle rollovers correctly. Looking at sar.c (but not too in depth), I'm wondering if it has to do with rollovers though somewhere. 
For example (let's use signed ints, even though I'm not sure what really is used in proc for /proc/net/dev), /proc/net/dev shows eth0 sending almost 2,147,483,647 bytes (the 32-bit signed maximum). As sar goes through each snapshot, it uses the previous value and the current value to calculate bytes/sec, but if at any point the value hits 32 bits, the kernel will reset the value in /proc/net/dev to 0, and sar will subtract a large, almost-32-bit number from a number close to 0, returning a huge negative number. However, I'm not getting a huge negative number, I'm getting a huge positive number (and there's no abs() being applied to the result, at least from what I saw, so that makes me more curious). Also, you'd think it would only happen the one time the previous value was greater than the current one; i.e. on the next run through, the current bytes sent would be, say, 23981, and the previous would be 10481 or whatever, so it would happen very infrequently. But instead, once it starts happening, it doesn't stop. So... rollovers? Something to do with possibly casting/converting between signed/unsigned ints? Just random shots in the dark without really looking too deeply at anything, but let's hope someone has an answer :D Talk to you soon, thanks again! Chet On Tue, Apr 29, 2008 at 4:37 PM, Tim Mooney wrote: > In regard to: sar bug?, Chet Nichols III said (at 2:48pm on Apr 29, 2008): > > hey there- > > > > anyone ever seen a bug in sar where it will start spitting out incorrect > > values? we're running on 32bit intels. an example: > > > > 06:40:13 PM IFACE rxpck/s txpck/s rxbyt/s txbyt/s rxcmp/s > > txcmp/s rxmcst/s > > 06:40:18 PM lo 0.00 0.00 0.00 0.00 0.00 > > 0.00 0.00 > > 06:40:18 PM eth0 307775.86 390641.38 66821927.59 460472182.76 > > 0.00 0.00 0.00 > > 06:40:18 PM eth1 0.00 0.00 0.00 0.00 0.00 > > 0.00 0.00 > > 06:40:18 PM sit0 0.00 0.00 0.00 0.00 0.00 > > 0.00 0.00 > > > > That's funny, I was just going to send an email to the list about a > similar issue. I opened a bug about it: > > https://bugzilla.redhat.com/show_bug.cgi?id=443190 > > The problem for us is that we're not running the stock kernel, so I don't > think Red Hat is going to be too sympathetic. Are you using the stock > kernel? If so, maybe you want to add your name to that bug. > > Our main complaint is with the disk I/O statistics -- I modified the > sysstat > cron job to also save disk statistics, and sar frequently reports spurious > values when reporting on the collected sa data. This happens even using > iostat interactively, though, so it's not just the data that's recorded > by sadc. > > The sysstat FAQ says it's a kernel bug, but I'm puzzled why it's been > allowed to persist for so long, if it's a known kernel bug. > > Tim > -- > Tim Mooney Tim.Mooney at ndsu.edu > Information Technology Services (701) 231-1076 (Voice) > Room 242-J6, IACC Building (701) 231-8541 (Fax) > North Dakota State University, Fargo, ND 58105-5164 > > -- > redhat-sysadmin-list mailing list > redhat-sysadmin-list at redhat.com > https://www.redhat.com/mailman/listinfo/redhat-sysadmin-list > -- /* Chet Nichols III mail: chet.nichols at gmail.com (aim: chet / twitter: chet) */ -------------- next part -------------- An HTML attachment was scrubbed... 
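(For reference, the wrap-around arithmetic being described works out like this. A minimal shell sketch with made-up byte counts, assuming an unsigned 32-bit counter and bash's 64-bit arithmetic; the real sysstat code is C, so this only illustrates the math, not the actual implementation:)

  #!/bin/bash
  # Hypothetical readings of a 32-bit byte counter from /proc/net/dev;
  # the counter wrapped past 4294967295 between the two samples.
  prev=4294960000
  curr=12345
  if [ "$curr" -ge "$prev" ]; then
      delta=$((curr - prev))                    # normal case, no wrap
  else
      delta=$((4294967295 - prev + curr + 1))   # counter wrapped once
  fi
  echo "bytes since last sample: $delta"        # 19641, not a huge bogus value

(Plain 32-bit unsigned subtraction of curr - prev would wrap to the same 19641 on its own; bogus results appear when the subtraction is done in a wider or signed type without a wrap check like the one above, which is one way a huge positive rate could show up.)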
URL: From chet.nichols at gmail.com Sun May 4 06:48:19 2008 From: chet.nichols at gmail.com (Chet Nichols III) Date: Sun, 4 May 2008 02:48:19 -0400 Subject: CUPS scheduler dies when adding smb-shared Windows printer In-Reply-To: <8DD445A3B735E5488E2B243258DB5BCC04E0035C@apccomail1.northamerica.intra.abbott.com> References: <8DD445A3B735E5488E2B243258DB5BCC04E0035C@apccomail1.northamerica.intra.abbott.com> Message-ID: Hey Andrew- What OS is the Windows workstation running? Talk to you soon- Chet 2008/5/1 Andrew Elliott : > > > > > Trying to add a printer that's shared on a Windows workstation on a > different subnet, error_log isn't very helpful, just shows: > > > > > > > > "Scheduler shutting down due to SIGTERM" > > > > > > > > I adjusted cupsd.conf to "Allow All" connections from remote networks. > > > > > > > > I'm using the following command to set up the printer: > > > > > > > > /usr/sbin/lpadmin -p printer1 -v smb://print:print at XX.XXX.XX.XXX > /printshare > > > > > > > > Error: > > > > > > > > lpadmin: add-printer (set device) failed: server-error-service-unavailable > - service dies and has to be restarted after that > > > > > > > > RHEL v.3.2.3-42 > > > > cups-libs-1.1.17-13.3.12 > > > > cups-1.1.17-13.3.45 > > > > > > > > I do get information when running "smbclient -L XX.XXX.XX.XXX -U print" > but it also gives the following error: > > > > > > > > session request to 10.207.92.127 failed (Called name not present) > > > > session request to 10 failed (Called name not present) > > > > > > > > updating to the most recent RHN patches is NOT possible at this time, it's > a > > 24x7 environment (uptime 280 days). > > > > > > > > Any ideas? > > > > > > > > The printer adds fine on my test server connecting to the local internal > network (same package versions, os, etc.). > > > > > > > > Thanks in advance > > > > Andrew Elliott. > > -- > redhat-sysadmin-list mailing list > redhat-sysadmin-list at redhat.com > https://www.redhat.com/mailman/listinfo/redhat-sysadmin-list > -- /* Chet Nichols III mail: chet.nichols at gmail.com (aim: chet / twitter: chet) */ -------------- next part -------------- An HTML attachment was scrubbed... URL: From hannibaljackson at yahoo.com Tue May 6 15:44:49 2008 From: hannibaljackson at yahoo.com (Hannibal S. Jackson) Date: Tue, 6 May 2008 08:44:49 -0700 (PDT) Subject: INIT: Id "x" respawning too fast Message-ID: <746086.30121.qm@web50407.mail.re2.yahoo.com> An HTML attachment was scrubbed... URL: From hannibaljackson at yahoo.com Tue May 6 15:44:56 2008 From: hannibaljackson at yahoo.com (Hannibal S. Jackson) Date: Tue, 6 May 2008 08:44:56 -0700 (PDT) Subject: INIT: Id "x" respawning too fast Message-ID: <571885.34532.qm@web50410.mail.re2.yahoo.com> An HTML attachment was scrubbed... URL: From redhat at jmarki.net Tue May 6 21:04:48 2008 From: redhat at jmarki.net (Junhao) Date: Wed, 07 May 2008 05:04:48 +0800 Subject: INIT: Id "x" respawning too fast In-Reply-To: <746086.30121.qm@web50407.mail.re2.yahoo.com> References: <746086.30121.qm@web50407.mail.re2.yahoo.com> Message-ID: <4820C7F0.5070600@jmarki.net> Hannibal S. Jackson wrote: > Rebooted HP ML370 (Red Hat WS3) and then it came back with the error > INIT: Id "x" respawning too fast: disabled for 5 minutes. I've searched > and searched and have not been able to find a viable solution. I've read > RH's Knowledge base and they stated it was related to the graphic card > settings. Problem is I can not log in to make those changes. 
I've tried > to boot into single user mode, run level 3, but nothing has worked thus > far. It comes back to the login prompt but as soon as I try to log in it > goes right back to the login screen never asking me for a password. I've > read it could also be an issue in the inittab file but I can't log on to > view that or make any changes. Any assistance is greatly appreciated. > Getting that error but still can not log in at the console to make any > adjustments because the password prompt never comes back. Also, I've > noticed even when I try to tell it to boot the kernel into run level 3, > it still goes back to 5. > 1) password prompt: Are you able login, then immediately go to init 1? Init 1 is single user mode. If not you can try booting from a livecd. Then mount the harddisk (assembling raid if needed), and edit /etc/inittab to boot to runlevel 3 (or 1). 2) X spawning too fast is often due to an X misconfiguration, or missing graphics card drivers. Check /var/log/Xorg.0.log. Checking X configuration can be directly done using "startx", and "Ctrl-Alt-Backspace" to kill X if needed. Hope that helps. =) Regards, Junhao From hannibaljackson at yahoo.com Tue May 6 21:50:16 2008 From: hannibaljackson at yahoo.com (Hannibal S. Jackson) Date: Tue, 6 May 2008 14:50:16 -0700 (PDT) Subject: INIT: Id "x" respawning too fast In-Reply-To: <4820C7F0.5070600@jmarki.net> Message-ID: <955137.16995.qm@web50403.mail.re2.yahoo.com> An HTML attachment was scrubbed... URL: From redhat at jmarki.net Tue May 6 22:09:30 2008 From: redhat at jmarki.net (Junhao) Date: Wed, 07 May 2008 06:09:30 +0800 Subject: INIT: Id "x" respawning too fast In-Reply-To: <955137.16995.qm@web50403.mail.re2.yahoo.com> References: <955137.16995.qm@web50403.mail.re2.yahoo.com> Message-ID: <4820D71A.4080109@jmarki.net> Hannibal S. Jackson wrote: > --- On *Tue, 5/6/08, Junhao //* wrote: > > From: Junhao > Subject: Re: INIT: Id "x" respawning too fast > To: redhat-sysadmin-list at redhat.com > Date: Tuesday, May 6, 2008, 5:04 PM > > Hannibal S. Jackson wrote: > > Rebooted HP ML370 (Red Hat WS3) and then it came back with the error > > INIT: Id "x" respawning too fast: disabled for 5 minutes. > I've searched > > and searched and have not been able to find a viable solution. I've > read > > RH's Knowledge base and they stated it was related to the graphic card > > settings. Problem is I can not log in to make those changes. I've > tried > > to boot into single user mode, run level 3, but nothing has worked thus > > far. It comes back to the login prompt but as soon as I try to log in it > > goes right back to the login screen never asking me for a password. > I've > > read it could also be an issue in the inittab file but I can't log on > to > > view that or make any changes. Any assistance is greatly appreciated. > > Getting that error but still can not log in at the console to make any > > adjustments because the password prompt never comes back. Also, I've > > noticed even when I try to tell it to boot the kernel into run level 3, > > it still goes back to 5. > > > > 1) password prompt: > Are you able login, then immediately go to init 1? Init 1 is single user > mode. If not you can try booting from a livecd. Then mount the harddisk > (assembling raid if needed), and edit /etc/inittab to boot to runlevel 3 > (or 1). > > 2) X spawning too fast is often due to an X misconfiguration, or missing > graphics card drivers. Check /var/log/Xorg.0.log. 
Checking X > configuration can be directly done using "startx", and > "Ctrl-Alt-Backspace" to kill X if needed. > > Hope that helps. =) > > Regards, > Junhao > Not able to login in. Get the login prompt and type root. Comes right > back to the login prompt. I'm wondering if it has to do with the changes > I made the previous day trying to make the Red Hat machine an LDAP > Client. I changed the nsswitch.conf to point to ldap then files for > passwd and I'm wondering if that's why I can't log in via the console. > Was able to boot livecd now and type linux rescue. Now just trying to > backtrack and figure out what went wrong. I apologize I don't have much > linux experience, just Solaris and although they are somewhat similar > they are different in a lot of ways as well. I'd tried to comment out > the respawn in the /etc/inittab and type exit and that still wouldn't > let me in. Still wondering if there is something missing in the config > files since I used authconfig to try and change it from NIS to LDAP. > LDAP Server is Sun DS 6.2 on Solaris 10 and it stated making a Red Hat > machine a client was fairly simple. Obviously something went wrong in > the process. Thanks for your reply. Same here for Solaris experience, tripping everywhere when I'm doing Solaris... The /etc/inittab change should be # Default runlevel. The runlevels used by RHS are: # 0 - halt (Do NOT set initdefault to this) # 1 - Single user mode # 2 - Multiuser, without NFS (The same as 3, if you do not have networking) # 3 - Full multiuser mode # 4 - unused # 5 - X11 # 6 - reboot (Do NOT set initdefault to this) # id:3:initdefault: #id:5:initdefault: This is taken from RHEL5.1, don't think there are much changes to this file. Please correct me if I'm wrong though. Anyway, I thought /etc/nsswitch.conf should be "passwd files ldap" and "group files ldap", just in case ldap is not available? The PAM configuration files is at /etc/pam.d/*. You may need to revert kdm, passwd, other, login and maybe some others. Quite like a fragmented /etc/pam.conf from Solaris 10. Don't think PAM is related to the X respawning issue though. Regards, Junhao From hannibaljackson at yahoo.com Tue May 6 22:25:18 2008 From: hannibaljackson at yahoo.com (Hannibal S. Jackson) Date: Tue, 6 May 2008 15:25:18 -0700 (PDT) Subject: INIT: Id "x" respawning too fast In-Reply-To: <4820D71A.4080109@jmarki.net> Message-ID: <496512.451.qm@web50402.mail.re2.yahoo.com> An HTML attachment was scrubbed... URL: From Ryan.Sweat at atmosenergy.com Tue May 6 22:43:44 2008 From: Ryan.Sweat at atmosenergy.com (Sweat, Ryan) Date: Tue, 6 May 2008 17:43:44 -0500 Subject: INIT: Id "x" respawning too fast In-Reply-To: <496512.451.qm@web50402.mail.re2.yahoo.com> References: <4820D71A.4080109@jmarki.net> <496512.451.qm@web50402.mail.re2.yahoo.com> Message-ID: You could always boot with the kernel argument init=/bin/sh and you will get dropped to a shell where you can fix your inittab. ________________________________ From: redhat-sysadmin-list-bounces at redhat.com [mailto:redhat-sysadmin-list-bounces at redhat.com] On Behalf Of Hannibal S. Jackson Sent: Tuesday, May 06, 2008 5:25 PM To: redhat-sysadmin-list at redhat.com Subject: Re: INIT: Id "x" respawning too fast Thanks, I'll take a look at it when I'm back in the office. That's the change I made in the /etc/inittab but that didn't fix it. That's why I'm wondering if the nsswitch.conf file is causing me not to be able to login to the console. 
I think I have ldap first and because slapd is not running it may be causing an issue. I would think if ldap wasn't running it would go to the next service, i.e. files passwd etc... As long as NOT FOUND=RETURN or whatever it is isn't there, I thought it just went to the next service but I'll double check again. Thanks for all your help. I'll see if I can figure out something else in the morning. --- On Tue, 5/6/08, Junhao wrote: From: Junhao Subject: Re: INIT: Id "x" respawning too fast To: redhat-sysadmin-list at redhat.com Date: Tuesday, May 6, 2008, 6:09 PM Hannibal S. Jackson wrote: > --- On *Tue, 5/6/08, Junhao //* wrote: > > From: Junhao > Subject: Re: INIT: Id "x" respawning too fast > To: redhat-sysadmin-list at redhat.com > Date: Tuesday, May 6, 2008, 5:04 PM > > Hannibal S. Jackson wrote: > > Rebooted HP ML370 (Red Hat WS3) and then it came back with the error > > INIT: Id "x" respawning too fast: disabled for 5 minutes. > I've searched > > and searched and have not been able to find a viable solution. I've > read > > RH's Knowledge base and they stated it was related to the graphic card > > settings. Problem is I can not log in to make those changes. I've > tried > > to boot into single user mode, run level 3, but nothing has worked thus > > far. It comes back to the login prompt but as soon as I try to log in it > > goes right back to the login screen never asking me for a password. > I've > > read it could also be an issue in the inittab file but I can't log on > to > > view that or make any changes. Any assistance is greatly appreciated. > > Getting that error but still can not log in at the console to make any > > adjustments because the password prompt never comes back. Also, I've > > noticed even when I try to tell it to boot the kernel into run level 3, > > it still goes back to 5. > > > > 1) password prompt: > Are you able login, then immediately go to init 1? Init 1 is single user > mode. If not you can try booting from a livecd. Then mount the harddisk > (assembling raid if needed), and edit /etc/inittab to boot to runlevel 3 > (or 1). > > 2) X spawning too fast is often due to an X misconfiguration, or missing > graphics card drivers. Check /var/log/Xorg.0.log. Checking X > configuration can be directly done using "startx", and > "Ctrl-Alt-Backspace" to kill X if needed. > > Hope that helps. =) > > Regards, > Junhao > Not able to login in. Get the login prompt and type root. Comes right > back to the login prompt. I'm wondering if it has to do with the changes > I made the previous day trying to make the Red Hat machine an LDAP > Client. I changed the nsswitch.conf to point to ldap then files for > passwd and I'm wondering if that's why I can't log in via the console. > Was able to boot livecd now and type linux rescue. Now just trying to > backtrack and figure out what went wrong. I apologize I don't have much > linux experience, just Solaris and although they are somewhat similar > they are different in a lot of ways as well. I'd tried to comment out > the respawn in the /etc/inittab and type exit and that still wouldn't > let me in. Still wondering if there is something missing in the config > files since I used authconfig to try and change it from NIS to LDAP. > LDAP Server is Sun DS 6.2 on Solaris 10 and it stated making a Red Hat > machine a client was fairly simple. Obviously something went wrong in > the process. Thanks for your reply. Same here for Solaris experience, tripping everywhere when I'm doing Solaris... 
The /etc/inittab change should be # Default runlevel. The runlevels used by RHS are: # 0 - halt (Do NOT set initdefault to this) # 1 - Single user mode # 2 - Multiuser, without NFS (The same as 3, if you do not have networking) # 3 - Full multiuser mode # 4 - unused # 5 - X11 # 6 - reboot (Do NOT set initdefault to this) # id:3:initdefault: #id:5:initdefault: This is taken from RHEL5.1, don't think there are much changes to this file. Please correct me if I'm wrong though. Anyway, I thought /etc/nsswitch.conf should be "passwd files ldap" and "group files ldap", just in case ldap is not available? The PAM configuration files is at /etc/pam.d/*. You may need to revert kdm, passwd, other, login and maybe some others. Quite like a fragmented /etc/pam.conf from Solaris 10. Don't think PAM is related to the X respawning issue though. Regards, Junhao -- redhat-sysadmin-list mailing list redhat-sysadmin-list at redhat.com https://www.redhat.com/mailman/listinfo/redhat-sysadmin-list -------------- next part -------------- An HTML attachment was scrubbed... URL: From hannibaljackson at yahoo.com Tue May 6 22:57:29 2008 From: hannibaljackson at yahoo.com (Hannibal S. Jackson) Date: Tue, 6 May 2008 15:57:29 -0700 (PDT) Subject: INIT: Id "x" respawning too fast In-Reply-To: Message-ID: <337076.93140.qm@web50404.mail.re2.yahoo.com> An HTML attachment was scrubbed... URL: From Ryan.Sweat at atmosenergy.com Tue May 6 23:00:19 2008 From: Ryan.Sweat at atmosenergy.com (Sweat, Ryan) Date: Tue, 6 May 2008 18:00:19 -0500 Subject: INIT: Id "x" respawning too fast In-Reply-To: <337076.93140.qm@web50404.mail.re2.yahoo.com> References: <337076.93140.qm@web50404.mail.re2.yahoo.com> Message-ID: Yea, init=/bin/sh wouldn't ask for a password, but you'll probably have to remount the root filesystem in rw mode. But if you can boot the rescue disk then it shouldn't be necessary. ________________________________ From: redhat-sysadmin-list-bounces at redhat.com [mailto:redhat-sysadmin-list-bounces at redhat.com] On Behalf Of Hannibal S. Jackson Sent: Tuesday, May 06, 2008 5:57 PM To: redhat-sysadmin-list at redhat.com Subject: RE: INIT: Id "x" respawning too fast Thanks Ryan, would that ask for a password? I can't enter a password. I can get the login prompt but as soon as I type root, it comes right back to the login prompt. That's why I'm questioning whether my nsswitch.conf file may be messed up as well as I've tried to change it from a NIS client to an LDAP client using authconfig and then changed the nsswitch.conf to reflect that and point to LDAP. I am able to boot off of cd now so I think I have a pretty good chance to fix it. Just gotta make sure I fix the right files. Again I'm a Red Hat newbie but familiar with some commands as I have a few years Solaris experience. --- On Tue, 5/6/08, Sweat, Ryan wrote: From: Sweat, Ryan Subject: RE: INIT: Id "x" respawning too fast To: redhat-sysadmin-list at redhat.com Date: Tuesday, May 6, 2008, 6:43 PM You could always boot with the kernel argument init=/bin/sh and you will get dropped to a shell where you can fix your inittab. ________________________________ From: redhat-sysadmin-list-bounces at redhat.com [mailto:redhat-sysadmin-list-bounces at redhat.com] On Behalf Of Hannibal S. Jackson Sent: Tuesday, May 06, 2008 5:25 PM To: redhat-sysadmin-list at redhat.com Subject: Re: INIT: Id "x" respawning too fast Thanks, I'll take a look at it when I'm back in the office. That's the change I made in the /etc/inittab but that didn't fix it. 
That's why I'm wondering if the nsswitch.conf file is causing me not to be able to login to the console. I think I have ldap first and because slapd is not running it may be causing an issue. I would think if ldap wasn't running it would go to the next service, i.e. files passwd etc... As long as NOT FOUND=RETURN or whatever it is isn't there, I thought it just went to the next service but I'll double check again. Thanks for all your help. I'll see if I can figure out something else in the morning. --- On Tue, 5/6/08, Junhao wrote: From: Junhao Subject: Re: INIT: Id "x" respawning too fast To: redhat-sysadmin-list at redhat.com Date: Tuesday, May 6, 2008, 6:09 PM Hannibal S. Jackson wrote: > --- On *Tue, 5/6/08, Junhao //* wrote: > > From: Junhao > Subject: Re: INIT: Id "x" respawning too fast > To: redhat-sysadmin-list at redhat.com > Date: Tuesday, May 6, 2008, 5:04 PM > > Hannibal S. Jackson wrote: > > Rebooted HP ML370 (Red Hat WS3) and then it came back with the error > > INIT: Id "x" respawning too fast: disabled for 5 minutes. > I've searched > > and searched and have not been able to find a viable solution. I've > read > > RH's Knowledge base and they stated it was related to the graphic card > > settings. Problem is I can not log in to make those changes. I've > tried > > to boot into single user mode, run level 3, but nothing has worked thus > > far. It comes back to the login prompt but as soon as I try to log in it > > goes right back to the login screen never asking me for a password. > I've > > read it could also be an issue in the inittab file but I can't log on > to > > view that or make any changes. Any assistance is greatly appreciated. > > Getting that error but still can not log in at the console to make any > > adjustments because the password prompt never comes back. Also, I've > > noticed even when I try to tell it to boot the kernel into run level 3, > > it still goes back to 5. > > > > 1) password prompt: > Are you able login, then immediately go to init 1? Init 1 is single user > mode. If not you can try booting from a livecd. Then mount the harddisk > (assembling raid if needed), and edit /etc/inittab to boot to runlevel 3 > (or 1). > > 2) X spawning too fast is often due to an X misconfiguration, or missing > graphics card drivers. Check /var/log/Xorg.0.log. Checking X > configuration can be directly done using "startx", and > "Ctrl-Alt-Backspace" to kill X if needed. > > Hope that helps. =) > > Regards, > Junhao > Not able to login in. Get the login prompt and type root. Comes right > back to the login prompt. I'm wondering if it has to do with the changes > I made the previous day trying to make the Red Hat machine an LDAP > Client. I changed the nsswitch.conf to point to ldap then files for > passwd and I'm wondering if that's why I can't log in via the console. > Was able to boot livecd now and type linux rescue. Now just trying to > backtrack and figure out what went wrong. I apologize I don't have much > linux experience, just Solaris and although they are somewhat similar > they are different in a lot of ways as well. I'd tried to comment out > the respawn in the /etc/inittab and type exit and that still wouldn't > let me in. Still wondering if there is something missing in the config > files since I used authconfig to try and change it from NIS to LDAP. > LDAP Server is Sun DS 6.2 on Solaris 10 and it stated making a Red Hat > machine a client was fairly simple. Obviously something went wrong in > the process. Thanks for your reply. 
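(For context, the nsswitch.conf ordering being discussed usually looks like the lines below: keeping "files" ahead of "ldap" lets root still be resolved from /etc/passwd when the LDAP server is unreachable. A minimal sketch of just the relevant lines; the shadow entry is not mentioned in the thread but is normally set the same way:)

  passwd:     files ldap
  shadow:     files ldap
  group:      files ldap

(With "ldap" listed first and the directory server down, lookups can hang on LDAP timeouts, or fail outright depending on the bind policy, before falling back to the local files, which could produce the kind of console login trouble described above.)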
Same here for Solaris experience, tripping everywhere when I'm doing Solaris... The /etc/inittab change should be # Default runlevel. The runlevels used by RHS are: # 0 - halt (Do NOT set initdefault to this) # 1 - Single user mode # 2 - Multiuser, without NFS (The same as 3, if you do not have networking) # 3 - Full multiuser mode # 4 - unused # 5 - X11 # 6 - reboot (Do NOT set initdefault to this) # id:3:initdefault: #id:5:initdefault: This is taken from RHEL5.1, don't think there are much changes to this file. Please correct me if I'm wrong though. Anyway, I thought /etc/nsswitch.conf should be "passwd files ldap" and "group files ldap", just in case ldap is not available? The PAM configuration files is at /etc/pam.d/*. You may need to revert kdm, passwd, other, login and maybe some others. Quite like a fragmented /etc/pam.conf from Solaris 10. Don't think PAM is related to the X respawning issue though. Regards, Junhao -- redhat-sysadmin-list mailing list redhat-sysadmin-list at redhat.com https://www.redhat.com/mailman/listinfo/redhat-sysadmin-list -- redhat-sysadmin-list mailing list redhat-sysadmin-list at redhat.com https://www.redhat.com/mailman/listinfo/redhat-sysadmin-list -------------- next part -------------- An HTML attachment was scrubbed... URL: From tinatianxia at hotmail.com Fri May 9 21:11:25 2008 From: tinatianxia at hotmail.com (Tina Tian) Date: Fri, 9 May 2008 14:11:25 -0700 Subject: Different performance Message-ID: I am a DBA. I have identical database servers running on two Red Hat 4 Linux hosts, host 1 and host 2. When I ran the same bulk load against each database (loading a data file into the database), host 2 was much faster than host 1. On both host1 and host2, the databases use file systems mounted on /dev/sda and /dev/sdb. I checked with my SA; host1 and host2 have the same CPU, RAM, and file system configuration. The only difference is that host 2 has extra HD capacity on higher-speed 15k RPM drives. But the extra 2 HDs (sdc and sdd) are dedicated to other applications and are not used by the database at all. My questions are: ----------------- 1. On host2 (faster), the extra, faster HDs (/dev/sdc and sdd) are not used by the database. Can they still affect the IO performance of /dev/sda and /dev/sdb? 2. During database bulk load testing, host 1 (slower) shows longer IO service time (svctm) and longer IO wait time (await). What other possible reason could cause this? Any ideas? I posted the same issue to a database discussion group and they suggested I check OS performance (svctm). 
Below is the result from iostat on host1(slower) and host2(faster) during bulk load: Host 1: iostat -x 2 ===================== avg-cpu: %user %nice %sys %iowait %idle 0.15 0.00 0.07 0.28 99.49 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.01 0.59 0.24 0.19 29.22 6.17 14.61 3.08 83.49 0.01 21.71 3.84 0.16 sdb 0.04 10.05 0.89 3.74 117.37 110.34 58.69 55.17 49.13 0.10 21.76 4.48 2.08 avg-cpu: %user %nice %sys %iowait %idle 15.74 0.00 8.99 0.31 74.95 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 1.99 0.00 57.71 0.00 14025.87 0.00 7012.94 0.00 243.03 0.21 3.58 3.53 20.35 sdb 0.00 0.00 11.94 0.00 95.52 0.00 47.76 0.00 8.00 0.02 2.04 2.04 2.44 avg-cpu: %user %nice %sys %iowait %idle 6.18 0.00 2.37 9.24 82.20 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.50 0.50 23.00 1.00 5732.00 12.00 2866.00 6.00 239.33 0.07 3.08 3.02 7.25 sdb 0.00 129.00 7.00 130.00 56.00 2076.00 28.00 1038.00 15.56 0.75 5.49 5.40 73.95 avg-cpu: %user %nice %sys %iowait %idle 0.06 0.00 0.12 12.44 87.38 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 3.50 0.00 3.00 0.00 52.00 0.00 26.00 17.33 0.03 10.00 3.67 1.10 sdb 0.00 182.50 0.00 182.50 0.00 2920.00 0.00 1460.00 16.00 0.99 5.44 5.44 99.30 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.12 12.49 87.38 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 0.50 0.00 1.01 0.00 12.06 0.00 6.03 12.00 0.01 6.00 6.00 0.60 sdb 0.00 184.92 0.00 185.43 0.00 2962.81 0.00 1481.41 15.98 1.01 5.45 5.38 99.70 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.06 12.43 87.51 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sdb 0.00 184.08 0.00 184.08 0.00 2945.27 0.00 1472.64 16.00 0.99 5.39 5.38 99.00 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.12 12.31 87.56 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 1.00 0.00 1.50 0.00 20.00 0.00 10.00 13.33 0.02 15.33 6.67 1.00 sdb 0.00 181.00 0.00 181.00 0.00 2896.00 0.00 1448.00 16.00 0.99 5.48 5.49 99.40 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.19 12.37 87.45 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sdb 0.00 178.00 0.00 178.50 0.00 2852.00 0.00 1426.00 15.98 1.00 5.61 5.55 99.10 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.12 12.37 87.51 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sdb 0.00 179.50 0.00 179.50 0.00 2872.00 0.00 1436.00 16.00 0.99 5.52 5.53 99.25 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.06 12.44 87.50 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 1.50 0.00 3.50 0.00 40.00 0.00 20.00 11.43 0.07 20.00 4.00 1.40 sdb 0.00 179.00 0.00 179.50 0.00 2868.00 0.00 1434.00 15.98 1.02 5.68 5.53 99.30 avg-cpu: %user %nice %sys %iowait %idle 0.06 0.00 0.19 12.41 87.34 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 0.50 0.00 1.00 0.00 12.00 0.00 6.00 12.00 0.01 6.50 6.50 0.65 sdb 0.00 183.50 0.00 183.50 0.00 2936.00 0.00 1468.00 16.00 0.99 5.40 5.41 99.25 
Host 2: iostat -x 2 ================== avg-cpu: %user %nice %sys %iowait %idle 0.96 0.00 0.69 0.21 98.15 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util hda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 48.00 0.00 1.33 1.33 0.00 sda 0.01 5.31 0.23 1.55 17.96 54.93 8.98 27.47 40.76 0.07 41.59 1.21 0.22 sdb 0.03 3.99 0.84 0.47 113.52 35.67 56.76 17.83 114.36 0.03 23.00 2.55 0.33 sdc 0.05 37.80 0.58 1.50 131.96 314.37 65.98 157.19 214.93 0.43 205.85 2.84 0.59 sdd 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 40.35 0.00 3.52 3.52 0.00 avg-cpu: %user %nice %sys %iowait %idle 16.03 0.00 8.61 0.44 74.92 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util hda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sda 1.99 14.43 57.71 6.97 13775.12 171.14 6887.56 85.57 215.63 0.22 3.43 3.36 21.74 sdb 0.00 357.71 7.96 358.71 63.68 5731.34 31.84 2865.67 15.80 0.04 0.10 0.10 3.83 sdc 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sdd 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 avg-cpu: %user %nice %sys %iowait %idle 15.62 0.00 8.81 0.56 75.00 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util hda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sda 1.50 0.00 56.00 0.00 13964.00 0.00 6982.00 0.00 249.36 0.22 3.90 3.89 21.80 sdb 0.00 635.00 7.00 635.00 64.00 10160.00 32.00 5080.00 15.93 0.06 0.09 0.09 5.55 sdc 0.00 1.00 0.00 1.50 0.00 20.00 0.00 10.00 13.33 0.00 0.00 0.00 0.00 sdd 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 Thanks, Tina _________________________________________________________________ Turn every day into $1000. Learn more at SignInAndWIN.ca http://g.msn.ca/ca55/213 -------------- next part -------------- An HTML attachment was scrubbed... URL: From larry.sorensen at juno.com Fri May 9 22:58:19 2008 From: larry.sorensen at juno.com (Larry D Sorensen) Date: Fri, 9 May 2008 16:58:19 -0600 Subject: Different performance Message-ID: <20080509.165820.6004.1.larry.sorensen@juno.com> Please include information on the databases including versions. It could just be different configurations on the databases. Are the patches up to date and equal on both servers? On Fri, 9 May 2008 14:11:25 -0700 Tina Tian writes: I am a DBA. I have identical database servers running on two Linux redhat 4, host 1 and host 2. When I was running the same bulk load to database (load a data file to database), host 2 was much faster than host 1. On both host1 and host2, database are using file system mount on /dev/sda and /dev/sdb. I checked with my SA, host1 and host2 have same CPU, RAM, file system configuration. The only different is that host 2 has extra HD capacity with higher 15k RPM. But the extra 2 HDs(sdc and sdd) are dedicated to other applications, not used by database at all. My questions are: ----------------- 1. On host2 (faster), the extra faster HDs(/dev/sdc and sdd) are not used by database. Does it still affect IO performance of /dev/sda and /dev/sdb ? 2. During database bulk load testing, host 1(slower) shows longer service IO time (svctm) and longer IO waiting time(await). What other possible reason can cause this problem? Any idea? I did post the same issue to database discussion group and they suggested me to check OS performance(svctm). 
Below is the result from iostat on host1(slower) and host2(faster) during bulk load: Host 1: iostat -x 2 ===================== avg-cpu: %user %nice %sys %iowait %idle 0.15 0.00 0.07 0.28 99.49 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.01 0.59 0.24 0.19 29.22 6.17 14.61 3.08 83.49 0.01 21.71 3.84 0.16 sdb 0.04 10.05 0.89 3.74 117.37 110.34 58.69 55.17 49.13 0.10 21.76 4.48 2.08 avg-cpu: %user %nice %sys %iowait %idle 15.74 0.00 8.99 0.31 74.95 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 1.99 0.00 57.71 0.00 14025.87 0.00 7012.94 0.00 243.03 0.21 3.58 3.53 20.35 sdb 0.00 0.00 11.94 0.00 95.52 0.00 47.76 0.00 8.00 0.02 2.04 2.04 2.44 avg-cpu: %user %nice %sys %iowait %idle 6.18 0.00 2.37 9.24 82.20 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.50 0.50 23.00 1.00 5732.00 12.00 2866.00 6.00 239.33 0.07 3.08 3.02 7.25 sdb 0.00 129.00 7.00 130.00 56.00 2076.00 28.00 1038.00 15.56 0.75 5.49 5.40 73.95 avg-cpu: %user %nice %sys %iowait %idle 0.06 0.00 0.12 12.44 87.38 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 3.50 0.00 3.00 0.00 52.00 0.00 26.00 17.33 0.03 10.00 3.67 1.10 sdb 0.00 182.50 0.00 182.50 0.00 2920.00 0.00 1460.00 16.00 0.99 5.44 5.44 99.30 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.12 12.49 87.38 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 0.50 0.00 1.01 0.00 12.06 0.00 6.03 12.00 0.01 6.00 6.00 0.60 sdb 0.00 184.92 0.00 185.43 0.00 2962.81 0.00 1481.41 15.98 1.01 5.45 5.38 99.70 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.06 12.43 87.51 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sdb 0.00 184.08 0.00 184.08 0.00 2945.27 0.00 1472.64 16.00 0.99 5.39 5.38 99.00 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.12 12.31 87.56 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 1.00 0.00 1.50 0.00 20.00 0.00 10.00 13.33 0.02 15.33 6.67 1.00 sdb 0.00 181.00 0.00 181.00 0.00 2896.00 0.00 1448.00 16.00 0.99 5.48 5.49 99.40 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.19 12.37 87.45 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sdb 0.00 178.00 0.00 178.50 0.00 2852.00 0.00 1426.00 15.98 1.00 5.61 5.55 99.10 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.12 12.37 87.51 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sdb 0.00 179.50 0.00 179.50 0.00 2872.00 0.00 1436.00 16.00 0.99 5.52 5.53 99.25 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.06 12.44 87.50 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 1.50 0.00 3.50 0.00 40.00 0.00 20.00 11.43 0.07 20.00 4.00 1.40 sdb 0.00 179.00 0.00 179.50 0.00 2868.00 0.00 1434.00 15.98 1.02 5.68 5.53 99.30 avg-cpu: %user %nice %sys %iowait %idle 0.06 0.00 0.19 12.41 87.34 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 0.50 0.00 1.00 0.00 12.00 0.00 6.00 12.00 0.01 6.50 6.50 0.65 sdb 0.00 183.50 0.00 183.50 0.00 2936.00 0.00 1468.00 16.00 0.99 5.40 5.41 99.25 
Host 2: iostat -x 2 ================== avg-cpu: %user %nice %sys %iowait %idle 0.96 0.00 0.69 0.21 98.15 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util hda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 48.00 0.00 1.33 1.33 0.00 sda 0.01 5.31 0.23 1.55 17.96 54.93 8.98 27.47 40.76 0.07 41.59 1.21 0.22 sdb 0.03 3.99 0.84 0.47 113.52 35.67 56.76 17.83 114.36 0.03 23.00 2.55 0.33 sdc 0.05 37.80 0.58 1.50 131.96 314.37 65.98 157.19 214.93 0.43 205.85 2.84 0.59 sdd 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 40.35 0.00 3.52 3.52 0.00 avg-cpu: %user %nice %sys %iowait %idle 16.03 0.00 8.61 0.44 74.92 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util hda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sda 1.99 14.43 57.71 6.97 13775.12 171.14 6887.56 85.57 215.63 0.22 3.43 3.36 21.74 sdb 0.00 357.71 7.96 358.71 63.68 5731.34 31.84 2865.67 15.80 0.04 0.10 0.10 3.83 sdc 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sdd 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 avg-cpu: %user %nice %sys %iowait %idle 15.62 0.00 8.81 0.56 75.00 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util hda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sda 1.50 0.00 56.00 0.00 13964.00 0.00 6982.00 0.00 249.36 0.22 3.90 3.89 21.80 sdb 0.00 635.00 7.00 635.00 64.00 10160.00 32.00 5080.00 15.93 0.06 0.09 0.09 5.55 sdc 0.00 1.00 0.00 1.50 0.00 20.00 0.00 10.00 13.33 0.00 0.00 0.00 0.00 sdd 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 Thanks, Tina Sign in and you could WIN! Enter for your chance to win $1000 every day. Visit SignInAndWIN.ca today to learn more! -------------- next part -------------- An HTML attachment was scrubbed... URL: From tinatianxia at hotmail.com Sat May 10 02:41:53 2008 From: tinatianxia at hotmail.com (Tina Tian) Date: Fri, 9 May 2008 19:41:53 -0700 Subject: Different performance In-Reply-To: <20080509.165820.6004.1.larry.sorensen@juno.com> References: <20080509.165820.6004.1.larry.sorensen@juno.com> Message-ID: The DB is Sybase ASE 15.0.2. Identical configuration on two hosts. My SA also confirmed that two hosts are almost identical except host2(faster DB load) has extra two disks sdc and sdd, sdc and sdd are with higer RPM=15k. The rest of disks sda and adb are identical on two hosts, with RPM=7k. On both host 1 and host 2, DBs are on /dev/sdb only. Best Regards, Tina To: redhat-sysadmin-list at redhat.comDate: Fri, 9 May 2008 16:58:19 -0600From: larry.sorensen at juno.comSubject: Re: Different performance Please include information on the databases including versions. It could just be different configurations on the databases. Are the patches up to date and equal on both servers? On Fri, 9 May 2008 14:11:25 -0700 Tina Tian writes: I am a DBA. I have identical database servers running on two Linux redhat 4, host 1 and host 2. When I was running the same bulk load to database (load a data file to database), host 2 was much faster than host 1. On both host1 and host2, database are using file system mount on /dev/sda and /dev/sdb. I checked with my SA, host1 and host2 have same CPU, RAM, file system configuration. The only different is that host 2 has extra HD capacity with higher 15k RPM. But the extra 2 HDs(sdc and sdd) are dedicated to other applications, not used by database at all. My questions are:-----------------1. 
On host2 (faster), the extra faster HDs(/dev/sdc and sdd) are not used by database. Does it still affect IO performance of /dev/sda and /dev/sdb ? 2. During database bulk load testing, host 1(slower) shows longer service IO time (svctm) and longer IO waiting time(await). What other possible reason can cause this problem? Any idea? I did post the same issue to database discussion group and they suggested me to check OS performance(svctm). Below is the result from iostat on host1(slower) and host2(faster) during bulk load: Host 1: iostat -x 2===================== avg-cpu: %user %nice %sys %iowait %idle 0.15 0.00 0.07 0.28 99.49 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.01 0.59 0.24 0.19 29.22 6.17 14.61 3.08 83.49 0.01 21.71 3.84 0.16 sdb 0.04 10.05 0.89 3.74 117.37 110.34 58.69 55.17 49.13 0.10 21.76 4.48 2.08 avg-cpu: %user %nice %sys %iowait %idle 15.74 0.00 8.99 0.31 74.95 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 1.99 0.00 57.71 0.00 14025.87 0.00 7012.94 0.00 243.03 0.21 3.58 3.53 20.35 sdb 0.00 0.00 11.94 0.00 95.52 0.00 47.76 0.00 8.00 0.02 2.04 2.04 2.44 avg-cpu: %user %nice %sys %iowait %idle 6.18 0.00 2.37 9.24 82.20 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.50 0.50 23.00 1.00 5732.00 12.00 2866.00 6.00 239.33 0.07 3.08 3.02 7.25 sdb 0.00 129.00 7.00 130.00 56.00 2076.00 28.00 1038.00 15.56 0.75 5.49 5.40 73.95 avg-cpu: %user %nice %sys %iowait %idle 0.06 0.00 0.12 12.44 87.38 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 3.50 0.00 3.00 0.00 52.00 0.00 26.00 17.33 0.03 10.00 3.67 1.10 sdb 0.00 182.50 0.00 182.50 0.00 2920.00 0.00 1460.00 16.00 0.99 5.44 5.44 99.30 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.12 12.49 87.38 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 0.50 0.00 1.01 0.00 12.06 0.00 6.03 12.00 0.01 6.00 6.00 0.60 sdb 0.00 184.92 0.00 185.43 0.00 2962.81 0.00 1481.41 15.98 1.01 5.45 5.38 99.70 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.06 12.43 87.51 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sdb 0.00 184.08 0.00 184.08 0.00 2945.27 0.00 1472.64 16.00 0.99 5.39 5.38 99.00 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.12 12.31 87.56 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 1.00 0.00 1.50 0.00 20.00 0.00 10.00 13.33 0.02 15.33 6.67 1.00 sdb 0.00 181.00 0.00 181.00 0.00 2896.00 0.00 1448.00 16.00 0.99 5.48 5.49 99.40 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.19 12.37 87.45 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sdb 0.00 178.00 0.00 178.50 0.00 2852.00 0.00 1426.00 15.98 1.00 5.61 5.55 99.10 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.12 12.37 87.51 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sdb 0.00 179.50 0.00 179.50 0.00 2872.00 0.00 1436.00 16.00 0.99 5.52 5.53 99.25 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.06 12.44 87.50 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 1.50 0.00 3.50 
0.00 40.00 0.00 20.00 11.43 0.07 20.00 4.00 1.40 sdb 0.00 179.00 0.00 179.50 0.00 2868.00 0.00 1434.00 15.98 1.02 5.68 5.53 99.30 avg-cpu: %user %nice %sys %iowait %idle 0.06 0.00 0.19 12.41 87.34 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 0.50 0.00 1.00 0.00 12.00 0.00 6.00 12.00 0.01 6.50 6.50 0.65 sdb 0.00 183.50 0.00 183.50 0.00 2936.00 0.00 1468.00 16.00 0.99 5.40 5.41 99.25 Host 2: iostat -x 2 ================== avg-cpu: %user %nice %sys %iowait %idle 0.96 0.00 0.69 0.21 98.15 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util hda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 48.00 0.00 1.33 1.33 0.00 sda 0.01 5.31 0.23 1.55 17.96 54.93 8.98 27.47 40.76 0.07 41.59 1.21 0.22 sdb 0.03 3.99 0.84 0.47 113.52 35.67 56.76 17.83 114.36 0.03 23.00 2.55 0.33 sdc 0.05 37.80 0.58 1.50 131.96 314.37 65.98 157.19 214.93 0.43 205.85 2.84 0.59 sdd 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 40.35 0.00 3.52 3.52 0.00 avg-cpu: %user %nice %sys %iowait %idle 16.03 0.00 8.61 0.44 74.92 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util hda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sda 1.99 14.43 57.71 6.97 13775.12 171.14 6887.56 85.57 215.63 0.22 3.43 3.36 21.74 sdb 0.00 357.71 7.96 358.71 63.68 5731.34 31.84 2865.67 15.80 0.04 0.10 0.10 3.83 sdc 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sdd 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 avg-cpu: %user %nice %sys %iowait %idle 15.62 0.00 8.81 0.56 75.00 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util hda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sda 1.50 0.00 56.00 0.00 13964.00 0.00 6982.00 0.00 249.36 0.22 3.90 3.89 21.80 sdb 0.00 635.00 7.00 635.00 64.00 10160.00 32.00 5080.00 15.93 0.06 0.09 0.09 5.55 sdc 0.00 1.00 0.00 1.50 0.00 20.00 0.00 10.00 13.33 0.00 0.00 0.00 0.00 sdd 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 Thanks, Tina Sign in and you could WIN! Enter for your chance to win $1000 every day. Visit SignInAndWIN.ca today to learn more! _________________________________________________________________ If you like crossword puzzles, then you'll love Flexicon, a game which combines four overlapping crossword puzzles into one! http://g.msn.ca/ca55/208 -------------- next part -------------- An HTML attachment was scrubbed... URL: From jolt at ti.com Mon May 12 12:11:12 2008 From: jolt at ti.com (Olt, Joseph) Date: Mon, 12 May 2008 07:11:12 -0500 Subject: Different performance In-Reply-To: Message-ID: <6B34B8A05FA7544BB7F013ACD452E02802A71C6A@dlee11.ent.ti.com> Tina, How are the partitions laid out on the two systems? It is likely that something OS related is accessing sda and sdb or host 1 while being spread across more disks in host 2. From the output of host 2 you provided, the first stat shows sdc is taking some of the load. Regardless of the RAM being the same in both systems, is there much swapping? Swapping on higher performance drives will be quicker. Regards, Joseph ________________________________ From: redhat-sysadmin-list-bounces at redhat.com [mailto:redhat-sysadmin-list-bounces at redhat.com] On Behalf Of Tina Tian Sent: Friday, May 09, 2008 10:42 PM To: redhat-sysadmin-list at redhat.com Subject: RE: Different performance The DB is Sybase ASE 15.0.2. Identical configuration on two hosts. 
My SA also confirmed that two hosts are almost identical except host2(faster DB load) has extra two disks sdc and sdd, sdc and sdd are with higer RPM=15k. The rest of disks sda and adb are identical on two hosts, with RPM=7k. On both host 1 and host 2, DBs are on /dev/sdb only. Best Regards, Tina ________________________________ To: redhat-sysadmin-list at redhat.com Date: Fri, 9 May 2008 16:58:19 -0600 From: larry.sorensen at juno.com Subject: Re: Different performance Please include information on the databases including versions. It could just be different configurations on the databases. Are the patches up to date and equal on both servers? On Fri, 9 May 2008 14:11:25 -0700 Tina Tian writes: I am a DBA. I have identical database servers running on two Linux redhat 4, host 1 and host 2. When I was running the same bulk load to database (load a data file to database), host 2 was much faster than host 1. On both host1 and host2, database are using file system mount on /dev/sda and /dev/sdb. I checked with my SA, host1 and host2 have same CPU, RAM, file system configuration. The only different is that host 2 has extra HD capacity with higher 15k RPM. But the extra 2 HDs(sdc and sdd) are dedicated to other applications, not used by database at all. My questions are: ----------------- 1. On host2 (faster), the extra faster HDs(/dev/sdc and sdd) are not used by database. Does it still affect IO performance of /dev/sda and /dev/sdb ? 2. During database bulk load testing, host 1(slower) shows longer service IO time (svctm) and longer IO waiting time(await). What other possible reason can cause this problem? Any idea? I did post the same issue to database discussion group and they suggested me to check OS performance(svctm). Below is the result from iostat on host1(slower) and host2(faster) during bulk load: Host 1: iostat -x 2 ===================== avg-cpu: %user %nice %sys %iowait %idle 0.15 0.00 0.07 0.28 99.49 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.01 0.59 0.24 0.19 29.22 6.17 14.61 3.08 83.49 0.01 21.71 3.84 0.16 sdb 0.04 10.05 0.89 3.74 117.37 110.34 58.69 55.17 49.13 0.10 21.76 4.48 2.08 avg-cpu: %user %nice %sys %iowait %idle 15.74 0.00 8.99 0.31 74.95 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 1.99 0.00 57.71 0.00 14025.87 0.00 7012.94 0.00 243.03 0.21 3.58 3.53 20.35 sdb 0.00 0.00 11.94 0.00 95.52 0.00 47.76 0.00 8.00 0.02 2.04 2.04 2.44 avg-cpu: %user %nice %sys %iowait %idle 6.18 0.00 2.37 9.24 82.20 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.50 0.50 23.00 1.00 5732.00 12.00 2866.00 6.00 239.33 0.07 3.08 3.02 7.25 sdb 0.00 129.00 7.00 130.00 56.00 2076.00 28.00 1038.00 15.56 0.75 5.49 5.40 73.95 avg-cpu: %user %nice %sys %iowait %idle 0.06 0.00 0.12 12.44 87.38 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 3.50 0.00 3.00 0.00 52.00 0.00 26.00 17.33 0.03 10.00 3.67 1.10 sdb 0.00 182.50 0.00 182.50 0.00 2920.00 0.00 1460.00 16.00 0.99 5.44 5.44 99.30 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.12 12.49 87.38 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 0.50 0.00 1.01 0.00 12.06 0.00 6.03 12.00 0.01 6.00 6.00 0.60 sdb 0.00 184.92 0.00 185.43 0.00 2962.81 0.00 1481.41 15.98 1.01 5.45 5.38 99.70 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.06 12.43 87.51 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s 
rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sdb 0.00 184.08 0.00 184.08 0.00 2945.27 0.00 1472.64 16.00 0.99 5.39 5.38 99.00 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.12 12.31 87.56 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 1.00 0.00 1.50 0.00 20.00 0.00 10.00 13.33 0.02 15.33 6.67 1.00 sdb 0.00 181.00 0.00 181.00 0.00 2896.00 0.00 1448.00 16.00 0.99 5.48 5.49 99.40 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.19 12.37 87.45 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sdb 0.00 178.00 0.00 178.50 0.00 2852.00 0.00 1426.00 15.98 1.00 5.61 5.55 99.10 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.12 12.37 87.51 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sdb 0.00 179.50 0.00 179.50 0.00 2872.00 0.00 1436.00 16.00 0.99 5.52 5.53 99.25 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.06 12.44 87.50 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 1.50 0.00 3.50 0.00 40.00 0.00 20.00 11.43 0.07 20.00 4.00 1.40 sdb 0.00 179.00 0.00 179.50 0.00 2868.00 0.00 1434.00 15.98 1.02 5.68 5.53 99.30 avg-cpu: %user %nice %sys %iowait %idle 0.06 0.00 0.19 12.41 87.34 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 0.50 0.00 1.00 0.00 12.00 0.00 6.00 12.00 0.01 6.50 6.50 0.65 sdb 0.00 183.50 0.00 183.50 0.00 2936.00 0.00 1468.00 16.00 0.99 5.40 5.41 99.25 Host 2: iostat -x 2 ================== avg-cpu: %user %nice %sys %iowait %idle 0.96 0.00 0.69 0.21 98.15 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util hda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 48.00 0.00 1.33 1.33 0.00 sda 0.01 5.31 0.23 1.55 17.96 54.93 8.98 27.47 40.76 0.07 41.59 1.21 0.22 sdb 0.03 3.99 0.84 0.47 113.52 35.67 56.76 17.83 114.36 0.03 23.00 2.55 0.33 sdc 0.05 37.80 0.58 1.50 131.96 314.37 65.98 157.19 214.93 0.43 205.85 2.84 0.59 sdd 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 40.35 0.00 3.52 3.52 0.00 avg-cpu: %user %nice %sys %iowait %idle 16.03 0.00 8.61 0.44 74.92 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util hda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sda 1.99 14.43 57.71 6.97 13775.12 171.14 6887.56 85.57 215.63 0.22 3.43 3.36 21.74 sdb 0.00 357.71 7.96 358.71 63.68 5731.34 31.84 2865.67 15.80 0.04 0.10 0.10 3.83 sdc 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sdd 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 avg-cpu: %user %nice %sys %iowait %idle 15.62 0.00 8.81 0.56 75.00 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util hda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sda 1.50 0.00 56.00 0.00 13964.00 0.00 6982.00 0.00 249.36 0.22 3.90 3.89 21.80 sdb 0.00 635.00 7.00 635.00 64.00 10160.00 32.00 5080.00 15.93 0.06 0.09 0.09 5.55 sdc 0.00 1.00 0.00 1.50 0.00 20.00 0.00 10.00 13.33 0.00 0.00 0.00 0.00 sdd 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 Thanks, Tina ________________________________ Sign in and you could WIN! Enter for your chance to win $1000 every day. Visit SignInAndWIN.ca today to learn more! 
________________________________
You could win $1000 a day, now until May 12th, just for signing in to Windows Live Messenger. Check out SignInAndWIN.ca to learn more!
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From tinatianxia at hotmail.com  Mon May 12 17:36:47 2008
From: tinatianxia at hotmail.com (Tina Tian)
Date: Mon, 12 May 2008 10:36:47 -0700
Subject: Different performance
In-Reply-To: <6B34B8A05FA7544BB7F013ACD452E02802A71C6A@dlee11.ent.ti.com>
References: <6B34B8A05FA7544BB7F013ACD452E02802A71C6A@dlee11.ent.ti.com>
Message-ID:

Thank you, Joseph. Let me explain it. On both host 1 and host 2, the Sybase software is in /sybase and the Sybase database is in /sybasedata. On host 2, we have Amanda backup software on /dev/sdc, and I believe an Amanda daemon was running when I ran iostat. (From the output of host 2 you provided, the first stat shows sdc is taking some of the load.)

Host 2 does have additional higher-performance drives, which are not used by the Sybase database (/sybasedata) at all. Would the database benefit from their quicker swap?

Below are the results from fdisk/mount/dmesg(swap) on host 1 and host 2.

Host 1, fdisk -l:
-----------------
Disk /dev/sda: 72.7 GB, 72746008576 bytes
255 heads, 63 sectors/track, 8844 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1           4       32098+  de  Dell Utility
/dev/sda2               5        1279    10241437+  83  Linux
/dev/sda3   *        1280        1406     1020127+  83  Linux
/dev/sda4            1407        8844     59745735   5  Extended
/dev/sda5            1407        8844    59745703+  8e  Linux LVM

Disk /dev/sdb: 598.8 GB, 598879502336 bytes
255 heads, 63 sectors/track, 72809 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1       66868   537117178+  83  Linux
/dev/sdb2           66869       72809    47721082+   5  Extended

host 1, mount:
---------------
/dev/mapper/VolGroup_ID_27777-LogVol2 on / type ext3 (rw)
none on /proc type proc (rw)
none on /sys type sysfs (rw)
none on /dev/pts type devpts (rw,gid=5,mode=620)
usbfs on /proc/bus/usb type usbfs (rw)
/dev/sda3 on /boot type ext3 (rw)
none on /dev/shm type tmpfs (rw)
/dev/mapper/VolGroup_ID_27777-LogVol3 on /tmp type ext3 (rw)
/dev/mapper/VolGroup_ID_27777-LogVol6 on /usr type ext3 (rw)
/dev/mapper/VolGroup_ID_27777-LogVol5 on /var type ext3 (rw)
/dev/mapper/VolGroup_ID_27777-LogVolHome on /home type ext3 (rw)
/dev/mapper/VolGroup_ID_27777-LogVolSybase on /sybase type ext3 (rw)
/dev/mapper/VolGroup_ID_27777-LogVolTranLog on /tranlog type ext3 (rw)
/dev/sdb1 on /sybasedata type ext3 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)

Host 1, dmesg|grep swap
------------------------
Adding 1998840k swap on /dev/VolGroup_ID_27777/LogVol1.  Priority:-1 extents:1
Adding 2097144k swap on /dev/VolGroup_ID_27777/LogVol0.  Priority:-2 extents:1
Host 2, fdisk -l
----------------
Disk /dev/sda: 72.7 GB, 72746008576 bytes
255 heads, 63 sectors/track, 8844 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1           4       32098+  de  Dell Utility
/dev/sda2               5        1534     12289725  83  Linux
/dev/sda3   *        1535        1661     1020127+  83  Linux
/dev/sda4            1662        8844    57697447+   5  Extended
/dev/sda5            1662        8844     57697416  8e  Linux LVM

Disk /dev/sdb: 598.8 GB, 598879502336 bytes
255 heads, 63 sectors/track, 72809 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1       72809    584838261  83  Linux

Disk /dev/sdc: 299.4 GB, 299439751168 bytes
255 heads, 63 sectors/track, 36404 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1        4370    35101993+  83  Linux
/dev/sdc2            4371       36404    257313105  83  Linux

Disk /dev/sdd: 320.0 GB, 320072933376 bytes
255 heads, 63 sectors/track, 38913 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1       38913    312568641  83  Linux

host 2, mount:
---------------
/dev/mapper/VolGroup_ID_787-LogVol1 on / type ext3 (rw)
none on /proc type proc (rw)
none on /sys type sysfs (rw)
none on /dev/pts type devpts (rw,gid=5,mode=620)
usbfs on /proc/bus/usb type usbfs (rw)
/dev/sda3 on /boot type ext3 (rw)
none on /dev/shm type tmpfs (rw)
/dev/mapper/VolGroup_ID_787-LogVol2 on /tmp type ext3 (rw)
/dev/mapper/VolGroup_ID_787-LogVol5 on /usr type ext3 (rw)
/dev/mapper/VolGroup_ID_787-LogVol4 on /var type ext3 (rw)
/dev/mapper/VolGroup_ID_787-LogVolHome on /home type ext3 (rw)
/dev/mapper/VolGroup_ID_787-LogVolSybase on /sybase type ext3 (rw)
/dev/mapper/VolGroup_ID_787-LogVolTranlog on /tranlog type ext3 (rw)
/dev/sdb1 on /sybasedata type ext3 (rw)
/dev/sdc1 on /pkgs type ext3 (rw)
/dev/sdc2 on /amanda-data type ext3 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)

host 2, dmesg |grep swap:
------------------------

host 1 : dmesg |grep swap
Adding 1769464k swap on /dev/VolGroup_ID_787/LogVol0.  Priority:-1 extents:1

Best Regards,
Tina

________________________________
Date: Mon, 12 May 2008 07:11:12 -0500
From: jolt at ti.com
To: redhat-sysadmin-list at redhat.com
Subject: RE: Different performance

Tina,

How are the partitions laid out on the two systems? It is likely that something OS related is accessing sda and sdb on host 1 while being spread across more disks on host 2. From the output of host 2 you provided, the first stat shows sdc is taking some of the load. Regardless of the RAM being the same in both systems, is there much swapping? Swapping on higher performance drives will be quicker.

Regards,

Joseph

________________________________
From: redhat-sysadmin-list-bounces at redhat.com [mailto:redhat-sysadmin-list-bounces at redhat.com] On Behalf Of Tina Tian
Sent: Friday, May 09, 2008 10:42 PM
To: redhat-sysadmin-list at redhat.com
Subject: RE: Different performance

The DB is Sybase ASE 15.0.2, with identical configuration on the two hosts. My SA also confirmed that the two hosts are almost identical, except that host 2 (faster DB load) has two extra disks, sdc and sdd, which run at the higher 15k RPM. The remaining disks, sda and sdb, are identical on both hosts, at 7k RPM. On both host 1 and host 2, DBs are on /dev/sdb only.

Best Regards,
Tina

________________________________
To: redhat-sysadmin-list at redhat.com
Date: Fri, 9 May 2008 16:58:19 -0600
From: larry.sorensen at juno.com
Subject: Re: Different performance

Please include information on the databases including versions.
It could just be different configurations on the databases. Are the patches up to date and equal on both servers? On Fri, 9 May 2008 14:11:25 -0700 Tina Tian writes: I am a DBA. I have identical database servers running on two Linux redhat 4, host 1 and host 2. When I was running the same bulk load to database (load a data file to database), host 2 was much faster than host 1. On both host1 and host2, database are using file system mount on /dev/sda and /dev/sdb. I checked with my SA, host1 and host2 have same CPU, RAM, file system configuration. The only different is that host 2 has extra HD capacity with higher 15k RPM. But the extra 2 HDs(sdc and sdd) are dedicated to other applications, not used by database at all. My questions are:-----------------1. On host2 (faster), the extra faster HDs(/dev/sdc and sdd) are not used by database. Does it still affect IO performance of /dev/sda and /dev/sdb ? 2. During database bulk load testing, host 1(slower) shows longer service IO time (svctm) and longer IO waiting time(await). What other possible reason can cause this problem? Any idea? I did post the same issue to database discussion group and they suggested me to check OS performance(svctm). Below is the result from iostat on host1(slower) and host2(faster) during bulk load: Host 1: iostat -x 2===================== avg-cpu: %user %nice %sys %iowait %idle 0.15 0.00 0.07 0.28 99.49 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.01 0.59 0.24 0.19 29.22 6.17 14.61 3.08 83.49 0.01 21.71 3.84 0.16 sdb 0.04 10.05 0.89 3.74 117.37 110.34 58.69 55.17 49.13 0.10 21.76 4.48 2.08 avg-cpu: %user %nice %sys %iowait %idle 15.74 0.00 8.99 0.31 74.95 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 1.99 0.00 57.71 0.00 14025.87 0.00 7012.94 0.00 243.03 0.21 3.58 3.53 20.35 sdb 0.00 0.00 11.94 0.00 95.52 0.00 47.76 0.00 8.00 0.02 2.04 2.04 2.44 avg-cpu: %user %nice %sys %iowait %idle 6.18 0.00 2.37 9.24 82.20 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.50 0.50 23.00 1.00 5732.00 12.00 2866.00 6.00 239.33 0.07 3.08 3.02 7.25 sdb 0.00 129.00 7.00 130.00 56.00 2076.00 28.00 1038.00 15.56 0.75 5.49 5.40 73.95 avg-cpu: %user %nice %sys %iowait %idle 0.06 0.00 0.12 12.44 87.38 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 3.50 0.00 3.00 0.00 52.00 0.00 26.00 17.33 0.03 10.00 3.67 1.10 sdb 0.00 182.50 0.00 182.50 0.00 2920.00 0.00 1460.00 16.00 0.99 5.44 5.44 99.30 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.12 12.49 87.38 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 0.50 0.00 1.01 0.00 12.06 0.00 6.03 12.00 0.01 6.00 6.00 0.60 sdb 0.00 184.92 0.00 185.43 0.00 2962.81 0.00 1481.41 15.98 1.01 5.45 5.38 99.70 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.06 12.43 87.51 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sdb 0.00 184.08 0.00 184.08 0.00 2945.27 0.00 1472.64 16.00 0.99 5.39 5.38 99.00 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.12 12.31 87.56 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 1.00 0.00 1.50 0.00 20.00 0.00 10.00 13.33 0.02 15.33 6.67 1.00 sdb 0.00 181.00 0.00 181.00 0.00 2896.00 0.00 1448.00 16.00 0.99 5.48 5.49 99.40 avg-cpu: %user %nice %sys %iowait 
%idle 0.00 0.00 0.19 12.37 87.45 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sdb 0.00 178.00 0.00 178.50 0.00 2852.00 0.00 1426.00 15.98 1.00 5.61 5.55 99.10 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.12 12.37 87.51 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sdb 0.00 179.50 0.00 179.50 0.00 2872.00 0.00 1436.00 16.00 0.99 5.52 5.53 99.25 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.06 12.44 87.50 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 1.50 0.00 3.50 0.00 40.00 0.00 20.00 11.43 0.07 20.00 4.00 1.40 sdb 0.00 179.00 0.00 179.50 0.00 2868.00 0.00 1434.00 15.98 1.02 5.68 5.53 99.30 avg-cpu: %user %nice %sys %iowait %idle 0.06 0.00 0.19 12.41 87.34 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 0.50 0.00 1.00 0.00 12.00 0.00 6.00 12.00 0.01 6.50 6.50 0.65 sdb 0.00 183.50 0.00 183.50 0.00 2936.00 0.00 1468.00 16.00 0.99 5.40 5.41 99.25 Host 2: iostat -x 2 ================== avg-cpu: %user %nice %sys %iowait %idle 0.96 0.00 0.69 0.21 98.15 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util hda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 48.00 0.00 1.33 1.33 0.00 sda 0.01 5.31 0.23 1.55 17.96 54.93 8.98 27.47 40.76 0.07 41.59 1.21 0.22 sdb 0.03 3.99 0.84 0.47 113.52 35.67 56.76 17.83 114.36 0.03 23.00 2.55 0.33 sdc 0.05 37.80 0.58 1.50 131.96 314.37 65.98 157.19 214.93 0.43 205.85 2.84 0.59 sdd 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 40.35 0.00 3.52 3.52 0.00 avg-cpu: %user %nice %sys %iowait %idle 16.03 0.00 8.61 0.44 74.92 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util hda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sda 1.99 14.43 57.71 6.97 13775.12 171.14 6887.56 85.57 215.63 0.22 3.43 3.36 21.74 sdb 0.00 357.71 7.96 358.71 63.68 5731.34 31.84 2865.67 15.80 0.04 0.10 0.10 3.83 sdc 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sdd 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 avg-cpu: %user %nice %sys %iowait %idle 15.62 0.00 8.81 0.56 75.00 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util hda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sda 1.50 0.00 56.00 0.00 13964.00 0.00 6982.00 0.00 249.36 0.22 3.90 3.89 21.80 sdb 0.00 635.00 7.00 635.00 64.00 10160.00 32.00 5080.00 15.93 0.06 0.09 0.09 5.55 sdc 0.00 1.00 0.00 1.50 0.00 20.00 0.00 10.00 13.33 0.00 0.00 0.00 0.00 sdd 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 Thanks, Tina Sign in and you could WIN! Enter for your chance to win $1000 every day. Visit SignInAndWIN.ca today to learn more! You could win $1000 a day, now until May 12th, just for signing in to Windows Live Messenger. Check out SignInAndWIN.ca to learn more! _________________________________________________________________ Enter today for your chance to win $1000 a day?today until May 12th. Learn more at SignInAndWIN.ca http://g.msn.ca/ca55/215 -------------- next part -------------- An HTML attachment was scrubbed... 
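One way to answer the "quicker swap" question raised in the message above is to confirm whether either host actually swaps during the bulk load; if the si/so columns in vmstat stay at zero, the speed of the disks backing swap should not matter. A few illustrative checks (run on each host while the load is in progress; this is a sketch, not something posted in the thread):

# Sketch only: is swap actually being touched during the load?
swapon -s                     # which devices back swap and how much of each is in use
free -m                       # memory and swap usage snapshot, in MB
vmstat 5 5                    # watch the si/so columns (pages swapped in/out per second)
grep -i swap /proc/meminfo    # SwapTotal / SwapFree / SwapCached

In the vmstat samples Tina posts later in the thread, si and so appear to stay at zero on both hosts, which points away from swap speed as the explanation.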
URL: From dongwu at yahoo-inc.com Mon May 12 17:48:16 2008 From: dongwu at yahoo-inc.com (Dongwu Zeng) Date: Mon, 12 May 2008 10:48:16 -0700 Subject: Different performance In-Reply-To: Message-ID: You may want to use ?dd? command to do some raw-io testing. The test can help check whether the problem is hardware/kernel related or fs/app related. Good luck. Dongwu Zeng On 5/12/08 10:36 AM, "Tina Tian" wrote: > > Thank you, Joseph. > > Let me explain it. On both host 1 and host2, sybase software is in /sybase and > sybase database is in /sybasedata. On host 2, we have amada backup software in > /dev/sdc and I believe some amada demon was running when I ran iostat. (> From > the output of host 2 you provided, the first stat shows sdc is taking some of > the load). > > Host 2 do have additional higher performance drivers which are not being used > by sybase database (/sybasedata) at all. Will database be benefit from their > quicker swap? > > Belows are results from fdisk/mount/dmesg(swap) on host1 and host2. > > > Host 1, fdisk -l: > ----------------- > Disk /dev/sda: 72.7 GB, 72746008576 bytes > 255 heads, 63 sectors/track, 8844 cylinders > Units = cylinders of 16065 * 512 = 8225280 bytes > Device Boot Start End Blocks Id System > /dev/sda1 1 4 32098+ de Dell Utility > /dev/sda2 5 1279 10241437+ 83 Linux > /dev/sda3 * 1280 1406 1020127+ 83 Linux > /dev/sda4 1407 8844 59745735 5 Extended > /dev/sda5 1407 8844 59745703+ 8e Linux LVM > Disk /dev/sdb: 598.8 GB, 598879502336 bytes > 255 heads, 63 sectors/track, 72809 cylinders > Units = cylinders of 16065 * 512 = 8225280 bytes > Device Boot Start End Blocks Id System > /dev/sdb1 1 66868 537117178+ 83 Linux > /dev/sdb2 66869 72809 47721082+ 5 Extended > > > host 1, mount: > --------------- > /dev/mapper/VolGroup_ID_27777-LogVol2 on / type ext3 (rw) > none on /proc type proc (rw) > none on /sys type sysfs (rw) > none on /dev/pts type devpts (rw,gid=5,mode=620) > usbfs on /proc/bus/usb type usbfs (rw) > /dev/sda3 on /boot type ext3 (rw) > none on /dev/shm type tmpfs (rw) > /dev/mapper/VolGroup_ID_27777-LogVol3 on /tmp type ext3 (rw) > /dev/mapper/VolGroup_ID_27777-LogVol6 on /usr type ext3 (rw) > /dev/mapper/VolGroup_ID_27777-LogVol5 on /var type ext3 (rw) > /dev/mapper/VolGroup_ID_27777-LogVolHome on /home type ext3 (rw) > /dev/mapper/VolGroup_ID_27777-LogVolSybase on /sybase type ext3 (rw) > /dev/mapper/VolGroup_ID_27777-LogVolTranLog on /tranlog type ext3 (rw) > /dev/sdb1 on /sybasedata type ext3 (rw) > none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw) > sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw) > > Host 1, dmesg|grep swap > ------------------------ > Adding 1998840k swap on /dev/VolGroup_ID_27777/LogVol1. Priority:-1 extents:1 > Adding 2097144k swap on /dev/VolGroup_ID_27777/LogVol0. 
Priority:-2 extents:1 > > Host 2, fdisk -l > ---------------- > Disk /dev/sda: 72.7 GB, 72746008576 bytes > 255 heads, 63 sectors/track, 8844 cylinders > Units = cylinders of 16065 * 512 = 8225280 bytes > Device Boot Start End Blocks Id System > /dev/sda1 1 4 32098+ de Dell Utility > /dev/sda2 5 1534 12289725 83 Linux > /dev/sda3 * 1535 1661 1020127+ 83 Linux > /dev/sda4 1662 8844 57697447+ 5 Extended > /dev/sda5 1662 8844 57697416 8e Linux LVM > Disk /dev/sdb: 598.8 GB, 598879502336 bytes > 255 heads, 63 sectors/track, 72809 cylinders > Units = cylinders of 16065 * 512 = 8225280 bytes > Device Boot Start End Blocks Id System > /dev/sdb1 * 1 72809 584838261 83 Linux > Disk /dev/sdc: 299.4 GB, 299439751168 bytes > 255 heads, 63 sectors/track, 36404 cylinders > Units = cylinders of 16065 * 512 = 8225280 bytes > Device Boot Start End Blocks Id System > /dev/sdc1 1 4370 35101993+ 83 Linux > /dev/sdc2 4371 36404 257313105 83 Linux > Disk /dev/sdd: 320.0 GB, 320072933376 bytes > 255 heads, 63 sectors/track, 38913 cylinders > Units = cylinders of 16065 * 512 = 8225280 bytes > Device Boot Start End Blocks Id System > /dev/sdd1 1 38913 312568641 83 Linux > > host 2, mount: > --------------- > /dev/mapper/VolGroup_ID_787-LogVol1 on / type ext3 (rw) > none on /proc type proc (rw) > none on /sys type sysfs (rw) > none on /dev/pts type devpts (rw,gid=5,mode=620) > usbfs on /proc/bus/usb type usbfs (rw) > /dev/sda3 on /boot type ext3 (rw) > none on /dev/shm type tmpfs (rw) > /dev/mapper/VolGroup_ID_787-LogVol2 on /tmp type ext3 (rw) > /dev/mapper/VolGroup_ID_787-LogVol5 on /usr type ext3 (rw) > /dev/mapper/VolGroup_ID_787-LogVol4 on /var type ext3 (rw) > /dev/mapper/VolGroup_ID_787-LogVolHome on /home type ext3 (rw) > /dev/mapper/VolGroup_ID_787-LogVolSybase on /sybase type ext3 (rw) > /dev/mapper/VolGroup_ID_787-LogVolTranlog on /tranlog type ext3 (rw) > /dev/sdb1 on /sybasedata type ext3 (rw) > /dev/sdc1 on /pkgs type ext3 (rw) > /dev/sdc2 on /amanda-data type ext3 (rw) > none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw) > sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw) > > host 2 dmesg |grep swap: > ------------------------ > > host 1 : dmesg |grep swap > Adding 1769464k swap on /dev/VolGroup_ID_787/LogVol0. Priority:-1 extents:1 > > > Best Regards, > Tina >> >> Date: Mon, 12 May 2008 07:11:12 -0500 >> From: jolt at ti.com >> To: redhat-sysadmin-list at redhat.com >> Subject: RE: Different performance >> >> Tina, >> >> >> >> How are the partitions laid out on the two systems? It is likely that >> something OS related is accessing sda and sdb or host 1 while being spread >> across more disks in host 2. From the output of host 2 you provided, the >> first stat shows sdc is taking some of the load. Regardless of the RAM being >> the same in both systems, is there much swapping? Swapping on higher >> performance drives will be quicker. >> >> >> >> Regards, >> >> >> >> Joseph >> >> >> >> >> From: redhat-sysadmin-list-bounces at redhat.com >> [mailto:redhat-sysadmin-list-bounces at redhat.com] On Behalf Of Tina Tian >> Sent: Friday, May 09, 2008 10:42 PM >> To: redhat-sysadmin-list at redhat.com >> Subject: RE: Different performance >> >> >> >> The DB is Sybase ASE 15.0.2. Identical configuration on two hosts. My SA also >> confirmed that two hosts are almost identical except host2(faster DB load) >> has extra two disks sdc and sdd, sdc and sdd are with higer RPM=15k. The >> rest of disks sda and adb are identical on two hosts, with RPM=7k. 
On both >> host 1 and host 2, DBs are on /dev/sdb only. >> >> >> Best Regards, >> Tina >> >> >> To: redhat-sysadmin-list at redhat.com >> Date: Fri, 9 May 2008 16:58:19 -0600 >> From: larry.sorensen at juno.com >> Subject: Re: Different performance >> >> Please include information on the databases including versions. It could just >> be different configurations on the databases. Are the patches up to date and >> equal on both servers? >> >> >> >> On Fri, 9 May 2008 14:11:25 -0700 Tina Tian writes: >>> >>> I am a DBA. I have identical database servers running on two Linux redhat 4, >>> host 1 and host 2. When I was running the same bulk load to database (load a >>> data file to database), host 2 was much faster than host 1. >>> >>> On both host1 and host2, database are using file system mount on /dev/sda >>> and /dev/sdb. >>> >>> I checked with my SA, host1 and host2 have same CPU, RAM, file system >>> configuration. The only different is that host 2 has extra HD capacity with >>> higher 15k RPM. But the extra 2 HDs(sdc and sdd) are dedicated to other >>> applications, not used by database at all. >>> >>> My questions are: >>> ----------------- >>> 1. On host2 (faster), the extra faster HDs(/dev/sdc and sdd) are not used by >>> database. Does it still affect IO performance of /dev/sda and /dev/sdb ? >>> >>> 2. During database bulk load testing, host 1(slower) shows longer service IO >>> time (svctm) and longer IO waiting time(await). >>> What other possible reason can cause this problem? Any idea? >>> >>> I did post the same issue to database discussion group and they suggested me >>> to check OS performance(svctm). >>> >>> >>> Below is the result from iostat on host1(slower) and host2(faster) during >>> bulk load: >>> >>> Host 1: iostat -x 2 >>> ===================== >>> >>> >>> >>> avg-cpu: %user %nice %sys %iowait %idle >>> >>> 0.15 0.00 0.07 0.28 99.49 >>> >>> >>> >>> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s >>> avgrq-sz avgqu-sz await svctm %util >>> >>> sda 0.01 0.59 0.24 0.19 29.22 6.17 14.61 3.08 >>> 83.49 0.01 21.71 3.84 0.16 >>> >>> sdb 0.04 10.05 0.89 3.74 117.37 110.34 58.69 55.17 >>> 49.13 0.10 21.76 4.48 2.08 >>> >>> >>> >>> avg-cpu: %user %nice %sys %iowait %idle >>> >>> 15.74 0.00 8.99 0.31 74.95 >>> >>> >>> >>> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s >>> avgrq-sz avgqu-sz await svctm %util >>> >>> sda 1.99 0.00 57.71 0.00 14025.87 0.00 7012.94 0.00 >>> 243.03 0.21 3.58 3.53 20.35 >>> >>> sdb 0.00 0.00 11.94 0.00 95.52 0.00 47.76 0.00 >>> 8.00 0.02 2.04 2.04 2.44 >>> >>> >>> >>> avg-cpu: %user %nice %sys %iowait %idle >>> >>> 6.18 0.00 2.37 9.24 82.20 >>> >>> >>> >>> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s >>> avgrq-sz avgqu-sz await svctm %util >>> >>> sda 0.50 0.50 23.00 1.00 5732.00 12.00 2866.00 6.00 >>> 239.33 0.07 3.08 3.02 7.25 >>> >>> sdb 0.00 129.00 7.00 130.00 56.00 2076.00 28.00 1038.00 >>> 15.56 0.75 5.49 5.40 73.95 >>> >>> >>> >>> avg-cpu: %user %nice %sys %iowait %idle >>> >>> 0.06 0.00 0.12 12.44 87.38 >>> >>> >>> >>> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s >>> avgrq-sz avgqu-sz await svctm %util >>> >>> sda 0.00 3.50 0.00 3.00 0.00 52.00 0.00 26.00 >>> 17.33 0.03 10.00 3.67 1.10 >>> >>> sdb 0.00 182.50 0.00 182.50 0.00 2920.00 0.00 1460.00 >>> 16.00 0.99 5.44 5.44 99.30 >>> >>> >>> >>> avg-cpu: %user %nice %sys %iowait %idle >>> >>> 0.00 0.00 0.12 12.49 87.38 >>> >>> >>> >>> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s >>> avgrq-sz avgqu-sz await svctm %util >>> >>> sda 0.00 0.50 0.00 
1.01 0.00 12.06 0.00 6.03 >>> 12.00 0.01 6.00 6.00 0.60 >>> >>> sdb 0.00 184.92 0.00 185.43 0.00 2962.81 0.00 1481.41 >>> 15.98 1.01 5.45 5.38 99.70 >>> >>> >>> >>> avg-cpu: %user %nice %sys %iowait %idle >>> >>> 0.00 0.00 0.06 12.43 87.51 >>> >>> >>> >>> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s >>> avgrq-sz avgqu-sz await svctm %util >>> >>> sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 >>> 0.00 0.00 0.00 0.00 0.00 >>> >>> sdb 0.00 184.08 0.00 184.08 0.00 2945.27 0.00 1472.64 >>> 16.00 0.99 5.39 5.38 99.00 >>> >>> >>> >>> avg-cpu: %user %nice %sys %iowait %idle >>> >>> 0.00 0.00 0.12 12.31 87.56 >>> >>> >>> >>> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s >>> avgrq-sz avgqu-sz await svctm %util >>> >>> sda 0.00 1.00 0.00 1.50 0.00 20.00 0.00 10.00 >>> 13.33 0.02 15.33 6.67 1.00 >>> >>> sdb 0.00 181.00 0.00 181.00 0.00 2896.00 0.00 1448.00 >>> 16.00 0.99 5.48 5.49 99.40 >>> >>> >>> >>> avg-cpu: %user %nice %sys %iowait %idle >>> >>> 0.00 0.00 0.19 12.37 87.45 >>> >>> >>> >>> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s >>> avgrq-sz avgqu-sz await svctm %util >>> >>> sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 >>> 0.00 0.00 0.00 0.00 0.00 >>> >>> sdb 0.00 178.00 0.00 178.50 0.00 2852.00 0.00 1426.00 >>> 15.98 1.00 5.61 5.55 99.10 >>> >>> >>> >>> avg-cpu: %user %nice %sys %iowait %idle >>> >>> 0.00 0.00 0.12 12.37 87.51 >>> >>> >>> >>> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s >>> avgrq-sz avgqu-sz await svctm %util >>> >>> sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 >>> 0.00 0.00 0.00 0.00 0.00 >>> >>> sdb 0.00 179.50 0.00 179.50 0.00 2872.00 0.00 1436.00 >>> 16.00 0.99 5.52 5.53 99.25 >>> >>> >>> >>> avg-cpu: %user %nice %sys %iowait %idle >>> >>> 0.00 0.00 0.06 12.44 87.50 >>> >>> >>> >>> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s >>> avgrq-sz avgqu-sz await svctm %util >>> >>> sda 0.00 1.50 0.00 3.50 0.00 40.00 0.00 20.00 >>> 11.43 0.07 20.00 4.00 1.40 >>> >>> sdb 0.00 179.00 0.00 179.50 0.00 2868.00 0.00 1434.00 >>> 15.98 1.02 5.68 5.53 99.30 >>> >>> >>> >>> avg-cpu: %user %nice %sys %iowait %idle >>> >>> 0.06 0.00 0.19 12.41 87.34 >>> >>> >>> >>> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s >>> avgrq-sz avgqu-sz await svctm %util >>> >>> sda 0.00 0.50 0.00 1.00 0.00 12.00 0.00 6.00 >>> 12.00 0.01 6.50 6.50 0.65 >>> >>> sdb 0.00 183.50 0.00 183.50 0.00 2936.00 0.00 1468.00 >>> 16.00 0.99 5.40 5.41 99.25 >>> >>> >>> >>> >>> >>> >>> Host 2: iostat -x 2 >>> >>> ================== >>> >>> avg-cpu: %user %nice %sys %iowait %idle >>> >>> 0.96 0.00 0.69 0.21 98.15 >>> >>> >>> >>> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s >>> avgrq-sz avgqu-sz await svctm %util >>> >>> hda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 >>> 48.00 0.00 1.33 1.33 0.00 >>> >>> sda 0.01 5.31 0.23 1.55 17.96 54.93 8.98 27.47 >>> 40.76 0.07 41.59 1.21 0.22 >>> >>> sdb 0.03 3.99 0.84 0.47 113.52 35.67 56.76 17.83 >>> 114.36 0.03 23.00 2.55 0.33 >>> >>> sdc 0.05 37.80 0.58 1.50 131.96 314.37 65.98 157.19 >>> 214.93 0.43 205.85 2.84 0.59 >>> >>> sdd 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 >>> 40.35 0.00 3.52 3.52 0.00 >>> >>> >>> >>> avg-cpu: %user %nice %sys %iowait %idle >>> >>> 16.03 0.00 8.61 0.44 74.92 >>> >>> >>> >>> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s >>> avgrq-sz avgqu-sz await svctm %util >>> >>> hda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 >>> 0.00 0.00 0.00 0.00 0.00 >>> >>> sda 1.99 14.43 57.71 6.97 13775.12 171.14 6887.56 85.57 >>> 215.63 0.22 3.43 3.36 21.74 >>> >>> sdb 0.00 357.71 7.96 358.71 63.68 5731.34 
31.84 2865.67 >>> 15.80 0.04 0.10 0.10 3.83 >>> >>> sdc 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 >>> 0.00 0.00 0.00 0.00 0.00 >>> >>> sdd 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 >>> 0.00 0.00 0.00 0.00 0.00 >>> >>> >>> >>> avg-cpu: %user %nice %sys %iowait %idle >>> >>> 15.62 0.00 8.81 0.56 75.00 >>> >>> >>> >>> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s >>> avgrq-sz avgqu-sz await svctm %util >>> >>> hda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 >>> 0.00 0.00 0.00 0.00 0.00 >>> >>> sda 1.50 0.00 56.00 0.00 13964.00 0.00 6982.00 0.00 >>> 249.36 0.22 3.90 3.89 21.80 >>> >>> sdb 0.00 635.00 7.00 635.00 64.00 10160.00 32.00 5080.00 >>> 15.93 0.06 0.09 0.09 5.55 >>> >>> sdc 0.00 1.00 0.00 1.50 0.00 20.00 0.00 10.00 >>> 13.33 0.00 0.00 0.00 0.00 >>> >>> sdd 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 >>> 0.00 0.00 0.00 0.00 0.00 >>> >>> >>> >>> >>> >>> Thanks, >>> >>> Tina >>> >>> >>> >>> >>> >>> Sign in and you could WIN! Enter for your chance to win $1000 every day. >>> Visit SignInAndWIN.ca today to learn more! >>> >>> >> >> >> >> >> You could win $1000 a day, now until May 12th, just for signing in to Windows >> Live Messenger. Check out SignInAndWIN.ca to learn more! >> > > > Sign in to Windows Live Messenger, and enter for your chance to win $1000 a > day?today until May 12th. Visit SignInAndWIN.ca > > -- > redhat-sysadmin-list mailing list > redhat-sysadmin-list at redhat.com > https://www.redhat.com/mailman/listinfo/redhat-sysadmin-list -------------- next part -------------- An HTML attachment was scrubbed... URL: From jolt at ti.com Mon May 12 18:59:43 2008 From: jolt at ti.com (Olt, Joseph) Date: Mon, 12 May 2008 13:59:43 -0500 Subject: Different performance In-Reply-To: Message-ID: <6B34B8A05FA7544BB7F013ACD452E02802AD608A@dlee11.ent.ti.com> Tina, Could you run a vmstat output while under load to see how much memory is swapping and how quickly context switching is occurring? "vmstat 5 20" Also, what kernel is running? "uname -a" ________________________________ From: redhat-sysadmin-list-bounces at redhat.com [mailto:redhat-sysadmin-list-bounces at redhat.com] On Behalf Of Tina Tian Sent: Monday, May 12, 2008 1:37 PM To: redhat-sysadmin-list at redhat.com Subject: RE: Different performance Thank you, Joseph. Let me explain it. On both host 1 and host2, sybase software is in /sybase and sybase database is in /sybasedata. On host 2, we have amada backup software in /dev/sdc and I believe some amada demon was running when I ran iostat. (> From the output of host 2 you provided, the first stat shows sdc is taking some of the load). Host 2 do have additional higher performance drivers which are not being used by sybase database (/sybasedata) at all. Will database be benefit from their quicker swap? Belows are results from fdisk/mount/dmesg(swap) on host1 and host2. 
Host 1, fdisk -l: ----------------- Disk /dev/sda: 72.7 GB, 72746008576 bytes 255 heads, 63 sectors/track, 8844 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sda1 1 4 32098+ de Dell Utility /dev/sda2 5 1279 10241437+ 83 Linux /dev/sda3 * 1280 1406 1020127+ 83 Linux /dev/sda4 1407 8844 59745735 5 Extended /dev/sda5 1407 8844 59745703+ 8e Linux LVM Disk /dev/sdb: 598.8 GB, 598879502336 bytes 255 heads, 63 sectors/track, 72809 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sdb1 1 66868 537117178+ 83 Linux /dev/sdb2 66869 72809 47721082+ 5 Extended host 1, mount: --------------- /dev/mapper/VolGroup_ID_27777-LogVol2 on / type ext3 (rw) none on /proc type proc (rw) none on /sys type sysfs (rw) none on /dev/pts type devpts (rw,gid=5,mode=620) usbfs on /proc/bus/usb type usbfs (rw) /dev/sda3 on /boot type ext3 (rw) none on /dev/shm type tmpfs (rw) /dev/mapper/VolGroup_ID_27777-LogVol3 on /tmp type ext3 (rw) /dev/mapper/VolGroup_ID_27777-LogVol6 on /usr type ext3 (rw) /dev/mapper/VolGroup_ID_27777-LogVol5 on /var type ext3 (rw) /dev/mapper/VolGroup_ID_27777-LogVolHome on /home type ext3 (rw) /dev/mapper/VolGroup_ID_27777-LogVolSybase on /sybase type ext3 (rw) /dev/mapper/VolGroup_ID_27777-LogVolTranLog on /tranlog type ext3 (rw) /dev/sdb1 on /sybasedata type ext3 (rw) none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw) sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw) Host 1, dmesg|grep swap ------------------------ Adding 1998840k swap on /dev/VolGroup_ID_27777/LogVol1. Priority:-1 extents:1 Adding 2097144k swap on /dev/VolGroup_ID_27777/LogVol0. Priority:-2 extents:1 Host 2, fdisk -l ---------------- Disk /dev/sda: 72.7 GB, 72746008576 bytes 255 heads, 63 sectors/track, 8844 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sda1 1 4 32098+ de Dell Utility /dev/sda2 5 1534 12289725 83 Linux /dev/sda3 * 1535 1661 1020127+ 83 Linux /dev/sda4 1662 8844 57697447+ 5 Extended /dev/sda5 1662 8844 57697416 8e Linux LVM Disk /dev/sdb: 598.8 GB, 598879502336 bytes 255 heads, 63 sectors/track, 72809 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sdb1 * 1 72809 584838261 83 Linux Disk /dev/sdc: 299.4 GB, 299439751168 bytes 255 heads, 63 sectors/track, 36404 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sdc1 1 4370 35101993+ 83 Linux /dev/sdc2 4371 36404 257313105 83 Linux Disk /dev/sdd: 320.0 GB, 320072933376 bytes 255 heads, 63 sectors/track, 38913 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sdd1 1 38913 312568641 83 Linux host 2, mount: --------------- /dev/mapper/VolGroup_ID_787-LogVol1 on / type ext3 (rw) none on /proc type proc (rw) none on /sys type sysfs (rw) none on /dev/pts type devpts (rw,gid=5,mode=620) usbfs on /proc/bus/usb type usbfs (rw) /dev/sda3 on /boot type ext3 (rw) none on /dev/shm type tmpfs (rw) /dev/mapper/VolGroup_ID_787-LogVol2 on /tmp type ext3 (rw) /dev/mapper/VolGroup_ID_787-LogVol5 on /usr type ext3 (rw) /dev/mapper/VolGroup_ID_787-LogVol4 on /var type ext3 (rw) /dev/mapper/VolGroup_ID_787-LogVolHome on /home type ext3 (rw) /dev/mapper/VolGroup_ID_787-LogVolSybase on /sybase type ext3 (rw) /dev/mapper/VolGroup_ID_787-LogVolTranlog on /tranlog type ext3 (rw) /dev/sdb1 on /sybasedata type ext3 (rw) /dev/sdc1 on /pkgs type ext3 (rw) /dev/sdc2 
on /amanda-data type ext3 (rw) none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw) sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw) host 2 dmesg |grep swap: ------------------------ host 1 : dmesg |grep swap Adding 1769464k swap on /dev/VolGroup_ID_787/LogVol0. Priority:-1 extents:1 Best Regards, Tina ________________________________ Date: Mon, 12 May 2008 07:11:12 -0500 From: jolt at ti.com To: redhat-sysadmin-list at redhat.com Subject: RE: Different performance Tina, How are the partitions laid out on the two systems? It is likely that something OS related is accessing sda and sdb or host 1 while being spread across more disks in host 2. From the output of host 2 you provided, the first stat shows sdc is taking some of the load. Regardless of the RAM being the same in both systems, is there much swapping? Swapping on higher performance drives will be quicker. Regards, Joseph ________________________________ From: redhat-sysadmin-list-bounces at redhat.com [mailto:redhat-sysadmin-list-bounces at redhat.com] On Behalf Of Tina Tian Sent: Friday, May 09, 2008 10:42 PM To: redhat-sysadmin-list at redhat.com Subject: RE: Different performance The DB is Sybase ASE 15.0.2. Identical configuration on two hosts. My SA also confirmed that two hosts are almost identical except host2(faster DB load) has extra two disks sdc and sdd, sdc and sdd are with higer RPM=15k. The rest of disks sda and adb are identical on two hosts, with RPM=7k. On both host 1 and host 2, DBs are on /dev/sdb only. Best Regards, Tina ________________________________ To: redhat-sysadmin-list at redhat.com Date: Fri, 9 May 2008 16:58:19 -0600 From: larry.sorensen at juno.com Subject: Re: Different performance Please include information on the databases including versions. It could just be different configurations on the databases. Are the patches up to date and equal on both servers? On Fri, 9 May 2008 14:11:25 -0700 Tina Tian writes: I am a DBA. I have identical database servers running on two Linux redhat 4, host 1 and host 2. When I was running the same bulk load to database (load a data file to database), host 2 was much faster than host 1. On both host1 and host2, database are using file system mount on /dev/sda and /dev/sdb. I checked with my SA, host1 and host2 have same CPU, RAM, file system configuration. The only different is that host 2 has extra HD capacity with higher 15k RPM. But the extra 2 HDs(sdc and sdd) are dedicated to other applications, not used by database at all. My questions are: ----------------- 1. On host2 (faster), the extra faster HDs(/dev/sdc and sdd) are not used by database. Does it still affect IO performance of /dev/sda and /dev/sdb ? 2. During database bulk load testing, host 1(slower) shows longer service IO time (svctm) and longer IO waiting time(await). What other possible reason can cause this problem? Any idea? I did post the same issue to database discussion group and they suggested me to check OS performance(svctm). 
Below is the result from iostat on host1(slower) and host2(faster) during bulk load: Host 1: iostat -x 2 ===================== avg-cpu: %user %nice %sys %iowait %idle 0.15 0.00 0.07 0.28 99.49 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.01 0.59 0.24 0.19 29.22 6.17 14.61 3.08 83.49 0.01 21.71 3.84 0.16 sdb 0.04 10.05 0.89 3.74 117.37 110.34 58.69 55.17 49.13 0.10 21.76 4.48 2.08 avg-cpu: %user %nice %sys %iowait %idle 15.74 0.00 8.99 0.31 74.95 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 1.99 0.00 57.71 0.00 14025.87 0.00 7012.94 0.00 243.03 0.21 3.58 3.53 20.35 sdb 0.00 0.00 11.94 0.00 95.52 0.00 47.76 0.00 8.00 0.02 2.04 2.04 2.44 avg-cpu: %user %nice %sys %iowait %idle 6.18 0.00 2.37 9.24 82.20 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.50 0.50 23.00 1.00 5732.00 12.00 2866.00 6.00 239.33 0.07 3.08 3.02 7.25 sdb 0.00 129.00 7.00 130.00 56.00 2076.00 28.00 1038.00 15.56 0.75 5.49 5.40 73.95 avg-cpu: %user %nice %sys %iowait %idle 0.06 0.00 0.12 12.44 87.38 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 3.50 0.00 3.00 0.00 52.00 0.00 26.00 17.33 0.03 10.00 3.67 1.10 sdb 0.00 182.50 0.00 182.50 0.00 2920.00 0.00 1460.00 16.00 0.99 5.44 5.44 99.30 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.12 12.49 87.38 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 0.50 0.00 1.01 0.00 12.06 0.00 6.03 12.00 0.01 6.00 6.00 0.60 sdb 0.00 184.92 0.00 185.43 0.00 2962.81 0.00 1481.41 15.98 1.01 5.45 5.38 99.70 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.06 12.43 87.51 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sdb 0.00 184.08 0.00 184.08 0.00 2945.27 0.00 1472.64 16.00 0.99 5.39 5.38 99.00 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.12 12.31 87.56 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 1.00 0.00 1.50 0.00 20.00 0.00 10.00 13.33 0.02 15.33 6.67 1.00 sdb 0.00 181.00 0.00 181.00 0.00 2896.00 0.00 1448.00 16.00 0.99 5.48 5.49 99.40 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.19 12.37 87.45 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sdb 0.00 178.00 0.00 178.50 0.00 2852.00 0.00 1426.00 15.98 1.00 5.61 5.55 99.10 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.12 12.37 87.51 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sdb 0.00 179.50 0.00 179.50 0.00 2872.00 0.00 1436.00 16.00 0.99 5.52 5.53 99.25 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.06 12.44 87.50 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 1.50 0.00 3.50 0.00 40.00 0.00 20.00 11.43 0.07 20.00 4.00 1.40 sdb 0.00 179.00 0.00 179.50 0.00 2868.00 0.00 1434.00 15.98 1.02 5.68 5.53 99.30 avg-cpu: %user %nice %sys %iowait %idle 0.06 0.00 0.19 12.41 87.34 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 0.50 0.00 1.00 0.00 12.00 0.00 6.00 12.00 0.01 6.50 6.50 0.65 sdb 0.00 183.50 0.00 183.50 0.00 2936.00 0.00 1468.00 16.00 0.99 5.40 5.41 99.25 
Host 2: iostat -x 2 ================== avg-cpu: %user %nice %sys %iowait %idle 0.96 0.00 0.69 0.21 98.15 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util hda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 48.00 0.00 1.33 1.33 0.00 sda 0.01 5.31 0.23 1.55 17.96 54.93 8.98 27.47 40.76 0.07 41.59 1.21 0.22 sdb 0.03 3.99 0.84 0.47 113.52 35.67 56.76 17.83 114.36 0.03 23.00 2.55 0.33 sdc 0.05 37.80 0.58 1.50 131.96 314.37 65.98 157.19 214.93 0.43 205.85 2.84 0.59 sdd 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 40.35 0.00 3.52 3.52 0.00 avg-cpu: %user %nice %sys %iowait %idle 16.03 0.00 8.61 0.44 74.92 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util hda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sda 1.99 14.43 57.71 6.97 13775.12 171.14 6887.56 85.57 215.63 0.22 3.43 3.36 21.74 sdb 0.00 357.71 7.96 358.71 63.68 5731.34 31.84 2865.67 15.80 0.04 0.10 0.10 3.83 sdc 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sdd 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 avg-cpu: %user %nice %sys %iowait %idle 15.62 0.00 8.81 0.56 75.00 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util hda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sda 1.50 0.00 56.00 0.00 13964.00 0.00 6982.00 0.00 249.36 0.22 3.90 3.89 21.80 sdb 0.00 635.00 7.00 635.00 64.00 10160.00 32.00 5080.00 15.93 0.06 0.09 0.09 5.55 sdc 0.00 1.00 0.00 1.50 0.00 20.00 0.00 10.00 13.33 0.00 0.00 0.00 0.00 sdd 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 Thanks, Tina ________________________________ Sign in and you could WIN! Enter for your chance to win $1000 every day. Visit SignInAndWIN.ca today to learn more! ________________________________ You could win $1000 a day, now until May 12th, just for signing in to Windows Live Messenger. Check out SignInAndWIN.ca to learn more! ________________________________ Sign in to Windows Live Messenger, and enter for your chance to win $1000 a day-today until May 12th. Visit SignInAndWIN.ca -------------- next part -------------- An HTML attachment was scrubbed... URL: From tinatianxia at hotmail.com Mon May 12 20:59:32 2008 From: tinatianxia at hotmail.com (Tina Tian) Date: Mon, 12 May 2008 13:59:32 -0700 Subject: Different performance In-Reply-To: <6B34B8A05FA7544BB7F013ACD452E02802AD608A@dlee11.ent.ti.com> References: <6B34B8A05FA7544BB7F013ACD452E02802AD608A@dlee11.ent.ti.com> Message-ID: I just did two tests. Belows are the vmstat output from the tests. Test 1: dd if=/dev/zero of=dd_out.out bs=1MB count=700 ( dd_out.out creates a file on the same folder of database file). As suggested by Doungwu. vmstat 2 20 on host 1: procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu---- r b swpd free buff cache si so bi bo in cs us sy id wa 0 0 168 622076 76312 1279844 0 0 1 0 0 0 0 0 100 0 0 0 168 622076 76312 1279844 0 0 0 0 1003 133 0 0 100 0 1 0 168 967084 76312 935864 0 0 0 0 1004 136 0 4 96 0 0 2 168 573380 76372 1320864 0 0 2 130386 1317 352 0 9 72 18 vmstat 2 20 on host 2: procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu---- r b swpd free buff cache si so bi bo in cs us sy id wa 0 0 11716 699612 229432 723212 0 0 1 0 1 0 1 1 98 0 2 1 11716 358916 229748 1060636 0 0 0 27648 1057 457 0 7 88 5 0 1 11716 11328 230080 1402984 0 0 0 141598 1205 512 0 9 72 20 Test 2: load a ascii data file to the sybase databae on host1 and host2. 
vmstat 2 20 on host 1: procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu---- r b swpd free buff cache si so bi bo in cs us sy id wa 0 0 168 1106604 78268 797408 0 0 1 0 0 0 0 0 100 0 2 0 168 1104076 78300 799456 0 0 1092 58 1034 175 2 1 96 1 2 0 168 1090124 78316 813480 0 0 6984 0 1102 181 16 9 75 0 2 0 168 1076108 78332 827244 0 0 6984 0 1101 182 16 9 75 0 0 1 168 1070044 78368 832928 0 0 2738 1168 1187 400 6 2 82 10 0 1 168 1069980 78372 832924 0 0 2 1388 1177 364 0 0 87 12 0 1 168 1070012 78380 832916 0 0 0 1494 1190 392 0 0 88 12 0 1 168 1070068 78388 832908 0 0 0 1382 1175 361 0 0 87 12 0 1 168 1070068 78388 832908 0 0 0 1324 1168 345 0 0 87 12 0 1 168 1069996 78396 832900 0 0 0 1426 1181 373 0 0 88 12 0 1 168 1069996 78396 832900 0 0 0 1488 1188 387 0 0 87 12 0 1 168 1070052 78404 832892 0 0 0 1422 1181 374 0 0 87 12 0 1 168 1070052 78404 832892 0 0 0 1316 1167 343 0 0 88 12 0 0 168 1070428 78404 832892 0 0 0 690 1093 245 0 0 94 6 vmstat 2 20 on host 2: procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu---- r b swpd free buff cache si so bi bo in cs us sy id wa 1 0 11716 8156 230820 1412904 0 0 1 0 1 0 1 1 98 0 0 0 11716 9784 230820 1412904 0 0 0 0 1008 1078 0 0 99 0 2 0 11716 4680 230852 1417292 0 0 2328 44 1061 462 5 3 92 1 2 0 11716 4440 230216 1417928 0 0 6936 3330 1515 1309 15 9 74 1 2 0 11716 4824 226236 1421908 0 0 6862 5060 1736 1740 16 9 75 1 0 0 11716 4676 219492 1429432 0 0 1912 4786 1636 1615 5 3 92 0 0 0 11716 6884 217968 1428616 0 0 64 6 1015 1100 2 0 97 1 Can you see something special? Thank you,Tina Date: Mon, 12 May 2008 13:59:43 -0500From: jolt at ti.comTo: redhat-sysadmin-list at redhat.comSubject: RE: Different performance Tina, Could you run a vmstat output while under load to see how much memory is swapping and how quickly context switching is occurring? ?vmstat 5 20? Also, what kernel is running? ?uname ?a? From: redhat-sysadmin-list-bounces at redhat.com [mailto:redhat-sysadmin-list-bounces at redhat.com] On Behalf Of Tina TianSent: Monday, May 12, 2008 1:37 PMTo: redhat-sysadmin-list at redhat.comSubject: RE: Different performance Thank you, Joseph. Let me explain it. On both host 1 and host2, sybase software is in /sybase and sybase database is in /sybasedata. On host 2, we have amada backup software in /dev/sdc and I believe some amada demon was running when I ran iostat. (> From the output of host 2 you provided, the first stat shows sdc is taking some of the load). Host 2 do have additional higher performance drivers which are not being used by sybase database (/sybasedata) at all. Will database be benefit from their quicker swap? Belows are results from fdisk/mount/dmesg(swap) on host1 and host2. 
Host 1, fdisk -l:-----------------Disk /dev/sda: 72.7 GB, 72746008576 bytes255 heads, 63 sectors/track, 8844 cylindersUnits = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System/dev/sda1 1 4 32098+ de Dell Utility/dev/sda2 5 1279 10241437+ 83 Linux/dev/sda3 * 1280 1406 1020127+ 83 Linux/dev/sda4 1407 8844 59745735 5 Extended/dev/sda5 1407 8844 59745703+ 8e Linux LVMDisk /dev/sdb: 598.8 GB, 598879502336 bytes255 heads, 63 sectors/track, 72809 cylindersUnits = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System/dev/sdb1 1 66868 537117178+ 83 Linux/dev/sdb2 66869 72809 47721082+ 5 Extended host 1, mount:---------------/dev/mapper/VolGroup_ID_27777-LogVol2 on / type ext3 (rw)none on /proc type proc (rw)none on /sys type sysfs (rw)none on /dev/pts type devpts (rw,gid=5,mode=620)usbfs on /proc/bus/usb type usbfs (rw)/dev/sda3 on /boot type ext3 (rw)none on /dev/shm type tmpfs (rw)/dev/mapper/VolGroup_ID_27777-LogVol3 on /tmp type ext3 (rw)/dev/mapper/VolGroup_ID_27777-LogVol6 on /usr type ext3 (rw)/dev/mapper/VolGroup_ID_27777-LogVol5 on /var type ext3 (rw)/dev/mapper/VolGroup_ID_27777-LogVolHome on /home type ext3 (rw)/dev/mapper/VolGroup_ID_27777-LogVolSybase on /sybase type ext3 (rw)/dev/mapper/VolGroup_ID_27777-LogVolTranLog on /tranlog type ext3 (rw)/dev/sdb1 on /sybasedata type ext3 (rw)none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)Host 1, dmesg|grep swap------------------------Adding 1998840k swap on /dev/VolGroup_ID_27777/LogVol1. Priority:-1 extents:1Adding 2097144k swap on /dev/VolGroup_ID_27777/LogVol0. Priority:-2 extents:1 Host 2, fdisk -l----------------Disk /dev/sda: 72.7 GB, 72746008576 bytes255 heads, 63 sectors/track, 8844 cylindersUnits = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System/dev/sda1 1 4 32098+ de Dell Utility/dev/sda2 5 1534 12289725 83 Linux/dev/sda3 * 1535 1661 1020127+ 83 Linux/dev/sda4 1662 8844 57697447+ 5 Extended/dev/sda5 1662 8844 57697416 8e Linux LVMDisk /dev/sdb: 598.8 GB, 598879502336 bytes255 heads, 63 sectors/track, 72809 cylindersUnits = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System/dev/sdb1 * 1 72809 584838261 83 LinuxDisk /dev/sdc: 299.4 GB, 299439751168 bytes255 heads, 63 sectors/track, 36404 cylindersUnits = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System/dev/sdc1 1 4370 35101993+ 83 Linux/dev/sdc2 4371 36404 257313105 83 LinuxDisk /dev/sdd: 320.0 GB, 320072933376 bytes255 heads, 63 sectors/track, 38913 cylindersUnits = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System/dev/sdd1 1 38913 312568641 83 Linuxhost 2, mount:---------------/dev/mapper/VolGroup_ID_787-LogVol1 on / type ext3 (rw)none on /proc type proc (rw)none on /sys type sysfs (rw)none on /dev/pts type devpts (rw,gid=5,mode=620)usbfs on /proc/bus/usb type usbfs (rw)/dev/sda3 on /boot type ext3 (rw)none on /dev/shm type tmpfs (rw)/dev/mapper/VolGroup_ID_787-LogVol2 on /tmp type ext3 (rw)/dev/mapper/VolGroup_ID_787-LogVol5 on /usr type ext3 (rw)/dev/mapper/VolGroup_ID_787-LogVol4 on /var type ext3 (rw)/dev/mapper/VolGroup_ID_787-LogVolHome on /home type ext3 (rw)/dev/mapper/VolGroup_ID_787-LogVolSybase on /sybase type ext3 (rw)/dev/mapper/VolGroup_ID_787-LogVolTranlog on /tranlog type ext3 (rw)/dev/sdb1 on /sybasedata type ext3 (rw)/dev/sdc1 on /pkgs type ext3 (rw)/dev/sdc2 on /amanda-data type ext3 (rw)none on /proc/sys/fs/binfmt_misc type 
binfmt_misc (rw)sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)host 2 dmesg |grep swap:------------------------ host 1 : dmesg |grep swapAdding 1769464k swap on /dev/VolGroup_ID_787/LogVol0. Priority:-1 extents:1 Best Regards,Tina Date: Mon, 12 May 2008 07:11:12 -0500From: jolt at ti.comTo: redhat-sysadmin-list at redhat.comSubject: RE: Different performance Tina, How are the partitions laid out on the two systems? It is likely that something OS related is accessing sda and sdb or host 1 while being spread across more disks in host 2. From the output of host 2 you provided, the first stat shows sdc is taking some of the load. Regardless of the RAM being the same in both systems, is there much swapping? Swapping on higher performance drives will be quicker. Regards, Joseph From: redhat-sysadmin-list-bounces at redhat.com [mailto:redhat-sysadmin-list-bounces at redhat.com] On Behalf Of Tina TianSent: Friday, May 09, 2008 10:42 PMTo: redhat-sysadmin-list at redhat.comSubject: RE: Different performance The DB is Sybase ASE 15.0.2. Identical configuration on two hosts. My SA also confirmed that two hosts are almost identical except host2(faster DB load) has extra two disks sdc and sdd, sdc and sdd are with higer RPM=15k. The rest of disks sda and adb are identical on two hosts, with RPM=7k. On both host 1 and host 2, DBs are on /dev/sdb only. Best Regards,Tina To: redhat-sysadmin-list at redhat.comDate: Fri, 9 May 2008 16:58:19 -0600From: larry.sorensen at juno.comSubject: Re: Different performance Please include information on the databases including versions. It could just be different configurations on the databases. Are the patches up to date and equal on both servers? On Fri, 9 May 2008 14:11:25 -0700 Tina Tian writes: I am a DBA. I have identical database servers running on two Linux redhat 4, host 1 and host 2. When I was running the same bulk load to database (load a data file to database), host 2 was much faster than host 1. On both host1 and host2, database are using file system mount on /dev/sda and /dev/sdb. I checked with my SA, host1 and host2 have same CPU, RAM, file system configuration. The only different is that host 2 has extra HD capacity with higher 15k RPM. But the extra 2 HDs(sdc and sdd) are dedicated to other applications, not used by database at all. My questions are:-----------------1. On host2 (faster), the extra faster HDs(/dev/sdc and sdd) are not used by database. Does it still affect IO performance of /dev/sda and /dev/sdb ? 2. During database bulk load testing, host 1(slower) shows longer service IO time (svctm) and longer IO waiting time(await). What other possible reason can cause this problem? Any idea? I did post the same issue to database discussion group and they suggested me to check OS performance(svctm). 
Below is the result from iostat on host1(slower) and host2(faster) during bulk load: Host 1: iostat -x 2===================== avg-cpu: %user %nice %sys %iowait %idle 0.15 0.00 0.07 0.28 99.49 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.01 0.59 0.24 0.19 29.22 6.17 14.61 3.08 83.49 0.01 21.71 3.84 0.16 sdb 0.04 10.05 0.89 3.74 117.37 110.34 58.69 55.17 49.13 0.10 21.76 4.48 2.08 avg-cpu: %user %nice %sys %iowait %idle 15.74 0.00 8.99 0.31 74.95 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 1.99 0.00 57.71 0.00 14025.87 0.00 7012.94 0.00 243.03 0.21 3.58 3.53 20.35 sdb 0.00 0.00 11.94 0.00 95.52 0.00 47.76 0.00 8.00 0.02 2.04 2.04 2.44 avg-cpu: %user %nice %sys %iowait %idle 6.18 0.00 2.37 9.24 82.20 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.50 0.50 23.00 1.00 5732.00 12.00 2866.00 6.00 239.33 0.07 3.08 3.02 7.25 sdb 0.00 129.00 7.00 130.00 56.00 2076.00 28.00 1038.00 15.56 0.75 5.49 5.40 73.95 avg-cpu: %user %nice %sys %iowait %idle 0.06 0.00 0.12 12.44 87.38 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 3.50 0.00 3.00 0.00 52.00 0.00 26.00 17.33 0.03 10.00 3.67 1.10 sdb 0.00 182.50 0.00 182.50 0.00 2920.00 0.00 1460.00 16.00 0.99 5.44 5.44 99.30 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.12 12.49 87.38 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 0.50 0.00 1.01 0.00 12.06 0.00 6.03 12.00 0.01 6.00 6.00 0.60 sdb 0.00 184.92 0.00 185.43 0.00 2962.81 0.00 1481.41 15.98 1.01 5.45 5.38 99.70 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.06 12.43 87.51 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sdb 0.00 184.08 0.00 184.08 0.00 2945.27 0.00 1472.64 16.00 0.99 5.39 5.38 99.00 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.12 12.31 87.56 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 1.00 0.00 1.50 0.00 20.00 0.00 10.00 13.33 0.02 15.33 6.67 1.00 sdb 0.00 181.00 0.00 181.00 0.00 2896.00 0.00 1448.00 16.00 0.99 5.48 5.49 99.40 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.19 12.37 87.45 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sdb 0.00 178.00 0.00 178.50 0.00 2852.00 0.00 1426.00 15.98 1.00 5.61 5.55 99.10 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.12 12.37 87.51 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sdb 0.00 179.50 0.00 179.50 0.00 2872.00 0.00 1436.00 16.00 0.99 5.52 5.53 99.25 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.06 12.44 87.50 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 1.50 0.00 3.50 0.00 40.00 0.00 20.00 11.43 0.07 20.00 4.00 1.40 sdb 0.00 179.00 0.00 179.50 0.00 2868.00 0.00 1434.00 15.98 1.02 5.68 5.53 99.30 avg-cpu: %user %nice %sys %iowait %idle 0.06 0.00 0.19 12.41 87.34 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 0.50 0.00 1.00 0.00 12.00 0.00 6.00 12.00 0.01 6.50 6.50 0.65 sdb 0.00 183.50 0.00 183.50 0.00 2936.00 0.00 1468.00 16.00 0.99 5.40 5.41 99.25 
Host 2: iostat -x 2 ================== avg-cpu: %user %nice %sys %iowait %idle 0.96 0.00 0.69 0.21 98.15 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util hda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 48.00 0.00 1.33 1.33 0.00 sda 0.01 5.31 0.23 1.55 17.96 54.93 8.98 27.47 40.76 0.07 41.59 1.21 0.22 sdb 0.03 3.99 0.84 0.47 113.52 35.67 56.76 17.83 114.36 0.03 23.00 2.55 0.33 sdc 0.05 37.80 0.58 1.50 131.96 314.37 65.98 157.19 214.93 0.43 205.85 2.84 0.59 sdd 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 40.35 0.00 3.52 3.52 0.00 avg-cpu: %user %nice %sys %iowait %idle 16.03 0.00 8.61 0.44 74.92 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util hda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sda 1.99 14.43 57.71 6.97 13775.12 171.14 6887.56 85.57 215.63 0.22 3.43 3.36 21.74 sdb 0.00 357.71 7.96 358.71 63.68 5731.34 31.84 2865.67 15.80 0.04 0.10 0.10 3.83 sdc 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sdd 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 avg-cpu: %user %nice %sys %iowait %idle 15.62 0.00 8.81 0.56 75.00 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util hda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sda 1.50 0.00 56.00 0.00 13964.00 0.00 6982.00 0.00 249.36 0.22 3.90 3.89 21.80 sdb 0.00 635.00 7.00 635.00 64.00 10160.00 32.00 5080.00 15.93 0.06 0.09 0.09 5.55 sdc 0.00 1.00 0.00 1.50 0.00 20.00 0.00 10.00 13.33 0.00 0.00 0.00 0.00 sdd 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 Thanks, Tina Sign in and you could WIN! Enter for your chance to win $1000 every day. Visit SignInAndWIN.ca today to learn more! You could win $1000 a day, now until May 12th, just for signing in to Windows Live Messenger. Check out SignInAndWIN.ca to learn more! Sign in to Windows Live Messenger, and enter for your chance to win $1000 a day?today until May 12th. Visit SignInAndWIN.ca _________________________________________________________________ Try Chicktionary, a game that tests how many words you can form from the letters given. Find this and more puzzles at Live Search Games! http://g.msn.ca/ca55/207 -------------- next part -------------- An HTML attachment was scrubbed... URL: From bill at magicdigits.com Mon May 12 21:09:32 2008 From: bill at magicdigits.com (Bill Watson) Date: Mon, 12 May 2008 14:09:32 -0700 Subject: Different performance In-Reply-To: Message-ID: <008e01c8b474$79592330$09000032@bill> Tina, Is there any chance that host 1 is set to "write-through" and host 2 is set to dirty (trust me) writes on the hardware disk controller or on the database driver? Note how host 1 waits until all reading is done prior to the first write - host 2 has simultaneous read/writing. Bill Watson bill at magicdigits.com -----Original Message----- From: redhat-sysadmin-list-bounces at redhat.com [mailto:redhat-sysadmin-list-bounces at redhat.com] On Behalf Of Tina Tian Sent: Monday, May 12, 2008 2:00 PM To: redhat-sysadmin-list at redhat.com Subject: RE: Different performance I just did two tests. Belows are the vmstat output from the tests. Test 1: dd if=/dev/zero of=dd_out.out bs=1MB count=700 ( dd_out.out creates a file on the same folder of database file). As suggested by Doungwu. 
vmstat 2 20 on host 1:
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
0 0 168 622076 76312 1279844 0 0 1 0 0 0 0 0 100 0
0 0 168 622076 76312 1279844 0 0 0 0 1003 133 0 0 100 0
1 0 168 967084 76312 935864 0 0 0 0 1004 136 0 4 96 0
0 2 168 573380 76372 1320864 0 0 2 130386 1317 352 0 9 72 18

vmstat 2 20 on host 2:
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
0 0 11716 699612 229432 723212 0 0 1 0 1 0 1 1 98 0
2 1 11716 358916 229748 1060636 0 0 0 27648 1057 457 0 7 88 5
0 1 11716 11328 230080 1402984 0 0 0 141598 1205 512 0 9 72 20

Test 2: load an ASCII data file into the Sybase database on host1 and host2.

vmstat 2 20 on host 1:
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
0 0 168 1106604 78268 797408 0 0 1 0 0 0 0 0 100 0
2 0 168 1104076 78300 799456 0 0 1092 58 1034 175 2 1 96 1
2 0 168 1090124 78316 813480 0 0 6984 0 1102 181 16 9 75 0
2 0 168 1076108 78332 827244 0 0 6984 0 1101 182 16 9 75 0
0 1 168 1070044 78368 832928 0 0 2738 1168 1187 400 6 2 82 10
0 1 168 1069980 78372 832924 0 0 2 1388 1177 364 0 0 87 12
0 1 168 1070012 78380 832916 0 0 0 1494 1190 392 0 0 88 12
0 1 168 1070068 78388 832908 0 0 0 1382 1175 361 0 0 87 12
0 1 168 1070068 78388 832908 0 0 0 1324 1168 345 0 0 87 12
0 1 168 1069996 78396 832900 0 0 0 1426 1181 373 0 0 88 12
0 1 168 1069996 78396 832900 0 0 0 1488 1188 387 0 0 87 12
0 1 168 1070052 78404 832892 0 0 0 1422 1181 374 0 0 87 12
0 1 168 1070052 78404 832892 0 0 0 1316 1167 343 0 0 88 12
0 0 168 1070428 78404 832892 0 0 0 690 1093 245 0 0 94 6

vmstat 2 20 on host 2:
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
1 0 11716 8156 230820 1412904 0 0 1 0 1 0 1 1 98 0
0 0 11716 9784 230820 1412904 0 0 0 0 1008 1078 0 0 99 0
2 0 11716 4680 230852 1417292 0 0 2328 44 1061 462 5 3 92 1
2 0 11716 4440 230216 1417928 0 0 6936 3330 1515 1309 15 9 74 1
2 0 11716 4824 226236 1421908 0 0 6862 5060 1736 1740 16 9 75 1
0 0 11716 4676 219492 1429432 0 0 1912 4786 1636 1615 5 3 92 0
0 0 11716 6884 217968 1428616 0 0 64 6 1015 1100 2 0 97 1

Can you see something special?

Thank you,
Tina

_____

Date: Mon, 12 May 2008 13:59:43 -0500
From: jolt at ti.com
To: redhat-sysadmin-list at redhat.com
Subject: RE: Different performance

Tina,

Could you run a vmstat output while under load to see how much memory is swapping and how quickly context switching is occurring? "vmstat 5 20" Also, what kernel is running? "uname -a"

_____

From: redhat-sysadmin-list-bounces at redhat.com [mailto:redhat-sysadmin-list-bounces at redhat.com] On Behalf Of Tina Tian
Sent: Monday, May 12, 2008 1:37 PM
To: redhat-sysadmin-list at redhat.com
Subject: RE: Different performance

Thank you, Joseph. Let me explain it.

On both host 1 and host 2, the Sybase software is in /sybase and the Sybase database is in /sybasedata. On host 2, we have Amanda backup software on /dev/sdc and I believe an Amanda daemon was running when I ran iostat. (> From the output of host 2 you provided, the first stat shows sdc is taking some of the load). Host 2 does have additional higher-performance drives, but they are not used by the Sybase database (/sybasedata) at all. Will the database benefit from their faster swap?

Below are the results from fdisk/mount/dmesg (swap) on host1 and host2.
Host 1, fdisk -l: ----------------- Disk /dev/sda: 72.7 GB, 72746008576 bytes 255 heads, 63 sectors/track, 8844 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sda1 1 4 32098+ de Dell Utility /dev/sda2 5 1279 10241437+ 83 Linux /dev/sda3 * 1280 1406 1020127+ 83 Linux /dev/sda4 1407 8844 59745735 5 Extended /dev/sda5 1407 8844 59745703+ 8e Linux LVM Disk /dev/sdb: 598.8 GB, 598879502336 bytes 255 heads, 63 sectors/track, 72809 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sdb1 1 66868 537117178+ 83 Linux /dev/sdb2 66869 72809 47721082+ 5 Extended host 1, mount: --------------- /dev/mapper/VolGroup_ID_27777-LogVol2 on / type ext3 (rw) none on /proc type proc (rw) none on /sys type sysfs (rw) none on /dev/pts type devpts (rw,gid=5,mode=620) usbfs on /proc/bus/usb type usbfs (rw) /dev/sda3 on /boot type ext3 (rw) none on /dev/shm type tmpfs (rw) /dev/mapper/VolGroup_ID_27777-LogVol3 on /tmp type ext3 (rw) /dev/mapper/VolGroup_ID_27777-LogVol6 on /usr type ext3 (rw) /dev/mapper/VolGroup_ID_27777-LogVol5 on /var type ext3 (rw) /dev/mapper/VolGroup_ID_27777-LogVolHome on /home type ext3 (rw) /dev/mapper/VolGroup_ID_27777-LogVolSybase on /sybase type ext3 (rw) /dev/mapper/VolGroup_ID_27777-LogVolTranLog on /tranlog type ext3 (rw) /dev/sdb1 on /sybasedata type ext3 (rw) none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw) sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw) Host 1, dmesg|grep swap ------------------------ Adding 1998840k swap on /dev/VolGroup_ID_27777/LogVol1. Priority:-1 extents:1 Adding 2097144k swap on /dev/VolGroup_ID_27777/LogVol0. Priority:-2 extents:1 Host 2, fdisk -l ---------------- Disk /dev/sda: 72.7 GB, 72746008576 bytes 255 heads, 63 sectors/track, 8844 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sda1 1 4 32098+ de Dell Utility /dev/sda2 5 1534 12289725 83 Linux /dev/sda3 * 1535 1661 1020127+ 83 Linux /dev/sda4 1662 8844 57697447+ 5 Extended /dev/sda5 1662 8844 57697416 8e Linux LVM Disk /dev/sdb: 598.8 GB, 598879502336 bytes 255 heads, 63 sectors/track, 72809 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sdb1 * 1 72809 584838261 83 Linux Disk /dev/sdc: 299.4 GB, 299439751168 bytes 255 heads, 63 sectors/track, 36404 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sdc1 1 4370 35101993+ 83 Linux /dev/sdc2 4371 36404 257313105 83 Linux Disk /dev/sdd: 320.0 GB, 320072933376 bytes 255 heads, 63 sectors/track, 38913 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sdd1 1 38913 312568641 83 Linux host 2, mount: --------------- /dev/mapper/VolGroup_ID_787-LogVol1 on / type ext3 (rw) none on /proc type proc (rw) none on /sys type sysfs (rw) none on /dev/pts type devpts (rw,gid=5,mode=620) usbfs on /proc/bus/usb type usbfs (rw) /dev/sda3 on /boot type ext3 (rw) none on /dev/shm type tmpfs (rw) /dev/mapper/VolGroup_ID_787-LogVol2 on /tmp type ext3 (rw) /dev/mapper/VolGroup_ID_787-LogVol5 on /usr type ext3 (rw) /dev/mapper/VolGroup_ID_787-LogVol4 on /var type ext3 (rw) /dev/mapper/VolGroup_ID_787-LogVolHome on /home type ext3 (rw) /dev/mapper/VolGroup_ID_787-LogVolSybase on /sybase type ext3 (rw) /dev/mapper/VolGroup_ID_787-LogVolTranlog on /tranlog type ext3 (rw) /dev/sdb1 on /sybasedata type ext3 (rw) /dev/sdc1 on /pkgs type ext3 (rw) /dev/sdc2 
on /amanda-data type ext3 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)

Host 2, dmesg | grep swap:
------------------------
Adding 1769464k swap on /dev/VolGroup_ID_787/LogVol0. Priority:-1 extents:1

Best Regards,
Tina

_____

Date: Mon, 12 May 2008 07:11:12 -0500
From: jolt at ti.com
To: redhat-sysadmin-list at redhat.com
Subject: RE: Different performance

Tina,

How are the partitions laid out on the two systems? It is likely that something OS related is accessing sda and sdb on host 1 while being spread across more disks on host 2. From the output of host 2 you provided, the first stat shows sdc is taking some of the load.

Regardless of the RAM being the same in both systems, is there much swapping? Swapping on higher performance drives will be quicker.

Regards,
Joseph

_____

From: redhat-sysadmin-list-bounces at redhat.com [mailto:redhat-sysadmin-list-bounces at redhat.com] On Behalf Of Tina Tian
Sent: Friday, May 09, 2008 10:42 PM
To: redhat-sysadmin-list at redhat.com
Subject: RE: Different performance

The DB is Sybase ASE 15.0.2, with an identical configuration on the two hosts. My SA also confirmed that the two hosts are almost identical, except that host2 (faster DB load) has two extra disks, sdc and sdd, which are 15k RPM. The remaining disks, sda and sdb, are identical on the two hosts, at 7k RPM. On both host 1 and host 2, the DBs are on /dev/sdb only.

Best Regards,
Tina

_____

To: redhat-sysadmin-list at redhat.com
Date: Fri, 9 May 2008 16:58:19 -0600
From: larry.sorensen at juno.com
Subject: Re: Different performance

Please include information on the databases, including versions. It could just be different configurations on the databases. Are the patches up to date and equal on both servers?

On Fri, 9 May 2008 14:11:25 -0700 Tina Tian writes:

I am a DBA. I have identical database servers running on two Red Hat Linux 4 hosts, host 1 and host 2. When I ran the same bulk load into the database (loading a data file into the database), host 2 was much faster than host 1. On both host1 and host2, the databases use file systems mounted on /dev/sda and /dev/sdb. I checked with my SA: host1 and host2 have the same CPU, RAM, and file system configuration. The only difference is that host 2 has extra HD capacity at a higher 15k RPM. But the extra 2 HDs (sdc and sdd) are dedicated to other applications and not used by the database at all.

My questions are:
-----------------
1. On host2 (faster), the extra faster HDs (/dev/sdc and sdd) are not used by the database. Do they still affect the IO performance of /dev/sda and /dev/sdb?
2. During database bulk load testing, host 1 (slower) shows longer IO service time (svctm) and longer IO waiting time (await). What other possible reasons could cause this? Any ideas? I posted the same issue to a database discussion group and they suggested I check OS performance (svctm).
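Before digging further into the hardware, a small sketch for capturing directly comparable numbers on both machines during the same load; the host name prefix, output paths and durations are illustrative, not from the thread:

  #!/bin/bash
  # run on each host for roughly the duration of the bulk load
  HOST=$(hostname -s)
  vmstat 2 150    > /tmp/${HOST}_vmstat.txt  &
  iostat -x 2 150 > /tmp/${HOST}_iostat.txt  &
  wait
  # afterwards, compare vmstat "bo" and "wa" and, for the device holding
  # /sybasedata (sdb here), the iostat await / svctm / %util columns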
Below is the result from iostat on host1(slower) and host2(faster) during bulk load: Host 1: iostat -x 2 ===================== avg-cpu: %user %nice %sys %iowait %idle 0.15 0.00 0.07 0.28 99.49 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.01 0.59 0.24 0.19 29.22 6.17 14.61 3.08 83.49 0.01 21.71 3.84 0.16 sdb 0.04 10.05 0.89 3.74 117.37 110.34 58.69 55.17 49.13 0.10 21.76 4.48 2.08 avg-cpu: %user %nice %sys %iowait %idle 15.74 0.00 8.99 0.31 74.95 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 1.99 0.00 57.71 0.00 14025.87 0.00 7012.94 0.00 243.03 0.21 3.58 3.53 20.35 sdb 0.00 0.00 11.94 0.00 95.52 0.00 47.76 0.00 8.00 0.02 2.04 2.04 2.44 avg-cpu: %user %nice %sys %iowait %idle 6.18 0.00 2.37 9.24 82.20 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.50 0.50 23.00 1.00 5732.00 12.00 2866.00 6.00 239.33 0.07 3.08 3.02 7.25 sdb 0.00 129.00 7.00 130.00 56.00 2076.00 28.00 1038.00 15.56 0.75 5.49 5.40 73.95 avg-cpu: %user %nice %sys %iowait %idle 0.06 0.00 0.12 12.44 87.38 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 3.50 0.00 3.00 0.00 52.00 0.00 26.00 17.33 0.03 10.00 3.67 1.10 sdb 0.00 182.50 0.00 182.50 0.00 2920.00 0.00 1460.00 16.00 0.99 5.44 5.44 99.30 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.12 12.49 87.38 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 0.50 0.00 1.01 0.00 12.06 0.00 6.03 12.00 0.01 6.00 6.00 0.60 sdb 0.00 184.92 0.00 185.43 0.00 2962.81 0.00 1481.41 15.98 1.01 5.45 5.38 99.70 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.06 12.43 87.51 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sdb 0.00 184.08 0.00 184.08 0.00 2945.27 0.00 1472.64 16.00 0.99 5.39 5.38 99.00 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.12 12.31 87.56 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 1.00 0.00 1.50 0.00 20.00 0.00 10.00 13.33 0.02 15.33 6.67 1.00 sdb 0.00 181.00 0.00 181.00 0.00 2896.00 0.00 1448.00 16.00 0.99 5.48 5.49 99.40 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.19 12.37 87.45 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sdb 0.00 178.00 0.00 178.50 0.00 2852.00 0.00 1426.00 15.98 1.00 5.61 5.55 99.10 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.12 12.37 87.51 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sdb 0.00 179.50 0.00 179.50 0.00 2872.00 0.00 1436.00 16.00 0.99 5.52 5.53 99.25 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.06 12.44 87.50 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 1.50 0.00 3.50 0.00 40.00 0.00 20.00 11.43 0.07 20.00 4.00 1.40 sdb 0.00 179.00 0.00 179.50 0.00 2868.00 0.00 1434.00 15.98 1.02 5.68 5.53 99.30 avg-cpu: %user %nice %sys %iowait %idle 0.06 0.00 0.19 12.41 87.34 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 0.50 0.00 1.00 0.00 12.00 0.00 6.00 12.00 0.01 6.50 6.50 0.65 sdb 0.00 183.50 0.00 183.50 0.00 2936.00 0.00 1468.00 16.00 0.99 5.40 5.41 99.25 
Host 2: iostat -x 2
==================
avg-cpu: %user %nice %sys %iowait %idle
          0.96  0.00 0.69    0.21  98.15
Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util
hda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 48.00 0.00 1.33 1.33 0.00
sda 0.01 5.31 0.23 1.55 17.96 54.93 8.98 27.47 40.76 0.07 41.59 1.21 0.22
sdb 0.03 3.99 0.84 0.47 113.52 35.67 56.76 17.83 114.36 0.03 23.00 2.55 0.33
sdc 0.05 37.80 0.58 1.50 131.96 314.37 65.98 157.19 214.93 0.43 205.85 2.84 0.59
sdd 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 40.35 0.00 3.52 3.52 0.00

avg-cpu: %user %nice %sys %iowait %idle
         16.03  0.00 8.61    0.44  74.92
Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util
hda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sda 1.99 14.43 57.71 6.97 13775.12 171.14 6887.56 85.57 215.63 0.22 3.43 3.36 21.74
sdb 0.00 357.71 7.96 358.71 63.68 5731.34 31.84 2865.67 15.80 0.04 0.10 0.10 3.83
sdc 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sdd 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00

avg-cpu: %user %nice %sys %iowait %idle
         15.62  0.00 8.81    0.56  75.00
Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util
hda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sda 1.50 0.00 56.00 0.00 13964.00 0.00 6982.00 0.00 249.36 0.22 3.90 3.89 21.80
sdb 0.00 635.00 7.00 635.00 64.00 10160.00 32.00 5080.00 15.93 0.06 0.09 0.09 5.55
sdc 0.00 1.00 0.00 1.50 0.00 20.00 0.00 10.00 13.33 0.00 0.00 0.00 0.00
sdd 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00

Thanks,
Tina

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jhon_jimenez at petroseis.com.co Mon May 12 21:17:24 2008
From: jhon_jimenez at petroseis.com.co (Jhon Jimenez)
Date: Mon, 12 May 2008 16:17:24 -0500
Subject: XFS file system
Message-ID: 

Can I create an XFS file system on Red Hat Linux 4? I need to process large files (4 GB and more) and I can see that ext3 is very slow.

Many thanks

Jhon Jimenez
Administrador de Sistemas
PETROSEIS LTDA

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From tinatianxia at hotmail.com Mon May 12 22:21:50 2008
From: tinatianxia at hotmail.com (Tina Tian)
Date: Mon, 12 May 2008 15:21:50 -0700
Subject: Different performance
In-Reply-To: <008e01c8b474$79592330$09000032@bill>
References: <008e01c8b474$79592330$09000032@bill>
Message-ID: 

Bill,

Below are the outputs from dmesg. Host 2 is "write-through". Is host 1 really "write-through"?
Host1:

scsi0 : Adaptec AIC79XX PCI-X SCSI HBA DRIVER, Rev 2.0.14
aic7902: Ultra320 Wide Channel A, SCSI Id=7, PCI-X 101-133Mhz, 512 SCBs
scsi1 : Adaptec AIC79XX PCI-X SCSI HBA DRIVER, Rev 2.0.14
aic7902: Ultra320 Wide Channel B, SCSI Id=7, PCI-X 101-133Mhz, 512 SCBs
megasas: 00.00.02.03 Mon Jan 30 16:30:45 PST 2006
megasas: 0x1028:0x0015:0x1028:0x1f03: bus 2:slot 14:func 0
ACPI: PCI interrupt 0000:02:0e.0[A] -> GSI 142 (level, low) -> IRQ 209
scsi2 : LSI Logic SAS based MegaRAID driver
  Vendor: DP    Model: BACKPLANE    Rev: 1.00
  Type: Enclosure    ANSI SCSI revision: 05
  Vendor: DELL    Model: PERC 5/i    Rev: 1.00
  Type: Direct-Access    ANSI SCSI revision: 05
SCSI device sda: 142082048 512-byte hdwr sectors (72746 MB)
sda: asking for cache data failed
sda: assuming drive cache: write through
 sda: sda1 sda2 sda3 sda4 < sda5 >
Attached scsi disk sda at scsi2, channel 2, id 0, lun 0
  Vendor: DELL    Model: PERC 5/i    Rev: 1.00
  Type: Direct-Access    ANSI SCSI revision: 05
SCSI device sdb: 1169686528 512-byte hdwr sectors (598880 MB)
sdb: asking for cache data failed
sdb: assuming drive cache: write through
 sdb: sdb1 sdb2 < >
Attached scsi disk sdb at scsi2, channel 2, id 1, lun 0

Host2:

scsi0 : Adaptec AIC79XX PCI-X SCSI HBA DRIVER, Rev 2.0.14
aic7902: Ultra320 Wide Channel A, SCSI Id=7, PCI-X 101-133Mhz, 512 SCBs
scsi1 : Adaptec AIC79XX PCI-X SCSI HBA DRIVER, Rev 2.0.14
aic7902: Ultra320 Wide Channel B, SCSI Id=7, PCI-X 101-133Mhz, 512 SCBs
megasas: 00.00.02.03 Mon Jan 30 16:30:45 PST 2006
megasas: 0x1028:0x0015:0x1028:0x1f03: bus 2:slot 14:func 0
ACPI: PCI interrupt 0000:02:0e.0[A] -> GSI 142 (level, low) -> IRQ 209
scsi2 : LSI Logic SAS based MegaRAID driver
  Vendor: DP    Model: BACKPLANE    Rev: 1.05
  Type: Enclosure    ANSI SCSI revision: 05
  Vendor: DELL    Model: PERC 5/i    Rev: 1.03
  Type: Direct-Access    ANSI SCSI revision: 05
SCSI device sda: 142082048 512-byte hdwr sectors (72746 MB)
SCSI device sda: drive cache: write through
 sda: sda1 sda2 sda3 sda4 < sda5 >
Attached scsi disk sda at scsi2, channel 2, id 0, lun 0
  Vendor: DELL    Model: PERC 5/i    Rev: 1.03
  Type: Direct-Access    ANSI SCSI revision: 05
SCSI device sdb: 1169686528 512-byte hdwr sectors (598880 MB)
SCSI device sdb: drive cache: write through
 sdb: sdb1
Attached scsi disk sdb at scsi2, channel 2, id 1, lun 0
  Vendor: DELL    Model: PERC 5/i    Rev: 1.03
  Type: Direct-Access    ANSI SCSI revision: 05
SCSI device sdc: 584843264 512-byte hdwr sectors (299440 MB)
SCSI device sdc: drive cache: write through
 sdc: sdc1 sdc2
Attached scsi disk sdc at scsi2, channel 2, id 2, lun 0

Thanks,
Tina

From: bill at magicdigits.com
To: redhat-sysadmin-list at redhat.com
Date: Mon, 12 May 2008 14:09:32 -0700
Subject: RE: Different performance

Tina, Is there any chance that host 1 is set to "write-through" and host 2 is set to dirty (trust me) writes on the hardware disk controller or on the database driver? Note how host 1 waits until all reading is done prior to the first write - host 2 has simultaneous read/writing.

Bill Watson
bill at magicdigits.com

-----Original Message-----
From: redhat-sysadmin-list-bounces at redhat.com [mailto:redhat-sysadmin-list-bounces at redhat.com] On Behalf Of Tina Tian
Sent: Monday, May 12, 2008 2:00 PM
To: redhat-sysadmin-list at redhat.com
Subject: RE: Different performance

I just did two tests. Belows are the vmstat output from the tests. Test 1: dd if=/dev/zero of=dd_out.out bs=1MB count=700 ( dd_out.out creates a file on the same folder of database file). As suggested by Doungwu.
vmstat 2 20 on host 1: procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu---- r b swpd free buff cache si so bi bo in cs us sy id wa 0 0 168 622076 76312 1279844 0 0 1 0 0 0 0 0 100 0 0 0 168 622076 76312 1279844 0 0 0 0 1003 133 0 0 100 0 1 0 168 967084 76312 935864 0 0 0 0 1004 136 0 4 96 0 0 2 168 573380 76372 1320864 0 0 2 130386 1317 352 0 9 72 18 vmstat 2 20 on host 2: procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu---- r b swpd free buff cache si so bi bo in cs us sy id wa 0 0 11716 699612 229432 723212 0 0 1 0 1 0 1 1 98 0 2 1 11716 358916 229748 1060636 0 0 0 27648 1057 457 0 7 88 5 0 1 11716 11328 230080 1402984 0 0 0 141598 1205 512 0 9 72 20 Test 2: load a ascii data file to the sybase databae on host1 and host2. vmstat 2 20 on host 1: procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu---- r b swpd free buff cache si so bi bo in cs us sy id wa 0 0 168 1106604 78268 797408 0 0 1 0 0 0 0 0 100 0 2 0 168 1104076 78300 799456 0 0 1092 58 1034 175 2 1 96 1 2 0 168 1090124 78316 813480 0 0 6984 0 1102 181 16 9 75 0 2 0 168 1076108 78332 827244 0 0 6984 0 1101 182 16 9 75 0 0 1 168 1070044 78368 832928 0 0 2738 1168 1187 400 6 2 82 10 0 1 168 1069980 78372 832924 0 0 2 1388 1177 364 0 0 87 12 0 1 168 1070012 78380 832916 0 0 0 1494 1190 392 0 0 88 12 0 1 168 1070068 78388 832908 0 0 0 1382 1175 361 0 0 87 12 0 1 168 1070068 78388 832908 0 0 0 1324 1168 345 0 0 87 12 0 1 168 1069996 78396 832900 0 0 0 1426 1181 373 0 0 88 12 0 1 168 1069996 78396 832900 0 0 0 1488 1188 387 0 0 87 12 0 1 168 1070052 78404 832892 0 0 0 1422 1181 374 0 0 87 12 0 1 168 1070052 78404 832892 0 0 0 1316 1167 343 0 0 88 12 0 0 168 1070428 78404 832892 0 0 0 690 1093 245 0 0 94 6 vmstat 2 20 on host 2: procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu---- r b swpd free buff cache si so bi bo in cs us sy id wa 1 0 11716 8156 230820 1412904 0 0 1 0 1 0 1 1 98 0 0 0 11716 9784 230820 1412904 0 0 0 0 1008 1078 0 0 99 0 2 0 11716 4680 230852 1417292 0 0 2328 44 1061 462 5 3 92 1 2 0 11716 4440 230216 1417928 0 0 6936 3330 1515 1309 15 9 74 1 2 0 11716 4824 226236 1421908 0 0 6862 5060 1736 1740 16 9 75 1 0 0 11716 4676 219492 1429432 0 0 1912 4786 1636 1615 5 3 92 0 0 0 11716 6884 217968 1428616 0 0 64 6 1015 1100 2 0 97 1 Can you see something special? Thank you,Tina Date: Mon, 12 May 2008 13:59:43 -0500From: jolt at ti.comTo: redhat-sysadmin-list at redhat.comSubject: RE: Different performance Tina, Could you run a vmstat output while under load to see how much memory is swapping and how quickly context switching is occurring? ?vmstat 5 20? Also, what kernel is running? ?uname ?a? From: redhat-sysadmin-list-bounces at redhat.com [mailto:redhat-sysadmin-list-bounces at redhat.com] On Behalf Of Tina TianSent: Monday, May 12, 2008 1:37 PMTo: redhat-sysadmin-list at redhat.comSubject: RE: Different performance Thank you, Joseph. Let me explain it. On both host 1 and host2, sybase software is in /sybase and sybase database is in /sybasedata. On host 2, we have amada backup software in /dev/sdc and I believe some amada demon was running when I ran iostat. (> From the output of host 2 you provided, the first stat shows sdc is taking some of the load). Host 2 do have additional higher performance drivers which are not being used by sybase database (/sybasedata) at all. Will database be benefit from their quicker swap? Belows are results from fdisk/mount/dmesg(swap) on host1 and host2. 
Host 1, fdisk -l:-----------------Disk /dev/sda: 72.7 GB, 72746008576 bytes255 heads, 63 sectors/track, 8844 cylindersUnits = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System/dev/sda1 1 4 32098+ de Dell Utility/dev/sda2 5 1279 10241437+ 83 Linux/dev/sda3 * 1280 1406 1020127+ 83 Linux/dev/sda4 1407 8844 59745735 5 Extended/dev/sda5 1407 8844 59745703+ 8e Linux LVMDisk /dev/sdb: 598.8 GB, 598879502336 bytes255 heads, 63 sectors/track, 72809 cylindersUnits = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System/dev/sdb1 1 66868 537117178+ 83 Linux/dev/sdb2 66869 72809 47721082+ 5 Extended host 1, mount:---------------/dev/mapper/VolGroup_ID_27777-LogVol2 on / type ext3 (rw)none on /proc type proc (rw)none on /sys type sysfs (rw)none on /dev/pts type devpts (rw,gid=5,mode=620)usbfs on /proc/bus/usb type usbfs (rw)/dev/sda3 on /boot type ext3 (rw)none on /dev/shm type tmpfs (rw)/dev/mapper/VolGroup_ID_27777-LogVol3 on /tmp type ext3 (rw)/dev/mapper/VolGroup_ID_27777-LogVol6 on /usr type ext3 (rw)/dev/mapper/VolGroup_ID_27777-LogVol5 on /var type ext3 (rw)/dev/mapper/VolGroup_ID_27777-LogVolHome on /home type ext3 (rw)/dev/mapper/VolGroup_ID_27777-LogVolSybase on /sybase type ext3 (rw)/dev/mapper/VolGroup_ID_27777-LogVolTranLog on /tranlog type ext3 (rw)/dev/sdb1 on /sybasedata type ext3 (rw)none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)Host 1, dmesg|grep swap------------------------Adding 1998840k swap on /dev/VolGroup_ID_27777/LogVol1. Priority:-1 extents:1Adding 2097144k swap on /dev/VolGroup_ID_27777/LogVol0. Priority:-2 extents:1 Host 2, fdisk -l----------------Disk /dev/sda: 72.7 GB, 72746008576 bytes255 heads, 63 sectors/track, 8844 cylindersUnits = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System/dev/sda1 1 4 32098+ de Dell Utility/dev/sda2 5 1534 12289725 83 Linux/dev/sda3 * 1535 1661 1020127+ 83 Linux/dev/sda4 1662 8844 57697447+ 5 Extended/dev/sda5 1662 8844 57697416 8e Linux LVMDisk /dev/sdb: 598.8 GB, 598879502336 bytes255 heads, 63 sectors/track, 72809 cylindersUnits = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System/dev/sdb1 * 1 72809 584838261 83 LinuxDisk /dev/sdc: 299.4 GB, 299439751168 bytes255 heads, 63 sectors/track, 36404 cylindersUnits = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System/dev/sdc1 1 4370 35101993+ 83 Linux/dev/sdc2 4371 36404 257313105 83 LinuxDisk /dev/sdd: 320.0 GB, 320072933376 bytes255 heads, 63 sectors/track, 38913 cylindersUnits = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System/dev/sdd1 1 38913 312568641 83 Linuxhost 2, mount:---------------/dev/mapper/VolGroup_ID_787-LogVol1 on / type ext3 (rw)none on /proc type proc (rw)none on /sys type sysfs (rw)none on /dev/pts type devpts (rw,gid=5,mode=620)usbfs on /proc/bus/usb type usbfs (rw)/dev/sda3 on /boot type ext3 (rw)none on /dev/shm type tmpfs (rw)/dev/mapper/VolGroup_ID_787-LogVol2 on /tmp type ext3 (rw)/dev/mapper/VolGroup_ID_787-LogVol5 on /usr type ext3 (rw)/dev/mapper/VolGroup_ID_787-LogVol4 on /var type ext3 (rw)/dev/mapper/VolGroup_ID_787-LogVolHome on /home type ext3 (rw)/dev/mapper/VolGroup_ID_787-LogVolSybase on /sybase type ext3 (rw)/dev/mapper/VolGroup_ID_787-LogVolTranlog on /tranlog type ext3 (rw)/dev/sdb1 on /sybasedata type ext3 (rw)/dev/sdc1 on /pkgs type ext3 (rw)/dev/sdc2 on /amanda-data type ext3 (rw)none on /proc/sys/fs/binfmt_misc type 
binfmt_misc (rw)sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)host 2 dmesg |grep swap:------------------------ host 1 : dmesg |grep swapAdding 1769464k swap on /dev/VolGroup_ID_787/LogVol0. Priority:-1 extents:1 Best Regards,Tina Date: Mon, 12 May 2008 07:11:12 -0500From: jolt at ti.comTo: redhat-sysadmin-list at redhat.comSubject: RE: Different performance Tina, How are the partitions laid out on the two systems? It is likely that something OS related is accessing sda and sdb or host 1 while being spread across more disks in host 2. From the output of host 2 you provided, the first stat shows sdc is taking some of the load. Regardless of the RAM being the same in both systems, is there much swapping? Swapping on higher performance drives will be quicker. Regards, Joseph From: redhat-sysadmin-list-bounces at redhat.com [mailto:redhat-sysadmin-list-bounces at redhat.com] On Behalf Of Tina TianSent: Friday, May 09, 2008 10:42 PMTo: redhat-sysadmin-list at redhat.comSubject: RE: Different performance The DB is Sybase ASE 15.0.2. Identical configuration on two hosts. My SA also confirmed that two hosts are almost identical except host2(faster DB load) has extra two disks sdc and sdd, sdc and sdd are with higer RPM=15k. The rest of disks sda and adb are identical on two hosts, with RPM=7k. On both host 1 and host 2, DBs are on /dev/sdb only. Best Regards,Tina To: redhat-sysadmin-list at redhat.comDate: Fri, 9 May 2008 16:58:19 -0600From: larry.sorensen at juno.comSubject: Re: Different performance Please include information on the databases including versions. It could just be different configurations on the databases. Are the patches up to date and equal on both servers? On Fri, 9 May 2008 14:11:25 -0700 Tina Tian writes: I am a DBA. I have identical database servers running on two Linux redhat 4, host 1 and host 2. When I was running the same bulk load to database (load a data file to database), host 2 was much faster than host 1. On both host1 and host2, database are using file system mount on /dev/sda and /dev/sdb. I checked with my SA, host1 and host2 have same CPU, RAM, file system configuration. The only different is that host 2 has extra HD capacity with higher 15k RPM. But the extra 2 HDs(sdc and sdd) are dedicated to other applications, not used by database at all. My questions are:-----------------1. On host2 (faster), the extra faster HDs(/dev/sdc and sdd) are not used by database. Does it still affect IO performance of /dev/sda and /dev/sdb ? 2. During database bulk load testing, host 1(slower) shows longer service IO time (svctm) and longer IO waiting time(await). What other possible reason can cause this problem? Any idea? I did post the same issue to database discussion group and they suggested me to check OS performance(svctm). 
Below is the result from iostat on host1(slower) and host2(faster) during bulk load: Host 1: iostat -x 2===================== avg-cpu: %user %nice %sys %iowait %idle 0.15 0.00 0.07 0.28 99.49 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.01 0.59 0.24 0.19 29.22 6.17 14.61 3.08 83.49 0.01 21.71 3.84 0.16 sdb 0.04 10.05 0.89 3.74 117.37 110.34 58.69 55.17 49.13 0.10 21.76 4.48 2.08 avg-cpu: %user %nice %sys %iowait %idle 15.74 0.00 8.99 0.31 74.95 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 1.99 0.00 57.71 0.00 14025.87 0.00 7012.94 0.00 243.03 0.21 3.58 3.53 20.35 sdb 0.00 0.00 11.94 0.00 95.52 0.00 47.76 0.00 8.00 0.02 2.04 2.04 2.44 avg-cpu: %user %nice %sys %iowait %idle 6.18 0.00 2.37 9.24 82.20 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.50 0.50 23.00 1.00 5732.00 12.00 2866.00 6.00 239.33 0.07 3.08 3.02 7.25 sdb 0.00 129.00 7.00 130.00 56.00 2076.00 28.00 1038.00 15.56 0.75 5.49 5.40 73.95 avg-cpu: %user %nice %sys %iowait %idle 0.06 0.00 0.12 12.44 87.38 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 3.50 0.00 3.00 0.00 52.00 0.00 26.00 17.33 0.03 10.00 3.67 1.10 sdb 0.00 182.50 0.00 182.50 0.00 2920.00 0.00 1460.00 16.00 0.99 5.44 5.44 99.30 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.12 12.49 87.38 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 0.50 0.00 1.01 0.00 12.06 0.00 6.03 12.00 0.01 6.00 6.00 0.60 sdb 0.00 184.92 0.00 185.43 0.00 2962.81 0.00 1481.41 15.98 1.01 5.45 5.38 99.70 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.06 12.43 87.51 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sdb 0.00 184.08 0.00 184.08 0.00 2945.27 0.00 1472.64 16.00 0.99 5.39 5.38 99.00 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.12 12.31 87.56 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 1.00 0.00 1.50 0.00 20.00 0.00 10.00 13.33 0.02 15.33 6.67 1.00 sdb 0.00 181.00 0.00 181.00 0.00 2896.00 0.00 1448.00 16.00 0.99 5.48 5.49 99.40 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.19 12.37 87.45 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sdb 0.00 178.00 0.00 178.50 0.00 2852.00 0.00 1426.00 15.98 1.00 5.61 5.55 99.10 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.12 12.37 87.51 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sdb 0.00 179.50 0.00 179.50 0.00 2872.00 0.00 1436.00 16.00 0.99 5.52 5.53 99.25 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.06 12.44 87.50 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 1.50 0.00 3.50 0.00 40.00 0.00 20.00 11.43 0.07 20.00 4.00 1.40 sdb 0.00 179.00 0.00 179.50 0.00 2868.00 0.00 1434.00 15.98 1.02 5.68 5.53 99.30 avg-cpu: %user %nice %sys %iowait %idle 0.06 0.00 0.19 12.41 87.34 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 0.50 0.00 1.00 0.00 12.00 0.00 6.00 12.00 0.01 6.50 6.50 0.65 sdb 0.00 183.50 0.00 183.50 0.00 2936.00 0.00 1468.00 16.00 0.99 5.40 5.41 99.25 
Host 2: iostat -x 2
==================
avg-cpu: %user %nice %sys %iowait %idle
          0.96  0.00 0.69    0.21  98.15
Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util
hda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 48.00 0.00 1.33 1.33 0.00
sda 0.01 5.31 0.23 1.55 17.96 54.93 8.98 27.47 40.76 0.07 41.59 1.21 0.22
sdb 0.03 3.99 0.84 0.47 113.52 35.67 56.76 17.83 114.36 0.03 23.00 2.55 0.33
sdc 0.05 37.80 0.58 1.50 131.96 314.37 65.98 157.19 214.93 0.43 205.85 2.84 0.59
sdd 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 40.35 0.00 3.52 3.52 0.00

avg-cpu: %user %nice %sys %iowait %idle
         16.03  0.00 8.61    0.44  74.92
Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util
hda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sda 1.99 14.43 57.71 6.97 13775.12 171.14 6887.56 85.57 215.63 0.22 3.43 3.36 21.74
sdb 0.00 357.71 7.96 358.71 63.68 5731.34 31.84 2865.67 15.80 0.04 0.10 0.10 3.83
sdc 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sdd 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00

avg-cpu: %user %nice %sys %iowait %idle
         15.62  0.00 8.81    0.56  75.00
Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util
hda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sda 1.50 0.00 56.00 0.00 13964.00 0.00 6982.00 0.00 249.36 0.22 3.90 3.89 21.80
sdb 0.00 635.00 7.00 635.00 64.00 10160.00 32.00 5080.00 15.93 0.06 0.09 0.09 5.55
sdc 0.00 1.00 0.00 1.50 0.00 20.00 0.00 10.00 13.33 0.00 0.00 0.00 0.00
sdd 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00

Thanks,
Tina

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From bill at magicdigits.com Mon May 12 22:55:56 2008
From: bill at magicdigits.com (Bill Watson)
Date: Mon, 12 May 2008 15:55:56 -0700
Subject: Different performance
In-Reply-To: 
Message-ID: <00c601c8b483$5686f620$09000032@bill>

Tina,

Now maybe we're getting somewhere =). The write through is best set in BIOS. The RedHat driver *may* correctly detect that setting. The drivers "assuming" write through as stated means that on a shutdown 0, the driver may not issue "flush the cache" commands. If you are quick on the trigger and kill power as soon as it says OK, and the computer gods don't like your most recent sacrifice, you can lose some of the cache writes (with no error logged).

Now as far as identical systems - you can see that one is an Adaptec and one is a Dell PERC (dpti) Megaraid controller. I'd bet a Big Mac that the BIOS of the Adaptec is set differently than the PERC on write-through. Also the controllers may have significantly different cache memories installed and certainly different read/write logistics.
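A few read-only checks (an illustrative sketch, not from the thread) that show what each box actually reports for its write cache; sdparm is assumed to be installed, which it may not be on a stock RHEL 4 system:

  # the kernel's view, i.e. the "drive cache" lines already quoted from dmesg above
  dmesg | grep -i 'drive cache'

  # query the Write Cache Enable (WCE) bit that each device reports
  for d in /dev/sda /dev/sdb; do
      sdparm --get=WCE $d
  done

Behind a PERC, what these report is the cache setting the RAID firmware exposes for the logical drive, not the physical disks, so the controller's own configuration is still the thing to compare between the two hosts.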
If you have and choose to keep dirty cache, please put a UPS on the system and a battery backup in the controller. While the setting is great for performance, it is not pretty on failure. Bill Watson bill at magicdigits.com -----Original Message----- From: redhat-sysadmin-list-bounces at redhat.com [mailto:redhat-sysadmin-list-bounces at redhat.com] On Behalf Of Tina Tian Sent: Monday, May 12, 2008 3:22 PM To: redhat-sysadmin-list at redhat.com Subject: RE: Different performance Bill, Belows are the output from dmesg. host2 is "write-through". Is host 1 really "write-through"? Host1: scsi0 : Adaptec AIC79XX PCI-X SCSI HBA DRIVER, Rev 2.0.14 aic7902: Ultra320 Wide Channel A, SCSI Id=7, PCI-X 101-133Mhz, 512 SCBs scsi1 : Adaptec AIC79XX PCI-X SCSI HBA DRIVER, Rev 2.0.14 aic7902: Ultra320 Wide Channel B, SCSI Id=7, PCI-X 101-133Mhz, 512 SCBs megasas: 00.00.02.03 Mon Jan 30 16:30:45 PST 2006 megasas: 0x1028:0x0015:0x1028:0x1f03: bus 2:slot 14:func 0 ACPI: PCI interrupt 0000:02:0e.0[A] -> GSI 142 (level, low) -> IRQ 209 scsi2 : LSI Logic SAS based MegaRAID driver Vendor: DP Model: BACKPLANE Rev: 1.00 Type: Enclosure ANSI SCSI revision: 05 Vendor: DELL Model: PERC 5/i Rev: 1.00 Type: Direct-Access ANSI SCSI revision: 05 SCSI device sda: 142082048 512-byte hdwr sectors (72746 MB) sda: asking for cache data failed sda: assuming drive cache: write through sda: sda1 sda2 sda3 sda4 < sda5 > Attached scsi disk sda at scsi2, channel 2, id 0, lun 0 Vendor: DELL Model: PERC 5/i Rev: 1.00 Type: Direct-Access ANSI SCSI revision: 05 SCSI device sdb: 1169686528 512-byte hdwr sectors (598880 MB) sdb: asking for cache data failed sdb: assuming drive cache: write through sdb: sdb1 sdb2 < > Attached scsi disk sdb at scsi2, channel 2, id 1, lun 0 Host2: scsi0 : Adaptec AIC79XX PCI-X SCSI HBA DRIVER, Rev 2.0.14 aic7902: Ultra320 Wide Channel A, SCSI Id=7, PCI-X 101-133Mhz, 512 SCBs scsi1 : Adaptec AIC79XX PCI-X SCSI HBA DRIVER, Rev 2.0.14 aic7902: Ultra320 Wide Channel B, SCSI Id=7, PCI-X 101-133Mhz, 512 SCBs megasas: 00.00.02.03 Mon Jan 30 16:30:45 PST 2006 megasas: 0x1028:0x0015:0x1028:0x1f03: bus 2:slot 14:func 0 ACPI: PCI interrupt 0000:02:0e.0[A] -> GSI 142 (level, low) -> IRQ 209 scsi2 : LSI Logic SAS based MegaRAID driver Vendor: DP Model: BACKPLANE Rev: 1.05 Type: Enclosure ANSI SCSI revision: 05 Vendor: DELL Model: PERC 5/i Rev: 1.03 Type: Direct-Access ANSI SCSI revision: 05 SCSI device sda: 142082048 512-byte hdwr sectors (72746 MB) SCSI device sda: drive cache: write through sda: sda1 sda2 sda3 sda4 < sda5 > Attached scsi disk sda at scsi2, channel 2, id 0, lun 0 Vendor: DELL Model: PERC 5/i Rev: 1.03 Type: Direct-Access ANSI SCSI revision: 05 SCSI device sdb: 1169686528 512-byte hdwr sectors (598880 MB) SCSI device sdb: drive cache: write through sdb: sdb1 Attached scsi disk sdb at scsi2, channel 2, id 1, lun 0 Vendor: DELL Model: PERC 5/i Rev: 1.03 Type: Direct-Access ANSI SCSI revision: 05 SCSI device sdc: 584843264 512-byte hdwr sectors (299440 MB) SCSI device sdc: drive cache: write through sdc: sdc1 sdc2 Attached scsi disk sdc at scsi2, channel 2, id 2, lun 0 Thanks, Tina _____ From: bill at magicdigits.com To: redhat-sysadmin-list at redhat.com Date: Mon, 12 May 2008 14:09:32 -0700 Subject: RE: Different performance Tina, Is there any chance that host 1 is set to "write-through" and host 2 is set to dirty (trust me) writes on the hardware disk controller or on the database driver? Note how host 1 waits until all reading is done prior to the first write - host 2 has simultaneous read/writing. 
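For the controller-level view (cache policy and battery), a hedged sketch; the tool names, paths and exact flags below are assumptions that depend on which version of Dell OpenManage or LSI's MegaCli is installed, so treat them as illustrative only:

  # Dell OpenManage, if present: controller cache policy and battery state
  omreport storage controller

  # LSI MegaCli, if present (the binary is sometimes MegaCli64; path varies):
  MegaCli -LDInfo -Lall -aALL              # "Current Cache Policy" per logical drive
  MegaCli -AdpBbuCmd -GetBbuStatus -aALL   # battery backup unit status

If one host's logical drives turn out to be write-back and the other's write-through, that alone would explain the difference described here.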
Bill Watson bill at magicdigits.com -----Original Message----- From: redhat-sysadmin-list-bounces at redhat.com [mailto:redhat-sysadmin-list-bounces at redhat.com] On Behalf Of Tina Tian Sent: Monday, May 12, 2008 2:00 PM To: redhat-sysadmin-list at redhat.com Subject: RE: Different performance I just did two tests. Belows are the vmstat output from the tests. Test 1: dd if=/dev/zero of=dd_out.out bs=1MB count=700 ( dd_out.out creates a file on the same folder of database file). As suggested by Doungwu. vmstat 2 20 on host 1: procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu---- r b swpd free buff cache si so bi bo in cs us sy id wa 0 0 168 622076 76312 1279844 0 0 1 0 0 0 0 0 100 0 0 0 168 622076 76312 1279844 0 0 0 0 1003 133 0 0 100 0 1 0 168 967084 76312 935864 0 0 0 0 1004 136 0 4 96 0 0 2 168 573380 76372 1320864 0 0 2 130386 1317 352 0 9 72 18 vmstat 2 20 on host 2: procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu---- r b swpd free buff cache si so bi bo in cs us sy id wa 0 0 11716 699612 229432 723212 0 0 1 0 1 0 1 1 98 0 2 1 11716 358916 229748 1060636 0 0 0 27648 1057 457 0 7 88 5 0 1 11716 11328 230080 1402984 0 0 0 141598 1205 512 0 9 72 20 Test 2: load a ascii data file to the sybase databae on host1 and host2. vmstat 2 20 on host 1: procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu---- r b swpd free buff cache si so bi bo in cs us sy id wa 0 0 168 1106604 78268 797408 0 0 1 0 0 0 0 0 100 0 2 0 168 1104076 78300 799456 0 0 1092 58 1034 175 2 1 96 1 2 0 168 1090124 78316 813480 0 0 6984 0 1102 181 16 9 75 0 2 0 168 1076108 78332 827244 0 0 6984 0 1101 182 16 9 75 0 0 1 168 1070044 78368 832928 0 0 2738 1168 1187 400 6 2 82 10 0 1 168 1069980 78372 832924 0 0 2 1388 1177 364 0 0 87 12 0 1 168 1070012 78380 832916 0 0 0 1494 1190 392 0 0 88 12 0 1 168 1070068 78388 832908 0 0 0 1382 1175 361 0 0 87 12 0 1 168 1070068 78388 832908 0 0 0 1324 1168 345 0 0 87 12 0 1 168 1069996 78396 832900 0 0 0 1426 1181 373 0 0 88 12 0 1 168 1069996 78396 832900 0 0 0 1488 1188 387 0 0 87 12 0 1 168 1070052 78404 832892 0 0 0 1422 1181 374 0 0 87 12 0 1 168 1070052 78404 832892 0 0 0 1316 1167 343 0 0 88 12 0 0 168 1070428 78404 832892 0 0 0 690 1093 245 0 0 94 6 vmstat 2 20 on host 2: procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu---- r b swpd free buff cache si so bi bo in cs us sy id wa 1 0 11716 8156 230820 1412904 0 0 1 0 1 0 1 1 98 0 0 0 11716 9784 230820 1412904 0 0 0 0 1008 1078 0 0 99 0 2 0 11716 4680 230852 1417292 0 0 2328 44 1061 462 5 3 92 1 2 0 11716 4440 230216 1417928 0 0 6936 3330 1515 1309 15 9 74 1 2 0 11716 4824 226236 1421908 0 0 6862 5060 1736 1740 16 9 75 1 0 0 11716 4676 219492 1429432 0 0 1912 4786 1636 1615 5 3 92 0 0 0 11716 6884 217968 1428616 0 0 64 6 1015 1100 2 0 97 1 Can you see something special? Thank you, Tina _____ Date: Mon, 12 May 2008 13:59:43 -0500 From: jolt at ti.com To: redhat-sysadmin-list at redhat.com Subject: RE: Different performance Tina, Could you run a vmstat output while under load to see how much memory is swapping and how quickly context switching is occurring? "vmstat 5 20" Also, what kernel is running? "uname -a" _____ From: redhat-sysadmin-list-bounces at redhat.com [mailto:redhat-sysadmin-list-bounces at redhat.com] On Behalf Of Tina Tian Sent: Monday, May 12, 2008 1:37 PM To: redhat-sysadmin-list at redhat.com Subject: RE: Different performance Thank you, Joseph. Let me explain it. 
On both host 1 and host2, sybase software is in /sybase and sybase database is in /sybasedata. On host 2, we have amada backup software in /dev/sdc and I believe some amada demon was running when I ran iostat. (> From the output of host 2 you provided, the first stat shows sdc is taking some of the load). Host 2 do have additional higher performance drivers which are not being used by sybase database (/sybasedata) at all. Will database be benefit from their quicker swap? Belows are results from fdisk/mount/dmesg(swap) on host1 and host2. Host 1, fdisk -l: ----------------- Disk /dev/sda: 72.7 GB, 72746008576 bytes 255 heads, 63 sectors/track, 8844 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sda1 1 4 32098+ de Dell Utility /dev/sda2 5 1279 10241437+ 83 Linux /dev/sda3 * 1280 1406 1020127+ 83 Linux /dev/sda4 1407 8844 59745735 5 Extended /dev/sda5 1407 8844 59745703+ 8e Linux LVM Disk /dev/sdb: 598.8 GB, 598879502336 bytes 255 heads, 63 sectors/track, 72809 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sdb1 1 66868 537117178+ 83 Linux /dev/sdb2 66869 72809 47721082+ 5 Extended host 1, mount: --------------- /dev/mapper/VolGroup_ID_27777-LogVol2 on / type ext3 (rw) none on /proc type proc (rw) none on /sys type sysfs (rw) none on /dev/pts type devpts (rw,gid=5,mode=620) usbfs on /proc/bus/usb type usbfs (rw) /dev/sda3 on /boot type ext3 (rw) none on /dev/shm type tmpfs (rw) /dev/mapper/VolGroup_ID_27777-LogVol3 on /tmp type ext3 (rw) /dev/mapper/VolGroup_ID_27777-LogVol6 on /usr type ext3 (rw) /dev/mapper/VolGroup_ID_27777-LogVol5 on /var type ext3 (rw) /dev/mapper/VolGroup_ID_27777-LogVolHome on /home type ext3 (rw) /dev/mapper/VolGroup_ID_27777-LogVolSybase on /sybase type ext3 (rw) /dev/mapper/VolGroup_ID_27777-LogVolTranLog on /tranlog type ext3 (rw) /dev/sdb1 on /sybasedata type ext3 (rw) none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw) sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw) Host 1, dmesg|grep swap ------------------------ Adding 1998840k swap on /dev/VolGroup_ID_27777/LogVol1. Priority:-1 extents:1 Adding 2097144k swap on /dev/VolGroup_ID_27777/LogVol0. 
Priority:-2 extents:1 Host 2, fdisk -l ---------------- Disk /dev/sda: 72.7 GB, 72746008576 bytes 255 heads, 63 sectors/track, 8844 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sda1 1 4 32098+ de Dell Utility /dev/sda2 5 1534 12289725 83 Linux /dev/sda3 * 1535 1661 1020127+ 83 Linux /dev/sda4 1662 8844 57697447+ 5 Extended /dev/sda5 1662 8844 57697416 8e Linux LVM Disk /dev/sdb: 598.8 GB, 598879502336 bytes 255 heads, 63 sectors/track, 72809 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sdb1 * 1 72809 584838261 83 Linux Disk /dev/sdc: 299.4 GB, 299439751168 bytes 255 heads, 63 sectors/track, 36404 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sdc1 1 4370 35101993+ 83 Linux /dev/sdc2 4371 36404 257313105 83 Linux Disk /dev/sdd: 320.0 GB, 320072933376 bytes 255 heads, 63 sectors/track, 38913 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sdd1 1 38913 312568641 83 Linux host 2, mount: --------------- /dev/mapper/VolGroup_ID_787-LogVol1 on / type ext3 (rw) none on /proc type proc (rw) none on /sys type sysfs (rw) none on /dev/pts type devpts (rw,gid=5,mode=620) usbfs on /proc/bus/usb type usbfs (rw) /dev/sda3 on /boot type ext3 (rw) none on /dev/shm type tmpfs (rw) /dev/mapper/VolGroup_ID_787-LogVol2 on /tmp type ext3 (rw) /dev/mapper/VolGroup_ID_787-LogVol5 on /usr type ext3 (rw) /dev/mapper/VolGroup_ID_787-LogVol4 on /var type ext3 (rw) /dev/mapper/VolGroup_ID_787-LogVolHome on /home type ext3 (rw) /dev/mapper/VolGroup_ID_787-LogVolSybase on /sybase type ext3 (rw) /dev/mapper/VolGroup_ID_787-LogVolTranlog on /tranlog type ext3 (rw) /dev/sdb1 on /sybasedata type ext3 (rw) /dev/sdc1 on /pkgs type ext3 (rw) /dev/sdc2 on /amanda-data type ext3 (rw) none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw) sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw) host 2 dmesg |grep swap: ------------------------ host 1 : dmesg |grep swap Adding 1769464k swap on /dev/VolGroup_ID_787/LogVol0. Priority:-1 extents:1 Best Regards, Tina _____ Date: Mon, 12 May 2008 07:11:12 -0500 From: jolt at ti.com To: redhat-sysadmin-list at redhat.com Subject: RE: Different performance Tina, How are the partitions laid out on the two systems? It is likely that something OS related is accessing sda and sdb or host 1 while being spread across more disks in host 2. From the output of host 2 you provided, the first stat shows sdc is taking some of the load. Regardless of the RAM being the same in both systems, is there much swapping? Swapping on higher performance drives will be quicker. Regards, Joseph _____ From: redhat-sysadmin-list-bounces at redhat.com [mailto:redhat-sysadmin-list-bounces at redhat.com] On Behalf Of Tina Tian Sent: Friday, May 09, 2008 10:42 PM To: redhat-sysadmin-list at redhat.com Subject: RE: Different performance The DB is Sybase ASE 15.0.2. Identical configuration on two hosts. My SA also confirmed that two hosts are almost identical except host2(faster DB load) has extra two disks sdc and sdd, sdc and sdd are with higer RPM=15k. The rest of disks sda and adb are identical on two hosts, with RPM=7k. On both host 1 and host 2, DBs are on /dev/sdb only. 
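One more hedged sketch that could help: a raw, read-only baseline of the two sdb volumes, independent of ext3 and of Sybase (the count is illustrative; run it while the database is quiet):

  hdparm -tT /dev/sdb                            # cached vs. buffered read timings
  dd if=/dev/sdb of=/dev/null bs=1M count=1000   # sequential read of the first ~1 GB

If the two sdb logical drives really are built from the same disks with the same RAID settings, these read numbers should come out very close on both hosts; a large gap would point at the hardware rather than the OS or the database.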
Best Regards, Tina _____ To: redhat-sysadmin-list at redhat.com Date: Fri, 9 May 2008 16:58:19 -0600 From: larry.sorensen at juno.com Subject: Re: Different performance Please include information on the databases including versions. It could just be different configurations on the databases. Are the patches up to date and equal on both servers? On Fri, 9 May 2008 14:11:25 -0700 Tina Tian writes: I am a DBA. I have identical database servers running on two Linux redhat 4, host 1 and host 2. When I was running the same bulk load to database (load a data file to database), host 2 was much faster than host 1. On both host1 and host2, database are using file system mount on /dev/sda and /dev/sdb. I checked with my SA, host1 and host2 have same CPU, RAM, file system configuration. The only different is that host 2 has extra HD capacity with higher 15k RPM. But the extra 2 HDs(sdc and sdd) are dedicated to other applications, not used by database at all. My questions are: ----------------- 1. On host2 (faster), the extra faster HDs(/dev/sdc and sdd) are not used by database. Does it still affect IO performance of /dev/sda and /dev/sdb ? 2. During database bulk load testing, host 1(slower) shows longer service IO time (svctm) and longer IO waiting time(await). What other possible reason can cause this problem? Any idea? I did post the same issue to database discussion group and they suggested me to check OS performance(svctm). Below is the result from iostat on host1(slower) and host2(faster) during bulk load: Host 1: iostat -x 2 ===================== avg-cpu: %user %nice %sys %iowait %idle 0.15 0.00 0.07 0.28 99.49 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.01 0.59 0.24 0.19 29.22 6.17 14.61 3.08 83.49 0.01 21.71 3.84 0.16 sdb 0.04 10.05 0.89 3.74 117.37 110.34 58.69 55.17 49.13 0.10 21.76 4.48 2.08 avg-cpu: %user %nice %sys %iowait %idle 15.74 0.00 8.99 0.31 74.95 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 1.99 0.00 57.71 0.00 14025.87 0.00 7012.94 0.00 243.03 0.21 3.58 3.53 20.35 sdb 0.00 0.00 11.94 0.00 95.52 0.00 47.76 0.00 8.00 0.02 2.04 2.04 2.44 avg-cpu: %user %nice %sys %iowait %idle 6.18 0.00 2.37 9.24 82.20 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.50 0.50 23.00 1.00 5732.00 12.00 2866.00 6.00 239.33 0.07 3.08 3.02 7.25 sdb 0.00 129.00 7.00 130.00 56.00 2076.00 28.00 1038.00 15.56 0.75 5.49 5.40 73.95 avg-cpu: %user %nice %sys %iowait %idle 0.06 0.00 0.12 12.44 87.38 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 3.50 0.00 3.00 0.00 52.00 0.00 26.00 17.33 0.03 10.00 3.67 1.10 sdb 0.00 182.50 0.00 182.50 0.00 2920.00 0.00 1460.00 16.00 0.99 5.44 5.44 99.30 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.12 12.49 87.38 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 0.50 0.00 1.01 0.00 12.06 0.00 6.03 12.00 0.01 6.00 6.00 0.60 sdb 0.00 184.92 0.00 185.43 0.00 2962.81 0.00 1481.41 15.98 1.01 5.45 5.38 99.70 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.06 12.43 87.51 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sdb 0.00 184.08 0.00 184.08 0.00 2945.27 0.00 1472.64 16.00 0.99 5.39 5.38 99.00 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.12 12.31 87.56 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s 
rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 1.00 0.00 1.50 0.00 20.00 0.00 10.00 13.33 0.02 15.33 6.67 1.00 sdb 0.00 181.00 0.00 181.00 0.00 2896.00 0.00 1448.00 16.00 0.99 5.48 5.49 99.40 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.19 12.37 87.45 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sdb 0.00 178.00 0.00 178.50 0.00 2852.00 0.00 1426.00 15.98 1.00 5.61 5.55 99.10 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.12 12.37 87.51 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sdb 0.00 179.50 0.00 179.50 0.00 2872.00 0.00 1436.00 16.00 0.99 5.52 5.53 99.25 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.06 12.44 87.50 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 1.50 0.00 3.50 0.00 40.00 0.00 20.00 11.43 0.07 20.00 4.00 1.40 sdb 0.00 179.00 0.00 179.50 0.00 2868.00 0.00 1434.00 15.98 1.02 5.68 5.53 99.30 avg-cpu: %user %nice %sys %iowait %idle 0.06 0.00 0.19 12.41 87.34 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 0.50 0.00 1.00 0.00 12.00 0.00 6.00 12.00 0.01 6.50 6.50 0.65 sdb 0.00 183.50 0.00 183.50 0.00 2936.00 0.00 1468.00 16.00 0.99 5.40 5.41 99.25 Host 2: iostat -x 2 ================== avg-cpu: %user %nice %sys %iowait %idle 0.96 0.00 0.69 0.21 98.15 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util hda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 48.00 0.00 1.33 1.33 0.00 sda 0.01 5.31 0.23 1.55 17.96 54.93 8.98 27.47 40.76 0.07 41.59 1.21 0.22 sdb 0.03 3.99 0.84 0.47 113.52 35.67 56.76 17.83 114.36 0.03 23.00 2.55 0.33 sdc 0.05 37.80 0.58 1.50 131.96 314.37 65.98 157.19 214.93 0.43 205.85 2.84 0.59 sdd 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 40.35 0.00 3.52 3.52 0.00 avg-cpu: %user %nice %sys %iowait %idle 16.03 0.00 8.61 0.44 74.92 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util hda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sda 1.99 14.43 57.71 6.97 13775.12 171.14 6887.56 85.57 215.63 0.22 3.43 3.36 21.74 sdb 0.00 357.71 7.96 358.71 63.68 5731.34 31.84 2865.67 15.80 0.04 0.10 0.10 3.83 sdc 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sdd 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 avg-cpu: %user %nice %sys %iowait %idle 15.62 0.00 8.81 0.56 75.00 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util hda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sda 1.50 0.00 56.00 0.00 13964.00 0.00 6982.00 0.00 249.36 0.22 3.90 3.89 21.80 sdb 0.00 635.00 7.00 635.00 64.00 10160.00 32.00 5080.00 15.93 0.06 0.09 0.09 5.55 sdc 0.00 1.00 0.00 1.50 0.00 20.00 0.00 10.00 13.33 0.00 0.00 0.00 0.00 sdd 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 Thanks, Tina _____ Sign in and you could WIN! Enter for your chance to win $1000 every day. Visit SignInAndWIN.ca today to learn more! _____ You could win $1000 a day, now until May 12th, just for signing in to Windows Live Messenger. Check out SignInAndWIN.ca to learn more! _____ Sign in to Windows Live Messenger, and enter for your chance to win $1000 a day-today until May 12th. Visit SignInAndWIN.ca _____ Sign in today. 
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From tinatianxia at hotmail.com Tue May 13 06:27:31 2008
From: tinatianxia at hotmail.com (Tina Tian)
Date: Mon, 12 May 2008 23:27:31 -0700
Subject: Different performance
Message-ID: 

Bill,

Thank you so much for your help. I forwarded your mail to my SA. She confirmed that host1 is somewhat different from host2 (the faster one). Originally they were the same, but host2 crashed a while ago and the vendor's tech support changed the motherboard. Regarding the different cache memory installed and the different read/write logistics you mentioned in your reply, she will run diagnostics to find out (that is beyond my OS knowledge). Hopefully my SA can find something special and you can finally win a Big Mac. I will post the diagnostics results from my SA.

I also want to thank all of you SAs for your help. As a DBA, this is my first time asking for help on an SA discussion group - redhat-sysadmin-list at redhat.com - and it has been a very good experience.

Best Regards,
Tina

From: bill at magicdigits.com
To: redhat-sysadmin-list at redhat.com
Date: Mon, 12 May 2008 15:55:56 -0700
Subject: RE: Different performance

Tina, Now maybe we're getting somewhere =). The write through is best set in BIOS. RedHat driver *may* correctly detect that setting. The drivers "assuming" write through as stated means that on a shutdown 0, the driver may not issue "flush the cache" commands. If you are quick on the trigger and kill power as soon as it says OK, and the computer gods don't like your most recent sacrafice, you can lose some of the cache writes (with no error logged). Now as far as identical systems - you can see that one is an Adaptec and one is a Dell PERC (dpti) Megaraid controller. I'd bet a Big Mac that the BIOS of the Adaptec is set differently than the PERC on write-through. Also the controllers may have significantly different cache memories installed and certainly different read/write logistics. If you have and choose to keep dirty cache, please put a UPS on the system and a battery backup in the controller. While the setting is great for performance, it is not pretty on failure.

Bill Watson
bill at magicdigits.com

-----Original Message-----
From: redhat-sysadmin-list-bounces at redhat.com [mailto:redhat-sysadmin-list-bounces at redhat.com] On Behalf Of Tina Tian
Sent: Monday, May 12, 2008 3:22 PM
To: redhat-sysadmin-list at redhat.com
Subject: RE: Different performance

Bill, Belows are the output from dmesg. host2 is "write-through". Is host 1 really "write-through"?
Host1:scsi0 : Adaptec AIC79XX PCI-X SCSI HBA DRIVER, Rev 2.0.14 aic7902: Ultra320 Wide Channel A, SCSI Id=7, PCI-X 101-133Mhz, 512 SCBsscsi1 : Adaptec AIC79XX PCI-X SCSI HBA DRIVER, Rev 2.0.14 aic7902: Ultra320 Wide Channel B, SCSI Id=7, PCI-X 101-133Mhz, 512 SCBsmegasas: 00.00.02.03 Mon Jan 30 16:30:45 PST 2006megasas: 0x1028:0x0015:0x1028:0x1f03: bus 2:slot 14:func 0ACPI: PCI interrupt 0000:02:0e.0[A] -> GSI 142 (level, low) -> IRQ 209scsi2 : LSI Logic SAS based MegaRAID driver Vendor: DP Model: BACKPLANE Rev: 1.00 Type: Enclosure ANSI SCSI revision: 05 Vendor: DELL Model: PERC 5/i Rev: 1.00 Type: Direct-Access ANSI SCSI revision: 05SCSI device sda: 142082048 512-byte hdwr sectors (72746 MB)sda: asking for cache data failedsda: assuming drive cache: write through sda: sda1 sda2 sda3 sda4 < sda5 >Attached scsi disk sda at scsi2, channel 2, id 0, lun 0 Vendor: DELL Model: PERC 5/i Rev: 1.00 Type: Direct-Access ANSI SCSI revision: 05SCSI device sdb: 1169686528 512-byte hdwr sectors (598880 MB)sdb: asking for cache data failedsdb: assuming drive cache: write through sdb: sdb1 sdb2 < >Attached scsi disk sdb at scsi2, channel 2, id 1, lun 0 Host2:scsi0 : Adaptec AIC79XX PCI-X SCSI HBA DRIVER, Rev 2.0.14 aic7902: Ultra320 Wide Channel A, SCSI Id=7, PCI-X 101-133Mhz, 512 SCBsscsi1 : Adaptec AIC79XX PCI-X SCSI HBA DRIVER, Rev 2.0.14 aic7902: Ultra320 Wide Channel B, SCSI Id=7, PCI-X 101-133Mhz, 512 SCBsmegasas: 00.00.02.03 Mon Jan 30 16:30:45 PST 2006megasas: 0x1028:0x0015:0x1028:0x1f03: bus 2:slot 14:func 0ACPI: PCI interrupt 0000:02:0e.0[A] -> GSI 142 (level, low) -> IRQ 209scsi2 : LSI Logic SAS based MegaRAID driver Vendor: DP Model: BACKPLANE Rev: 1.05 Type: Enclosure ANSI SCSI revision: 05 Vendor: DELL Model: PERC 5/i Rev: 1.03 Type: Direct-Access ANSI SCSI revision: 05SCSI device sda: 142082048 512-byte hdwr sectors (72746 MB)SCSI device sda: drive cache: write through sda: sda1 sda2 sda3 sda4 < sda5 >Attached scsi disk sda at scsi2, channel 2, id 0, lun 0 Vendor: DELL Model: PERC 5/i Rev: 1.03 Type: Direct-Access ANSI SCSI revision: 05SCSI device sdb: 1169686528 512-byte hdwr sectors (598880 MB)SCSI device sdb: drive cache: write through sdb: sdb1Attached scsi disk sdb at scsi2, channel 2, id 1, lun 0 Vendor: DELL Model: PERC 5/i Rev: 1.03 Type: Direct-Access ANSI SCSI revision: 05SCSI device sdc: 584843264 512-byte hdwr sectors (299440 MB)SCSI device sdc: drive cache: write through sdc: sdc1 sdc2Attached scsi disk sdc at scsi2, channel 2, id 2, lun 0Thanks,Tina From: bill at magicdigits.comTo: redhat-sysadmin-list at redhat.comDate: Mon, 12 May 2008 14:09:32 -0700Subject: RE: Different performance Tina, Is there any chance that host 1 is set to "write-through" and host 2 is set to dirty (trust me) writes on the hardware disk controller or on the database driver? Note how host 1 waits until all reading is done prior to the first write - host 2 has simultaneous read/writing. Bill Watson bill at magicdigits.com -----Original Message-----From: redhat-sysadmin-list-bounces at redhat.com [mailto:redhat-sysadmin-list-bounces at redhat.com] On Behalf Of Tina TianSent: Monday, May 12, 2008 2:00 PMTo: redhat-sysadmin-list at redhat.comSubject: RE: Different performanceI just did two tests. Belows are the vmstat output from the tests. Test 1: dd if=/dev/zero of=dd_out.out bs=1MB count=700 ( dd_out.out creates a file on the same folder of database file). As suggested by Doungwu. 
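A variant of the same dd test that also times the flush can make a write-back cache stand out, since dd alone may return before the data has actually reached the disk. A sketch only, with the target path assumed to be on the /sybasedata mount; remove the file afterwards:

time sh -c 'dd if=/dev/zero of=/sybasedata/dd_test.out bs=1MB count=700; sync'
rm -f /sybasedata/dd_test.out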
vmstat 2 20 on host 1: procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu---- r b swpd free buff cache si so bi bo in cs us sy id wa 0 0 168 622076 76312 1279844 0 0 1 0 0 0 0 0 100 0 0 0 168 622076 76312 1279844 0 0 0 0 1003 133 0 0 100 0 1 0 168 967084 76312 935864 0 0 0 0 1004 136 0 4 96 0 0 2 168 573380 76372 1320864 0 0 2 130386 1317 352 0 9 72 18 vmstat 2 20 on host 2: procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu---- r b swpd free buff cache si so bi bo in cs us sy id wa 0 0 11716 699612 229432 723212 0 0 1 0 1 0 1 1 98 0 2 1 11716 358916 229748 1060636 0 0 0 27648 1057 457 0 7 88 5 0 1 11716 11328 230080 1402984 0 0 0 141598 1205 512 0 9 72 20 Test 2: load a ascii data file to the sybase databae on host1 and host2. vmstat 2 20 on host 1: procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu---- r b swpd free buff cache si so bi bo in cs us sy id wa 0 0 168 1106604 78268 797408 0 0 1 0 0 0 0 0 100 0 2 0 168 1104076 78300 799456 0 0 1092 58 1034 175 2 1 96 1 2 0 168 1090124 78316 813480 0 0 6984 0 1102 181 16 9 75 0 2 0 168 1076108 78332 827244 0 0 6984 0 1101 182 16 9 75 0 0 1 168 1070044 78368 832928 0 0 2738 1168 1187 400 6 2 82 10 0 1 168 1069980 78372 832924 0 0 2 1388 1177 364 0 0 87 12 0 1 168 1070012 78380 832916 0 0 0 1494 1190 392 0 0 88 12 0 1 168 1070068 78388 832908 0 0 0 1382 1175 361 0 0 87 12 0 1 168 1070068 78388 832908 0 0 0 1324 1168 345 0 0 87 12 0 1 168 1069996 78396 832900 0 0 0 1426 1181 373 0 0 88 12 0 1 168 1069996 78396 832900 0 0 0 1488 1188 387 0 0 87 12 0 1 168 1070052 78404 832892 0 0 0 1422 1181 374 0 0 87 12 0 1 168 1070052 78404 832892 0 0 0 1316 1167 343 0 0 88 12 0 0 168 1070428 78404 832892 0 0 0 690 1093 245 0 0 94 6 vmstat 2 20 on host 2: procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu---- r b swpd free buff cache si so bi bo in cs us sy id wa 1 0 11716 8156 230820 1412904 0 0 1 0 1 0 1 1 98 0 0 0 11716 9784 230820 1412904 0 0 0 0 1008 1078 0 0 99 0 2 0 11716 4680 230852 1417292 0 0 2328 44 1061 462 5 3 92 1 2 0 11716 4440 230216 1417928 0 0 6936 3330 1515 1309 15 9 74 1 2 0 11716 4824 226236 1421908 0 0 6862 5060 1736 1740 16 9 75 1 0 0 11716 4676 219492 1429432 0 0 1912 4786 1636 1615 5 3 92 0 0 0 11716 6884 217968 1428616 0 0 64 6 1015 1100 2 0 97 1 Can you see something special? Thank you,Tina Date: Mon, 12 May 2008 13:59:43 -0500From: jolt at ti.comTo: redhat-sysadmin-list at redhat.comSubject: RE: Different performance Tina, Could you run a vmstat output while under load to see how much memory is swapping and how quickly context switching is occurring? ?vmstat 5 20? Also, what kernel is running? ?uname ?a? From: redhat-sysadmin-list-bounces at redhat.com [mailto:redhat-sysadmin-list-bounces at redhat.com] On Behalf Of Tina TianSent: Monday, May 12, 2008 1:37 PMTo: redhat-sysadmin-list at redhat.comSubject: RE: Different performance Thank you, Joseph. Let me explain it. On both host 1 and host2, sybase software is in /sybase and sybase database is in /sybasedata. On host 2, we have amada backup software in /dev/sdc and I believe some amada demon was running when I ran iostat. (> From the output of host 2 you provided, the first stat shows sdc is taking some of the load). Host 2 do have additional higher performance drivers which are not being used by sybase database (/sybasedata) at all. Will database be benefit from their quicker swap? Belows are results from fdisk/mount/dmesg(swap) on host1 and host2. 
Host 1, fdisk -l:-----------------Disk /dev/sda: 72.7 GB, 72746008576 bytes255 heads, 63 sectors/track, 8844 cylindersUnits = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System/dev/sda1 1 4 32098+ de Dell Utility/dev/sda2 5 1279 10241437+ 83 Linux/dev/sda3 * 1280 1406 1020127+ 83 Linux/dev/sda4 1407 8844 59745735 5 Extended/dev/sda5 1407 8844 59745703+ 8e Linux LVMDisk /dev/sdb: 598.8 GB, 598879502336 bytes255 heads, 63 sectors/track, 72809 cylindersUnits = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System/dev/sdb1 1 66868 537117178+ 83 Linux/dev/sdb2 66869 72809 47721082+ 5 Extended host 1, mount:---------------/dev/mapper/VolGroup_ID_27777-LogVol2 on / type ext3 (rw)none on /proc type proc (rw)none on /sys type sysfs (rw)none on /dev/pts type devpts (rw,gid=5,mode=620)usbfs on /proc/bus/usb type usbfs (rw)/dev/sda3 on /boot type ext3 (rw)none on /dev/shm type tmpfs (rw)/dev/mapper/VolGroup_ID_27777-LogVol3 on /tmp type ext3 (rw)/dev/mapper/VolGroup_ID_27777-LogVol6 on /usr type ext3 (rw)/dev/mapper/VolGroup_ID_27777-LogVol5 on /var type ext3 (rw)/dev/mapper/VolGroup_ID_27777-LogVolHome on /home type ext3 (rw)/dev/mapper/VolGroup_ID_27777-LogVolSybase on /sybase type ext3 (rw)/dev/mapper/VolGroup_ID_27777-LogVolTranLog on /tranlog type ext3 (rw)/dev/sdb1 on /sybasedata type ext3 (rw)none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)Host 1, dmesg|grep swap------------------------Adding 1998840k swap on /dev/VolGroup_ID_27777/LogVol1. Priority:-1 extents:1Adding 2097144k swap on /dev/VolGroup_ID_27777/LogVol0. Priority:-2 extents:1 Host 2, fdisk -l----------------Disk /dev/sda: 72.7 GB, 72746008576 bytes255 heads, 63 sectors/track, 8844 cylindersUnits = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System/dev/sda1 1 4 32098+ de Dell Utility/dev/sda2 5 1534 12289725 83 Linux/dev/sda3 * 1535 1661 1020127+ 83 Linux/dev/sda4 1662 8844 57697447+ 5 Extended/dev/sda5 1662 8844 57697416 8e Linux LVMDisk /dev/sdb: 598.8 GB, 598879502336 bytes255 heads, 63 sectors/track, 72809 cylindersUnits = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System/dev/sdb1 * 1 72809 584838261 83 LinuxDisk /dev/sdc: 299.4 GB, 299439751168 bytes255 heads, 63 sectors/track, 36404 cylindersUnits = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System/dev/sdc1 1 4370 35101993+ 83 Linux/dev/sdc2 4371 36404 257313105 83 LinuxDisk /dev/sdd: 320.0 GB, 320072933376 bytes255 heads, 63 sectors/track, 38913 cylindersUnits = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System/dev/sdd1 1 38913 312568641 83 Linuxhost 2, mount:---------------/dev/mapper/VolGroup_ID_787-LogVol1 on / type ext3 (rw)none on /proc type proc (rw)none on /sys type sysfs (rw)none on /dev/pts type devpts (rw,gid=5,mode=620)usbfs on /proc/bus/usb type usbfs (rw)/dev/sda3 on /boot type ext3 (rw)none on /dev/shm type tmpfs (rw)/dev/mapper/VolGroup_ID_787-LogVol2 on /tmp type ext3 (rw)/dev/mapper/VolGroup_ID_787-LogVol5 on /usr type ext3 (rw)/dev/mapper/VolGroup_ID_787-LogVol4 on /var type ext3 (rw)/dev/mapper/VolGroup_ID_787-LogVolHome on /home type ext3 (rw)/dev/mapper/VolGroup_ID_787-LogVolSybase on /sybase type ext3 (rw)/dev/mapper/VolGroup_ID_787-LogVolTranlog on /tranlog type ext3 (rw)/dev/sdb1 on /sybasedata type ext3 (rw)/dev/sdc1 on /pkgs type ext3 (rw)/dev/sdc2 on /amanda-data type ext3 (rw)none on /proc/sys/fs/binfmt_misc type 
binfmt_misc (rw)sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)host 2 dmesg |grep swap:------------------------ host 1 : dmesg |grep swapAdding 1769464k swap on /dev/VolGroup_ID_787/LogVol0. Priority:-1 extents:1 Best Regards,Tina Date: Mon, 12 May 2008 07:11:12 -0500From: jolt at ti.comTo: redhat-sysadmin-list at redhat.comSubject: RE: Different performance Tina, How are the partitions laid out on the two systems? It is likely that something OS related is accessing sda and sdb or host 1 while being spread across more disks in host 2. From the output of host 2 you provided, the first stat shows sdc is taking some of the load. Regardless of the RAM being the same in both systems, is there much swapping? Swapping on higher performance drives will be quicker. Regards, Joseph From: redhat-sysadmin-list-bounces at redhat.com [mailto:redhat-sysadmin-list-bounces at redhat.com] On Behalf Of Tina TianSent: Friday, May 09, 2008 10:42 PMTo: redhat-sysadmin-list at redhat.comSubject: RE: Different performance The DB is Sybase ASE 15.0.2. Identical configuration on two hosts. My SA also confirmed that two hosts are almost identical except host2(faster DB load) has extra two disks sdc and sdd, sdc and sdd are with higer RPM=15k. The rest of disks sda and adb are identical on two hosts, with RPM=7k. On both host 1 and host 2, DBs are on /dev/sdb only. Best Regards,Tina To: redhat-sysadmin-list at redhat.comDate: Fri, 9 May 2008 16:58:19 -0600From: larry.sorensen at juno.comSubject: Re: Different performance Please include information on the databases including versions. It could just be different configurations on the databases. Are the patches up to date and equal on both servers? On Fri, 9 May 2008 14:11:25 -0700 Tina Tian writes: I am a DBA. I have identical database servers running on two Linux redhat 4, host 1 and host 2. When I was running the same bulk load to database (load a data file to database), host 2 was much faster than host 1. On both host1 and host2, database are using file system mount on /dev/sda and /dev/sdb. I checked with my SA, host1 and host2 have same CPU, RAM, file system configuration. The only different is that host 2 has extra HD capacity with higher 15k RPM. But the extra 2 HDs(sdc and sdd) are dedicated to other applications, not used by database at all. My questions are:-----------------1. On host2 (faster), the extra faster HDs(/dev/sdc and sdd) are not used by database. Does it still affect IO performance of /dev/sda and /dev/sdb ? 2. During database bulk load testing, host 1(slower) shows longer service IO time (svctm) and longer IO waiting time(await). What other possible reason can cause this problem? Any idea? I did post the same issue to database discussion group and they suggested me to check OS performance(svctm). 
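In the iostat output that follows, the columns that matter for the bulk load are await, svctm and %util on the database disk (sdb). A simple way to follow just those rows while the load runs, using only the tool already shown here, might be:

iostat -x 2 | egrep '^Device|^sd[ab]'   # header row plus the two disks the database sits on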
Below is the result from iostat on host1(slower) and host2(faster) during bulk load: Host 1: iostat -x 2===================== avg-cpu: %user %nice %sys %iowait %idle 0.15 0.00 0.07 0.28 99.49 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.01 0.59 0.24 0.19 29.22 6.17 14.61 3.08 83.49 0.01 21.71 3.84 0.16 sdb 0.04 10.05 0.89 3.74 117.37 110.34 58.69 55.17 49.13 0.10 21.76 4.48 2.08 avg-cpu: %user %nice %sys %iowait %idle 15.74 0.00 8.99 0.31 74.95 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 1.99 0.00 57.71 0.00 14025.87 0.00 7012.94 0.00 243.03 0.21 3.58 3.53 20.35 sdb 0.00 0.00 11.94 0.00 95.52 0.00 47.76 0.00 8.00 0.02 2.04 2.04 2.44 avg-cpu: %user %nice %sys %iowait %idle 6.18 0.00 2.37 9.24 82.20 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.50 0.50 23.00 1.00 5732.00 12.00 2866.00 6.00 239.33 0.07 3.08 3.02 7.25 sdb 0.00 129.00 7.00 130.00 56.00 2076.00 28.00 1038.00 15.56 0.75 5.49 5.40 73.95 avg-cpu: %user %nice %sys %iowait %idle 0.06 0.00 0.12 12.44 87.38 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 3.50 0.00 3.00 0.00 52.00 0.00 26.00 17.33 0.03 10.00 3.67 1.10 sdb 0.00 182.50 0.00 182.50 0.00 2920.00 0.00 1460.00 16.00 0.99 5.44 5.44 99.30 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.12 12.49 87.38 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 0.50 0.00 1.01 0.00 12.06 0.00 6.03 12.00 0.01 6.00 6.00 0.60 sdb 0.00 184.92 0.00 185.43 0.00 2962.81 0.00 1481.41 15.98 1.01 5.45 5.38 99.70 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.06 12.43 87.51 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sdb 0.00 184.08 0.00 184.08 0.00 2945.27 0.00 1472.64 16.00 0.99 5.39 5.38 99.00 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.12 12.31 87.56 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 1.00 0.00 1.50 0.00 20.00 0.00 10.00 13.33 0.02 15.33 6.67 1.00 sdb 0.00 181.00 0.00 181.00 0.00 2896.00 0.00 1448.00 16.00 0.99 5.48 5.49 99.40 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.19 12.37 87.45 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sdb 0.00 178.00 0.00 178.50 0.00 2852.00 0.00 1426.00 15.98 1.00 5.61 5.55 99.10 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.12 12.37 87.51 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sdb 0.00 179.50 0.00 179.50 0.00 2872.00 0.00 1436.00 16.00 0.99 5.52 5.53 99.25 avg-cpu: %user %nice %sys %iowait %idle 0.00 0.00 0.06 12.44 87.50 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 1.50 0.00 3.50 0.00 40.00 0.00 20.00 11.43 0.07 20.00 4.00 1.40 sdb 0.00 179.00 0.00 179.50 0.00 2868.00 0.00 1434.00 15.98 1.02 5.68 5.53 99.30 avg-cpu: %user %nice %sys %iowait %idle 0.06 0.00 0.19 12.41 87.34 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 0.50 0.00 1.00 0.00 12.00 0.00 6.00 12.00 0.01 6.50 6.50 0.65 sdb 0.00 183.50 0.00 183.50 0.00 2936.00 0.00 1468.00 16.00 0.99 5.40 5.41 99.25 
Host 2: iostat -x 2 ================== avg-cpu: %user %nice %sys %iowait %idle 0.96 0.00 0.69 0.21 98.15 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util hda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 48.00 0.00 1.33 1.33 0.00 sda 0.01 5.31 0.23 1.55 17.96 54.93 8.98 27.47 40.76 0.07 41.59 1.21 0.22 sdb 0.03 3.99 0.84 0.47 113.52 35.67 56.76 17.83 114.36 0.03 23.00 2.55 0.33 sdc 0.05 37.80 0.58 1.50 131.96 314.37 65.98 157.19 214.93 0.43 205.85 2.84 0.59 sdd 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 40.35 0.00 3.52 3.52 0.00 avg-cpu: %user %nice %sys %iowait %idle 16.03 0.00 8.61 0.44 74.92 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util hda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sda 1.99 14.43 57.71 6.97 13775.12 171.14 6887.56 85.57 215.63 0.22 3.43 3.36 21.74 sdb 0.00 357.71 7.96 358.71 63.68 5731.34 31.84 2865.67 15.80 0.04 0.10 0.10 3.83 sdc 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sdd 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 avg-cpu: %user %nice %sys %iowait %idle 15.62 0.00 8.81 0.56 75.00 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util hda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sda 1.50 0.00 56.00 0.00 13964.00 0.00 6982.00 0.00 249.36 0.22 3.90 3.89 21.80 sdb 0.00 635.00 7.00 635.00 64.00 10160.00 32.00 5080.00 15.93 0.06 0.09 0.09 5.55 sdc 0.00 1.00 0.00 1.50 0.00 20.00 0.00 10.00 13.33 0.00 0.00 0.00 0.00 sdd 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 Thanks, Tina
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From tinatianxia at hotmail.com  Tue May 13 23:15:59 2008
From: tinatianxia at hotmail.com (Tina Tian)
Date: Tue, 13 May 2008 16:15:59 -0700
Subject: Different performance
In-Reply-To: 
References: 
Message-ID: 

Before my SA shuts down the OS and runs the diagnostic software, I tried to figure it out without shutting down, so I ran 'dmidecode' and compared the output from host1 and host2 (the faster one). Could you help me find something which affects performance? Thanks again.
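For reference, the comparison below was produced along these lines; dmidecode dumps the DMI/SMBIOS tables, so differences in BIOS revision, serial numbers and OEM records show up directly (file names as used in the diff header):

dmidecode > host1.out   # run as root on host 1
dmidecode > host2.out   # run as root on host 2, then copy both files to one machine
diff host1.out host2.out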
diff host1.out host2.out =================== 3c3< 62 structures occupying 3131 bytes.---> 62 structures occupying 3171 bytes.14,15c14,15< Version: 1.1.0< Release Date: 06/21/2006---> Version: 1.3.7> Release Date: 03/26/200748,49c48,49< Serial Number: F64YKB1< UUID: 44454C4C-3600-1034-8059-C6C04F4B4231---> Serial Number: Not Specified> UUID: Not Present57c57< Serial Number: ..CN1374066L0068.---> Serial Number: ..CN1374066A00K0.65c65< Serial Number: F64YKB1---> Serial Number: Not Specified481c481< Serial Number: 0503A717---> Serial Number: 0503A715500c500< Serial Number: 0503A913---> Serial Number: 0503CB17519c519< Serial Number: 0503CC14---> Serial Number: 0503CB13538c538< Serial Number: 0503F324---> Serial Number: 0503CB12728c728< DMI type 212, 117 bytes.---> DMI type 212, 127 bytes.731c731< D4 75 00 D4 70 00 71 00 00 10 2D 2E 03 00 11 7F---> D4 7F 00 D4 70 00 71 00 00 10 2D 2E 03 00 11 7F738c738< FF FF 00 00 00---> 00 00 26 F7 00 00 00 26 F7 08 FF FF 00 00 00740c740< DMI type 212, 187 bytes.---> DMI type 212, 197 bytes.743c743< D4 BB 01 D4 70 00 71 00 03 40 5A 6D 6B 00 78 7F---> D4 C5 01 D4 70 00 71 00 03 40 5A 6D 6B 00 78 7F751,754c751,755< 40 54 FB 00 1C 40 54 F7 08 1D 40 54 F7 00 6E 00< 58 FC 01 2D 00 58 FC 02 2E 00 58 FC 00 22 40 58< EF 10 23 40 58 EF 00 BB 00 58 F3 04 BC 00 58 F3< 08 BA 00 58 F3 00 FF FF 00 00 00---> 40 54 FB 00 1C 40 54 F7 08 1D 40 54 F7 00 43 40> 58 DF 20 42 40 58 DF 00 6E 00 58 FC 01 2D 00 58> FC 02 2E 00 58 FC 00 22 40 58 EF 10 23 40 58 EF> 00 BB 00 58 F3 04 BC 00 58 F3 08 BA 00 58 F3 00> FF FF 00 00 00783c784< DMI type 212, 27 bytes.---> DMI type 212, 47 bytes.786,787c787,789< D4 1B 04 D4 72 00 73 00 00 40 5D 5E 41 40 40 FE< 01 40 40 40 FE 00 FF FF 00 00 00---> D4 2F 04 D4 72 00 73 00 00 40 5D 5E 41 40 40 FE> 01 40 40 40 FE 00 CF 01 40 FD 02 D0 01 40 FD 00> 45 40 40 F7 08 44 40 40 F7 00 FF FF 00 00 00800c802< DE 10 00 DE 01 04 FF FF 00 00 00 00 00 00 00 01---> DE 10 00 DE 01 04 00 00 07 07 12 15 11 00 00 01
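The most telling lines in that diff are the BIOS entries: Version 1.1.0 / 06/21/2006 on host1 versus 1.3.7 / 03/26/2007 on host2, which is consistent with host2 having had its motherboard replaced. The PERC firmware levels can be compared the same way without a reboot, for example (assuming the boot messages are still available via dmesg or /var/log/dmesg):

dmesg | grep 'PERC 5/i'                    # host1 reports Rev: 1.00, host2 reports Rev: 1.03
dmidecode | grep -A 3 'BIOS Information'   # pulls the BIOS vendor/version/release date block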
-------------- next part --------------
An HTML attachment was scrubbed...
URL: