Different performance

Olt, Joseph jolt at ti.com
Mon May 12 18:59:43 UTC 2008


Tina,

 

Could you capture vmstat output while under load, to see how much memory is swapping and how quickly context switching is occurring?  "vmstat 5 20"

Also, what kernel is running?  "uname -a"
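(For reference, a minimal sketch of pulling the relevant columns out of a vmstat run; the sample output below is illustrative, not from either host. The si/so columns are swap-in/swap-out per second, and cs is context switches per second:)

```shell
# Illustrative vmstat output (RHEL 4 era format); in practice run: vmstat 5 20
sample='procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa
 1  0 204800  81920  30208 512000    0   12   140   310  220   450  5  2 81 12'

# Nonzero si/so over sustained intervals means the box is actively swapping;
# a high cs (context switches/sec) points at scheduler or interrupt churn.
echo "$sample" | awk 'NR > 2 { printf "si=%s so=%s cs=%s\n", $7, $8, $12 }'
```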

 

________________________________

From: redhat-sysadmin-list-bounces at redhat.com
[mailto:redhat-sysadmin-list-bounces at redhat.com] On Behalf Of Tina Tian
Sent: Monday, May 12, 2008 1:37 PM
To: redhat-sysadmin-list at redhat.com
Subject: RE: Different performance

 

 
Thank you, Joseph.
 
Let me explain. On both host 1 and host 2, the Sybase software is in
/sybase and the Sybase database is in /sybasedata. On host 2, we have the
Amanda backup software on /dev/sdc, and I believe an Amanda daemon was
running when I ran iostat ("From the output of host 2 you provided, the
first stat shows sdc is taking some of the load").
 
Host 2 does have additional higher-performance drives, which are not used
by the Sybase database (/sybasedata) at all. Will the database benefit
from their quicker swap?
 
Below are the results of fdisk, mount, and dmesg (swap) on host 1 and host 2.
 

Host 1, fdisk -l:
-----------------
Disk /dev/sda: 72.7 GB, 72746008576 bytes
255 heads, 63 sectors/track, 8844 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1           4       32098+  de  Dell Utility
/dev/sda2               5        1279    10241437+  83  Linux
/dev/sda3   *        1280        1406     1020127+  83  Linux
/dev/sda4            1407        8844    59745735    5  Extended
/dev/sda5            1407        8844    59745703+  8e  Linux LVM
Disk /dev/sdb: 598.8 GB, 598879502336 bytes
255 heads, 63 sectors/track, 72809 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1       66868   537117178+  83  Linux
/dev/sdb2           66869       72809    47721082+   5  Extended

 
host 1, mount:
---------------
/dev/mapper/VolGroup_ID_27777-LogVol2 on / type ext3 (rw)
none on /proc type proc (rw)
none on /sys type sysfs (rw)
none on /dev/pts type devpts (rw,gid=5,mode=620)
usbfs on /proc/bus/usb type usbfs (rw)
/dev/sda3 on /boot type ext3 (rw)
none on /dev/shm type tmpfs (rw)
/dev/mapper/VolGroup_ID_27777-LogVol3 on /tmp type ext3 (rw)
/dev/mapper/VolGroup_ID_27777-LogVol6 on /usr type ext3 (rw)
/dev/mapper/VolGroup_ID_27777-LogVol5 on /var type ext3 (rw)
/dev/mapper/VolGroup_ID_27777-LogVolHome on /home type ext3 (rw)
/dev/mapper/VolGroup_ID_27777-LogVolSybase on /sybase type ext3 (rw)
/dev/mapper/VolGroup_ID_27777-LogVolTranLog on /tranlog type ext3 (rw)
/dev/sdb1 on /sybasedata type ext3 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)

Host 1, dmesg|grep swap
------------------------
Adding 1998840k swap on /dev/VolGroup_ID_27777/LogVol1.  Priority:-1 extents:1
Adding 2097144k swap on /dev/VolGroup_ID_27777/LogVol0.  Priority:-2 extents:1
 
Host 2, fdisk -l
----------------
Disk /dev/sda: 72.7 GB, 72746008576 bytes
255 heads, 63 sectors/track, 8844 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1           4       32098+  de  Dell Utility
/dev/sda2               5        1534    12289725   83  Linux
/dev/sda3   *        1535        1661     1020127+  83  Linux
/dev/sda4            1662        8844    57697447+   5  Extended
/dev/sda5            1662        8844    57697416   8e  Linux LVM
Disk /dev/sdb: 598.8 GB, 598879502336 bytes
255 heads, 63 sectors/track, 72809 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1       72809   584838261   83  Linux
Disk /dev/sdc: 299.4 GB, 299439751168 bytes
255 heads, 63 sectors/track, 36404 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1        4370    35101993+  83  Linux
/dev/sdc2            4371       36404   257313105   83  Linux
Disk /dev/sdd: 320.0 GB, 320072933376 bytes
255 heads, 63 sectors/track, 38913 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1       38913   312568641   83  Linux

host 2, mount:
---------------
/dev/mapper/VolGroup_ID_787-LogVol1 on / type ext3 (rw)
none on /proc type proc (rw)
none on /sys type sysfs (rw)
none on /dev/pts type devpts (rw,gid=5,mode=620)
usbfs on /proc/bus/usb type usbfs (rw)
/dev/sda3 on /boot type ext3 (rw)
none on /dev/shm type tmpfs (rw)
/dev/mapper/VolGroup_ID_787-LogVol2 on /tmp type ext3 (rw)
/dev/mapper/VolGroup_ID_787-LogVol5 on /usr type ext3 (rw)
/dev/mapper/VolGroup_ID_787-LogVol4 on /var type ext3 (rw)
/dev/mapper/VolGroup_ID_787-LogVolHome on /home type ext3 (rw)
/dev/mapper/VolGroup_ID_787-LogVolSybase on /sybase type ext3 (rw)
/dev/mapper/VolGroup_ID_787-LogVolTranlog on /tranlog type ext3 (rw)
/dev/sdb1 on /sybasedata type ext3 (rw)
/dev/sdc1 on /pkgs type ext3 (rw)
/dev/sdc2 on /amanda-data type ext3 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)

Host 2, dmesg|grep swap
------------------------
Adding 1769464k swap on /dev/VolGroup_ID_787/LogVol0.  Priority:-1 extents:1

 
Best Regards,
Tina

________________________________

Date: Mon, 12 May 2008 07:11:12 -0500
From: jolt at ti.com
To: redhat-sysadmin-list at redhat.com
Subject: RE: Different performance

Tina,

 

How are the partitions laid out on the two systems?  It is likely that
something OS-related is accessing sda and sdb on host 1 while being
spread across more disks on host 2.  From the output of host 2 you
provided, the first stat shows sdc is taking some of the load.
Regardless of the RAM being the same in both systems, is there much
swapping?  Swapping on higher-performance drives will be quicker.
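(For reference, which devices back swap, and at what priority, can be read from /proc/swaps; the sample contents below are illustrative, not from either host:)

```shell
# Sample /proc/swaps contents (illustrative); on a live host: cat /proc/swaps
swaps='Filename                        Type            Size    Used    Priority
/dev/mapper/VolGroup-LogVol1    partition       1998840 0       -1
/dev/mapper/VolGroup-LogVol0    partition       2097144 0       -2'

# The kernel fills higher-priority swap areas first, so placing a
# high-priority swap area on the 15k RPM disks would make any swapping cheaper.
echo "$swaps" | awk 'NR > 1 { printf "%s priority=%s\n", $1, $5 }'
```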

 

Regards,

 

Joseph

 

________________________________

From: redhat-sysadmin-list-bounces at redhat.com
[mailto:redhat-sysadmin-list-bounces at redhat.com] On Behalf Of Tina Tian
Sent: Friday, May 09, 2008 10:42 PM
To: redhat-sysadmin-list at redhat.com
Subject: RE: Different performance

 

The DB is Sybase ASE 15.0.2, with identical configuration on the two
hosts. My SA also confirmed that the two hosts are almost identical,
except that host 2 (faster DB load) has two extra disks, sdc and sdd,
which are 15k RPM.  The remaining disks, sda and sdb, are identical on
the two hosts at 7k RPM.  On both host 1 and host 2, the DBs are on
/dev/sdb only.
 
 
Best Regards,
Tina

________________________________

To: redhat-sysadmin-list at redhat.com
Date: Fri, 9 May 2008 16:58:19 -0600
From: larry.sorensen at juno.com
Subject: Re: Different performance

Please include information on the databases including versions. It could
just be different configurations on the databases. Are the patches up to
date and equal on both servers?

 

On Fri, 9 May 2008 14:11:25 -0700 Tina Tian <tinatianxia at hotmail.com>
writes:

	I am a DBA. I have identical database servers running on two
Red Hat Linux 4 hosts, host 1 and host 2. When I ran the same bulk load
into the database (loading a data file), host 2 was much faster than
host 1.
	 
	On both host 1 and host 2, the databases use file systems mounted
on /dev/sda and /dev/sdb.
	 
	I checked with my SA: host 1 and host 2 have the same CPU, RAM,
and file system configuration. The only difference is that host 2 has
extra HD capacity at a higher 15k RPM. But the two extra HDs (sdc and
sdd) are dedicated to other applications and are not used by the
database at all.
	 
	My questions are:
	-----------------
	1. On host 2 (faster), the extra faster HDs (/dev/sdc and sdd)
are not used by the database. Do they still affect the I/O performance
of /dev/sda and /dev/sdb?
	 
	2. During the database bulk load test, host 1 (slower) shows
longer I/O service time (svctm) and longer I/O wait time (await).
	   What other possible reasons could cause this? Any ideas?
	 
	I posted the same issue to a database discussion group, and they
suggested I check OS performance (svctm).
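	(For reference, a sketch of how svctm and await relate, using one illustrative iostat -x device line, not data from either host: await includes time spent queued plus svctm, so await much larger than svctm means requests are waiting in the queue rather than on the disk.)

```shell
# Sample iostat -x device line (illustrative numbers), 14 columns:
# Device rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util
line='sdb 0.00 182.50 0.00 182.50 0.00 2920.00 0.00 1460.00 16.00 0.99 5.44 5.44 99.30'

# queued = await - svctm: the portion of the response time spent waiting
# in the request queue rather than being serviced by the device.
echo "$line" | awk '{ printf "await=%s svctm=%s queued=%.2f util=%s\n", $12, $13, $12 - $13, $14 }'
```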
	 
	 
	Below are the iostat results on host 1 (slower) and
host 2 (faster) during the bulk load:
	 
	Host 1: iostat -x 2
	=====================

	 

	avg-cpu:  %user   %nice    %sys %iowait   %idle
	           0.15    0.00    0.07    0.28   99.49

	Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
	sda          0.01   0.59  0.24  0.19   29.22    6.17    14.61     3.08    83.49     0.01   21.71   3.84   0.16
	sdb          0.04  10.05  0.89  3.74  117.37  110.34    58.69    55.17    49.13     0.10   21.76   4.48   2.08

	avg-cpu:  %user   %nice    %sys %iowait   %idle
	          15.74    0.00    8.99    0.31   74.95

	Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
	sda          1.99   0.00 57.71  0.00 14025.87    0.00  7012.94     0.00   243.03     0.21    3.58   3.53  20.35
	sdb          0.00   0.00 11.94  0.00   95.52    0.00    47.76     0.00     8.00     0.02    2.04   2.04   2.44

	avg-cpu:  %user   %nice    %sys %iowait   %idle
	           6.18    0.00    2.37    9.24   82.20

	Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
	sda          0.50   0.50 23.00  1.00 5732.00   12.00  2866.00     6.00   239.33     0.07    3.08   3.02   7.25
	sdb          0.00 129.00  7.00 130.00   56.00 2076.00    28.00  1038.00    15.56     0.75    5.49   5.40  73.95

	avg-cpu:  %user   %nice    %sys %iowait   %idle
	           0.06    0.00    0.12   12.44   87.38

	Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
	sda          0.00   3.50  0.00  3.00    0.00   52.00     0.00    26.00    17.33     0.03   10.00   3.67   1.10
	sdb          0.00 182.50  0.00 182.50    0.00 2920.00     0.00  1460.00    16.00     0.99    5.44   5.44  99.30

	avg-cpu:  %user   %nice    %sys %iowait   %idle
	           0.00    0.00    0.12   12.49   87.38

	Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
	sda          0.00   0.50  0.00  1.01    0.00   12.06     0.00     6.03    12.00     0.01    6.00   6.00   0.60
	sdb          0.00 184.92  0.00 185.43    0.00 2962.81     0.00  1481.41    15.98     1.01    5.45   5.38  99.70

	avg-cpu:  %user   %nice    %sys %iowait   %idle
	           0.00    0.00    0.06   12.43   87.51

	Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
	sda          0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
	sdb          0.00 184.08  0.00 184.08    0.00 2945.27     0.00  1472.64    16.00     0.99    5.39   5.38  99.00

	avg-cpu:  %user   %nice    %sys %iowait   %idle
	           0.00    0.00    0.12   12.31   87.56

	Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
	sda          0.00   1.00  0.00  1.50    0.00   20.00     0.00    10.00    13.33     0.02   15.33   6.67   1.00
	sdb          0.00 181.00  0.00 181.00    0.00 2896.00     0.00  1448.00    16.00     0.99    5.48   5.49  99.40

	avg-cpu:  %user   %nice    %sys %iowait   %idle
	           0.00    0.00    0.19   12.37   87.45

	Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
	sda          0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
	sdb          0.00 178.00  0.00 178.50    0.00 2852.00     0.00  1426.00    15.98     1.00    5.61   5.55  99.10

	avg-cpu:  %user   %nice    %sys %iowait   %idle
	           0.00    0.00    0.12   12.37   87.51

	Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
	sda          0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
	sdb          0.00 179.50  0.00 179.50    0.00 2872.00     0.00  1436.00    16.00     0.99    5.52   5.53  99.25

	avg-cpu:  %user   %nice    %sys %iowait   %idle
	           0.00    0.00    0.06   12.44   87.50

	Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
	sda          0.00   1.50  0.00  3.50    0.00   40.00     0.00    20.00    11.43     0.07   20.00   4.00   1.40
	sdb          0.00 179.00  0.00 179.50    0.00 2868.00     0.00  1434.00    15.98     1.02    5.68   5.53  99.30

	avg-cpu:  %user   %nice    %sys %iowait   %idle
	           0.06    0.00    0.19   12.41   87.34

	Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
	sda          0.00   0.50  0.00  1.00    0.00   12.00     0.00     6.00    12.00     0.01    6.50   6.50   0.65
	sdb          0.00 183.50  0.00 183.50    0.00 2936.00     0.00  1468.00    16.00     0.99    5.40   5.41  99.25

	Host 2: iostat -x 2

	==================

	avg-cpu:  %user   %nice    %sys %iowait   %idle
	           0.96    0.00    0.69    0.21   98.15

	Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
	hda          0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00    48.00     0.00    1.33   1.33   0.00
	sda          0.01   5.31  0.23  1.55   17.96   54.93     8.98    27.47    40.76     0.07   41.59   1.21   0.22
	sdb          0.03   3.99  0.84  0.47  113.52   35.67    56.76    17.83   114.36     0.03   23.00   2.55   0.33
	sdc          0.05  37.80  0.58  1.50  131.96  314.37    65.98   157.19   214.93     0.43  205.85   2.84   0.59
	sdd          0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00    40.35     0.00    3.52   3.52   0.00

	avg-cpu:  %user   %nice    %sys %iowait   %idle
	          16.03    0.00    8.61    0.44   74.92

	Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
	hda          0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
	sda          1.99  14.43 57.71  6.97 13775.12  171.14  6887.56    85.57   215.63     0.22    3.43   3.36  21.74
	sdb          0.00 357.71  7.96 358.71   63.68 5731.34    31.84  2865.67    15.80     0.04    0.10   0.10   3.83
	sdc          0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
	sdd          0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00

	avg-cpu:  %user   %nice    %sys %iowait   %idle
	          15.62    0.00    8.81    0.56   75.00

	Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
	hda          0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
	sda          1.50   0.00 56.00  0.00 13964.00    0.00  6982.00     0.00   249.36     0.22    3.90   3.89  21.80
	sdb          0.00 635.00  7.00 635.00   64.00 10160.00    32.00  5080.00    15.93     0.06    0.09   0.09   5.55
	sdc          0.00   1.00  0.00  1.50    0.00   20.00     0.00    10.00    13.33     0.00    0.00   0.00   0.00
	sdd          0.00   0.00  0.00  0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00

	 

	 

	Thanks,

	Tina

	
	 

	

 


 



