[Linux-cluster] GFS (1 & partially 2) performance problems

Michael Lackner michael.lackner at mu-leoben.at
Thu Jun 17 08:05:39 UTC 2010


Hello, Kit!

I concatenated your two mails in my quote, I hope that's ok?

I do have a HP MSA2312fc here, yes. Fibrechannel. With EXT3 and XFS
performance was pretty good, but of course, those aren't cluster-aware and
can only ever be tested in single-node configuration (I didn't try multiple
volumes with 1 volume for each client with EXT/XFS though, since we need
a shared filesystem). If I test GFS1/2, both are also reasonably fast in
single-node config for reads (those 180-190MB/s I was talking about).

In single-node operation I have never seen such drastic drops as you have,
no single-digit MB/s numbers...

As for the multipathing, this is what multipath -l tells me:

mpath0 (3600c0ff000da8493da059a4b01000000) dm-0 HP,MSA2312fc
[size=2.7T][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=0][active]
 \_ 2:0:2:1 sdc 8:32  [active][undef]
 \_ 2:0:3:1 sdd 8:48  [active][undef]
\_ round-robin 0 [prio=0][enabled]
 \_ 2:0:0:1 sda 8:0   [active][undef]
 \_ 2:0:1:1 sdb 8:16  [active][undef]

In "/etc/multipath.conf" I have only set the necessary device 
blacklistings and
the "user_friendly_names yes" option, nothing else. But i don't think 
this can
have performance implications? Slowest single FC link is 4Gbps, which would
equal a theoretical maximum of 512MB/s (+full duplex?) per link. I'm no 
expert
here, but I  would guess that the multipath scheduling is not too 
important for
GFS performance.
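
For reference, that multipath.conf boils down to something like the
following (the blacklist entries are just placeholders for whatever local
devices get excluded):

defaults {
        user_friendly_names yes
}

blacklist {
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        wwid    "<wwid-of-local-disk>"
}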

It seems just to be responsible for choosing the right FC links to transfer
data over. But even if all three of my clients chose to use the same link
to the MSA2312fc, it should still be ok? Switch monitoring, however, tells
me that the client transfers are being distributed over the several links
of the MSA anyway.

Round-Robin at work I suppose. FC links pretty much under-utilized...

Thanks!

Kit Gerrits wrote:
> Didn't you have that HP MSA with the fibrechannel interfaces?
>
> I have exactly the same device, also with HP DL380 and HP DL 580 hosts with
> two FC interfaces.
> I've seen similarly insane statistics using only ext2fs mounts. (even worse,
> around 7MB/s)
> It went away after a while, but I have no idea where it came from or why it
> left.
> (I was backing up files with tar-over-ssh)
>
> I would really like to know how you get rid of it, if ever.
>
>
> Multipathing has a round-robin and a failover scheduler, which can be
> configured in /etc/multipath.conf
>
> The path_selector value only seems to support round-robin:
> http://storagefoo.blogspot.com/2006/08/linux-native-multipathing-device.html
>
>
> Maybe this helps:
> 		#
> 		# name    : path_grouping_policy
> 		# scope   : multipath
> 		# desc    : path grouping policy to apply to this multipath
> 		# values  : failover, multibus, group_by_serial
> 		# default : failover
> 		#
> 		path_grouping_policy	multibus
>
> Specifies the default path grouping policy to apply to unspecified
> multipaths. Possible values include:
> failover = 1 path per priority group
> multibus = all valid paths in 1 priority group
> group_by_serial = 1 priority group per detected serial number
> group_by_prio = 1 priority group per path priority value
> group_by_node_name = 1 priority group per target node name
> The default value is failover. 
>
>
> -----Original Message-----
> From: linux-cluster-bounces at redhat.com
> [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Michael Lackner
> Sent: Tuesday, 15 June 2010 14:04
> To: linux clustering
> Subject: Re: [Linux-cluster] GFS (1 & partially 2) performance problems
>
> Hello!
>
> I tried to do R/W tests comparing 4kB blocksize to 1MB blocksize now, and
> the difference in performance was negligible. Also, GFS2 was almost on the
> same speed level when compared to GFS1 for Reads (see below why..). I/O
> scheduler is "cfq" by the way. I never really cared about the I/O scheduler
> since I do not yet understand the differences between the available ones
> anyway.
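>
> (For the record, the scheduler can be checked and switched per block
> device at runtime through sysfs, e.g. for one of the sdX path devices;
> I haven't actually experimented with this yet:
>
>   cat /sys/block/sdc/queue/scheduler
>   echo deadline > /sys/block/sdc/queue/scheduler
> )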
>
> But, I found out something else. As suggested by Steven in his reply, I ran
> tests both on the GFS1/2 filesystems, and also on the raw blockdevice, and
> surprisingly the results were almost the same!
>
> So: GFS1 as well as GFS2 3-Node concurrent, sequential Reads showed a total
> of 40MB/s (GFS1) and 45MB/s (GFS2) using a blocksize of 1MB. For single-node
> sequential read the performance went up to a nice 180-190MB/s for both FS
> versions.
>
> Now, the surprising part: Doing a dd read on the raw blockdevice with 3
> nodes showed a total of only ~60MB/s!! Almost as low as reading from GFS1/2
> with multiple nodes at the same time!! When reading the raw blockdevice on a
> single node, I got slightly over 190MB/s again.
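>
> (Per node, the raw reads were something along these lines; the skip=
> offsets just make sure the three streams don't hit the same region of
> the volume, and the sizes/offsets below are only placeholders:
>
>   dd if=/dev/mapper/mpath0 of=/dev/null bs=1M count=40960 skip=0
>   dd if=/dev/mapper/mpath0 of=/dev/null bs=1M count=40960 skip=409600
>   dd if=/dev/mapper/mpath0 of=/dev/null bs=1M count=40960 skip=819200
> )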
>
> So, this concurrent read issue seems not to be a GFS1 or GFS2 problem, but
> more a problem of the underlying storage. This is extremely surprising and a
> bit shocking I must say.
>
> I guess for the Reads I will need to check the SAN itself, see if I can do
> any optimization on it..  That thing can't possibly be that bad when it
> comes to reading..
>
> Thanks a lot for your ideas so far!
>
> Jankowski, Chris wrote:
>   
>> Michael,
>>
>> For comparison, could you do your dd(1) tests with a very large block
>> size (1 MB) and tell us the results, please?
>>
>> I have a vague hunch that the problem may have something to do with
>> coalescing or not of IO operations.
>>
>> Also, which IO scheduler are you using?
>>
>> Thanks and regards,
>>
>> Chris Jankowski
>>
>>
>> -----Original Message-----
>> From: linux-cluster-bounces at redhat.com 
>> [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Michael Lackner
>> Sent: Tuesday, 15 June 2010 00:22
>> To: linux clustering
>> Subject: Re: [Linux-cluster] GFS (1 & partially 2) performance 
>> problems
>>
>> Hello!
>>
>> Thanks for your reply. I unfortunately forgot to mention HOW I was
>> actually testing, stupid.
>>
>> I tested with dd, doing 4kB blocksize reads and writes, 160GB total
>> testfile size per node.
>>
>> I read from /dev/zero for writing tests and wrote to /dev/null for
>> reading tests. So, totally sequential, somewhat small blocksize (equal
>> to filesystem BS).
>>
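>> Concretely, per node that was something like the following (the paths
>> are placeholders; 160GB at 4kB is 41943040 blocks):
>>
>>   dd if=/dev/zero of=/gfs/nodeX/testfile bs=4k count=41943040   # write test
>>   dd if=/gfs/nodeX/testfile of=/dev/null bs=4k                  # read test
>>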
>> The performance was measured directly on the Fibrechannel Switch, which
>> offers nice per-port monitoring for that purpose.
>>
>> I have yet to do some serious read testing on GFS2. I have aborted my
>> GFS2 tests as write performance was not up to GFS1 to begin with. My
>> older GFS2 benchmarks (I did this with a 2-node configuration before)
>> are lost, I will need to re-do them to give you some numbers.
>>
>> After each write test I did a "sync" to flush everything to disks. I did
>> not do this before or after read tests though...
>>
>> As you mentioned Journal Size, "gfs_tool counters <mountpoint>" said
>> that only 2-3% of the logspace was in use after the tests (I guess this
>> is the per-node fs journal?).
>>
>> As for the direct I/O tests, by that you mean testing without ANY
>> caching going on, a synchronous write? What I did before was test EXT3
>> (~190MB/s) and XFS (~320MB/s) on the Storage Array. I think what I'm
>> getting here is raw throughput, since I am not monitoring in the OS,
>> but at the Fibrechannel Switch itself...
>>
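>> (If by direct I/O you mean bypassing the page cache completely, I could
>> redo the dd runs with the direct flags, along the lines of the commands
>> below; paths and sizes are again just placeholders:
>>
>>   dd if=/dev/zero of=/gfs/nodeX/testfile bs=1M count=4096 oflag=direct
>>   dd if=/gfs/nodeX/testfile of=/dev/null bs=1M count=4096 iflag=direct
>> )
>>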
>> I will do GFS2 read tests similar to those conducted for GFS1. I'll be
>> able to do that tomorrow morning, then I can post the numbers here.
>>
>> Thanks!
>>
>> Steven Whitehouse wrote:
>>
>>> Hi,
>>>
>>> On Mon, 2010-06-14 at 14:00 +0200, Michael Lackner wrote:
>>>
>>>> Hello!
>>>>
>>>> I am currently building a Cluster sitting on CentOS 5 for GFS usage.
>>>>
>>>> At the moment, the storage subsystem consists of an HP MSA2312 
>>>> Fibrechannel SAN linked to an FC 8gbit switch. Three client machines 
>>>> are connected to that switch over 8gbit FC. The disks themselves are
>>>> 12 * 15.000rpm SAS configured in RAID-5 with two hotspares.
>>>>
>>>> Now, the whole storage shall be shared (single filesystem), here GFS 
>>>> comes in.
>>>>
>>>> The Cluster is only 3 nodes large at the moment, more nodes will be 
>>>> added later on. I am currently testing GFS1 and GFS2 for performance.
>>>> Lock Management is done over single 1Gbit Ethernet Links (1 per 
>>>> machine).
>>>>
>>>> Thing is, with GFS1 I get far better performance than with the newer
>>>> GFS2 across the board, with a few tunable parameters set, for writes
>>>> GFS1 is roughly twice as fast.
>>>>
>>> What tests are you running? GFS2 is generally faster than GFS1 except 
>>> for streaming writes, which is an area that we are putting some 
>>> effort into solving currently. Small writes (one fs block (4k 
>>> default) or
>>> less) on GFS2 are much faster than on GFS1.
>>>
>>>> But, concurrent reads are totally abysmal. The total write 
>>>> performance (all nodes combined) sits around 280-330Mbyte/sec, 
>>>> whereas the READ performance is as low as 30-40Mbyte/sec when doing 
>>>> concurrent reads. Surprisingly, single-node read is somewhat ok at 
>>>> 180Mbyte/sec, but as soon as several nodes are reading from GFS 
>>>> (version 1 at the
>>>> moment) at the same time,  things turn ugly.
>>>>
>>> Reads on GFS2 should be much faster than GFS1, so it sounds as if 
>>> something isn't working correctly for some reason. For cached data, 
>>> reads on GFS2 should be as fast as ext2/3 since the code path is 
>>> identical (to the page cache) and only changes if pages are not cached.
>>> GFS1 does its locking at a higher level, so there will be more 
>>> overhead for cached reads in general.
>>>
>>> Do make sure that if you are preparing the test files for reading all 
>>> from one node (or even just a different node to that on which you are 
>>> running the read tests) that you need to sync them to disk on that 
>>> node before starting the tests to avoid issues with caching.
>>>
>>>> This is strange, because for writes, global performance across the 
>>>> cluster increases slightly when adding more nodes. But for reads, 
>>>> the opposite seems to be true.
>>>>
>>>> For read and write tests, separate testfiles were created and read 
>>>> for each node, with each testfile sitting in its own subdirectory, 
>>>> so no node would access another node's file.
>>>>
>>> That sounds like a good test set up to me.
>>>
>>>> GFS1 created with the following mkfs.gfs parameters:
>>>> "-b 4096 -J 128 -j 16 -r 2048 -p lock_dlm"
>>>> (4kB blocksize, 16 * 128MB journals, 2GB resource groups, 
>>>> Distributed
>>>> LockManager)
>>>>
>>>> Mount Options set: "noatime,nodiratime,noquota"
>>>>
>>>> Tunables set: "glock_purge 50, statfs_slots 128, statfs_fast 1, 
>>>> demote_secs 20"
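>>>>
>>>> (Those tunables are applied per mount via gfs_tool, roughly:
>>>>
>>>>   gfs_tool settune /gfs glock_purge 50
>>>>   gfs_tool settune /gfs statfs_slots 128
>>>>   gfs_tool settune /gfs statfs_fast 1
>>>>   gfs_tool settune /gfs demote_secs 20
>>>>
>>>> where /gfs stands for the actual mountpoint.)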
>>>>
>>> You shouldn't normally need to set the glock_purge and demote_secs to 
>>> anything other than the default. These settings no longer exist in
>>> GFS2 since it makes use of the shrinker subsystem provided by the VM 
>>> and is auto-tuning. If your workload is metadata heavy, you could try 
>>> boosting the journal size and/or the incore_log_blocks tunable.
>>>
>>>> Also, in /etc/cluster/cluster.conf, I added this:
>>>> <dlm plock_ownership="1" plock_rate_limit="0"/>
>>>> <gfs_controld plock_rate_limit="0"/>
>>>>
>>>> Any ideas on how to figure out what's going wrong, and how to tune
>>>> GFS1 for better concurrent read performance, or tune GFS2 in general 
>>>> to be competitive/better than GFS1?
>>>>
>>>> I'm dreaming about 300MB/sec read, 300MB/sec write sequentially and 
>>>> somewhat good reaction times while under heavy sequential and/or 
>>>> random load. But for now, I just wanna get the seq reading to work 
>>>> acceptably fast.
>>>>
>>>> Thanks a lot for your help!
>>>>
>>> Can you try doing some I/O direct to the block device so that we can 
>>> get an idea of what the raw device can manage? Using dd both read and 
>>> write, across the nodes (different disk locations on each node to 
>>> simulate different files).
>>>
>>> I'm wondering if the problem might be due to the seek pattern 
>>> generated by the multiple read locations,
>>>
>>> Steve.
>
> --
> Michael Lackner
> Chair of Information Technology, University of Leoben IT Administration
> michael.lackner at mu-leoben.at | +43 (0)3842/402-1505
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster


-- 
Michael Lackner
Chair of Information Technology, University of Leoben
IT Administration
michael.lackner at mu-leoben.at | +43 (0)3842/402-1505



