[Linux-cluster] rhel 6.1 gfs2 performance tests

Jordi Renye jordir at fib.upc.edu
Tue Jul 26 10:36:55 UTC 2011


The tests were run directly on gfs2. Soon we would like
to test through Samba clients as well.

Jordi Renye
LCFIB - UPC


On 25/07/2011 15:51, Steven Whitehouse wrote:
> Hi,
>
> On Fri, 2011-07-22 at 12:41 +0200, Jordi Renye wrote:
>> We are sharing the GFS2 partition through Samba
>> with approximately three hundred clients.
>>
>> The GFS2 partition is mounted on two nodes of the
>> cluster.
>>
>> Clients can boot into Linux or Windows.
>>
>> There is one share for home folders, another
>> for profiles, and others for shared applications and
>> data: five shares in all.
>>
>>>>   Also, did you mount with noatime, nodiratime?
>> Yes, I'm mounting with these options.
>>
>> Jordi Renye
>> LCFIB - UPC
>>
>>
> Were the tests being run directly on gfs2, or via Samba in this case?
>
> Steve.
>
>> On 22/07/2011 12:32, Steven Whitehouse wrote:
>>> Hi,
>>>
>>> On Fri, 2011-07-22 at 12:08 +0200, Jordi Renye wrote:
>>>> Hi,
>>>>
>>>> We have configured a Red Hat cluster on RHEL 6.1 with two nodes.
>>>> We have seen that the write performance of GFS2 is
>>>> about half that of an ext3 partition.
>>>>
>>>> For example, time of commands:
>>>>
>>>> time cp -Rp /usr /gfs2partition/usr
>>>> 0.681u 47.082s 7:01.80 11.3%    0+0k 561264+2994832io 0pf+0w
>>>>
>>>> whereas
>>>>
>>>> time cp -R /usr /ext3partition/usr
>>>> 0.543u 24.041s 4:16.86 9.5%     0+0k 2728584+3166184io 2pf+0w
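A copy timing like the one above can be reproduced portably with a small script; this is a minimal sketch, where the source and destination paths are placeholders for the filesystems under test:

```python
# Minimal sketch: time a recursive copy, as in the cp comparison above.
# The paths in the usage comment are placeholders, not real mount points.
import shutil
import time

def timed_copy(src, dst):
    """Recursively copy src to dst and return elapsed wall-clock seconds."""
    t0 = time.monotonic()
    shutil.copytree(src, dst)
    return time.monotonic() - t0

# Example (hypothetical paths):
# print(timed_copy("/usr", "/gfs2partition/usr"))
```

Note that a cold-cache run (e.g. after dropping caches) gives a fairer comparison than back-to-back copies of the same tree.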
>>>>
>>>> With the ping_pong tool from Samba.org we got the following results:
>>>>
>>>> ping_pong /gfs2partition/pingpongtestfile 3
>>>> 1582 locks/sec
>>>>
>>>> With the ping_pong read/write test:
>>>>
>>>> ping_pong -rw /gfs2partition/pingpongtestfile 3
>>>> data increment = 2
>>>> 4 locks/sec
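What ping_pong measures can be sketched as a tight loop of fcntl byte-range lock/unlock cycles. The following is a minimal single-process Python approximation against a local file; the real tool runs one instance per cluster node so the locks contend through the DLM, which is why the cross-node numbers above are so much lower:

```python
# Minimal single-process sketch of a ping_pong-style fcntl lock-rate test.
# The real tool contends locks between cluster nodes via the DLM; this
# version only measures local fcntl lock/unlock overhead.
import fcntl
import os
import time

def lock_rate(path, duration=0.5):
    """Count fcntl byte-range lock/unlock cycles per second on path."""
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)
    os.write(fd, b"\0" * 4)  # ping_pong uses num_locks + 1 bytes
    count = 0
    deadline = time.monotonic() + duration
    while time.monotonic() < deadline:
        fcntl.lockf(fd, fcntl.LOCK_EX, 1, 0)  # lock byte 0
        fcntl.lockf(fd, fcntl.LOCK_UN, 1, 0)  # release it
        count += 1
    os.close(fd)
    return count / duration

print(f"{lock_rate('/tmp/pingpongtestfile'):.0f} locks/sec")
```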
>>>>
>>>> Do you think we can get better performance? Do you think
>>>> these are "normal" and "good" results?
>>>>
>>>> What do you recommend to get better performance?
>>>>
>>>> For example, we don't have a dedicated heartbeat network; we
>>>> have only one network interface for both the application network and the
>>>> cluster network.
>>>> Could we get better performance with a dedicated cluster network (for
>>>> DLM, heartbeat, ...)?
>>>>
>>>> Thanks in advance,
>>>>
>>> It depends on what you are trying to optimise for... what is the actual
>>> application that you want to run?
>>>
>>> cp doesn't use fcntl locks to the best of my knowledge, so I doubt that
>>> will have any particular effect on the performance. Also it would be
>>> quite unusual for fcntl locks to have any effect on the performance of
>>> the fs as a whole.
>>>
>>> Usually the most important factor is how the workload is balanced
>>> between nodes. Also, did you mount with noatime, nodiratime?
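Whether a mount point actually carries noatime/nodiratime can be verified by parsing /proc/mounts; a minimal sketch (the mount point below is a placeholder):

```python
# Sketch: report the mount options for a given mount point by parsing
# /proc/mounts. The mount point queried below is a placeholder.
def mount_options(mount_point, mounts_file="/proc/mounts"):
    """Return the option set for mount_point, or None if not mounted."""
    with open(mounts_file) as f:
        for line in f:
            fields = line.split()
            if len(fields) >= 4 and fields[1] == mount_point:
                return set(fields[3].split(","))
    return None

opts = mount_options("/gfs2partition")
if opts is not None:
    print("noatime set:", "noatime" in opts)
```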
>>>
>>> Steve.
>>>
>>>
>>> --
>>> Linux-cluster mailing list
>>> Linux-cluster at redhat.com
>>> https://www.redhat.com/mailman/listinfo/linux-cluster
>>
>


-- 

        Jordi Renye Capel
o o o  Systems Technician N1
o o o  Laboratori de Càlcul
o o o  Facultat d'Informàtica de Barcelona
U P C  Universitat Politècnica de Catalunya - Barcelona Tech

        E-mail : jordir at fib.upc.edu
        Tel.   : 16943
        Web    : http://www.fib.upc.edu/
