[Linux-cluster] GFS: more simple performance numbers

Daniel McNeil daniel at osdl.org
Tue Oct 19 23:44:06 UTC 2004


On Tue, 2004-10-19 at 11:05, Derek Anderson wrote:
> I've rerun the simple performance tests originally run by Daniel McNeil with 
> the addition of the gulm lock manager on the 2.6.8.1 kernel and GFS 6.0 on 
> the 2.4.21-20.EL kernel.
> 
> Notes:
> ======
> Storage:        RAID Array Tornado- Model: F4 V2.0
> HBA:            QLA2310
> Switch:         Brocade Silkworm 3200
> Nodes:          Dual Intel Xeon 2.40GHz
>                 2GB memory
>                 100Mb/s Ethernet
>                 2.6.8.1 Kernel/2.4.21-20.EL Kernel (with gfs 6)
> GuLM:           3-node cluster, 1 external dedicated lock manager
> DLM:            3-node cluster
> LVM:            Not used
> 
> 
> tar xvf linux-2.6.8.1.tar:
> --------------------------
>                         real            user            sys
> ext3 tar                0m3.509s        0m0.262s        0m2.471s
> ext3 sync               0m1.051s        0m0.001s        0m0.204s
> 
> gfs dlm 1 node tar      0m19.480s       0m0.474s        0m8.975s
> gfs dlm 1 node sync     0m3.167s        0m0.000s        0m0.195s
> 
> gfs dlm 2 node tar      0m29.225s       0m0.465s        0m9.670s
> gfs dlm 2 node sync     0m3.398s        0m0.000s        0m0.224s
> gfs dlm 2 node tar      0m43.355s       0m0.562s        0m10.159s
> gfs dlm 2 node sync     0m4.922s        0m0.000s        0m0.226s
> 
> gfs gulm 1 node tar     0m36.568s       0m0.491s        0m7.831s
> gfs gulm 1 node sync    0m3.243s        0m0.000s        0m0.152s
> 
> gfs gulm 2 node tar     0m57.271s       0m0.527s        0m8.025s
> gfs gulm 2 node sync    0m2.471s        0m0.000s        0m0.145s
> gfs gulm 2 node tar     0m40.271s       0m0.482s        0m7.799s
> gfs gulm 2 node sync    0m3.636s        0m0.000s        0m0.224s
> 
> gfs 6 gulm 1 node tar   0m32.671s       0m0.480s        0m7.480s
> gfs 6 gulm 1 node sync  0m3.436s        0m0.000s        0m0.120s
> 
> gfs 6 gulm 2 node tar   0m38.130s       0m0.440s        0m6.920s
> gfs 6 gulm 2 node sync  0m3.731s        0m0.000s        0m0.110s
> gfs 6 gulm 2 node tar   0m58.564s       0m0.500s        0m6.670s
> gfs 6 gulm 2 node sync  0m0.971s        0m0.000s        0m0.070s
> 
> du -s linux-2.6.8.1 (after untar):
> ----------------------------------
>                         real            user            sys
> ext3                    0m0.103s        0m0.018s        0m0.055s
> 
> gfs dlm 1 node          0m5.149s        0m0.041s        0m1.905s
> 
> gfs dlm 2 node          0m11.127s       0m0.045s        0m1.839s
> gfs dlm 2 node          0m8.883s        0m0.033s        0m1.904s
> 
> gfs gulm 1 node         0m0.355s        0m0.025s        0m0.330s
> 
> gfs gulm 2 node         0m0.358s        0m0.024s        0m0.334s
> gfs gulm 2 node         0m0.358s        0m0.020s        0m0.338s
> 
> gfs 6 gulm 1 node       0m0.314s        0m0.010s        0m0.290s
> 
> gfs 6 gulm 2 node       0m0.308s        0m0.050s        0m0.250s
> gfs 6 gulm 2 node       0m0.303s        0m0.000s        0m0.310s
> 
> Second du -s linux-2.6.8.1:
> ---------------------------
>                         real            user            sys
> ext3                    0m0.074s        0m0.025s        0m0.049s
> 
> gfs dlm 1 node          0m0.341s        0m0.027s        0m0.314s
> 
> gfs dlm 2 node          0m0.325s        0m0.024s        0m0.300s
> gfs dlm 2 node          0m0.324s        0m0.020s        0m0.304s
> 
> gfs gulm 1 node         0m0.354s        0m0.022s        0m0.332s
> 
> gfs gulm 2 node         0m0.357s        0m0.023s        0m0.334s
> gfs gulm 2 node         0m0.359s        0m0.021s        0m0.338s
> 
> gfs 6 gulm 1 node       0m0.299s        0m0.020s        0m0.280s
> 
> gfs 6 gulm 2 node       0m0.299s        0m0.000s        0m0.300s
> gfs 6 gulm 2 node       0m0.302s        0m0.010s        0m0.290s
> 
> rm -rf linux-2.6.8.1:
> ---------------------
>                         real            user            sys
> ext3                    0m0.695s        0m0.013s        0m0.646s
> 
> gfs dlm 1 node          0m10.056s       0m0.038s        0m4.720s
> 
> gfs dlm 2 node          0m12.032s       0m0.043s        0m4.789s
> gfs dlm 2 node          0m13.803s       0m0.052s        0m4.787s
> 
> gfs gulm 1 node         0m14.152s       0m0.066s        0m3.409s
> 
> gfs gulm 2 node         0m12.408s       0m0.039s        0m2.355s
> gfs gulm 2 node         0m14.216s       0m0.038s        0m2.560s
> 
> gfs 6 gulm 1 node       4m30.759s       0m0.140s        0m3.890s
> 
> gfs 6 gulm 2 node       4m42.095s       0m0.060s        0m5.580s
> gfs 6 gulm 2 node       4m49.479s       0m0.140s        0m4.450s
> 
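The timed sequence above (untar, sync, first du, second du, rm) can be sketched as a small script. This is a hypothetical reproduction harness, not the original test driver: the original ran against linux-2.6.8.1.tar on each filesystem under test, while here a tiny synthetic tree stands in so the script runs anywhere.

```shell
#!/bin/bash
# Sketch of the benchmark sequence from the tables above.
# Assumption: timings were taken with the shell's `time` builtin;
# the tarball and tree here are small stand-ins for linux-2.6.8.1.tar.
set -e
WORK=$(mktemp -d)
cd "$WORK"

# Build a small stand-in source tree and tar it up.
mkdir -p tree
for i in 1 2 3 4 5; do echo "data $i" > "tree/file$i"; done
tar cf tree.tar tree
rm -rf tree

# The operations timed in the tests, in order:
time tar xf tree.tar   # untar
time sync              # flush dirty data
time du -s tree        # first du: stats every inode (hits the lock manager on GFS)
time du -s tree        # second du: served from cache
time rm -rf tree       # remove the tree

cd /
rm -rf "$WORK"
```

On a cluster filesystem the interesting variable is which node (and how many nodes) run this concurrently; the first du is where lock traffic dominates, which is why the cached second du converges across lock managers in the tables.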

What kind of RAID setup is this (RAID level, stripe size, etc.)?
You are seeing some decent scaling, whereas I am not.
You have much faster processors.
Is the DLM code CPU-intensive?
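The scaling in question can be quantified directly from the tar numbers above. A quick sketch, averaging the two 2-node runs for each configuration (real time, tar extraction only):

```python
# Wall-clock tar times copied from the tables above (seconds).
# "Slowdown" = mean 2-node per-node time divided by the 1-node time;
# 1.0x would be perfect scaling.
times = {
    "gfs dlm":    {"1-node": 19.480, "2-node": (29.225 + 43.355) / 2},
    "gfs gulm":   {"1-node": 36.568, "2-node": (57.271 + 40.271) / 2},
    "gfs 6 gulm": {"1-node": 32.671, "2-node": (38.130 + 58.564) / 2},
}
for fs, t in times.items():
    ratio = t["2-node"] / t["1-node"]
    print(f"{fs}: {ratio:.2f}x slower per node with 2 nodes")
```

By this measure GuLM degrades less going to two nodes, although its single-node baseline is much slower than DLM's.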

Daniel
