
Re: a comparison of ext3, jfs, and xfs on hardware raid

On Jul 13, 2005  17:12 -0700, Jeffrey W. Baker wrote:
> I'm setting up a new file server and I just can't seem to get the
> expected performance from ext3.  Unfortunately I'm stuck with ext3 due
> to my use of Lustre.  So I'm hoping you dear readers will send me some
> tips for increasing ext3 performance.
> The system is using an Areca hardware raid controller with 5 7200RPM
> SATA disks.  The RAID controller has 128MB of cache and the disks each
> have 8MB.  The cache is write-back.  The system is Linux 2.6.12 on amd64
> with 1GB system memory.
> Using bonnie++ with a 10GB fileset, in MB/s:
>          ext3    jfs    xfs
> Read     112     188    141
> Write     97     157    167
> Rewrite   51      71     60
> These numbers were obtained using the mkfs defaults for all filesystems
> and the deadline scheduler.  As you can see, JFS is kicking butt on this
> test.

One thing that is important for Lustre is performance of EAs.  See
http://samba.org/~tridge/xattr_results/ for a comparison.  Lustre
uses large inodes (-I 256 or larger) to store the EAs efficiently.
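If you end up rebuilding the filesystem anyway, the larger inodes come
from the mke2fs inode-size option.  A minimal sketch (the device name is
only a placeholder for your Areca array):

    mke2fs -j -I 256 /dev/sdX

With 256-byte inodes the Lustre EAs should fit inside the inode itself
rather than spilling into a separate EA block, avoiding an extra seek
per file.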

> Next I used pgbench to test parallel random I/O.  pgbench has
> configurable number of clients and transactions per client, and can
> change the size of its database.  I used a database of 100 million
> tuples (scale factor 1000).  I timed 100,000 transactions on each
> filesystem, with 10 and 100 clients per run.  Figures are in
> transactions per second.
>               ext3  jfs  xfs
> 10 Clients      55   81   68
> 100 Clients     61  100   64
> Here XFS is not substantially faster than ext3, but JFS continues to lead.
> JFS is roughly 60% faster than ext3 on pgbench and 40-70% faster on
> bonnie++ linear I/O.

This is a bit surprising; I've rarely seen JFS lead in performance
tests.  Is pgbench at all related to dbench?  The problem with dbench
is that it reports its best results in cases where the filesystem does
no I/O at all.  In real life the data has to make it to disk at some
point.

See http://sudhaa.com/~benchmark/ext3/newtiobenchresults.ext3gold/newtiobench/newtiobench.html
for a comparison of ext3, xfs, jfs in the mode that Lustre runs in
(specifically columns 7, 14, and 18).

> Are there any tunables that I might want to adjust to get better
> performance from ext3?

Try creating your ext3 filesystem with a larger journal, as Lustre does:

mkfs.ext3 -J size=400 ...

The size is in MB; 400MB may be excessive for your setup - I'd be interested
in hearing where the "sweet spot" is for journal size.  The latest e2fsprogs
uses 128MB as the largest default journal size (up from 32MB) for large
filesystems.
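If you want to map out the sweet spot yourself, something along these
lines would sweep a few journal sizes (an untested sketch - the device,
mount point, and bonnie++ arguments are placeholders to adapt):

    for jsize in 32 64 128 256 400; do
        # -J size is the journal size in MB
        mke2fs -j -J size=$jsize /dev/sdX
        mount -t ext3 /dev/sdX /mnt/test
        bonnie++ -d /mnt/test -s 10g -u nobody
        umount /mnt/test
    done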

Cheers, Andreas
Andreas Dilger
Principal Software Engineer
Cluster File Systems, Inc.
