a comparison of ext3, jfs, and xfs on hardware raid

Sonny Rao sonny at burdell.org
Thu Jul 14 18:53:16 UTC 2005


On Thu, Jul 14, 2005 at 12:33:30PM -0600, Andreas Dilger wrote:
> On Jul 13, 2005  17:12 -0700, Jeffrey W. Baker wrote:
> > I'm setting up a new file server and I just can't seem to get the
> > expected performance from ext3.  Unfortunately I'm stuck with ext3 due
> > to my use of Lustre.  So I'm hoping you dear readers will send me some
> > tips for increasing ext3 performance.
> > 
> > The system is using an Areca hardware raid controller with 5 7200RPM
> > SATA disks.  The RAID controller has 128MB of cache and the disks each
> > have 8MB.  The cache is write-back.  The system is Linux 2.6.12 on amd64
> > with 1GB system memory.
> > 
> > Using bonnie++ with a 10GB fileset, in MB/s:
> > 
> >          ext3    jfs    xfs
> > Read     112     188    141
> > Write     97     157    167
> > Rewrite   51      71     60
> > 
> > These numbers were obtained using the mkfs defaults for all filesystems
> > and the deadline I/O scheduler.  As you can see, JFS is kicking butt on
> > this test.
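
For reference, a minimal sketch of how a run like this can be
reproduced (the device, mount point, and user below are placeholders;
the bonnie++ fileset should stay well beyond RAM to defeat caching):

  echo deadline > /sys/block/sdX/queue/scheduler  # per-device I/O scheduler
  mkfs.ext3 /dev/sdX                              # mkfs defaults, as above
  mount /dev/sdX /mnt/test
  bonnie++ -d /mnt/test -s 10240 -u nobody        # 10GB fileset (size in MB)
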
> 
> One thing that is important for Lustre is performance of EAs.  See
> http://samba.org/~tridge/xattr_results/ for a comparison.  Lustre
> uses large inodes (-I 256 or larger) to store the EAs efficiently.
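
For example, something along these lines at mkfs time (sketch only;
the device name, mount point, and file are placeholders):

  mkfs.ext3 -I 256 /dev/sdX                # 256-byte inodes hold EAs in-inode
  mount -o user_xattr /dev/sdX /mnt/test   # user.* EAs need this on ext3
  setfattr -n user.test -v somevalue /mnt/test/somefile   # store an EA
  getfattr -d /mnt/test/somefile           # dump the EAs back
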
> 
> > Next I used pgbench to test parallel random I/O.  pgbench has a
> > configurable number of clients and transactions per client, and can
> > change the size of its database.  I used a database of 100 million
> > tuples (scale factor 1000).  I timed 100,000 transactions on each
> > filesystem, with 10 and 100 clients per run.  Figures are in
> > transactions per second.
> > 
> >               ext3  jfs  xfs
> > 10 Clients      55   81   68
> > 100 Clients     61  100   64
> > 
> > Here XFS is not substantially faster than ext3, but JFS continues to lead.
> > 
> > JFS is roughly 60% faster than ext3 on pgbench and 40-70% faster on
> > bonnie++ linear I/O.
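
For reference, a sketch of the pgbench invocations implied above,
assuming a stock PostgreSQL install with its data directory on the
filesystem under test (testdb is a placeholder database name):

  pgbench -i -s 1000 testdb      # init at scale 1000, ~100 million tuples
  pgbench -c 10 -t 10000 testdb  # 10 clients x 10,000 txns = 100,000 total
  pgbench -c 100 -t 1000 testdb  # 100 clients x 1,000 txns = 100,000 total
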
> 
> This is a bit surprising; I've never heard of JFS leading in many
> performance tests.  Is pgbench at all related to dbench?  The problem
> with dbench is that a filesystem which does no I/O at all reports the
> best result.  In real life the data has to make it to disk at some
> point.

JFS tends to lead in two areas: low CPU utilization compared to other
filesystems, and very good on-disk layout on a freshly created
filesystem.

The low CPU utilization helps in environments with a lot of
filesystems or just a lot of I/O going on; on SPEC SFS we've seen JFS
tend to be the best for exactly that reason.  (Yes, SPEC SFS is a
rather crazy workload, but then so are a lot of other common ones.)

JFS's main weak point is metadata-intensive workloads (like dbench),
because of deficiencies in the logging system and some poorly placed
synchronous operations, which are currently being tackled.
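
For anyone who wants to see that effect, a typical dbench run looks
roughly like this (the mount point is a placeholder, and per the
caveat above, the numbers mean little unless the data actually
reaches disk):

  dbench -D /mnt/test 10   # 10 simulated clients doing metadata-heavy I/O
  sync                     # flush dirty data before trusting the result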

We've also been slowly pushing in changes to improve JFS performance;
some of them have made it into 2.6.12.

Sonny



