[rhelv6-list] RHEL6.2 XFS brutal performance with lots of files

Jussi Silvennoinen jussi_rhel6 at silvennoinen.net
Mon Apr 15 08:50:35 UTC 2013


> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>           11.12    0.03    2.70    3.60    0.00   82.56
> 
> Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
> md127           134.36     10336.87     11381.45 19674692141 21662893316

Use iostat -x to see more detail; the extended statistics give a much 
better indication of how busy the individual disks actually are.
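
For example, something like this (5-second intervals, three samples):

  # iostat -x 5 3

Look at %util, await and avgqu-sz on the md member disks rather than on 
md127 itself. A member sitting near 100% %util with high await while the 
array pushes only modest throughput is the classic sign of an array that 
is seek-bound, not bandwidth-bound.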

> Running hdparm on one of the software raid5 drives reports decent numbers.
> 
> /dev/sdb:
>  Timing cached reads:   12882 MB in  2.00 seconds = 6448.13 MB/sec
>  Timing buffered disk reads:  396 MB in  3.06 seconds = 129.39 MB/sec
>
> running some crude dd tests shows reasonable numbers, I think.
> 
> # dd bs=1M count=1280 if=/dev/zero of=test conv=fdatasync
> 1342177280 bytes (1.3 GB) copied, 29.389 s, 45.7 MB/s

Yes, crude and not very useful: a single streaming write only measures 
sequential bandwidth, which says nothing about how the filesystem behaves 
with millions of small files.
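
A test closer to this workload would hammer small files and metadata. A 
crude sketch (the test directory path is just an example; run it on an 
otherwise idle system and clean up afterwards):

  # mkdir /mesonet/smallfile-test && cd /mesonet/smallfile-test
  # time bash -c 'for i in $(seq 1 20000); do echo data > file$i; done; sync'
  # echo 3 > /proc/sys/vm/drop_caches
  # time ls -lR . > /dev/null

If the create and listing phases crawl while dd looks fine, the bottleneck 
is seeks and metadata, not raw bandwidth.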

> I have other similar filesystems on ext4 with similar hardware and
> millions of small files as well.  I don't see such sluggishness with small
> files and directories there.  I guess I picked XFS for this filesystem
> initially because of its fast fsck times.

Are those other systems also using software RAID? In my experience, 
software RAID5 is painfully slow with random writes: every write smaller 
than a full stripe forces a read-modify-write cycle to recompute parity. 
And your workload in this use case is exactly that.
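
One thing worth trying is enlarging md's stripe cache, which lets more 
read-modify-write cycles coalesce (256 is the kernel default; 4096 here is 
just an example value, and the cache costs stripe_cache_size * 4 KiB per 
member disk):

  # cat /sys/block/md127/md/stripe_cache_size
  # echo 4096 > /sys/block/md127/md/stripe_cache_size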

> # grep md127 /proc/mounts 
> /dev/md127 /mesonet xfs
> rw,noatime,attr2,delaylog,sunit=1024,swidth=4096,noquota 0 0

The inode64 mount option is not used; I suspect it would have helped a 
lot. Without it, XFS confines all inodes to the first 1 TB of the device, 
so on a large filesystem inodes and their data end up far apart and every 
access pays extra seeks. Enabling it afterwards will not move data that is 
already on disk, but it will help with new files.
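
To pick it up, unmount and mount again with the option (a plain remount 
may not accept it on older kernels), roughly:

  # umount /mesonet
  # mount -o noatime,inode64 /dev/md127 /mesonet

and add inode64 to the /etc/fstab entry so it survives a reboot.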


-- 

   Jussi

