[rhelv6-list] RHEL6.2 XFS brutal performence with lots of files
Daryl Herzmann
akrherz at iastate.edu
Mon Apr 15 14:02:28 UTC 2013
Thanks for your help.
On Mon, Apr 15, 2013 at 8:39 AM, Pat Riehecky <riehecky at fnal.gov> wrote:
> I've run into some terrible performance when I've had a lot of
> add/remove actions happening on the filesystem in parallel. It was mostly
> due to fragmentation. Alas, XFS can get some horrid fragmentation.
>
> xfs_db -c frag -r /dev/<node>
>
> should give you the stats on its fragmentation.
>
# xfs_db -c frag -r /dev/md127
actual 140539575, ideal 139998847, fragmentation factor 0.38%
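
For reference, the percentage xfs_db prints appears to be just the excess extent count over the actual count; a quick sketch (my own helper, not part of xfs_db) reproducing it from the numbers above:

```python
# Hypothetical helper: recompute xfs_db's fragmentation factor from
# the "actual" and "ideal" extent counts it reports.
def frag_factor(actual: int, ideal: int) -> float:
    """Fragmentation factor as a percentage: excess extents / actual extents."""
    return (actual - ideal) / actual * 100.0

print(f"{frag_factor(140539575, 139998847):.2f}%")  # 0.38%, matching the output above
```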
Here's an iostat snapshot taken while I was running xfs_db; the tps numbers
for md127 seem strange. sd[b-f] are part of the RAID5....
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           5.12    0.00    7.12    8.81    0.00   78.94

Device:          tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda            35.00      1032.00       656.00       1032        656
sdc           458.00     32451.00       680.00      32451        680
sde           451.00     30936.00       544.00      30936        544
sdb           573.00     32448.00       784.00      32448        784
sdd           527.00     30728.00       696.00      30728        696
sdf           593.00     31736.00       624.00      31736        624
md127      157986.00    157983.00      1592.00     157983       1592
> I can't speak for others, but I've got 'xfs_fsr' linked into
> /etc/cron.weekly/ on my personal systems with large XFS filesystems.
>
Seems like I shouldn't have to do that given the numbers above?
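
For anyone who does want the weekly pass Pat describes, a minimal sketch of such a cron script (the path and time limit here are my assumptions, not details from the thread; 7200 seconds is xfs_fsr's own default run limit):

```shell
#!/bin/sh
# /etc/cron.weekly/xfs_fsr  (hypothetical) -- weekly online defrag pass
# over all mounted XFS filesystems; -t caps the run at two hours.
/usr/sbin/xfs_fsr -t 7200
```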
daryl
--
> Pat Riehecky
>
> Scientific Linux developer  http://www.scientificlinux.org/
>
>
> _______________________________________________
> rhelv6-list mailing list
> rhelv6-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rhelv6-list
>