[Linux-cluster] GFS block size

Adam Drew adrew at redhat.com
Tue Jan 4 19:17:38 UTC 2011


If your average file size is less than 1k, then a 1k block size may be a good option. If the data fits in a single block you get the minor performance boost of a stuffed inode: the data is stored in the inode block itself, so GFS never has to walk from the inode out to a separate data block. The boost per operation is small, but it can add up to larger gains over time with lots of transactions. Conversely, if your average data payload is smaller than the default 4k block size, the unused remainder of each block is wasted space. So, from a filesystem perspective, using a 1k block size to store mostly sub-1k files may be a good idea.
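As a back-of-the-envelope illustration of the space accounting, here is a rough sketch. The header size used below is an assumption for illustration, not the exact GFS1 dinode header, so check your own version before relying on the cutoff:

```shell
# Rough space accounting for one ~1000-byte file at two block sizes.
# HEADER is an assumed on-disk inode header size, not an exact GFS1 figure;
# data is "stuffed" into the inode block only if it fits after the header.
FILE=1000
HEADER=232
for BS in 1024 4096; do
    AVAIL=$((BS - HEADER))
    if [ "$FILE" -le "$AVAIL" ]; then
        BLOCKS=1                                 # stuffed: data lives in the inode block
    else
        BLOCKS=$((1 + (FILE + BS - 1) / BS))     # inode block + data blocks
    fi
    echo "bs=$BS blocks=$BLOCKS bytes_on_disk=$((BLOCKS * BS))"
done
```

Note that with these assumed numbers a 1000-byte file does not quite stuff at 1k (1024 - 232 = 792 bytes of room), yet it still occupies half the space it would at 4k.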

You may additionally want to experiment with reducing your resource group size. Blocks are organized into resource groups, and if you are using 1k blocks and sub-1k files you'll end up with a very large number of stuffed inodes per resource group. Some operations in GFS (such as deletes) require locking the resource group metadata, so you may start to see performance bottlenecks depending on your usage patterns and disk layout.
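Both knobs are set at mkfs time. Something like the following would be the shape of it; the device path, cluster and filesystem names, and journal count are placeholders, and -r takes the resource group size in megabytes:

```shell
# Hypothetical example: 1k blocks with smaller resource groups.
# Device and names below are placeholders; mkfs destroys existing data,
# so only try this on a scratch volume.
mkfs.gfs -p lock_dlm -t mycluster:mytestfs -j 3 \
         -b 1024 \
         -r 128  \
         /dev/test_vg/test_lv
```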

All in all, I'd be skeptical of claims of large performance gains over time from changing rg size and block size, though modest gains may be had, and some access patterns and filesystem layouts will benefit more from such tweaking than others. However, I would expect the most significant gains (in GFS1 at least) to come from mount options and tunables.
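For example, these are common starting points; the mount point and device are placeholders, and tunable names and defaults vary by GFS version, so inspect your own mount with gettune before changing anything:

```shell
# Hypothetical starting points for GFS1 tuning (paths are placeholders).
mount -o noatime,nodiratime /dev/test_vg/test_lv /mnt/gfs  # avoid atime write traffic

gfs_tool gettune /mnt/gfs                 # list the current tunables and values
gfs_tool settune /mnt/gfs glock_purge 50  # e.g. purge a percentage of unused glocks
```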

Regards,
Adam Drew

----- Original Message -----
From: "juncheol park" <nukejun at gmail.com>
To: "linux clustering" <linux-cluster at redhat.com>
Sent: Tuesday, January 4, 2011 1:42:45 PM
Subject: Re: [Linux-cluster] GFS block size

I also experimented with a 1k block size on GFS1. Although a smaller
block size can improve disk usage, the usual recommendation is to use
a block size equal to the page size, which is 4k on Linux.
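You can confirm the page size on a given box before picking a block size to match:

```shell
# Print the kernel page size in bytes (4096 on typical x86 Linux).
getconf PAGE_SIZE
```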

I don't remember all the details of the results. However, for large
files, the overall read/write performance with a 1k block size was
much worse than with 4k. That is to be expected, though. If you can
accept the performance degradation for large files, using 1k should
be fine for you.
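The large-file penalty is visible in the data block counts alone (and this sketch ignores indirect-block metadata, which makes 1k look even worse in practice):

```shell
# Data blocks needed for a 1 GiB file (metadata/indirect blocks ignored).
SIZE=$((1024 * 1024 * 1024))
for BS in 1024 4096; do
    echo "bs=$BS data_blocks=$((SIZE / BS))"
done
```

Four times as many blocks means four times as many block allocations and lookups for the same payload.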

Just my two cents,

-Jun


On Fri, Dec 17, 2010 at 3:53 PM, Jeff Sturm <jeff.sturm at eprize.com> wrote:
> One of our GFS filesystems tends to have a large number of very small files,
> on average about 1000 bytes each.
>
>
>
> I realized this week we'd created our filesystems with default options.  As
> an experiment on a test system, I've recreated a GFS filesystem with "-b
> 1024" to reduce overall disk usage and disk bandwidth.
>
>
>
> Initially, tests look very good—single file creates are less than one
> millisecond on average (down from about 5ms each).  Before I go very far
> with this, I wanted to ask:  Has anyone else experimented with the block
> size option, and are there any tricks or gotchas to report?
>
>
>
> (This is with CentOS 5.5, GFS 1.)
>
>
>
> -Jeff
>
>
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
>


