[Cluster-devel] [GFS2 PATCH] GFS2: Block reservation doubling scheme
Steven Whitehouse
swhiteho at redhat.com
Fri Oct 10 09:07:06 UTC 2014
Hi,
On 10/10/14 04:39, Bob Peterson wrote:
> ----- Original Message -----
>> ----- Original Message -----
>>> This patch introduces a new block reservation doubling scheme. If we
>> Maybe I sent this patch out prematurely. Instead of doubling the
>> reservation, maybe I should experiment with making it grow additively.
>> IOW, instead of 32-64-128-256-512, I should use:
>> 32-64-96-128-160-192-224-etc...
>> I know other file systems use doubling schemes, but I'm concerned
>> about it being too aggressive.
> I tried an additive reservation algorithm. I basically changed the
> previous patch from doubling the reservation to adding 32 blocks.
> In other words, I replaced:
>
> + ip->i_rsrv_minblks <<= 1;
> with this:
> + ip->i_rsrv_minblks += RGRP_RSRV_MINBLKS;
>
> The results were not as good, but still very impressive, and maybe
> acceptable:
>
> Reservation doubling scheme:
> EXTENT COUNT FOR OUTPUT FILES = 310103
> EXTENT COUNT FOR OUTPUT FILES = 343990
> EXTENT COUNT FOR OUTPUT FILES = 332818
> EXTENT COUNT FOR OUTPUT FILES = 336852
> EXTENT COUNT FOR OUTPUT FILES = 334820
>
> Reservation additive scheme (32 blocks):
> EXTENT COUNT FOR OUTPUT FILES = 322406
> EXTENT COUNT FOR OUTPUT FILES = 341665
> EXTENT COUNT FOR OUTPUT FILES = 341769
> EXTENT COUNT FOR OUTPUT FILES = 348676
> EXTENT COUNT FOR OUTPUT FILES = 348079
>
> So I'm looking for opinions:
> (a) Stick with the original reservation doubling patch,
> (b) Go with the additive version, or
> (c) Any other ideas?
>
> Regards,
>
> Bob Peterson
> Red Hat File Systems
I think you are very much along the right lines. The issue is to ensure
that all the available evidence is taken into account when figuring out
how large a reservation to make. There are various clues, such as the
time between writes, the size of the writes, whether the file gets
closed between writes, whether the writes are contiguous, and so forth.
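Just to make that concrete, here is a userspace sketch of the kind of
thing I mean (nothing below is from the patch except the 32 block
minimum; the other names, and the cap, are made up). It grows the
reservation aggressively only while the writes look contiguous:

#include <stdint.h>

#define RSRV_MINBLKS 32u	/* matches RGRP_RSRV_MINBLKS in the patch */
#define RSRV_MAXBLKS 2048u	/* hypothetical upper bound */

struct wr_hints {
	uint64_t next_off;	/* block offset where a contiguous write lands */
	uint32_t rsrv;		/* current minimum reservation, in blocks */
};

static uint32_t size_reservation(struct wr_hints *h, uint64_t off,
				 uint64_t blocks)
{
	if (h->rsrv == 0) {
		h->rsrv = RSRV_MINBLKS;		/* first write: start small */
	} else if (off == h->next_off) {
		/* Contiguous stream: grow. This is the doubling variant;
		 * the additive one would be h->rsrv += RSRV_MINBLKS. */
		h->rsrv <<= 1;
		if (h->rsrv > RSRV_MAXBLKS)
			h->rsrv = RSRV_MAXBLKS;
	} else {
		/* Non-contiguous write: assume a new stream, start over. */
		h->rsrv = RSRV_MINBLKS;
	}

	h->next_off = off + blocks;
	return h->rsrv;
}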
Some of those things are taken into account already; however, we can
probably do better. We may also be able to take hints from things like
calls to fsync (should we drop small reservations when fsync is called,
since that likely marks a significant point in the file?), or even
detect well-known non-linear write patterns, e.g. backwards stride
patterns or large matrix access patterns (by row or column).
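For the fsync case, the hook could be as small as this (again purely
illustrative, reusing the wr_hints sketch above):

static void rsrv_fsync_hint(struct wr_hints *h)
{
	/* If the reservation never grew past the minimum, the writer is
	 * probably not streaming; give the blocks back and re-acquire
	 * a fresh reservation on the next write. */
	if (h->rsrv <= RSRV_MINBLKS)
		h->rsrv = 0;
}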
The struct file is really the best place to store this context
information, since if there are multiple writers to the same inode, then
there is a fair chance that they'll have separate struct files. Does
this happen in your test workload?
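In practice that would mean hanging the hint state off the per-open
private data rather than the inode; something like this, where
gfs2_open_ctx and example_open are invented names, not the real GFS2
structures:

#include <linux/fs.h>
#include <linux/slab.h>

struct gfs2_open_ctx {
	struct wr_hints hints;	/* per-fd write history, per sketch above */
};

static int example_open(struct inode *inode, struct file *file)
{
	struct gfs2_open_ctx *ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);

	if (!ctx)
		return -ENOMEM;
	file->private_data = ctx;	/* one predictor per struct file */
	return 0;
}

That way two processes writing to the same inode through separate opens
each get their own predictor, rather than fighting over one.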
The readahead code can already detect some common read patterns, and it
also turns itself off if the reads are random. The readahead problem is
actually very much the same problem in that it tries to estimate which
reads are coming next based on the context that has been seen already,
so there may well be some lessons to be learned from that too.
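The turn-itself-off behaviour is worth copying too: after a few misses
in a row, stop predicting entirely rather than thrashing the
reservation. A crude sketch of that (MISS_LIMIT and the rest are
invented, building on the earlier sketch):

#define MISS_LIMIT 4	/* consecutive non-contiguous writes tolerated */

struct seq_detect {
	uint64_t next_off;	/* expected offset of the next write */
	unsigned int misses;	/* non-contiguous writes seen in a row */
};

static int stream_detected(struct seq_detect *d, uint64_t off, uint64_t len)
{
	int streaming = 1;

	if (off == d->next_off)
		d->misses = 0;		/* back on the stream */
	else if (++d->misses >= MISS_LIMIT)
		streaming = 0;		/* looks random: back off */

	d->next_off = off + len;
	return streaming;
}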
I think it's important to look at the statistics of lots of different
workloads, and to check them against your candidate algorithm(s), to
ensure that the widest range of potential access patterns is taken into
account.
Steve.