[Cluster-devel] [GFS2 PATCH] GFS2: Block reservation doubling scheme

Bob Peterson rpeterso at redhat.com
Tue Oct 14 13:44:14 UTC 2014


----- Original Message -----
> >>> This patch introduces a new block reservation doubling scheme. If we
> >> Maybe I sent this patch out prematurely. Instead of doubling the
> >> reservation, maybe I should experiment with making it grow additively.
> >> IOW, Instead of 32-64-128-256-512, I should use:
> >> 32-64-96-128-160-192-224-etc...
> >> I know other file systems use doubling schemes, but I'm concerned
> >> about it being too aggressive.
> > I tried an additive reservations algorithm. I basically changed the
> > previous patch from doubling the reservation to adding 32 blocks.
> > In other words, I replaced:
> >
> > +				ip->i_rsrv_minblks <<= 1;
> > with this:
> > +				ip->i_rsrv_minblks += RGRP_RSRV_MINBLKS;
> >
> > The results were not as good, but still very impressive, and maybe
> > acceptable:
(snip)
> I think you are very much along the right lines. The issue is to ensure
> that all the evidence that is available is taken into account in
> figuring out how large a reservation to make. There are various clues,
> such as the time between writes, the size of the writes, whether the
> file gets closed between writes, whether the writes are contiguous and
> so forth.
> 
> Some of those things are taken into account already, however we can
> probably do better. We may be able to also take some hints from things
> like calls to fsync (should we drop reservations that are small at this
> point, since it likely signifies a significant point in the file, if
> fsync is called?) or even detect well known non-linear write patterns,
> e.g. backwards stride patterns or large matrix access patterns (by row
> or column).
> 
> The struct file is really the best place to store this context
> information, since if there are multiple writers to the same inode, then
> there is a fair chance that they'll have separate struct files. Does
> this happen in your test workload?
> 
> The readahead code can already detect some common read patterns, and it
> also turns itself off if the reads are random. The readahead problem is
> actually very much the same problem in that it tries to estimate which
> reads are coming next based on the context that has been seen already,
> so there may well be some lessons to be learned from that too.
> 
> I think it's important to look at the statistics of lots of different
> workloads, and to check them off against your candidate algorithm(s), to
> ensure that the widest range of potential access patterns is taken into
> account.
> 
> Steve.

Hi Steve,

Sorry it's taken me a bit to respond. I've been giving this a lot of thought
and doing a lot of experiments and tests.

I see multiple issues/problems, and my patches have been trying to address
them separately. You make some very good points here, so I want to address
them individually in light of my latest findings.
I basically see three main performance problems:

1. Inter-node contention for resource groups. In the past, it was addressed
   with "try" locks, which resulted in chaotic block assignments.
   In RHEL7 and up, we eliminated them, but the contention came back and
   performance suffered. I posted a patch for this issue that allows each
   node in the cluster to "prefer" a unique set of resource groups. It
   greatly reduced inter-node contention and improved performance. It was
   called "GFS2: Set of distributed preferences for rgrps" and was posted
   on October 8.
2. We need to more accurately predict the size of multi-block reservations.
   This is the issue you talk about here, and so far it's one that I
   haven't addressed yet.
3. We need a way to adjust those predictions if they're found to be
   inadequate. That's the problem I was addressing with the reservation
   doubling scheme or the additive reservation scheme (there's a small
   sketch of both schemes right after this list).
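
To make the comparison concrete, here is a small stand-alone sketch of the
two growth policies discussed above. It reuses the i_rsrv_minblks field name
and the RGRP_RSRV_MINBLKS constant from the quoted patch fragment, but the
2048-block cap and the helper names are just assumptions for illustration;
this is not the actual patch code.

/* Stand-alone user-space sketch of the two growth policies; not GFS2 code. */
#include <stdio.h>

#define RGRP_RSRV_MINBLKS 32u
#define RSRV_MAXBLKS      2048u		/* assumed cap, for illustration only */

/* Doubling policy: 32, 64, 128, 256, ... */
static unsigned int grow_doubling(unsigned int minblks)
{
	minblks <<= 1;
	return minblks > RSRV_MAXBLKS ? RSRV_MAXBLKS : minblks;
}

/* Additive policy: 32, 64, 96, 128, ... */
static unsigned int grow_additive(unsigned int minblks)
{
	minblks += RGRP_RSRV_MINBLKS;
	return minblks > RSRV_MAXBLKS ? RSRV_MAXBLKS : minblks;
}

int main(void)
{
	unsigned int dbl = RGRP_RSRV_MINBLKS;
	unsigned int add = RGRP_RSRV_MINBLKS;
	int i;

	for (i = 0; i < 8; i++) {
		printf("step %d: doubling=%u additive=%u\n", i, dbl, add);
		dbl = grow_doubling(dbl);
		add = grow_additive(add);
	}
	return 0;
}

The interesting difference is simply how quickly each policy reaches large
reservations when a file keeps outgrowing its current reservation.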

Issues 2 and 3 might be treated as one issue: we could have a
self-adjusting reservation size system, based on a number of factors,
and I'm in the process of reworking how we do it. I've been doing lots of
experiments and running lots of tests against different workloads. You're
right that #2 is necessary, and I've verified that without it, some
workloads get faster while others get slower (although there's an overall
improvement).

Here are some thoughts:

1. Today, reservations are based on write size, which, as you say, is
   not a very good predictor. We can do better.
2. My reservation doubling scheme helps, and reduces fragmentation, but
   we need a more sophisticated scheme.
3. I don't think the time between writes should affect the reservation
   because different applications have different dynamics.
4. The size of the writes is already taken into account. However, the way
   we do it now is kind of bogus. With every write, we adjust the size
   hint. But if the application is doing rewrites, it shouldn't matter.
   If it's writing backwards or at random locations, it might matter.
   Last night I experimented with a new scheme that basically only
   adjusts the size hint if block allocations are necessary (there's a
   small sketch of this after the list). That way, applications that do
   a long sequence of "large appends" followed by "small rewrites" don't
   get their "append" size hint whacked by the small rewrite. This didn't
   help the customer application I'm testing, but it did help some of the
   other benchmarks I ran yesterday.
5. I don't like the idea of adjusting the reservation at fsync time.
   Similarly, I don't like the idea of adjusting the reservation at
   file close time. I think it makes the most sense to keep the info
   associated with the inode as we do today. My next iteration will
   hopefully not add any fields to the inode.
6. I like the idea of adjusting the reservation for non-linear writes,
   such as backwards writes, but I may have to do more testing. For
   example, if I do multiple writes to a file at: 2MB, 1MB, 500KB, etc.,
   is it better to reserve X blocks which will be assigned in reverse
   order? Or is it better to just reserve them as needed and have them
   more scattered but possibly more linear? Maybe testing will show.
7. In regards to storing the context information in the struct file:
   It depends on what information. Today, there is only one reservation
   structure and one reservation size per inode, whereas there can be many
   struct files for many writers to the same inode. The question of whether
   a reservation is adequate is not so much about "will this reservation
   be adequate for this writer?". Rather, it's about "will this
   reservation be adequate for our most demanding writer?" All the
   rewriters in the world shouldn't affect the outcome of a single
   aggressive appender, for example.
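
Here is a small user-space sketch of the idea in point 4 above, i.e. only
adjusting the size hint when a write actually requires new block
allocations. The struct and function names are hypothetical, not GFS2 code;
alloc_required stands in for the filesystem's own check of whether the
written range is already backed by blocks, and whether the hint should track
the latest allocating write or a running maximum is still open. The sketch
simply skips the update for non-allocating writes.

/* Hypothetical sketch of point 4; not actual GFS2 code. */
#include <stdio.h>
#include <stdbool.h>

struct write_hint {
	unsigned int blocks;	/* current per-write size hint, in blocks */
};

/* Called once per write. */
static void update_write_hint(struct write_hint *hint,
			      unsigned int write_blocks,
			      bool alloc_required)
{
	if (!alloc_required)
		return;			/* rewrite of existing blocks: hint unchanged */
	hint->blocks = write_blocks;	/* learn only from allocating writes */
}

int main(void)
{
	struct write_hint hint = { .blocks = 0 };

	update_write_hint(&hint, 256, true);	/* large append: allocates */
	update_write_hint(&hint, 1, false);	/* small rewrite: no allocation */
	printf("hint after append then rewrite: %u blocks\n", hint.blocks);
	return 0;
}

With the current per-write adjustment, the small rewrite would pull the hint
back down to 1 block; here it stays at 256.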

To answer your question: I'd wager that yes, there are multiple writers
to at least some of the files, but I'm not sure how extensive it is.
The workload seems to have a good variety of linear and non-linear writes
as well. At least now I'm starting to use multiple benchmarks for my
tests.

Regards,

Bob Peterson
Red Hat File Systems




