[dm-devel] [Lsf-pc] [LSF/MM TOPIC] a few storage topics

Loke, Chetan Chetan.Loke at netscout.com
Wed Jan 25 17:08:25 UTC 2012


> > How about tracking heuristics for 'read-hits from previous read-aheads'?
> > If the hits are in an acceptable range (user-configurable knob?) then keep seeking, else back off a little on the read-ahead?
> >
> 
> I'd been wondering about something similar to that. The basic scheme
> would be:
> 
>  - Set a page flag when readahead is performed
>  - Clear the flag when the page is read (or on page fault for mmap)
> (i.e. when it is first used after readahead)
> 
> Then when the VM scans for pages to eject from cache, check the flag
> and keep an exponential average (probably on a per-cpu basis) of the rate
> at which such flagged pages are ejected. That number can then be used to
> reduce the max readahead value.
> 
> The questions are whether this would provide a fast enough reduction in
> readahead size to avoid problems, and whether the extra complication is
> worth it compared with using an overall metric for memory pressure.
> 
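To make that concrete, here is a rough userspace model of the flag + exponential-average idea. Everything in it (the stripped-down struct page, ra_waste_rate, effective_ra_max, the 7/8 decay factor, the /1024 fixed point) is invented for illustration and is not existing kernel code:

/*
 * Rough userspace model of the scheme - not kernel code.  Every name
 * here (struct page, ra_waste_rate, effective_ra_max, ...) is invented
 * for illustration.
 */
#include <stdbool.h>
#include <stdio.h>

struct page {
	bool ra_unused;	/* set at readahead time, cleared on first real use */
};

/* Exponential average of "ejected while still unused", fixed point /1024. */
static unsigned long ra_waste_rate;

static void readahead_insert(struct page *p) { p->ra_unused = true; }
static void page_first_use(struct page *p)   { p->ra_unused = false; }

/* Called from the eviction path for every page thrown out of the cache. */
static void account_eviction(const struct page *p)
{
	unsigned long sample = p->ra_unused ? 1024 : 0;

	/* new = 7/8 * old + 1/8 * sample */
	ra_waste_rate = (ra_waste_rate * 7 + sample) / 8;
}

/* Scale the configured max readahead down as the waste rate climbs. */
static unsigned long effective_ra_max(unsigned long ra_max_pages)
{
	unsigned long scaled = ra_max_pages * (1024 - ra_waste_rate) / 1024;

	return scaled > 1 ? scaled : 1;	/* never drop readahead to zero */
}

int main(void)
{
	struct page wasted = { 0 }, used = { 0 };

	readahead_insert(&wasted);
	readahead_insert(&used);
	page_first_use(&used);		/* this readahead paid off */

	account_eviction(&wasted);	/* this one did not */
	account_eviction(&used);

	printf("waste rate %lu/1024, effective max readahead %lu pages\n",
	       ra_waste_rate, effective_ra_max(128));
	return 0;
}

Whether the averaging should be per-cpu, per-bdi or global, and how aggressive the decay factor should be, is exactly the open question above.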

Steve - I'm not a VM guy, so I can't help much beyond that. But if we maintain a separate list
of pages fetched via read-ahead, then we can use the flag you suggested above.
So when memory pressure is triggered:
a) Evict these pages (the ones which still have the page-flag set) first, as they were a pure opportunistic bet on our side.
b) Scale down (or just temporarily disable?) read-ahead until the pressure drops.
c) Admission control: disable(?) read-ahead for newly created threads/processes, then re-enable once we are OK? A rough sketch of (b) and (c) follows below.
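
Something along these lines - again only a sketch, with made-up names (ra_mode, ra_pressure_event, ra_window) and arbitrary thresholds; (a) is assumed to happen in the reclaim path by keeping the still-flagged pages on their own list and reclaiming them first:

/* Rough model of (b) and (c) - all names and thresholds invented. */
enum ra_mode { RA_NORMAL, RA_SCALED, RA_DISABLED };

static enum ra_mode ra_mode = RA_NORMAL;
static int ra_admit_new_tasks = 1;

/* React to a memory-pressure notification (0-100). */
static void ra_pressure_event(unsigned int pressure_pct)
{
	if (pressure_pct > 90) {
		ra_mode = RA_DISABLED;		/* stop speculating entirely */
		ra_admit_new_tasks = 0;		/* (c) no readahead for new tasks */
	} else if (pressure_pct > 60) {
		ra_mode = RA_SCALED;		/* (b) shrink the window */
		ra_admit_new_tasks = 0;
	} else {
		ra_mode = RA_NORMAL;		/* pressure gone, re-enable */
		ra_admit_new_tasks = 1;
	}
}

/* Consulted whenever a readahead window is about to be sized. */
static unsigned long ra_window(unsigned long ra_max, int task_is_new)
{
	if (ra_mode == RA_DISABLED || (task_is_new && !ra_admit_new_tasks))
		return 0;
	if (ra_mode == RA_SCALED)
		return ra_max / 4;
	return ra_max;
}

The thresholds and the /4 scale factor are arbitrary; the point is only that one pressure signal could drive both the window size and the per-task admission decision.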

> There may well be better solutions though,

Quite possible. But we need to start somewhere with the adaptive logic; otherwise we will just keep increasing (second-guessing?) the upper bound and assuming that's what applications want. Increasing it to MB[s] may not be attractive for desktop users. If we raise it to MB[s], then desktop distros might scale it back down to KB[s] - exactly the opposite of what enterprise distros could be doing today.
   

> Steve.
> 
Chetan Loke



