[dm-devel] Why does dm-thin pool metadata space map use 4K page to carry index ?

jianchao wang jianchao.wan9 at gmail.com
Thu Sep 5 13:52:15 UTC 2019


Hi Joe

Thanks for your kind response.

On Thu, Sep 5, 2019 at 6:38 PM Joe Thornber <thornber at redhat.com> wrote:

> On Thu, Sep 05, 2019 at 02:43:28PM +0800, jianchao wang wrote:
> > But why does it use this 4K page instead of a btree, as the disk sm
> > does?
> >
> > The brb mechanism seems to be able to avoid the nested block allocation
> > when doing COW on the metadata sm btree.
> >
> > Would anyone please explain why it uses this 4K page instead of a
> > btree?
>
> It's been a long time since I wrote this, so I can't remember the order
> in which things were written.  It may well be that the brb mechanism for
> avoiding recursive allocations came after the on-disk formats were
> defined.  Irrespective of that, the single page pointing to index pages
> should perform better.
>
> Is the 16G limit on the metadata device causing you issues?
>
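
(For anyone else reading: by "brb" above I mean the block-op ring buffer
in drivers/md/persistent-data/dm-space-map-metadata.c. Roughly, it queues
pending ref-count ops instead of applying them immediately, so an
allocation triggered while updating the space map does not recurse into
another allocation. A simplified sketch of the idea follows; the names
and the buffer size follow my reading of the kernel code but are from
memory, so treat it as illustrative rather than the exact upstream code.)

#include <errno.h>
#include <stdint.h>

enum block_op_type { BOP_INC, BOP_DEC };

struct block_op {
        enum block_op_type type;
        uint64_t block;
};

#define MAX_RECURSIVE_ALLOCATIONS 1024

struct bop_ring_buffer {
        unsigned begin;
        unsigned end;
        struct block_op bops[MAX_RECURSIVE_ALLOCATIONS + 1];
};

/*
 * Rather than adjusting a ref count immediately (which may COW a
 * metadata block and therefore trigger a nested allocation), the op
 * is queued here and replayed once the current operation completes.
 */
static int brb_push(struct bop_ring_buffer *brb,
                    enum block_op_type type, uint64_t b)
{
        unsigned next = (brb->end + 1) % (MAX_RECURSIVE_ALLOCATIONS + 1);

        if (next == brb->begin)
                return -ENOMEM; /* buffer full */

        brb->bops[brb->end].type = type;
        brb->bops[brb->end].block = b;
        brb->end = next;
        return 0;
}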

Yes, we are planning to build a pool of at least 200T, with both normal
thin devices and snapshots running on it. A smaller block size would be
better, but then 16G of metadata space is not enough.
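
(If I read dm-space-map-metadata.c correctly, that ceiling falls straight
out of the single index page: one 4K page holds at most 255 index
entries, each pointing to a 4K bitmap block, and at 2 bits per ref count
a bitmap covers a little under 16K metadata blocks after its header. So
the metadata device tops out around 255 * 16384 * 4K, which is roughly
16G.)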

Actually, I have modified the metadata sm code to use a btree, just as
the disk sm does. In my test environment, I have already used ~20G of
metadata.
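
Conceptually the change is small: the disk sm already keeps ref counts
behind a btree (the ref_count_info/ref_count_root fields of ll_disk in
dm-space-map-common.h), so the metadata sm can grow the same way instead
of being bounded by the index page. A rough sketch of the lookup path is
below; dm_btree_lookup() is the real persistent-data API, but the
sm_btree wrapper and its fields are illustrative stand-ins, not taken
verbatim from my patch.

/*
 * Rough sketch: key the ref-count btree by metadata block number
 * instead of indexing into the fixed 4K page, so there is no 16G cap.
 */
struct sm_btree {
        struct dm_btree_info ref_count_info;    /* value type: __le32 */
        dm_block_t ref_count_root;
};

static int sm_btree_get_count(struct sm_btree *sm, dm_block_t b,
                              uint32_t *result)
{
        __le32 value;
        int r;

        r = dm_btree_lookup(&sm->ref_count_info, sm->ref_count_root,
                            &b, &value);
        if (r)
                return r;

        *result = le32_to_cpu(value);
        return 0;
}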

Thanks
Jianchao

