[dm-devel] Why does dm-thin pool metadata space map use 4K page to carry index ?

jianchao wang jianchao.wan9 at gmail.com
Thu Sep 5 06:43:28 UTC 2019


Hi

Looking at the code, the metadata space map uses the following structure,
which lives in a single 4K block on disk, to carry the disk_index_entry
array.

The on-disk format of the metadata space map's bitmap root is:
#define MAX_METADATA_BITMAPS 255
struct disk_metadata_index {
    __le32 csum;
    __le32 padding;
    __le64 blocknr;

    struct disk_index_entry index[MAX_METADATA_BITMAPS];
} __packed;
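
For reference, each disk_index_entry describes one bitmap block; a rough
sketch of its layout as I read dm-space-map-common.h (field names recalled
from the source, so treat them as approximate):

struct disk_index_entry {
	__le64 blocknr;			/* where this bitmap block lives on disk */
	__le32 nr_free;			/* free entries left in that bitmap */
	__le32 none_free_before;	/* hint: lowest possibly-free entry */
} __packed;				/* 16 bytes per entry */

With the 16-byte header (csum, padding, blocknr) plus 255 such entries, the
whole index fits exactly in one 4K metadata block.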

It is read in when the pool is opened:
sm_ll_open_metadata
  -> set ll callbacks
  -> ll->open_index
metadata_ll_open
---
    r = dm_tm_read_lock(ll->tm, ll->bitmap_root,
                &index_validator, &block);
    if (r)
        return r;

    memcpy(&ll->mi_le, dm_block_data(block), sizeof(ll->mi_le));
    dm_tm_unlock(ll->tm, block);

---
The size of struct disk_metadata_index is 4096 bytes, i.e. exactly one
metadata block, and each disk_index_entry is 16 bytes.

Each bitmap block stores 2-bit reference counts, so one 4K bitmap block
covers roughly:

4096 bytes * 8 / 2 = 16K    blocks per bitmap block

metadata block size = 4K

255 * 16K * 4K ≈ 16G

So there is a ~16G limit on the size of the metadata device.
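
To make the arithmetic concrete, here is a small standalone userspace sketch
(my own illustration, not kernel code) that mirrors the layout above and
prints the resulting limit:

/* Standalone sketch (not kernel code) of the ~16G metadata limit. */
#include <stdint.h>
#include <stdio.h>

#define MAX_METADATA_BITMAPS 255
#define METADATA_BLOCK_SIZE  4096	/* bytes per metadata block */
#define BITS_PER_REFCOUNT    2		/* 2-bit count per block in a bitmap */

int main(void)
{
	/* Blocks tracked by one 4K bitmap block: 4096 * 8 / 2 = 16384.
	 * (The kernel subtracts a small per-bitmap header, so the exact
	 * figure is a bit lower.) */
	uint64_t blocks_per_bitmap = METADATA_BLOCK_SIZE * 8 / BITS_PER_REFCOUNT;

	/* Total metadata blocks addressable through the 255-entry index. */
	uint64_t max_blocks = (uint64_t)MAX_METADATA_BITMAPS * blocks_per_bitmap;

	/* Total size of metadata device that can be tracked. */
	uint64_t max_bytes = max_blocks * METADATA_BLOCK_SIZE;

	printf("blocks per bitmap block: %llu\n",
	       (unsigned long long)blocks_per_bitmap);
	printf("max metadata blocks:     %llu\n",
	       (unsigned long long)max_blocks);
	printf("max metadata size:       %llu bytes (~%.1f GiB)\n",
	       (unsigned long long)max_bytes,
	       max_bytes / (1024.0 * 1024 * 1024));
	return 0;
}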

But why does it use this single 4K index page instead of a btree, as the disk space map does?

The brb mechanism seems to be able to avoid nested block allocation when
doing COW on the metadata space map btree.
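
(For context: by "brb" I mean the bop_ring_buffer in dm-space-map-metadata.c,
which queues pending block inc/dec operations and replays them later instead
of recursing into the space map while it is already being modified. A much
simplified sketch of the idea, as an approximation rather than the kernel
code:)

/* Simplified sketch of a deferred block-op ring buffer, loosely
 * modelled on the bop_ring_buffer in dm-space-map-metadata.c.
 * This is an illustration of the idea, not the kernel code verbatim. */
#include <stdint.h>
#include <stdio.h>

enum block_op_type { BOP_INC, BOP_DEC };

struct block_op {
	enum block_op_type type;
	uint64_t b;			/* block the operation applies to */
};

#define BRB_SLOTS 1024			/* illustrative capacity */

struct bop_ring_buffer {
	unsigned begin;
	unsigned end;
	struct block_op ops[BRB_SLOTS + 1];
};

static unsigned brb_next(unsigned i)
{
	return (i + 1) % (BRB_SLOTS + 1);
}

/* Record an inc/dec instead of recursing into the space map right away;
 * the caller drains the buffer once the in-flight operation completes. */
static int brb_push(struct bop_ring_buffer *brb,
		    enum block_op_type type, uint64_t b)
{
	unsigned next = brb_next(brb->end);

	if (next == brb->begin)
		return -1;		/* buffer full */

	brb->ops[brb->end].type = type;
	brb->ops[brb->end].b = b;
	brb->end = next;
	return 0;
}

int main(void)
{
	struct bop_ring_buffer brb = { .begin = 0, .end = 0 };

	/* While a space map update is in flight, the ref-count changes it
	 * triggers are queued here and replayed afterwards. */
	brb_push(&brb, BOP_INC, 123);
	brb_push(&brb, BOP_DEC, 456);
	printf("%u ops queued\n",
	       (brb.end + BRB_SLOTS + 1 - brb.begin) % (BRB_SLOTS + 1));
	return 0;
}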

Could anyone please explain why it uses this 4K index page instead of a
btree?


Thanks in advance
Jianchao