[dm-devel] dm-thin: issues about resizing the pool metadata size

Wun-Yen Liang burton.paramountcy at gmail.com
Wed Nov 6 12:27:52 UTC 2013


Hi, folks:

Sorry for the insufficient information about my question last week.

This is my environment for the test.
    Kernel version : 3.12.0-rc7+
    LVM version : 2.02.103

I have recently been doing some tests with the dm-thin targets.
Here is my simple script to create a pool with 32MB of metadata and a
thin volume on it:

$ sudo vgcreate vg /dev/sda1
$ sudo lvcreate --type thin-pool --thinpool tpool vg --size 800G
--poolmetadatasize 32M --alloc anywhere -c 64k
$ sudo lvcreate --name lv vg --virtualsize 800G --type thin --thinpool tpool

After formatting, mounting, and some I/O tests, the pool and volume work well.
Then I try to expand the metadata device to 64MB with the following command:

$ sudo lvresize --poolmetadata +32M vg/tpool

But I got the following error messages on the terminal:
    Extending logical volume tpool_tmeta to 64.00 MiB.
    device-mapper: resume ioctl on  failed: No space left on device
    Unable to resume vg-tpool-tpool (253:3)
    Problem reactivating tpool
    libdevmapper exiting with 2 device(s) still suspended.

The mounted volume seems to be inaccessible because it is suspended.
It does not start working normally again until I execute the following
command:
$ sudo lvresize --size -32M /dev/mapper/vg-tpool_tmeta

I tried to find out why the -ENOSPC happens. Here is function
sm_metadata_extend() from dm-space-map-metadata.c:

static int sm_metadata_extend(struct dm_space_map *sm, dm_block_t extra_blocks)
{
        ...
        dm_block_t old_len = smm->ll.nr_blocks;

        smm->begin = old_len;   /* so smm->begin == smm->ll.nr_blocks here */
        memcpy(&smm->sm, &bootstrap_ops, sizeof(smm->sm));

        /*
         * Extend.
         */
        r = sm_ll_extend(&smm->ll, extra_blocks);

        /*
         * Switch back to normal behaviour.
         */
        memcpy(&smm->sm, &ops, sizeof(smm->sm));
        for (i = old_len; !r && i < smm->begin; i++)
                r = sm_ll_inc(&smm->ll, i, &ev);

        ...
}

In this function, smm->begin is set to smm->ll.nr_blocks.
The space-map operations are then replaced with another set of
dm_space_map operations, bootstrap_ops, to flick into a mode where all
blocks get allocated in the new area, just like dm_sm_metadata_create()
does.
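
(In bootstrap mode, the new_block operation points at the bootstrap
variant, which is why dm_tm_new_block() ends up there. Abbreviated from
dm-space-map-metadata.c as I read it, other ops elided:)

static struct dm_space_map bootstrap_ops = {
        ...
        .new_block = sm_bootstrap_new_block,
        ...
};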

After the extension, the ops are switched back to the original
dm_space_map operations.

In function sm_ll_extend() there is a for loop that incrementally
extends nr_blocks. dm_tm_new_block() is called in each iteration; here
is the call tree of dm_tm_new_block():

    dm_tm_new_block()
         -> dm_sm_new_block()
              -> sm_bootstrap_new_block()
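
The loop itself looks roughly like this (my paraphrase of sm_ll_extend()
in dm-space-map-common.c; error handling and the index-entry bookkeeping
are trimmed, so please check the real source):

int sm_ll_extend(struct ll_disk *ll, dm_block_t extra_blocks)
{
        int r;
        dm_block_t i;
        dm_block_t nr_blocks = ll->nr_blocks + extra_blocks;
        unsigned old_blocks = dm_sector_div_up(ll->nr_blocks, ll->entries_per_block);
        unsigned blocks = dm_sector_div_up(nr_blocks, ll->entries_per_block);

        for (i = old_blocks; i < blocks; i++) {
                struct dm_block *b;

                /* This allocation is the one that reaches sm_bootstrap_new_block(). */
                r = dm_tm_new_block(ll->tm, &dm_sm_bitmap_validator, &b);
                if (r < 0)
                        return r;

                /* ... fill in and save the index entry for the new bitmap block ... */

                dm_tm_unlock(ll->tm, b);
        }

        /* nr_blocks is only bumped here, after the loop has finished. */
        ll->nr_blocks = nr_blocks;
        return 0;
}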

In function sm_bootstrap_new_block():

        ...
        /*
         * We know the entire device is unused.
         */
        if (smm->begin == smm->ll.nr_blocks)
                return -ENOSPC;

Because smm->begin was set to smm->ll.nr_blocks in sm_metadata_extend(),
and smm->ll.nr_blocks is only updated after the for loop in
sm_ll_extend() has finished, sm_bootstrap_new_block() always returns
-ENOSPC here.
That seems to be the reason why I get the -ENOSPC error and cannot
successfully resize the metadata.
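
In other words, as far as I can tell the state on this path is:

/*
 * During sm_metadata_extend() -> sm_ll_extend() -> dm_tm_new_block():
 *
 *   smm->begin        == old_len   (just set in sm_metadata_extend())
 *   smm->ll.nr_blocks == old_len   (not bumped until the loop finishes)
 *
 * so in sm_bootstrap_new_block():
 */
if (smm->begin == smm->ll.nr_blocks)    /* always true while extending */
        return -ENOSPC;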

By the way, the resize succeeds if I extend the metadata by a smaller
amount, such as:
$ sudo lvresize --poolmetadata +4M vg/tpool
$ sudo lvresize --poolmetadata +8M vg/tpool
$ sudo lvresize --poolmetadata +12M vg/tpool

This is because the old and new bitmap-block counts (computed from
nr_blocks and nr_blocks + extra_blocks via dm_sector_div_up()) come out
the same, so the for loop in sm_ll_extend() is never executed and no new
block has to be allocated.
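
Rough numbers for my setup, assuming 4KiB metadata blocks and about
16320 bitmap entries per bitmap block ((4096 bytes minus the bitmap
header) * 4 entries per byte; that value is my estimate from
dm-space-map-common.c, so please correct me if it is off):

    32MB metadata =  8192 blocks -> old_blocks = ceil( 8192 / 16320) = 1
    +4MB  ->  9216 blocks        -> blocks     = ceil( 9216 / 16320) = 1   (loop skipped, resize OK)
    +32MB -> 16384 blocks        -> blocks     = ceil(16384 / 16320) = 2   (loop runs, -ENOSPC)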

Compare this with the steps in dm_sm_metadata_create(): there,
smm->begin is set to (superblock + 1) and ll.nr_blocks is set to 0 in
sm_ll_new_metadata(), so the two values differ and -ENOSPC is not
returned when creating a new pool.
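
For comparison, my reading of the relevant part of dm_sm_metadata_create()
(abbreviated, so please double-check against the source):

        smm->begin = superblock + 1;
        ...
        memcpy(&smm->sm, &bootstrap_ops, sizeof(smm->sm));

        r = sm_ll_new_metadata(&smm->ll, tm);   /* ll.nr_blocks starts at 0 */
        ...
        r = sm_ll_extend(&smm->ll, nr_blocks);  /* begin != ll.nr_blocks, so
                                                   sm_bootstrap_new_block() can
                                                   hand out new blocks */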

So this is my question: is the following check in
sm_bootstrap_new_block() needed? Or am I misunderstanding something in
the procedure of extending the pool metadata?

/*
 * We know the entire device is unused.
 */
if (smm->begin == smm->ll.nr_blocks)
        return -ENOSPC;

Any help would be appreciated. Thanks.



2013/10/30 Wun-Yen Liang <burton.paramountcy at gmail.com>

> Hi, folks,
>
> I noticed that the dm-thin metadata can be resized since LVM tools
> v2.02.99, and the corresponding kernel code has been there since 3.10,
> e.g. maybe_resize_metadata_dev() called from pool_resume() in dm-thin.c.
>
> I'm currently doing some experiments with dm-thin targets, using
> linux-3.11 and LVM2 2.02.103.
> Here is my script to create a thin pool and a thin volume on my device,
> a 200GB pool with 32MB of metadata:
>     $ sudo vgcreate vg /dev/sda1
>     $ sudo lvcreate --type thin-pool --thinpool tpool vg --size 200G
> --poolmetadatasize 32M -c 1M
>     $ sudo lvcreate --name lv vg --virtualsize 100G --type thin --thinpool
> tpool
>
> Then, I try to increase the metadata size with the following command,
>     $ sudo lvresize --poolmetadata +4M vg/tpool
>       Extending logical volume tp_tmeta to 36.00 MiB.
>       Logical volume tpool successfully resized
>
> By executing the "dmsetup status" / "dmsetup table" commands, I can see
> that the length of the *_tmeta volume has been successfully extended by
> 4MB.
>
> But when trying to resize the metadata lv to 64MB
>     $ sudo lvresize --poolmetadata +28M vg/tpool
>       Extending logical volume tpool_tmeta to 64.00 MiB.
>       device-mapper: resume ioctl on  failed: No space left on device
>       Unable to resume vg-tpool-tpool (253:3)
>       Problem reactivating tpool
>       libdevmapper exiting with 2 device(s) still suspended.
>
> I get an error message at this step.
> According to the vgs/lvs commands, I still have free space in my volume
> group and in the pool.
> The dm table of the *_tmeta device seems to have been modified, but the
> resume failed.
>
> The same kind of error also happens when I try to extend the metadata
> device by a large amount, like 1GB.
> After tracing the code, the -ENOSPC seems to be returned from
> sm_bootstrap_new_block() in dm-space-map-metadata.c.
>
> Here are my questions:
> Did I miss some steps before resizing the metadata device?
> Or is the resizing of dm-thin metadata limited? If yes, what is the
> limitation?
>
> Any help would be appreciated. Thanks
>
> Burton
>



-- 
Regards,
Q毛

