[dm-devel] dm-cache: please check/repair metadata
Zdenek Kabelac
zdenek.kabelac at gmail.com
Sun Dec 18 12:44:40 UTC 2016
Dne 18.12.2016 v 01:34 Ian Pilcher napsal(a):
> On 12/08/2016 08:53 AM, Ian Pilcher wrote:
>> Running cache_repair against the metadata device gives me this error:
>>
>> transaction_manager::new_block() couldn't allocate new block
>>
>> I strongly suspect that my metadata device is too small. It was sized
>> with the algorithm that I posted to this list about a year ago:
>>
>> https://www.redhat.com/archives/dm-devel/2015-November/msg00221.html
>>
>> Looking at the source code for cache_metadata_size, I see that it adds
>> additional space for "hints", which the old algorithm didn't account
>> for.
>
> I finally got around to testing my hypothesis, and I can confirm that
> the size of the metadata device is indeed the problem. With a larger
> metadata device, cache_repair succeeds, and I am able to assemble the
> cache device.
>
> So I obviously need to change the formula that I'm using to calculate
> the size of the metadata device, which begs the question ... what is the
> CANONICAL formula for doing this?
>
> lvmcache(7) says, "The size of this LV should be 1000 times smaller
> than the cache data LV, with a minimum size of 8MiB." But this is
> definitely *not* the formula used by cache_metadata_size, and
> cache_metadata_size seems to assume that hints will never be larger
> than 4 bytes.
Hi
I'm not exactly sure what you are doing — are you maintaining your own dm-cache
volumes? (Writing your own volume manager instead of using lvm2 — what is lvm2
missing or doing wrong?)
Are you also going to write your own recovery support?
Otherwise it is normally the business of lvm2 to maintain the proper size of
cached LVs (and even to resize them when needed, via monitoring).
This will be necessary once we support online resize of cache pools and
cached volumes.
From the lvm2 source code it seems to use about 44 bytes per cache chunk,
plus some transaction overhead for the metadata (this could possibly be
lowered for the 'smq' policy...)
Regards
Zdenek