[dm-devel] Question about dm target size

Zdenek Kabelac zkabelac at redhat.com
Wed Nov 12 11:48:03 UTC 2014

On 12.11.2014 at 12:00, Josef Bacik wrote:
> On 11/12/2014 03:45 AM, Zdenek Kabelac wrote:
>> On 12.11.2014 at 03:30, Josef Bacik wrote:
>>> Sorry for top posting, my phone client doesn't do inline.
>>> I'm splitting the disk in half, writing to alternating sides of the
>>> disk and keeping track of where each block is, so when the power-fail
>>> event occurs the subsequent reads come from the corresponding mirror
>>> on the disk. The disk needs to appear to be size/2 for mkfs to
>>> know the correct size, but my target needs to be able to write up to
>>> size.  I looked at thinp but it reflects the full size, right? It's
>>> just like a sparse file, correct? My ->map function does the right
>>> thing, and doing the ->len trick makes it all work out right, but
>>> this is really isolated testing. Thanks,
>> Just a side note - maybe you would like to rather extend functionality
>> of dm-flakey  target ?
> I did that first but it ended up being really ugly.  With all the varying
> functionality in dm-flakey, the table format became horrible and the code
> turned into a spaghetti of if (test_bit()) checks everywhere. Thanks,

The other trick the lvm2 test suite uses is this:

we take a base 'linear' device mapped onto some origin, and we create a
segmented device where each individual segment is mapped either to the
'original' device, to a 'zero' origin, or to an 'error' origin,
depending on whether you want reads to return 0 or an error.

Example of such mapping 'trick':

normal mapping for device pv1:

pv1: 0 69120 linear 7:2 0

mapping with a single-sector error segment for device pv1:

pv1: 0 2050 linear 7:2 0
pv1: 2050 1 error
pv1: 2051 67069 linear 7:2 2051

You can reload the table mapping for pv1 at any time with suspend/resume.
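As a minimal sketch, the segmented table from the example above can be
generated from the device size and the target error sector. The sector
numbers and the 7:2 backing device come from the example; the 'pv1' name
assumes such a dm device already exists (the dmsetup commands are shown
as comments since they need root):

```shell
DEV_SECTORS=69120     # total size of pv1 in 512-byte sectors
ORIGIN="7:2"          # backing device (major:minor)
ERR_SECTOR=2050       # the single sector to remap to the 'error' target

# Build the three-segment table: linear up to the bad sector,
# one 'error' sector, then linear for the remainder.
table="0 ${ERR_SECTOR} linear ${ORIGIN} 0
${ERR_SECTOR} 1 error
$((ERR_SECTOR + 1)) $((DEV_SECTORS - ERR_SECTOR - 1)) linear ${ORIGIN} $((ERR_SECTOR + 1))"

printf '%s\n' "$table"

# To apply it to a live device (requires root and an existing 'pv1'):
#   dmsetup suspend pv1
#   printf '%s\n' "$table" | dmsetup load pv1
#   dmsetup resume pv1
```

The suspend/load/resume sequence is what lets you swap the mapping
under a mounted filesystem without tearing the device down.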
