[linux-lvm] Filesystem corruption with LVM's pvmove onto a PV with a larger physical block size

Nir Soffer nsoffer at redhat.com
Mon Mar 4 23:22:39 UTC 2019


On Tue, Mar 5, 2019 at 12:45 AM Cesare Leonardi <celeonar at gmail.com> wrote:

> On 02/03/19 21:25, Nir Soffer wrote:
> > # mkfs.xfs /dev/test/lv1
> > meta-data=/dev/test/lv1          isize=512    agcount=4, agsize=25600 blks
> >          =                       sectsz=512   attr=2, projid32bit=1
> >          =                       crc=1        finobt=1, sparse=0, rmapbt=0, reflink=0
> > data     =                       bsize=4096   blocks=102400, imaxpct=25
> >          =                       sunit=0      swidth=0 blks
> > naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
> > log      =internal log           bsize=4096   blocks=855, version=2
> >          =                       sectsz=512   sunit=0 blks, lazy-count=1
> > realtime =none                   extsz=4096   blocks=0, rtextents=0
>
> Does the problem here have the same root cause as for ext4? I guess
> sectsz should be >= 4096 to avoid trouble, shouldn't it?
>
> Just to draw some conclusion: could we say that currently, if we are
> going to move data around with LVM, it's better to check that the
> filesystem is using a block size >= "blockdev --getbsz
> DESTINATIONDEVICE"? At least with ext4 and xfs.
>
> Something that might not hold for really small devices (< 500 MB).
>
> Is there already an open bug regarding the problem discussed in this
> thread?
>

There is this bug about lvextend:
https://bugzilla.redhat.com/1669751

And this old bug from 2011, discussing mixing PVs with different block sizes.
Comment 2 is very clear about this issue:
https://bugzilla.redhat.com/732980#c2
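For what it's worth, the check Cesare describes (filesystem block size >=
the destination device's block size) can be sketched in shell. The device
path and mount point in the comments are hypothetical examples, not from
this thread, and the helper name is made up for illustration:

```shell
#!/bin/sh
# Sketch of the block-size comparison discussed above.
# In practice the two values would come from something like:
#   fs_bs=$(stat -f -c %S /mnt/data)        # filesystem fundamental block size
#   dest_bs=$(blockdev --getbsz /dev/sdb)   # destination device block size
# (/mnt/data and /dev/sdb are hypothetical.)

# Return success (0) when moving extents to the destination looks safe,
# i.e. the filesystem block size is >= the destination block size.
pvmove_blocksize_ok() {
    fs_bs=$1
    dest_bs=$2
    [ "$fs_bs" -ge "$dest_bs" ]
}

# 4096-byte fs blocks onto a 512-byte device: fine.
pvmove_blocksize_ok 4096 512 && echo "safe"
# 512-byte fs blocks onto a 4096-byte device: the case this thread is about.
pvmove_blocksize_ok 512 4096 || echo "unsafe"
```

This only automates the heuristic from this thread; it is not an official
LVM safeguard.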

Nir
