[linux-lvm] How to handle Bad Block relocation with LVM?

Roland devzero at web.de
Wed Mar 15 16:00:16 UTC 2023


quite old thread - but damn interesting, though :)

 > Having the PE number, you can easily do
 > pvmove /dev/broken:PE /dev/somewhere-else

does anybody know if it's possible to remap a PE in place with the
standard lvm tools, instead of pvmoving it?
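
the closest thing to a "remap in place" i can think of is editing the
text dump of the VG metadata by hand.  a rough, untested sketch (VG name
and file path are made up; the script only prints the commands instead
of running them):

```shell
#!/bin/sh
# sketch only: remap a PE by editing the VG metadata dump by hand.
# DRY RUN - commands are just printed; swap echo for eval to run them.
VG=VGloop0                      # made-up example VG
run() { echo "$@"; }

run vgcfgbackup -f /tmp/$VG.txt $VG     # dump the metadata as text
# now hand-edit /tmp/$VG.txt: in the LV's segment stanza, point the
# stripe entry for the bad PE at a known-good spare PE instead
run vgcfgrestore -f /tmp/$VG.txt $VG    # write the edited mapping back
```

no idea how safe that is with active LVs, so i'd only try it on
deactivated volumes.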

trying to move data off defective sectors can take a very long time,
especially when multiple sectors are affected and the disks are desktop
drives.

let's think of some "re-partitioning" tool which sets up lvm on top of a
disk with bad sectors and which scans for, skips and remaps
megabyte-sized PEs to some spare area before the disk is put to use.
badblock remapping at the os level instead of in the disk's controller.
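
such a tool's core job would be turning bad sector numbers into PE
numbers.  just to illustrate the arithmetic - device name and geometry
below are made-up example values (pe_start and the extent size come from
`pvs -o+pe_start,vg_extent_size --units s`):

```shell
#!/bin/sh
# sketch only: map absolute 512-byte sectors on a PV to PE numbers.
PV=/dev/loop0        # made-up example PV
PE_START=2048        # sector where PE 0 begins (pvs -o+pe_start --units s)
PE_SECTORS=8192      # extent size in sectors, i.e. 4MiB extents

sector_to_pe() {
    echo $(( ($1 - PE_START) / PE_SECTORS ))
}

# pretend badblocks -b 512 reported these absolute sectors as bad:
for sector in 20480 1638400; do
    echo "pvmove ${PV}:$(sector_to_pe $sector)"
done
```

(if badblocks scanned the whole disk rather than the PV itself, the
partition offset has to be subtracted first.)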

yes, most of you will tell me it's a bad idea, but i have a cabinet full
of disks with bad sectors and i'd be really curious how well and how
long a zfs raidz would work on top of such a "badblocks lvm".  at least,
i'd like to experiment with that.  let's call it an academic project for
learning purposes and for demonstrating lvm's strengths :D

such "remapping" could look like this:

# pvs --segments -ovg_name,lv_name,seg_start_pe,seg_size_pe,pvseg_start \
     -O pvseg_start -S vg_name=VGloop0
   VG      LV               Start SSize Start
   VGloop0 blocks_good      0     4     0
   VGloop0 blocks_bad       1     1     4
   VGloop0 blocks_good      5   195     5
   VGloop0 blocks_bad       2     1   200
   VGloop0 blocks_good    201   699   201
   VGloop0 blocks_spare     0   120   900
   VGloop0 blocks_good    200     1  1020
   VGloop0 blocks_good      4     1  1021
   VGloop0 blocks_bad       0     1  1022

blocks_good is the LV with the healthy PEs, blocks_bad is the LV
collecting the bad PEs, and blocks_spare is the LV you take healthy PEs
from as replacements for bad PEs found in the blocks_good LV.
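
the bookkeeping for fencing a freshly found bad PE and handing out a
spare could then look like this - again just a dry-run sketch (VG and LV
names from the example above, PE numbers made up; commands are only
printed):

```shell
#!/bin/sh
# sketch only: fence a bad PE and replace it from the spare area.
# DRY RUN - commands are just printed; swap echo for eval to run them.
VG=VGloop0; PV=/dev/loop0       # names from the example above
BAD_PE=200                      # PE just found to sit on a bad sector
run() { echo "$@"; }

# pin the bad PE into blocks_bad so the allocator never hands it out again
run lvextend $VG/blocks_bad -l +1 $PV:$BAD_PE
# free the last extent of the spare LV ...
run lvreduce -f -l -1 $VG/blocks_spare
# ... and grow the data LV by exactly that extent (PE 1019 in the example)
run lvextend $VG/blocks_good -l +1 $PV:1019
```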


 > [linux-lvm] How to handle Bad Block relocation with LVM?
 > Lars Ellenberg lars.ellenberg at linbit.com
 > Thu Nov 29 14:04:01 UTC 2012
 > On Thu, Nov 29, 2012 at 07:26:24AM -0500, Brian J. Murrell wrote:
 > > On 12-11-28 08:57 AM, Zdenek Kabelac wrote:
 > > >
 > > > Sorry, no automated tool.
 > >
 > > Pity,
 > >
 > > > You could possibly pvmove separated PEs manually with set of pvmove
 > > > commands.
 > >
 > > So, is the basic premise to just find the PE that is sitting on a bad
 > > block and just pvmove it into an LV created just for the purpose of
 > > holding PEs that are on bad blocks?
 > >
 > > So what happens when I pvmove a PE out of an LV?  I take it LVM moves
 > > the data (or at least tries in this case) on the PE being pvmoved onto
 > > another PE before moving it?
 > >
 > > Oh, but wait.  pvmove (typically) moves PEs between physical volumes.
 > > Can it be used to remap PEs like this?
 > So what do you know?
 > You either know that physical sector P on some physical disk is broken.
 > Or you know that logical sector L in some logical volume is broken.
 > If you do
 > pvs --unit s --segment -o
 > That should give you all you need to transform them into each other,
 > and to transform the sector number to PE number.
 > Having the PE number, you can easily do
 > pvmove /dev/broken:PE /dev/somewhere-else
 > Or with alloc anywhere even elsewhere on the same broken disk.
 > # If you don't have an other PV available,
 > # but there are free "healthy" extents on the same PV:
 > # pvmove --alloc anywhere /dev/broken:PE /dev/broken
 > Which would likely not be the smartest idea ;-)
 > You should then create one LV named e.g. "BAD_BLOCKS",
 > which you would create/extend to cover that bad PE,
 > so that won't be re-allocated again later:
 > lvextend VG/BAD_BLOCKS -l +1 /dev/broken:PE
 > Better yet, pvchange -an /dev/broken,
 > so it won't be used for new LVs anymore,
 > and pvmove /dev/broken completely to somewhere else.
 > So much for the theory, how I would try to do this.
 > In case I would do this at all.
 > Which I probably won't, if I had an other PV available.
 >     ;-)
 > I'm unsure how pvmove will handle IO errors, though.
 > > > But I'd strongly recommend getting rid of such a broken drive
 > > > quickly, before you lose any more data - IMHO it's the most
 > > > efficient solution cost- & time-wise.
 > Right.
 >     Lars
