[linux-lvm] bug? shrink lv by specifying pv extent to be removed does not behave as expected

Roberto Fastec roberto.fastec at gmail.com
Wed Apr 12 13:53:52 UTC 2023


"but shouldn't we perhaps leave it up to the end user / owner of the
hardware,  to decide when it's ready for the recycle bin ?"

With hard drives (forget SSDs - they are hardware accelerators, absolutely unaffordable for bulk data storage) it is not the user/owner who decides that, unless he wants to kill his own data.

It is the SMART system that tells you when the first reallocation happens.

Sector reallocations are not something you should accept.

Those few spare sectors (100 or 250 of them) are there just for the SMART system.

As soon as even one reallocation happens, it is time to scrap the drive.

It is like with car tyres: when you reach the wear marker, it is time to replace them. And even if you drive only a few kilometres per year, after a few years the rubber dries out, and you become a risk for yourself and others if you don't replace them.

This comparison applies to hard drives

If the weekly SMART test tells you that a drive is in pre-failure (reallocations have already happened - even just a few, say 3, 4 or 5), you have been warned:

it is time to scrap "the tyre".

If you don't do that - well, you were warned.
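
For example, a weekly check could look like this (the device name is just an example - adapt it to your drive):

    smartctl -t long /dev/sda      # start the long self-test (runs in the background)
    smartctl -H /dev/sda           # overall health verdict
    smartctl -A /dev/sda | grep -E 'Reallocated_Sector_Ct|Current_Pending_Sector'

The moment Reallocated_Sector_Ct (attribute 5) is no longer zero, the countdown has started.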



On 12 Apr 2023, at 14:37, Roland <devzero at web.de> wrote:
>> Really silly plan - been there years back, when drives were FAR
>> more expensive per GiB.
>>
>> Today - just throw the bad drive to the recycle bin - it's not
>> worth doing this silliness.
>
>ok, i understand your point of view. and thank you for the input.
>
>but this applies to a world with endless resources, where people can
>afford the new hardware.
>
>i think, with the same logic, you could call some guy silly for
>putting a patch on his bicycle inner tube instead of buying a new
>one, as they are cheap. with the patch, the tube is always worse than
>a new one. and he probably risks his own health, because he is using
>a tube which may already have gotten porous...
>
>but shouldn't we perhaps leave it up to the end user / owner of the
>hardware to decide when it's ready for the recycle bin?
>
>or should we perhaps wait for the next hard drive supply crisis (like
>in 2011)?  then people start to get more creative in using what they
>have, because they have no other option...
>
>> whenever you want to create a new arrangement for your disk with
>> 'bad' areas, you can always start from 'scratch' - since after all,
>> lvm2 ONLY manipulates metadata at the front of the disk -
>> so if you need to create new 'holes',
>> just 'pvcreate -f', vgcreate, and 'lvcreate -Zn -Wn'
>> and then 'lvextend' with normal or 'lvextend --type error | --type
>> zero' segment types around the bad areas, with specific sizes.
>> Once you are finished and your LV precisely matches your 'previous'
>> LV of your past VG - you can start to use this LV again with the
>> new arrangement of 'broken zeroed/errored' areas.
>
>yes, i have already come to the conclusion that it's always better to
>start from scratch like this. i dismissed the idea of excluding or
>relocating bad sectors.
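>
>for finding the bad areas to leave holes around, i'd probably first
>do a read-only scan with badblocks (the device name is only an
>example, untested sketch):
>
>    badblocks -sv /dev/sdX > bad-sectors.txt
>
>it prints the numbers of all unreadable blocks, which i could then
>translate into extent ranges to skip.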
>
>> But good advice from me - whenever 'smartctl' starts to show
>> relocation block errors - it's the right moment to 'dd_rescue'
>> any LV to your new drive...
>
>yes, i'm totally aware that we walk on very thin ice here.
>
>but i'd really like to collect some real-world data/information on
>how well such disk "recycling" can work.  i don't have any pointers
>for that and did not find any information on how fast a bad disk gets
>worse if it has irretrievable bad sectors and smart is reporting
>relocation errors. there doesn't seem to be much information around
>on this...
>
>i guess such "broken" disks, used with zfs in a redundant setup,
>could probably still serve a purpose. maybe not for production data,
>but probably good enough for "not so important" applications.
>
>it's a little bit of an academic project. for my own fun. i like to
>fiddle with disks, lvm, zfs and that stuff....
>
>roland
>
>On 12.04.23 at 12:20, Zdenek Kabelac wrote:
>> On 09. 04. 23 at 20:21, Roland wrote:
>>>> Well, if the LV is being used for anything real, then I don't
>>>> know of anything where you could remove a block in the middle and
>>>> still have a working fs.   You can only reduce fs'es (the ones
>>>> that you can reduce)
>>>
>>> my plan is to scan a disk for usable sectors and map the logical
>>> volume around the broken sectors.
>>>
>>> whenever more sectors get broken, i'd like to remove the broken
>>> ones to have a usable lv without broken sectors.
>>>
>>
>> Really silly plan - been there years back, when drives were FAR
>> more expensive per GiB.
>>
>> Today - just throw the bad drive to the recycle bin - it's not
>> worth doing this silliness.
>>
>> HDD bad sectors spread - and slowly the surface gets destroyed....
>>
>> So if you leave a large 'head-room' around the bad disk areas - if
>> they are concentrated in some area of the disk - and you know the
>> topology of your disk drive, i.e. keep 1% of free disk space before
>> and after the bad area - you could possibly use the disk for a
>> little while longer - but only to store data of no value....
>>
>>
>>> since you need to rebuild your data anyway for that disk, you can
>>> also recreate the whole logical volume.
>>>
>>> my question and my project are a little bit academic. i'd simply
>>> want to try out how much use you can get from some dead disks
>>> which are trash otherwise...
>>
>> You could always take a 'vgcfgbackup' of the lvm2 metadata and do
>> some crazy transformation of it with AWK/python/perl - but we
>> really tend to support just the useful features - as there is
>> already 'too much' and users often get lost.
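>>
>> i.e. something along these lines - the VG name is just an example:
>>
>>     vgcfgbackup -f /tmp/vg00.txt vg00
>>     ... let your script rewrite /tmp/vg00.txt ...
>>     vgcfgrestore -f /tmp/vg00.txt vg00
>>
>> but you are fully on your own when editing the metadata this way...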
>>
>> One very simple & naive implementation could go along this path -
>>
>> whenever you want to create a new arrangement for your disk with
>> 'bad' areas, you can always start from 'scratch' - since after all,
>> lvm2 ONLY manipulates metadata at the front of the disk - so if you
>> need to create new 'holes',
>> just 'pvcreate -f', vgcreate, and 'lvcreate -Zn -Wn'
>> and then 'lvextend' with normal or 'lvextend --type error | --type
>> zero' segment types around the bad areas, with specific sizes.
>> Once you are finished and your LV precisely matches your 'previous'
>> LV of your past VG - you can start to use this LV again with the
>> new arrangement of 'broken zeroed/errored' areas.
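>>
>> A rough sketch of that sequence (device, names and sizes are only
>> placeholders - the real sizes have to come from your scan of the
>> disk):
>>
>>     pvcreate -f /dev/sdX
>>     vgcreate vg_recycle /dev/sdX
>>     lvcreate -Zn -Wn -L 100G -n lv0 vg_recycle    # first good area
>>     lvextend --type error -L +1G vg_recycle/lv0   # hole over the bad area
>>     lvextend -L +200G vg_recycle/lv0              # next good area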
>>
>> I've some serious doubts about the usability of this with any
>> filesystem :) but if you think it has some added value - feel free
>> to use it.
>> If the drive you play with is 'discardable' (SSD/NVMe), then one
>> must take extra care that there is no 'discard/TRIM' anywhere in
>> the process - as that would lose all data irrecoverably....
>>
>> But good advice from me - whenever 'smartctl' starts to show
>> relocation block errors - it's the right moment to 'dd_rescue' any
>> LV to your new drive...
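>>
>> e.g. with GNU ddrescue - the names are placeholders; its map file
>> lets you resume and retry the bad areas:
>>
>>     ddrescue -f /dev/vg_old/lv0 /dev/vg_new/lv0 lv0.map        # first pass
>>     ddrescue -f -r3 /dev/vg_old/lv0 /dev/vg_new/lv0 lv0.map   # retry bad areas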
>>>
>>> yes, pvmove is the other approach for that.
>>>
>>> but will pvmove continue/finish by all means when moving extents
>>> located on a bad sector?
>>
>> pvmove  CANNOT be used with bad drives - it cannot deal with erroring
>> sectors and basically gets stuck there trying to mirror unrecoverable
>> disk areas...
>>
>> Regards
>>
>> Zdenek
>>
>>
>>
>