[linux-lvm] How do I remove a locked Logical Volume ([pvmove0]) left over after a disk error?

Derek Dongray derek at inverchapel.me.uk
Sat Mar 29 16:45:48 UTC 2014


Actually the solution turned out to be simple.

vgcfgbackup a
[edit out the volume pvmove0]
vgcfgrestore a

Problem solved.
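For anyone hitting the same thing, the middle step (editing out the pvmove0 block) can be scripted. This is only an illustrative sketch: the file paths and the sample metadata below are made up, and a real backup produced by 'vgcfgbackup a' lands in /etc/lvm/backup/a and should be edited (and restored) as root.

```shell
# Remove the locked pvmove0 LV block from a metadata backup dump.
# NOTE: paths and the sample metadata are hypothetical, for demonstration;
# a real backup from 'vgcfgbackup a' is written to /etc/lvm/backup/a.

cat > /tmp/vg-a-backup.txt <<'EOF'
a {
    logical_volumes {
        data {
            id = "xxxx"
        }
        pvmove0 {
            id = "54veYD-hM8r-j214-MOD1-FGnV-3g7t-jRlZ7W"
            status = ["READ", "WRITE", "LOCKED"]
            segment1 {
                type = "error"
            }
        }
    }
}
EOF

# Brace-counting awk filter: skip lines from the pvmove0 block's opening
# brace until its matching closing brace; print everything else unchanged.
awk '
    /^[ \t]*pvmove0[ \t]*\{/ { depth = 1; next }
    depth > 0 {
        depth += gsub(/\{/, "{")
        depth -= gsub(/\}/, "}")
        next
    }
    { print }
' /tmp/vg-a-backup.txt > /tmp/vg-a-edited.txt

if ! grep -q pvmove0 /tmp/vg-a-edited.txt; then
    echo "pvmove0 block removed"
fi
```

The actual sequence on the system would then be: 'vgcfgbackup a', edit /etc/lvm/backup/a (by hand or with a filter like the one above), 'vgcfgrestore a'.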


On 29 March 2014 12:02, Derek Dongray <derek at inverchapel.me.uk> wrote:

> > pvmove --abort has to work - unless you have some ancient version of lvm2 tools?
>
> That was the first thing I tried, several times, including with '--force'; no effect. The fact that the system allows a move on the same LV I was trying to move when it failed would indicate that it's not in a state where a pvmove is considered to be in progress.
>
> > What's the version in use?
>
>
> # lvm version
>   LVM version:     2.02.104(2) (2013-11-13)
>   Library version: 1.02.83 (2013-11-13)
>   Driver version:  4.26.0
>
>
> It's the current Debian testing version. Kernel 3.12-1-686-pae.
>
> > (You could always hack your lvm2 metadata in 'vi' - if you know what you
> > are doing...)
>
>
> I'm beginning to think that's the only way to get rid of it. Of course,
> the other solution is to simply ignore it as it doesn't seem to cause any
> problem!
>
> Derek.
>
>
> On 27 March 2014 11:32, Derek Dongray <derek at inverchapel.me.uk> wrote:
>
>> Following some disk errors while I was moving some extents, I now have a
>> hidden locked [pvmove0] which doesn't seem to have any physical extents
>> assigned, although it is shown as 4MB long.
>>
>>     # lvs -a -o+seg_pe_ranges a/pvmove0
>>       LV        VG   Attr       LSize Pool Origin Data%  Move Log Cpy%Sync Convert PE Ranges
>>       [pvmove0] a    vwC---v--- 4.00m
>>
>>  The simple 'lvremove a/pvmove0' (optionally with '--force') results in
>> the message 'Can't remove locked LV pvmove0'.
>>
>> 'pvmove --abort' does nothing. The presence of this volume doesn't seem
>> to affect other moves (which simply use [pvmove1]).
>>
>> In the config, the LV shows:
>>
>>                 pvmove0 {
>>                         id = "54veYD-hM8r-j214-MOD1-FGnV-3g7t-jRlZ7W"
>>                         status = ["READ", "WRITE", "LOCKED"]
>>                         flags = []
>>                         creation_host = "zotac"
>>                         creation_time = 1394764593      # 2014-03-14 02:36:33 +0000
>>                         allocation_policy = "contiguous"
>>                         segment_count = 1
>>
>>                         segment1 {
>>                                 start_extent = 0
>>                                 extent_count = 1        # 4 Megabytes
>>
>>                                 type = "error"
>>                         }
>>                 }
>>
>> I noticed there's no physical volume associated with the LV, although the
>> extent count of 1 explains why it's reported as 4MB in size.
>>
>> I suspect that the only fix is to manually edit the config file to remove
>> the offending LV and then use `vgcfgrestore` (or possibly simply to edit the
>> config file from a rescue system), but I assume I'm not the only
>> person to have this problem and would expect there's a series of (possibly
>> undocumented) commands to clean this up.
>>
>> [FYI: the 'disk errors' were the almost simultaneous failure of 2 out of
>> 3 disks in a RAID array; fortunately one of the drives only had a few bad
>> blocks, so I was able to recover all but a few megabytes of a 500GB volume
>> using 'ddrescue'.]
>>
>> --
>> Derek.
>>
>
>
>
> --
> Derek.
>



-- 
Derek.