[linux-lvm] broken fs after removing disk from group
Marc des Garets
marc at ttux.net
Thu Nov 13 09:47:42 UTC 2014
For example, what if I take a new disk and do this (/dev/sdc being a
new, empty disk):

pvcreate --uuid NOskcl-8nOA-PpZg-DCtW-KQgG-doKw-n3J9xd /dev/sdc

NOskcl-8nOA-PpZg-DCtW-KQgG-doKw-n3J9xd is the UUID of the disk that
died before. The new disk is 1.8TB instead of 298GB though.
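
If I read the pvcreate man page right, --uuid also wants a
--restorefile, so assuming the metadata backup sits in the default
place (/etc/lvm/backup/VolGroup00 -- a guess on my part), the full
command would presumably be:

pvcreate --uuid NOskcl-8nOA-PpZg-DCtW-KQgG-doKw-n3J9xd \
         --restorefile /etc/lvm/backup/VolGroup00 /dev/sdc

The size difference shouldn't matter for the restore itself, as far as
I can tell: vgcfgrestore only hands back the 76311 extents recorded in
the metadata, so the rest of the 1.8TB would simply stay unallocated.
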
Then I restore the LVM metadata I posted in my previous email, and run
vgscan and vgchange like this:

vgcfgrestore VolGroup00
vgscan
vgchange -ay VolGroup00
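
Before touching the filesystem I suppose I should check that all three
PVs came back and that lvolmedia is its full 2.4TB again, with
something like:

pvs -o pv_name,pv_uuid,pv_size,vg_name
lvs -o lv_name,lv_size,seg_count VolGroup00
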
And then I fsck:
e2fsck /dev/VolGroup00/lvolmedia
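
Since the 298GB that lived on pv1 is gone for good, I'd probably do a
read-only pass first to see how bad the damage is before letting
e2fsck write anything:

e2fsck -n /dev/VolGroup00/lvolmedia  # -n: read-only, answer "no" to all prompts

and only run a real (writing) e2fsck after that.
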
On 11/13/2014 08:21 AM, Marc des Garets wrote:
> I think something is possible. I still have the config from before the
> disk died; below is how it was. The disk that died (and which I
> removed) is pv1 (/dev/sdc1), but vgcfgrestore refuses to restore this
> config because it says the disk is missing.
>
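> (From what I've read, vgchange -ay --partial VolGroup00 should at
> least activate the group with the missing PV's extents mapped to an
> error target, which would let me reach the data on pv0 and pv2, but
> obviously not what was on pv1.)
>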
> VolGroup00 {
> id = "a0p2ke-sYDF-Sptd-CM2A-fsRQ-jxPI-6sMc9Y"
> seqno = 4
> format = "lvm2" # informational
> status = ["RESIZEABLE", "READ", "WRITE"]
> flags = []
> extent_size = 8192 # 4 Megabytes
> max_lv = 0
> max_pv = 0
> metadata_copies = 0
>
> physical_volumes {
>
> pv0 {
> id = "dRhDoK-p2Dl-ryCc-VLhC-RbUM-TDUG-2AXeWQ"
> device = "/dev/sda4" # Hint only
>
> status = ["ALLOCATABLE"]
> flags = []
> dev_size = 874824678 # 417.149 Gigabytes
> pe_start = 2048
> pe_count = 106789 # 417.145 Gigabytes
> }
>
> pv1 {
> id = "NOskcl-8nOA-PpZg-DCtW-KQgG-doKw-n3J9xd"
> device = "/dev/sdc1" # Hint only
>
> status = ["ALLOCATABLE"]
> flags = []
> dev_size = 625142385 # 298.091 Gigabytes
> pe_start = 2048
> pe_count = 76311 # 298.09 Gigabytes
> }
>
> pv2 {
> id = "MF46QJ-YNnm-yKVr-pa3W-WIk0-seSr-fofRav"
> device = "/dev/sdb1" # Hint only
>
> status = ["ALLOCATABLE"]
> flags = []
> dev_size = 3906963393 # 1.81932 Terabytes
> pe_start = 2048
> pe_count = 476923 # 1.81932 Terabytes
> }
> }
>
> logical_volumes {
>
> lvolmedia {
> id = "aidfLk-hjlx-Znrp-I0Pb-JtfS-9Fcy-OqQ3EW"
> status = ["READ", "WRITE", "VISIBLE"]
> flags = []
> creation_host = "archiso"
> creation_time = 1402302740 # 2014-06-09 10:32:20 +0200
> segment_count = 3
>
> segment1 {
> start_extent = 0
> extent_count = 476923 # 1.81932 Terabytes
>
> type = "striped"
> stripe_count = 1 # linear
>
> stripes = [
> "pv2", 0
> ]
> }
> segment2 {
> start_extent = 476923
> extent_count = 106789 # 417.145 Gigabytes
>
> type = "striped"
> stripe_count = 1 # linear
>
> stripes = [
> "pv0", 0
> ]
> }
> segment3 {
> start_extent = 583712
> extent_count = 76311 # 298.09 Gigabytes
>
> type = "striped"
> stripe_count = 1 # linear
>
> stripes = [
> "pv1", 0
> ]
> }
> }
> }
> }
>
> On 11/13/2014 12:11 AM, Fran Garcia wrote:
>> On Wed, Nov 12, 2014 at 11:16 PM, Marc des Garets wrote:
>>> Hi,
>>> [...]
>>> Now the problem is that I can't mount my volume because it says:
>>> wrong fs type, bad option, bad superblock
>>>
>>> Which makes sense, as the size of the volume is supposed to be 2.4TB
>>> but is now only 2.2TB. The question is how do I fix this? Should I
>>> use a tool like testdisk, or should I be able to somehow create a new
>>> physical volume / volume group where I can add my logical volumes
>>> (which consist of 2 physical disks) and somehow get the file system
>>> right (the file system is ext4)?
>> So you basically need a tool that will "invent" about 200 *GB* of
>> missing filesystem? :-)
>>
>> I think you better start grabbing your tapes for a restore...
>>
>> ~f