[linux-lvm] trying to restore data after Harddisk breakdown
Kai Iskratsch
kai at stella.at
Sun Apr 27 04:59:02 UTC 2003
I had an LVM system made of 3 PVs: the hard disks hde (30 GB), hdf
(80 GB) and hdg (120 GB). The first hard disk on which I created the VG
was hdf; I then created 3 LVs on it. I moved the existing data from hde
into the VG, added hde as a PV and expanded one of the LVs. After some
months I added hdg and resized all LVs. Some time later I moved a bit of
free space between the LVs but forgot to back up my configuration.
Not much later one of the hard disks, hde, had a complete breakdown;
it could no longer be found by the BIOS. Since I still had warranty on
it I sent it in and some time later received a replacement disk.
In the meantime I was able to extract the data from the 2 LVs which
did not span onto the defective disk by using LVM2 and running
"vgscan ; vgchange -a y -P".
I am now using kernel 2.4.20 with device-mapper and LVM2 checked out
from CVS last night.
(Compilation was not that easy; I had to make the following changes to
device-mapper's lib/ioctl/libdevmapper.c:
479c479
< char *outbuf = (char *) dmi + dmi->data_offset;
---
> char *outbuf = (char *) dmi + dmi->data_start;
536c536
< dmt->dmi.v3->data_offset);
---
> dmt->dmi.v3->data_start);
671c671
< dmi->data_offset = sizeof(struct dm_ioctl);
---
> dmi->data_start = sizeof(struct dm_ioctl);
which I hope is correct, since the struct dm_ioctl had no member
data_offset, and reading the comments for this struct suggested that
data_start should be an offset.
In LVM2 I also needed to add the kernel include files to the INCLUDE
part of the Makefile.)
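For what it's worth, the Makefile change amounted to something like
this (the variable name and kernel source path are just examples from
my machine; use whatever your tree and the LVM2 Makefile actually use):

```make
# example only: point the LVM2 build at the kernel headers so that
# the device-mapper ioctl structures are found
INCLUDES += -I/usr/src/linux/include
```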
Now I have the replacement disk, which should hold only one large
segment of one of the LVs. So I thought it might be possible to get the
LVM up again by extracting the actual configuration that is used by
"vgscan ; vgchange -a y -P", making a vgcfgbackup of it with LVM2,
editing the backup (since it is human-readable) and restoring it to the
new hard disk.
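In other words, my plan is roughly the following (a sketch only: the VG
name "Data" is taken from the dump below, the UUID of the old hde and
the file paths are placeholders, and I do not know yet whether the CVS
vgcfgrestore will accept a hand-edited file):

```shell
# sketch only; <uuid-of-old-hde> is a placeholder, paths are examples
vgscan
vgchange -a y -P                       # activate the VG in partial mode
vgcfgbackup -P -f /root/Data.vg Data   # dump the metadata currently in use
vi /root/Data.vg                       # replace the Missing segments by hand

# write the old PV identity onto the replacement disk, then restore
pvcreate --uuid <uuid-of-old-hde> --restorefile /root/Data.vg /dev/hde1
vgcfgrestore -f /root/Data.vg Data
vgchange -a y Data
```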
So I tried to make a vgcfgbackup, which worked with the -P option,
and got something that seemed a bit irritating to me:
--------------------------------------------------------
# Generated by LVM2: Sun Apr 27 09:26:39 2003
contents = "Text Format Volume Group"
version = 1
description = "Created *after* executing './vgcfgbackup -P'"
creation_host = "fuckup" # Linux fuckup 2.4.20 #1 SMP Sun Apr 27 00:32:28 CEST 2003 i686
creation_time = 1051428399 # Sun Apr 27 09:26:39 2003

Data {
    id = "PUHFUx-71YI-m24t-7aZQ-J9ZM-PSrD-F10Y3t"
    seqno = 0
    status = ["RESIZEABLE", "PARTIAL", "READ"]
    system_id = "matin1023540844"
    extent_size = 8192 # 4 Megabytes
    max_lv = 255
    max_pv = 255

    physical_volumes {
        pv0 {
            id = "3yHeIp-kPYG-dqYk-3Qvz-czTC-m05v-Hs44sq"
            device = "/dev/hdg1" # Hint only
            status = ["ALLOCATABLE"]
            pe_start = 8696
            pe_count = 29310 # 114.492 Gigabytes
        }

        pv1 {
            id = "834fZw-DSuv-0gGa-2313-Ud40-uqyh-mh3zbW"
            device = "/dev/hdf1" # Hint only
            status = ["ALLOCATABLE"]
            pe_start = 8696
            pe_count = 19078 # 74.5234 Gigabytes
        }
    }

    logical_volumes {
        MP3 {
            id = "000000-0000-0000-0000-0000-0000-000000"
            status = ["READ", "WRITE", "VISIBLE"]
            allocation_policy = "next free"
            read_ahead = 120
            segment_count = 1

            segment1 {
                start_extent = 0
                extent_count = 12800 # 50 Gigabytes
                type = "striped"
                stripe_count = 1 # linear
                stripes = [
                    "pv0", 0
                ]
            }
        }

        DATA {
            id = "000000-0000-0000-0000-0000-0000-000001"
            status = ["READ", "WRITE", "VISIBLE"]
            allocation_policy = "next free"
            read_ahead = 120
            segment_count = 1

            segment1 {
                start_extent = 0
                extent_count = 5120 # 20 Gigabytes
                type = "striped"
                stripe_count = 1 # linear
                stripes = [
                    "pv0", 15360
                ]
            }
        }

        VIDEO {
            id = "000000-0000-0000-0000-0000-0000-000002"
            status = ["READ", "WRITE", "VISIBLE"]
            allocation_policy = "next free"
            read_ahead = 120
            segment_count = 7328

            segment1 {
                start_extent = 0
                extent_count = 8830 # 34.4922 Gigabytes
                type = "striped"
                stripe_count = 1 # linear
                stripes = [
                    "pv0", 20480
                ]
            }

            segment2 {
                start_extent = 8830
                extent_count = 1 # 4 Megabytes
                type = "striped"
                stripe_count = 1 # linear
                stripes = [
                    "Missing", 0
                ]
            }

            .... (many more Missing segments)

            segment7326 {
                start_extent = 16154
                extent_count = 1 # 4 Megabytes
                type = "striped"
                stripe_count = 1 # linear
                stripes = [
                    "Missing", 0
                ]
            }

            segment7327 {
                start_extent = 16155
                extent_count = 19078 # 74.5234 Gigabytes
                type = "striped"
                stripe_count = 1 # linear
                stripes = [
                    "pv1", 0
                ]
            }

            segment7328 {
                start_extent = 35233
                extent_count = 2560 # 10 Gigabytes
                type = "striped"
                stripe_count = 1 # linear
                stripes = [
                    "pv0", 12800
                ]
            }
        }
    }
}
-----------------------------------------------
First, this says that the 120 GB disk is my pv0 and the 80 GB disk is
my pv1. Second, it says that my 2 working LVs lie entirely on the
120 GB hard disk, but I originally created them on the 80 GB disk, so I
thought they would span both disks.
Is this backup wrong? If so, how do I get the data that the system
actually uses when I mount the two working LVs read-only?
Or has LVM moved the LVs during one of my resizes, so that this data is
correct and I only need to add the new PV and replace all the Missing
segments with one segment on the new PV?
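To make that concrete: the Missing region covers extents 8830 through
16154 of VIDEO, i.e. 7325 extents of 4 MB, about 28.6 GB, which fits
the 30 GB hde. So I imagine replacing segment2 through segment7326 with
a single segment on a new pv2 entry for the replacement disk, something
like this (pv2 and the numbers are my guess from the dump above):

```
segment_count = 4

segment2 {
    start_extent = 8830
    extent_count = 7325 # 28.6 Gigabytes, the part that was on hde
    type = "striped"
    stripe_count = 1 # linear
    stripes = [
        "pv2", 0 # pv2 = the replacement disk, still to be added
    ]
}
# the old segment7327 and segment7328 would then become segment3 and segment4
```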
best regards
Kai Iskratsch