[linux-lvm] SOLVED Re: moving LVMs to another machine
Albert Everett
aeeverett at ualr.edu
Fri Jan 14 20:22:34 UTC 2011
Ray -
Thanks VERY much for your replies.
I found, after I sent my messages to this list, that the second machine had automagically created its own /etc/lvm, and when I ran
# vgchange -a y
everything came up great.
It's cool that the LVM config info is stored on the PVs themselves. My compliments to the developers.
Albert
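In case it helps the archives: the whole fix on the new machine boiled down to a few commands. A minimal sketch, printed rather than executed here since the real thing needs root and the actual disks; the VG/LV names are the ones from this thread:

```shell
# Print the recovery sequence used in this thread; run the commands as
# root on the machine the disks were moved to.
lvm_activation_steps() {
    cat <<'EOF'
pvscan            # confirm all PVs (and the VGs they belong to) are seen
vgscan            # rebuild the VG list from the metadata stored on the PVs
vgchange -a y     # activate every logical volume in every known VG
lvscan            # lv0 and lv1 should now show as ACTIVE
mount /dev/vg0/lv0 /mnt/lv0
mount /dev/vg1/lv1 /mnt/lv1
EOF
}
lvm_activation_steps
```

The mount points are only placeholders; any empty directory works.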
On Jan 14, 2011, at 1:56 PM, Ray Morris wrote:
>> Also, found a backup of /etc/lvm from first machine.
>
> That metadata is (by default) stored on every PV (disk),
> so the backup is only needed if you accidentally change
> something and need to put it back the way it was.
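To make that concrete, a sketch of the command pair involved: vgcfgbackup is what writes /etc/lvm/backup, and vgcfgrestore is only needed if the on-disk metadata was changed by mistake. Printed rather than executed, since both need root and real PVs; the vg0 name is from this thread:

```shell
# Print the metadata backup/restore pair for a VG named vg0.
metadata_backup_steps() {
    cat <<'EOF'
vgcfgbackup vg0                            # snapshot live metadata to /etc/lvm/backup/vg0
vgcfgrestore -f /etc/lvm/backup/vg0 vg0    # put the VG metadata back the way it was
EOF
}
metadata_backup_steps
```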
>
>
> On Fri, 14 Jan 2011 13:41:17 -0600
> Albert Everett <aeeverett at ualr.edu> wrote:
>
>> Also, found a backup of /etc/lvm from first machine.
>>
>> Albert
>>
>> [root at login-0-0 lvm]# ls -R
>> .:
>> archive backup lvm.conf
>>
>> ./archive:
>> vg0_00000.vg vg0_00001.vg vg0_00002.vg vg0_00003.vg vg1_00000.vg
>> vg1_00001.vg
>>
>> ./backup:
>> vg0 vg1
>>
>> [root at login-0-0 lvm]# cat backup/vg0
>> # Generated by LVM2: Tue Jan 20 12:42:14 2009
>>
>> contents = "Text Format Volume Group"
>> version = 1
>>
>> description = "Created *after* executing 'lvcreate -l 26772 -n lv0 /dev/vg0'"
>>
>> creation_host = "whatever" # Linux whatever 2.6.9-55.0.2.ELsmp #1 SMP Tue Jun 26 14:14:47 EDT 2007 x86_64
>> creation_time = 1232476934 # Tue Jan 20 12:42:14 2009
>>
>> vg0 {
>> id = "DlYlrX-oFu7-S3rf-GVH0-5sVp-cuQh-0VGIdy"
>> seqno = 4
>> status = ["RESIZEABLE", "READ", "WRITE"]
>> extent_size = 262144 # 128 Megabytes
>> max_lv = 0
>> max_pv = 0
>>
>> physical_volumes {
>>
>> pv0 {
>> id = "aX4eXc-5ADq-BTl2-sZzY-36kX-A0zx-bDKIQf"
>> device = "/dev/sdc1" # Hint only
>>
>> status = ["ALLOCATABLE"]
>> dev_size = 18446744073709551615 # 8 (null)
>> pe_start = 384
>> pe_count = 13386 # 1.63403 Terabytes
>> }
>>
>> pv1 {
>> id = "fgu78M-yzS9-OPbV-2aXt-NgJP-A6dA-ZpQCkt"
>> device = "/dev/sdd1" # Hint only
>>
>> status = ["ALLOCATABLE"]
>> dev_size = 18446744073709551615 # 8 (null)
>> pe_start = 384
>> pe_count = 13386 # 1.63403 Terabytes
>> }
>> }
>>
>> logical_volumes {
>>
>> lv0 {
>> id = "fOvYHo-jj3e-qoZK-nvU8-7yu3-z19U-siuqOP"
>> status = ["READ", "WRITE", "VISIBLE"]
>> segment_count = 2
>>
>> segment1 {
>> start_extent = 0
>> extent_count = 13386 # 1.63403 Terabytes
>>
>> type = "striped"
>> stripe_count = 1 # linear
>>
>> stripes = [
>> "pv0", 0
>> ]
>> }
>> segment2 {
>> start_extent = 13386
>> extent_count = 13386 # 1.63403 Terabytes
>>
>> type = "striped"
>> stripe_count = 1 # linear
>>
>> stripes = [
>> "pv1", 0
>> ]
>> }
>> }
>> }
>> }
>>
>> [root at login-0-0 lvm]# cat backup/vg1
>> # Generated by LVM2: Fri Jan 16 11:30:34 2009
>>
>> contents = "Text Format Volume Group"
>> version = 1
>>
>> description = "Created *after* executing 'lvcreate -L 4.72T -n lv1 /dev/vg1'"
>>
>> creation_host = "whatever" # Linux whatever 2.6.9-55.0.2.ELsmp #1 SMP Tue Jun 26 14:14:47 EDT 2007 x86_64
>> creation_time = 1232127034 # Fri Jan 16 11:30:34 2009
>>
>> vg1 {
>> id = "Qjb9Fq-o5Jy-MH1n-453l-gQqp-iqqN-49sUIS"
>> seqno = 2
>> status = ["RESIZEABLE", "READ", "WRITE"]
>> extent_size = 262144 # 128 Megabytes
>> max_lv = 0
>> max_pv = 0
>>
>> physical_volumes {
>>
>> pv0 {
>> id = "VSTyo3-r8rE-5lym-G6F7-683r-fdZm-Urm5af"
>> device = "/dev/sde1" # Hint only
>>
>> status = ["ALLOCATABLE"]
>> dev_size = 18446744073684372087 # 8 (null)
>> pe_start = 384
>> pe_count = 16287 # 1.98816 Terabytes
>> }
>>
>> pv1 {
>> id = "eXd2Ee-L55A-bO43-ucXR-GnGM-n4fo-k0vOXP"
>> device = "/dev/sdf1" # Hint only
>>
>> status = ["ALLOCATABLE"]
>> dev_size = 18446744073684372087 # 8 (null)
>> pe_start = 384
>> pe_count = 16287 # 1.98816 Terabytes
>> }
>>
>> pv2 {
>> id = "4e0pIE-AoXc-2bb5-m6Q9-Eb2z-gmHW-Gu9Owl"
>> device = "/dev/sdg1" # Hint only
>>
>> status = ["ALLOCATABLE"]
>> dev_size = 1603013832 # 764.377 Gigabytes
>> pe_start = 384
>> pe_count = 6115 # 764.375 Gigabytes
>> }
>> }
>>
>> logical_volumes {
>>
>> lv1 {
>> id = "SNdEg6-nshz-xtzf-ml4R-jraE-ukEZ-eqK6hE"
>> status = ["READ", "WRITE", "VISIBLE"]
>> segment_count = 3
>>
>> segment1 {
>> start_extent = 0
>> extent_count = 16287 # 1.98816 Terabytes
>>
>> type = "striped"
>> stripe_count = 1 # linear
>>
>> stripes = [
>> "pv0", 0
>> ]
>> }
>> segment2 {
>> start_extent = 16287
>> extent_count = 16287 # 1.98816 Terabytes
>>
>> type = "striped"
>> stripe_count = 1 # linear
>>
>> stripes = [
>> "pv1", 0
>> ]
>> }
>> segment3 {
>> start_extent = 32574
>> extent_count = 6093 # 761.625 Gigabytes
>>
>> type = "striped"
>> stripe_count = 1 # linear
>>
>> stripes = [
>> "pv2", 0
>> ]
>> }
>> }
>> }
>> }
>>
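As a sanity check on those dumps: extent_size is counted in 512-byte sectors, so 262144 sectors is a 128 MiB physical extent, and the LV sizes follow directly from the extent counts. A quick re-derivation of the size comments:

```shell
# Re-derive the size comments in the metadata dumps above.
awk 'BEGIN {
    pe_mib = 262144 * 512 / 1024 / 1024                     # 128 MiB per extent
    printf "PE size: %d MiB\n", pe_mib
    printf "lv0: %.2f TiB\n", 26772 * pe_mib / 1024 / 1024  # vg0: 13386 + 13386 PEs
    printf "lv1: %.2f TiB\n", 38667 * pe_mib / 1024 / 1024  # vg1: 16287 + 16287 + 6093 PEs
}'
```

This reproduces the 3.27 TB and 4.72 TB figures shown by lvdisplay below.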
>> Begin forwarded message:
>>
>>> From: Albert Everett <aeeverett at ualr.edu>
>>> Date: January 14, 2011 1:25:24 PM CST
>>> To: linux-lvm at redhat.com
>>> Subject: moving LVMs to another machine
>>>
>>> This is my first time trying to do this, so please forgive me if
>>> what I'm asking is trivial. I'm anxious not to lose data.
>>>
>>> I have moved a Dell MD3000 with an MD1000 attached from one CentOS
>>> 4.5 x86_64 machine to another. I've installed Dell's drivers on the
>>> second machine and I see output below.
>>>
>>> /dev/sdb and sdc are on the MD3000; /dev/sdd, sde and sdf are on
>>> the MD1000. Filesystem on both is ext3, and I only used LVM to
>>> concatenate <2TB volumes because the MD3000 firmware required it at
>>> the time.
>>>
>>> I did not actively deactivate any volume groups or logical volumes
>>> before I made this move.
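For the archives: the by-the-book procedure is to deactivate and export the VGs before pulling the disks, though as this thread shows the move can work without that, since the metadata travels on the PVs. A hedged sketch, printed rather than run (it needs root and the real disks; VG names are from this thread):

```shell
# Print the tidy VG move procedure for the VGs in this thread.
planned_move_steps() {
    cat <<'EOF'
# on the old machine, before detaching the arrays:
umount /dev/vg0/lv0 /dev/vg1/lv1
vgchange -a n vg0 vg1    # deactivate the LVs
vgexport vg0 vg1         # mark the VGs as exported

# on the new machine, after attaching the arrays:
pvscan
vgimport vg0 vg1         # clear the exported flag
vgchange -a y vg0 vg1    # activate
EOF
}
planned_move_steps
```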
>>>
>>> Q: What are my next steps after lvscan to bring the two logical
>>> volumes back online? OS, etc are all on /dev/sda; the logical
>>> volumes just have extra stuff.
>>>
>>> Q: Am I right to assume that once lv0 and lv1 show as active, all I
>>> need to do is mount them somewhere, and that the filesystems they
>>> contain should be intact? I had no disk or controller failures that
>>> I know of.
>>>
>>> Albert
>>>
>>> [root at login-0-0 ~]# fdisk -l
>>>
>>> Disk /dev/sda: 749.6 GB, 749606010880 bytes
>>> 255 heads, 63 sectors/track, 91134 cylinders
>>> Units = cylinders of 16065 * 512 = 8225280 bytes
>>>
>>> Device Boot Start End Blocks Id System
>>> /dev/sda1 * 1 3060 24579418+ 83 Linux
>>> /dev/sda2 3061 6120 24579450 83 Linux
>>> /dev/sda3 6121 8160 16386300 82 Linux swap
>>> /dev/sda4 8161 91134 666488655 5 Extended
>>> /dev/sda5 8161 91134 666488623+ 83 Linux
>>>
>>> Disk /dev/sdb: 1796.7 GB, 1796776919040 bytes
>>> 255 heads, 63 sectors/track, 218445 cylinders
>>> Units = cylinders of 16065 * 512 = 8225280 bytes
>>>
>>> Device Boot Start End Blocks Id System
>>> /dev/sdb1 1 218445 1754659431 8e Linux LVM
>>>
>>> Disk /dev/sdc: 1796.7 GB, 1796776919040 bytes
>>> 255 heads, 63 sectors/track, 218445 cylinders
>>> Units = cylinders of 16065 * 512 = 8225280 bytes
>>>
>>> Device Boot Start End Blocks Id System
>>> /dev/sdc1 1 218445 1754659431 8e Linux LVM
>>>
>>> Disk /dev/sdd: 2186.1 GB, 2186136256512 bytes
>>> 255 heads, 63 sectors/track, 265782 cylinders
>>> Units = cylinders of 16065 * 512 = 8225280 bytes
>>>
>>> Device Boot Start End Blocks Id System
>>> /dev/sdd1 1 265782 2134893883+ 8e Linux LVM
>>>
>>> Disk /dev/sde: 2186.1 GB, 2186136256512 bytes
>>> 255 heads, 63 sectors/track, 265782 cylinders
>>> Units = cylinders of 16065 * 512 = 8225280 bytes
>>>
>>> Device Boot Start End Blocks Id System
>>> /dev/sde1 1 265782 2134893883+ 8e Linux LVM
>>>
>>> Disk /dev/sdf: 820.7 GB, 820745076736 bytes
>>> 255 heads, 63 sectors/track, 99783 cylinders
>>> Units = cylinders of 16065 * 512 = 8225280 bytes
>>>
>>> Device Boot Start End Blocks Id System
>>> /dev/sdf1 1 99783 801506916 8e Linux LVM
>>>
>>> [root at login-0-0 ~]# pvscan
>>> PV /dev/sdd1 VG vg1 lvm2 [1.99 TB / 0 free]
>>> PV /dev/sde1 VG vg1 lvm2 [1.99 TB / 0 free]
>>> PV /dev/sdf1 VG vg1 lvm2 [764.38 GB / 2.75 GB free]
>>> PV /dev/sdb1 VG vg0 lvm2 [1.63 TB / 0 free]
>>> PV /dev/sdc1 VG vg0 lvm2 [1.63 TB / 0 free]
>>> Total: 5 [7.99 TB] / in use: 5 [7.99 TB] / in no VG: 0 [0 ]
>>>
>>> [root at login-0-0 ~]# vgscan
>>> Reading all physical volumes. This may take a while...
>>> Found volume group "vg1" using metadata type lvm2
>>> Found volume group "vg0" using metadata type lvm2
>>>
>>> [root at login-0-0 ~]# lvscan
>>> inactive '/dev/vg1/lv1' [4.72 TB] inherit
>>> inactive '/dev/vg0/lv0' [3.27 TB] inherit
>>>
>>> [root at login-0-0 ~]# pvdisplay
>>> --- Physical volume ---
>>> PV Name /dev/sdd1
>>> VG Name vg1
>>> PV Size 8192.00 EB / not usable 8192.00 EB
>>> Allocatable yes (but full)
>>> PE Size (KByte) 131072
>>> Total PE 16287
>>> Free PE 0
>>> Allocated PE 16287
>>> PV UUID VSTyo3-r8rE-5lym-G6F7-683r-fdZm-Urm5af
>>>
>>> --- Physical volume ---
>>> PV Name /dev/sde1
>>> VG Name vg1
>>> PV Size 8192.00 EB / not usable 8192.00 EB
>>> Allocatable yes (but full)
>>> PE Size (KByte) 131072
>>> Total PE 16287
>>> Free PE 0
>>> Allocated PE 16287
>>> PV UUID eXd2Ee-L55A-bO43-ucXR-GnGM-n4fo-k0vOXP
>>>
>>> --- Physical volume ---
>>> PV Name /dev/sdf1
>>> VG Name vg1
>>> PV Size 764.38 GB / not usable 1.60 MB
>>> Allocatable yes
>>> PE Size (KByte) 131072
>>> Total PE 6115
>>> Free PE 22
>>> Allocated PE 6093
>>> PV UUID 4e0pIE-AoXc-2bb5-m6Q9-Eb2z-gmHW-Gu9Owl
>>>
>>> --- Physical volume ---
>>> PV Name /dev/sdb1
>>> VG Name vg0
>>> PV Size 8192.00 EB / not usable 8192.00 EB
>>> Allocatable yes (but full)
>>> PE Size (KByte) 131072
>>> Total PE 13386
>>> Free PE 0
>>> Allocated PE 13386
>>> PV UUID aX4eXc-5ADq-BTl2-sZzY-36kX-A0zx-bDKIQf
>>>
>>> --- Physical volume ---
>>> PV Name /dev/sdc1
>>> VG Name vg0
>>> PV Size 8192.00 EB / not usable 8192.00 EB
>>> Allocatable yes (but full)
>>> PE Size (KByte) 131072
>>> Total PE 13386
>>> Free PE 0
>>> Allocated PE 13386
>>> PV UUID fgu78M-yzS9-OPbV-2aXt-NgJP-A6dA-ZpQCkt
>>>
>>> [root at login-0-0 ~]# vgdisplay
>>> --- Volume group ---
>>> VG Name vg1
>>> System ID
>>> Format lvm2
>>> Metadata Areas 3
>>> Metadata Sequence No 2
>>> VG Access read/write
>>> VG Status resizable
>>> MAX LV 0
>>> Cur LV 1
>>> Open LV 0
>>> Max PV 0
>>> Cur PV 3
>>> Act PV 3
>>> VG Size 4.72 TB
>>> PE Size 128.00 MB
>>> Total PE 38689
>>> Alloc PE / Size 38667 / 4.72 TB
>>> Free PE / Size 22 / 2.75 GB
>>> VG UUID Qjb9Fq-o5Jy-MH1n-453l-gQqp-iqqN-49sUIS
>>>
>>> --- Volume group ---
>>> VG Name vg0
>>> System ID
>>> Format lvm2
>>> Metadata Areas 2
>>> Metadata Sequence No 4
>>> VG Access read/write
>>> VG Status resizable
>>> MAX LV 0
>>> Cur LV 1
>>> Open LV 0
>>> Max PV 0
>>> Cur PV 2
>>> Act PV 2
>>> VG Size 3.27 TB
>>> PE Size 128.00 MB
>>> Total PE 26772
>>> Alloc PE / Size 26772 / 3.27 TB
>>> Free PE / Size 0 / 0
>>> VG UUID DlYlrX-oFu7-S3rf-GVH0-5sVp-cuQh-0VGIdy
>>>
>>> [root at login-0-0 ~]# lvdisplay
>>> --- Logical volume ---
>>> LV Name /dev/vg1/lv1
>>> VG Name vg1
>>> LV UUID SNdEg6-nshz-xtzf-ml4R-jraE-ukEZ-eqK6hE
>>> LV Write Access read/write
>>> LV Status NOT available
>>> LV Size 4.72 TB
>>> Current LE 38667
>>> Segments 3
>>> Allocation inherit
>>> Read ahead sectors 0
>>>
>>> --- Logical volume ---
>>> LV Name /dev/vg0/lv0
>>> VG Name vg0
>>> LV UUID fOvYHo-jj3e-qoZK-nvU8-7yu3-z19U-siuqOP
>>> LV Write Access read/write
>>> LV Status NOT available
>>> LV Size 3.27 TB
>>> Current LE 26772
>>> Segments 2
>>> Allocation inherit
>>> Read ahead sectors 0
>>>
>>
>>
>> _______________________________________________
>> linux-lvm mailing list
>> linux-lvm at redhat.com
>> https://www.redhat.com/mailman/listinfo/linux-lvm
>> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
>>
>
>
>
> --
>
> Ray Morris
> support at bettercgi.com
>
> Strongbox - The next generation in site security:
> http://www.bettercgi.com/strongbox/
>
> Throttlebox - Intelligent Bandwidth Control
> http://www.bettercgi.com/throttlebox/
>
> Strongbox / Throttlebox affiliate program:
> http://www.bettercgi.com/affiliates/user/register.php
>
>