Problem reusing LVM partitions
Ajay Mulwani
ajaymulwani at gmail.com
Mon Jan 15 14:24:27 UTC 2007
Hello all,
I am facing exactly the same issue that Steve describes below. Is there any
other solution available for preserving some of the existing LVM partitions?
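For what it's worth, the workaround I'm planning to try (untested so far, and
the tool paths inside the install image are my assumption) is to force the
device nodes to be created explicitly after activating the volume group,
instead of relying on vgscan --mknodes alone:

%pre --interpreter /bin/sh
# Activate the existing volume group, then explicitly (re)create the
# /dev/mapper nodes with vgmknodes and dmsetup mknodes.
/usr/sbin/lvm vgscan --ignorelockingfailure > /tmp/pre-lvm.log 2>&1
/usr/sbin/lvm vgchange -a y --ignorelockingfailure >> /tmp/pre-lvm.log 2>&1
/usr/sbin/lvm vgmknodes >> /tmp/pre-lvm.log 2>&1
/sbin/dmsetup mknodes >> /tmp/pre-lvm.log 2>&1
# Record what actually ends up under /dev/mapper, for later inspection.
ls -l /dev/mapper >> /tmp/pre-lvm.log 2>&1

If /dev/mapper is still empty after that, it would at least suggest the
problem lies with the environment %pre runs in rather than with the
particular LVM command used.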
Thanks,
Ajay
On 1/5/07, Steve Robson <srobson at cadence.com> wrote:
>
> Hi all,
>
> I'm trying to install the OS onto a system, reusing the existing LVM
> partitions thereon.
>
> I'm using some LVM commands in the %pre section of the kickstart config
> file to scan for the volume groups and create their device nodes, and the
> commands do appear to execute, as shown by their logs. However, the device
> node files never get created, so anaconda complains that it can't locate
> the partitions requested by the "--onpart" directives and stops.
>
> If I switch to the console on alt-F2 and run exactly the same commands by
> hand, the device nodes are created, and the logs from the manual run are
> identical to those produced from %pre. Can anyone offer any help or
> advice? Thanks in advance.
>
> -Steve
>
> Herewith, some supporting evidence:
> Error on the console:
> Unable to locate partition mapper/vg00-lroot to use for /.
> Press 'OK' to reboot your system
>
> Relevant sections from my kickstart config file:
> # reusing existing partitions
> part / --onpart=/dev/mapper/vg00-lroot
> part /boot --onpart=/dev/hda1
> part /opt --onpart=/dev/mapper/vg00-lopt
> part /var --onpart=/dev/mapper/vg00-lvar
> part /export/home --onpart=/dev/mapper/vg00-lhome --noformat
> part swap --onpart=/dev/mapper/vg00-lswap
>
> %pre --interpreter /bin/sh
> # Import LVM data to preserve existing partitions
> /usr/sbin/lvm vgscan --mknodes --ignorelockingfailure --verbose \
> > /tmp/vgscan.log 2>&1
> /usr/sbin/lvm vgchange -a y --ignorelockingfailure --verbose \
> > /tmp/vgchange.log 2>&1
>
> Contents of the above-named log files:
> vgscan.log:
> Creating directory "/var/lock/lvm"
> Wiping cache of LVM-capable devices
> Wiping internal VG cache
> Finding all volume groups
> Finding volume group "vg00"
> Creating directory "/etc/lvm/archive"
> Archiving volume group "vg00" metadata (seqno 6).
> Creating directory "/etc/lvm/backup"
> Creating volume group backup "/etc/lvm/backup/vg00" (seqno 6).
> Finding all logical volumes
> Reading all physical volumes. This may take a while...
> Found volume group "vg00" using metadata type lvm2
>
> vgchange.log:
> Finding all volume groups
> Finding volume group "vg00"
> Found volume group "vg00"
> Creating vg00-lroot
> Loading vg00-lroot table
> Resuming vg00-lroot (253:0)
> Found volume group "vg00"
> Creating vg00-lopt
> Loading vg00-lopt table
> Resuming vg00-lopt (253:1)
> Found volume group "vg00"
> Creating vg00-lvar
> Loading vg00-lvar table
> Resuming vg00-lvar (253:2)
> Found volume group "vg00"
> Creating vg00-lswap
> Loading vg00-lswap table
> Resuming vg00-lswap (253:3)
> Found volume group "vg00"
> Creating vg00-lhome
> Loading vg00-lhome table
> Resuming vg00-lhome (253:4)
> Activated logical volumes in volume group "vg00"
> 5 logical volume(s) in volume group "vg00" now active
>
> Inspecting /dev/mapper reveals only a character device file called "control".
> $ ls -l /dev/mapper
> crw------- 1 root 0 10, 63 Jan 3 11:45 control
>
> $ ls -l /dev/vg00
> ls: /dev/vg00: No such file or directory
>
> After running the commands manually:
> $ ls -l /dev/mapper
> crw------- 1 root 0 10, 63 Jan 3 11:45 control
> brw-rw---- 1 root 6 253, 2 Jan 3 12:09 vg00-lvar
> brw-rw---- 1 root 6 253, 3 Jan 3 12:09 vg00-lswap
> brw-rw---- 1 root 6 253, 0 Jan 3 12:09 vg00-lroot
> brw-rw---- 1 root 6 253, 1 Jan 3 12:09 vg00-lopt
> brw-rw---- 1 root 6 253, 4 Jan 3 12:09 vg00-lhome
>
> $ ls -l /dev/vg00
> lrwxrwxrwx 1 root 0 22 Jan 3 12:09 lhome -> /dev/mapper/vg00-lhome
> lrwxrwxrwx 1 root 0 21 Jan 3 12:09 lopt -> /dev/mapper/vg00-lopt
> lrwxrwxrwx 1 root 0 22 Jan 3 12:09 lroot -> /dev/mapper/vg00-lroot
> lrwxrwxrwx 1 root 0 22 Jan 3 12:09 lswap -> /dev/mapper/vg00-lswap
> lrwxrwxrwx 1 root 0 21 Jan 3 12:09 lvar -> /dev/mapper/vg00-lvar
>
> I added a "df" and an "lsmod" at the beginning of the %pre section to
> check whether the filesystems might not be mounted when the commands try
> to run, but they are mounted; of course they must be, otherwise the
> commands wouldn't be found at all. Anyway, here are the df and lsmod
> results for good measure.
>
> Filesystem Size Used Avail Use% Mounted on
> rootfs 6.0M 4.0M 1.8M 70% /
> /dev/root.old 6.0M 4.0M 1.8M 70% /
> bnkick:/images/2007/RHEL4.0_WS_x86
> 121G 87G 28G 76% /mnt/source
> /tmp/loop0 174M 174M 0 100% /mnt/runtime
>
> Module Size Used by Not tainted
> cramfs 42421 1 - Live 0xf886f000
> dm_mirror 30637 0 - Live 0xf8939000
> dm_mod 59605 3 dm_snapshot,dm_mirror,dm_zero, Live 0xf8a0e000
> dm_snapshot 17285 0 - Live 0xf89cd000
> dm_zero 2369 0 - Live 0xf88b5000
> ds 17349 44 - Live 0xf8918000
> e100 33733 0 - Live 0xf899c000
> edd 9505 0 - Live 0xf886b000
> ext3 116809 0 - Live 0xf8a90000
> fat 44001 2 msdos,vfat, Live 0xf8896000
> floppy 58481 0 - Live 0xf88fb000
> hermes 7617 2 orinoco_pci,orinoco, Live 0xf8835000
> jbd 71513 1 ext3, Live 0xf89fb000
> lockd 64105 1 nfs, Live 0xf88b7000
> loop 15817 2 - Live 0xf8888000
> mii 5185 1 e100, Live 0xf88a4000
> msdos 10177 0 - Live 0xf8932000
> nfs 232905 1 - Live 0xf8943000
> nfs_acl 3777 1 nfs, Live 0xf883d000
> orinoco 45261 1 orinoco_pci, Live 0xf89a6000
> orinoco_pci 7245 0 - Live 0xf892f000
> parport 37129 1 parport_pc, Live 0xf8a1e000
> parport_pc 24577 0 - Live 0xf8a29000
> pcmcia_core 63481 2 ds,yenta_socket, Live 0xf891e000
> raid0 7617 0 - Live 0xf8936000
> raid1 20033 0 - Live 0xf89b3000
> raid5 25281 0 - Live 0xf89be000
> raid6 101713 0 - Live 0xf89e1000
> scsi_mod 122573 2 sr_mod,sd_mod, Live 0xf897d000
> sd_mod 17217 0 - Live 0xf88a9000
> sr_mod 17381 0 - Live 0xf88af000
> sunrpc 162597 5 nfs,nfs_acl,lockd, Live 0xf88d2000
> uhci_hcd 31065 0 - Live 0xf88c8000
> vfat 14529 0 - Live 0xf888e000
> vga16fb 12201 1 - Live 0xf8829000
> vgastate 8257 1 vga16fb, Live 0xf882d000
> xor 13641 2 raid6,raid5, Live 0xf89b9000
> yenta_socket 18881 0 - Live 0xf890b000
>
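Also, regarding the %pre diagnostics in Steve's message above: it might help
to capture what %pre itself sees under /dev immediately after the vgchange,
so it can be compared with the listing taken later from the alt-F2 console.
Something along these lines (again untested; the log file name is just an
example):

%pre --interpreter /bin/sh
/usr/sbin/lvm vgscan --mknodes --ignorelockingfailure > /tmp/vgscan.log 2>&1
/usr/sbin/lvm vgchange -a y --ignorelockingfailure > /tmp/vgchange.log 2>&1
# Capture the %pre view of the device nodes right after activation.
ls -l /dev/mapper /dev/vg00 > /tmp/pre-devs.log 2>&1

If /tmp/pre-devs.log shows the nodes present at that point, something is
removing them again before anaconda looks for them; if it shows them
missing, the node creation itself is failing inside %pre.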