[linux-lvm] RH 6.2 kernel problem with 0.8i

Michael Marxmeier mike at msede.com
Tue May 2 09:55:08 UTC 2000

Forwarded message ...

-------- Original Message --------
Message-ID: <390E294C.698DDB17 at t-online.de>
Date: Tue, 02 May 2000 03:03:08 +0200
From: Heinz.Mauelshagen at t-online.de (Heinz Mauelshagen)
Subject: Re: [linux-lvm] RH 6.2 kernel problem with 0.8i
References: <20000501194313.A25404 at omnifarious.mn.org>

"Eric M. Hopper" wrote:

>         I figured out my problem.  It's the RAID patches that RH adds.
>         RH adds a LOT of patches to the stock kernel.  Someone suggested
> grabbing a stock kernel and using that, but a lot of the RH patches are
> ones I really want, and I don't want to sift through them carefully
> figuring out which ones.
>         So, I grabbed the kernel source RPM, used rpm2cpio on it,
> unpacked the cpio, and then used patch -R (what a wonderful tool) to
> reverse the patches I didn't want out of the kernel source tree RH
> ships.
>         After that, the LVM patches applied just fine.  I only wanted
> RAID0 anyway, and LVM does that just fine by itself.  :-)
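For readers unfamiliar with reversing patches, here is a minimal, self-contained demonstration of the `patch -R` mechanism Eric used (toy file names, not the actual RH kernel tree or patch names):

```shell
# Demonstrate `patch -R`: applying a diff in reverse undoes it.
printf 'line one\n' > file.txt                    # pristine file
cp file.txt file.new
printf 'line two\n' >> file.new                   # modified copy
diff -u file.txt file.new > change.patch || true  # diff exits non-zero when files differ
patch file.txt < change.patch                     # forward: file.txt gains "line two"
patch -R file.txt < change.patch                  # reverse: back to pristine contents
cat file.txt                                      # -> line one
```

The same idea applies to a kernel source tree: from the tree's top directory, `patch -p1 -R < some.patch` removes a previously applied patch.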


>         It works beautifully!
>         The only thing I could ask for (and it is something that would
> be complicated to do) is to allow the root filesystem to be a logical
> volume.

Yes, it is partially supported by the lvmcreate_initrd(8) tool in the
LVM distribution, which creates an initial ram disk enabling volume
group activation and a change of the root filesystem from the initial
ram disk to a logical volume containing a root filesystem.
Nevertheless there's no support yet for setting up the contents of the
root filesystem in the logical volume or the lilo configuration file.
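Once an initial ram disk has been built, the lilo configuration stanza
for booting it might look roughly like this (kernel version, initrd file
name, and volume group / logical volume names are purely illustrative):

```
# Hypothetical /etc/lilo.conf fragment for root on a logical volume;
# all names below are examples, not values from this thread.
image=/boot/vmlinuz-2.2.14
        label=lvm
        initrd=/boot/initrd-lvm.gz
        root=/dev/vg00/root
        read-only
```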

>         A graphical (say tk or Python based) manager might be nice too,
> but there are already several people working on that.

>         Thanks a LOT for providing such a neat, useful tool.  Virtual
> memory for hard drives.  It's great!
>         One question...
>         Is the warning about moving the physical extents of a mounted
> logical volume based on hard evidence, or uneasiness?

The reason for this warning is that a power loss or a system crash
can cause an LVM metadata (VGDA) inconsistency, which would force you
to restore the VGDA from a backup copy in /etc/lvmconf/.
Another reason is that buffers contained in the buffer cache
which have not yet been written to the physical volumes can get lost
in this case as well.
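Recovering from such an inconsistency might look roughly like the
following sketch (volume group and device names are made up; check
vgcfgrestore(8) for the exact options your LVM version supports):

```
vgchange -a n vg00                 # deactivate the damaged volume group
vgcfgrestore -n vg00 /dev/sda1     # restore the VGDA backup from /etc/lvmconf
vgchange -a y vg00                 # reactivate the volume group
```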

>         As I recall from Hans's talk, there shouldn't be any problem.  I
> think I remember that the blocks are locked from being read or written
> to while they're being moved.

Yes, they are locked, and after the data move and metadata update they
are unlocked for further access again.

> And besides, the buffer cache entries
> point at the logical volume anyway.


> Have fun (if at all possible),

You as well ;-{)

