[linux-lvm] any way to locate vg's on, say, /etc?
Russell Coker
russell at coker.com.au
Tue Jan 9 14:39:13 UTC 2001
On Tuesday 09 January 2001 17:28, you wrote:
> > Why is it impossible to make LVM configuration something that gets
> > started automatically at boot when it's possible to make RAID start at
> > boot?
>
> see above: if you store enough data to vgimport the
> volumes at boot time (however it is done) then you
> don't need to store the volume group inodes in any
> persistent storage.
>
> catch is that storing all of the data used to recover the
> media on the media to be recovered makes recovery more
> difficult when the media fries [not if].
For serious systems hard disk failure modes should be treated as discrete
events. A hard drive is either working perfectly (all data written is read
back correctly) or it has failed (it has returned an error, so we take it
offline and replace it with a hot spare or send an emergency alert to the
administrator).
All data should be mirrored. Hard drives are cheap. I recently bought
myself two 46G hard drives for less than I once paid for a single 70M drive,
and less than I later paid for a 330M drive.
My "gut feeling" is that drives are more susceptible to damage now. I know
of cases of older 3600rpm drives being dropped, being hit by a car while
operating (car entered building through the wall of the computer room), and
suffering numerous other mechanically damaging events without data loss. I
believe that modern 10K rpm drives are not as solid.
Also drives are more susceptible to heat problems. 3600rpm drives could
operate with all their air-holes blocked and while surrounded by other hard
drives without problems. You can't stack two new 10K rpm drives without good
fans.
If things are correctly set up then you can lose a single drive at any time
without data loss. If LVM on-disk data structures can be recovered with a
drive dead (which is the case if LVM is running on top of RAID 1) then there
shouldn't be a problem.
IMHO if you run LVM across multiple disks without RAID-1 backing then you
probably don't care much about your data. The probability that the whole
system survives is the product of the survival probabilities of its parts.
So if a single drive has a 10% chance of failing during some period, its
probability of surviving is 0.9, and the probability that both drives in a
two-disk LVM set survive the same period is 0.9 x 0.9 = 0.81.
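That arithmetic generalises. A quick sketch of the calculation (function
names are illustrative, not from any real tool): striping/concatenating
over n drives means all n must survive, while mirrored pairs only lose data
when both halves of a pair die in the same period.

```python
# Survival arithmetic for the drive-failure example above.
p_fail = 0.10          # chance one drive fails during the period
p_ok = 1 - p_fail      # 0.9: chance one drive survives

def survival_no_mirror(n):
    # LVM across n drives with no RAID-1 backing:
    # every drive must survive, so the probabilities multiply.
    return p_ok ** n

def survival_mirrored_pairs(pairs):
    # n drives arranged as RAID-1 pairs under LVM:
    # a pair loses data only if *both* of its drives fail.
    pair_ok = 1 - p_fail ** 2
    return pair_ok ** pairs

print(survival_no_mirror(2))       # ~0.81, as in the text
print(survival_mirrored_pairs(1))  # ~0.99 for one mirrored pair
```

Note that two mirrored drives (~0.99) beat even a single bare drive (0.9),
which is the whole argument for putting LVM on top of RAID-1.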
> > One thing that I plan to do is create LVM, devfs, and RAID rescue disks
> > for Debian. So far I have not had time. :(
>
> if your software RAID gets blown you're in pretty deep
> water. HP's trick of forcing the boot, primary swap
RAID rescue disks do not inherently mean rescue from RAID problems. They
also mean that if (for example) you have a non-autodetect RAID setup and you
want to fix config files on that file system that prevent booting (e.g. a
corrupted /etc/fstab) then you can do it. You also need RAID-enabled rescue
disks to install onto RAID in the first place, or to convert a non-RAID root
partition into a RAID partition.
> and root volumes to be on contiguous storage from cyl0
> makes recovering LVM a snap: just boot without it,
If I did that then I'd only have one other file system left on most of my
machines and thus would not be able to achieve much benefit from LVM!
> q: what is there to recover from a damaged devfs? i
> thought the entire file system was virtual (a la /proc).
Correct; devfs can't be damaged in any way that a reboot won't fix
(although rm -rf /dev/* will be hard to fix without a reboot).
But if you use devfs device names in /etc/fstab, /etc/lilo.conf, etc., then
using rescue disks that don't support devfs will be painful for you. Also,
once you start using devfs everywhere you get used to the device names and
don't want to stop using them (having to remember two names for everything
is painful).
--
http://www.coker.com.au/bonnie++/ Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/ Postal SMTP/POP benchmark
http://www.coker.com.au/projects.html Projects I am working on
http://www.coker.com.au/~russell/ My home page