[rhn-users] LVM

Eddy Harvey eharvey at radiospire.com
Tue Jun 27 00:31:08 UTC 2006


What version of RHEL did you say you run?  (I know I saw it somewhere, but I
can't find it again now for some reason.)

I happen to know that some versions of RHEL are built without the PERC
driver.  I would suggest contacting Dell support to ask them if you're
running the right version of the OS.

Also, you're not trying to do this on the fly, are you?  There's no way that
can possibly work.  This definitely requires at least one reboot.  And I'm 99%
certain it involves reinstalling the OS too.

FWIW, the last time I worked on LVM, I wrote a very simple cheatsheet to
help me remember stuff.  Here ya go:

------- analogies for people who usually use fdisk/partition tools:

A physical disk has a partition of type 8E (Linux LVM).
The partition can be made into a physical volume with pvcreate.
You can combine physical volumes into a volume group with vgcreate.

A volume group is a virtual disk, whose storage resides on physical volumes.
A volume group can be split into logical volumes with lvcreate.  Logical
volumes are virtual partitions. 
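
To make that concrete, here's a rough sketch (the device and volume names
are made up for illustration - substitute whatever your own box has):

	# /dev/sdb1 is a hypothetical partition of type 8E
	sudo pvcreate /dev/sdb1               # turn it into a physical volume
	sudo vgcreate myvg /dev/sdb1          # build a volume group on top of it
	sudo lvcreate -L 20G -n mylv myvg     # carve out a 20GB logical volume
	sudo mkfs.ext3 /dev/myvg/mylv         # and put a filesystem on it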

You can change the size of a logical volume with lvextend or lvreduce.

If you create some new physical volumes, you can add them to an existing
volume group with vgextend.  That gives the group more free space, which
you can then allocate with lvextend.
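
End-to-end, growing an existing volume looks something like this - which
is also the lvextend+resize2fs route Greg describes below.  Again, the
names are hypothetical (vgdisplay/lvdisplay will tell you your real ones),
and on some RHEL versions the filesystem-grow step is ext2online rather
than resize2fs:

	sudo pvcreate /dev/sdc1               # new 8E partition -> physical volume
	sudo vgextend myvg /dev/sdc1          # add it to the existing group
	sudo lvextend -L +20G /dev/myvg/mylv  # grow the logical volume by 20GB
	sudo resize2fs /dev/myvg/mylv         # grow the ext2/3 filesystem to match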


Most useful command:
	sudo /sbin/vgdisplay -Av
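
And if you just want to look before you touch anything, these are all
read-only and safe to run any time (they're also the ones Frank asks for
below):

	sudo /sbin/fdisk -l       # raw partition tables
	sudo /sbin/pvscan         # scan for physical volumes
	sudo /sbin/pvdisplay      # details on each physical volume
	sudo /sbin/vgdisplay      # volume groups - check "Free  PE / Size"
	sudo /sbin/lvdisplay      # logical volumes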

> -----Original Message-----
> From: rhn-users-bounces at redhat.com 
> [mailto:rhn-users-bounces at redhat.com] On Behalf Of Sead 
> Dzelil (Student)
> Sent: Monday, June 26, 2006 6:40 PM
> To: Red Hat Network Users List
> Subject: Re: [rhn-users] LVM
> 
> As I said, I am using hardware raid and a perc/cerc 
> controller. I can reconstruct the drive from RAID 1 to RAID 0 
> with no problem. All the data is still there and stuff. Now 
> the computer sees the two disks as one big disk but the 
> partitions are still on only half of the big disk. I really 
> hate this LVM stuff. I guess it's because I don't know it...
> 
> Sead
> 
> On Mon, 26 Jun 2006 18:29:18 -0400
>  "Lamon, Frank III" <Frank_LaMon at csx.com> wrote:
> > Lots of red flags all over the place here - converting a mirrored set
> > to a striped set on the fly, sort of (it sounds like you haven't
> > reloaded the OS)?
> > 
> > But let's see what you have now. Can you give us the output of the
> > following commands?
> > 
> > fdisk -l
> > pvscan
> > pvdisplay
> > lvdisplay
> > vgdisplay
> > 
> > 
> > 
> > 
> > -----Original Message-----
> > From: rhn-users-bounces at redhat.com
> > [mailto:rhn-users-bounces at redhat.com] On Behalf Of Sead Dzelil 
> > (Student)
> > Sent: Monday, June 26, 2006 6:16 PM
> > To: gforte at udel.edu; Red Hat Network Users List
> > Subject: Re: [rhn-users] LVM
> > 
> > 
> > Thank you very much for taking the time to help me. I only have two 
> > 73GB hard drives right now and I need 100+GB of storage. I am not 
> > concerned about redundancy because the server is used for 
> > computations, not for important storage. Please help me out if you 
> > know your LVM. The computer sees the whole 146GB but the volume group
> > is on only 73GB. What can I do to resize it and make the OS see the
> > whole disk? Please help.
> > 
> > Thank You
> > 
> > On Mon, 26 Jun 2006 04:58:19 -0400
> >  Greg Forte <gforte at leopard.us.udel.edu> wrote:
> > > Wow, where to start ...
> > > 
> > > First of all, Travers: he's already got hardware raid, he said as
> > > much: "... went into the RAID BIOS ...".  It's built into the 6800
> > > series.
> > > 
> > > Sead: your foremost problem is that you don't have enough disk space
> > > for any kind of meaningful redundancy if you need 100+ GB.  RAID0
> > > isn't really RAID at all (unless you replace "redundant" with "risky")
> > > - RAID0 stripes the data across N of N disks with no parity data,
> > > which means if one disk fails the whole system is gone.  Instantly.
> > > It's basically JBOD with a performance boost due to multiplexing reads
> > > and writes.  To put it bluntly, no one in their right mind runs the OS
> > > off of a RAID0 volume.
> > > 
> > > Beyond that, I'm surprised (impressed?) that the OS even still boots -
> > > after the conversion any data on the disks should be scrap.  Maybe the
> > > newer Dell RAID controllers are able to convert non-destructively.
> > > I'll assume that's true, in which case the reason the OS doesn't see
> > > the difference is that you still need to change both the partition
> > > size (in this case, the logical volume extent size) and the filesystem
> > > itself.  In which case you COULD theoretically use lvextend to enlarge
> > > the LVM volume, and then resize2fs to grow the filesystem (assuming
> > > it's ext2/3, which it almost definitely is).  BUT, there's still the
> > > problem I mentioned above.
> > > 
> > > The first thing you need to do is fix the physical disk problem.
> > > Depending on how the machine is configured, this may be easy or hard.
> > > A 6800 has 10 drive slots on the main storage backplane (the bays on
> > > the right), and if the two existing drives are on that backplane then
> > > it _should_ be a simple matter of buying a third 73GB disk, installing
> > > it, going into the RAID BIOS and converting again to RAID5 (assuming
> > > it can also do that conversion without trashing the disks - I'm
> > > guessing it can if it did RAID1 to RAID0), and then doing lvextend and
> > > resize2fs as described above (I know, you want more detail, but you
> > > need the disk first ;-)
> > > 
> > > BUT ... I'm gonna go out on a limb and guess that the machine was
> > > configured with the 1x2 secondary backplane in the peripheral bay area
> > > on the left.  If that's the case, then you're not going to be able to
> > > add a third disk in that area, and I don't think you can configure a
> > > raid with disk members on different backplanes - and even if you can,
> > > I'd guess the 10 bays in the main storage are all filled, or it
> > > wouldn't be configured with the extra backplane to begin with.  You'd
> > > have to check with Dell tech support about that, to be sure.  But
> > > assuming all of my guesses are right, the only option left is going to
> > > be to buy two larger disks and configure them for RAID1, just like the
> > > two 73's you've got now.  The other bad news in that situation is that
> > > you're probably going to have to reinstall from scratch - you could
> > > probably manage to image from the existing volume to the new one, but
> > > it's also almost surely going to end up being more effort (if you've
> > > never done that sort of thing before) than simply re-installing.
> > > 
> > > Good luck!  Once you do get the disk situation worked out, let us know
> > > and I (or someone else) can help you through the lvextend+resize2fs,
> > > if necessary.  I suspect you won't end up needing that, though.
> > > 
> > > -g
> > > 
> > > Travers Hogan wrote:
> > > > It looks as if you have software raid 1. You cannot change this -
> > > > you must rebuild your system. I would also suggest getting a
> > > > hardware raid controller.
> > > > rgds
> > > > Trav
> > > > 
> > > > ________________________________
> > > > 
> > > > From: rhn-users-bounces at redhat.com on behalf of Sead Dzelil 
> > > > (Student)
> > > > Sent: Sun 25/06/2006 03:10
> > > > To: rhn-users at redhat.com
> > > > Subject: [rhn-users] LVM
> > > > 
> > > > 
> > > > 
> > > > I am a system administrator with no experience with lvm. I have 
> > > > used fdisk in the past and I was very comfortable with that. I 
> > > > have a very important question. I have a Dell PowerEdge 6800 
> > > > server that came with two 73GB hard drives in a RAID 1 
> > > > configuration. The order was placed wrongly, because we need 100+
> > > > GB of storage. I went into the RAID BIOS and changed it from RAID
> > > > 1 to RAID 0. Now the RAID BIOS displays the logical volume with
> > > > the full 146GB of storage.
> > > > 
> > > > The problem is that in the OS (RedHat Enterprise) nothing has
> > > > changed.  It still only sees the 73GB of storage. What can I do
> > > > to get the system to see the whole 146GB? I need as detailed
> > > > info as possible because I have never used lvm before. Thank you
> > > > in advance.
> > > > 
> > > > Sead
> > > > 
> _______________________________________________
> rhn-users mailing list
> rhn-users at redhat.com
> https://www.redhat.com/mailman/listinfo/rhn-users
> 