[rhn-users] LVM
Sead Dzelil (Student)
sead.dzelil at wku.edu
Tue Jun 27 22:45:46 UTC 2006
I did the fdisk step and rebooted OK. When I run pvdisplay I get the
following though:
[root at ip023-8 ~]# pvdisplay
--- Physical volume ---
PV Name /dev/sda5
VG Name VolGroup00
PV Size 66.06 GB / not usable 0
Allocatable yes
PE Size (KByte) 32768
Total PE 2114
Free PE 2
Allocated PE 2112
PV UUID akfgOT-b3oe-juDZ-0QGR-Y5Be-RezY-xjIZ8k
When I run vgdisplay I get this:
[root at ip023-8 ~]# vgdisplay
--- Volume group ---
VG Name VolGroup00
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 3
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size 66.06 GB
PE Size 32.00 MB
Total PE 2114
Alloc PE / Size 2112 / 66.00 GB
Free PE / Size 2 / 64.00 MB
VG UUID x6mvEM-VYuO-muql-HqN7-jrTz-lr3n-a6d8k6
Notice the Free PE? What should I do?
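Doing the arithmetic on those figures (2 free extents at 32 MiB each, straight from the vgdisplay output above):

```shell
# Free space left in VolGroup00, from the Free PE count and PE size above
free_pe=2
pe_mib=32
free_mib=$((free_pe * pe_mib))
echo "${free_mib} MiB free"   # 64 MiB - matches the "Free PE / Size" line
```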
Sead
On Mon, 26 Jun 2006 19:08:04 -0400
Greg Forte <gforte at leopard.us.udel.edu> wrote:
> OK. First, you need to extend the partition.
>
> Run fdisk, delete both the Linux LVM partition (sda5) and the extended
> partition (sda4) - scary, huh? ;-) Then recreate them: first the
> extended partition, starting at cylinder 284 and filling the available
> space (out to 17816), then sda5, also starting at 284 and filling the
> available space. Make sure you get the types set correctly (fdisk
> should set sda4's automatically to 5, but you'll have to set sda5's to
> 8e manually). Triple-check everything before you commit the changes -
> at least you can quit without committing if there's any doubt.
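> For reference, the keystroke sequence for that interactive session
> might look like the transcript below. This is a sketch only - the
> exact prompts vary between fdisk versions, and the cylinder numbers
> are the ones from your fdisk -l output - so type it by hand, print
> with 'p', and triple-check before the final 'w':

```shell
# HYPOTHETICAL transcript of the interactive fdisk session described
# above. Type these at fdisk's prompts; do NOT pipe this in blindly.
#
# fdisk /dev/sda
#   d, 5         delete /dev/sda5 (Linux LVM)
#   d, 4         delete /dev/sda4 (extended)
#   n, e, 4      new extended partition, number 4
#   284, 17816   first and last cylinder
#   n, l         new logical partition (becomes sda5 automatically)
#   284, 17816   first and last cylinder
#   t, 5, 8e     set sda5's type to Linux LVM
#   p            print the table and triple-check it
#   w            write changes and exit
```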
>
> Assuming that works and nothing's toasted yet (you might want to
> reboot just to verify that it's not busted), run pvdisplay again and
> it should say something like 4400 Total PE, 2112 still allocated,
> whatever the difference is Free. If so, run vgdisplay and it should
> similarly show a bunch more Free PE. Run this:
>
> lvextend -t -l +## /dev/VolGroup00/LogVol00
>
> That's a lowercase L for the -l option, not a 1, and replace ## with
> the value shown on the left for "Free PE" when you ran vgdisplay.
> -t tells it to test - it won't actually make the change. Assuming
> everything's right, you should get something like:
>
> Test mode: Metadata will NOT be updated.
> Extending logical volume LogVol00 to 144.00 GB
> Logical volume LogVol00 successfully resized
>
> Obviously, the size shown will probably be different. If you see
> that, re-run the lvextend command above without the -t option, and
> you'll have extended the volume.
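> A rough sanity check on that Total PE ballpark, using the disk
> geometry from your fdisk -l output (the cylinder and sector figures
> below are the ones from this thread; the real number will be slightly
> lower once LVM metadata is accounted for):

```shell
# Estimate the PV's new extent count after sda5 grows to cylinder 17816
cyls=$((17816 - 284 + 1))            # cylinders spanned by the new sda5
bytes=$((cyls * 16065 * 512))        # 16065 sectors/cyl * 512 bytes/sector
pe=$((bytes / (32 * 1024 * 1024)))   # 32 MiB physical extents
echo "~${pe} total PE"               # a little under 4300
```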
>
> Now all you need to do is expand the filesystem:
>
> resize2fs -p /dev/VolGroup00/LogVol00
>
> Note that you aren't specifying a size; resize2fs will automatically
> detect the size of the "partition" (logical volume) and grow the
> filesystem to fill it. There's no test mode for resize2fs, but it's
> pretty foolproof. The -p option just makes it display a progress bar.
>
> And that should be it! If that all worked, 'df' should now show your
> newly grown disk.
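> If you want to predict the final LogVol00 size ahead of time, the
> extent arithmetic is simple. A sketch - the 4297 figure is only an
> estimate of the post-fdisk total PE derived from the disk geometry;
> the other counts come from your vgdisplay and lvdisplay output:

```shell
# Predicted LogVol00 size after lvextend grabs all newly freed extents
total_pe=4297   # estimated total PE once sda5 reaches cylinder 17816
alloc_pe=2112   # currently allocated (LogVol00's 2050 + LogVol01's 62)
lv00_le=2050    # LogVol00's current extent count
free_pe=$((total_pe - alloc_pe))
new_gib=$(( (lv00_le + free_pe) * 32 / 1024 ))   # 32 MiB per extent
echo "~${new_gib} GiB"                           # roughly 132 GiB
```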
>
> If not, you'll have to start over from scratch, I'm afraid.
>
> In retrospect, an easier way to fix this would've been to simply
> split the RAID1 mirror, boot from one of the disks, extend the volume
> group to the second disk, and then extend the logical volume as
> described above. But you'd already run the conversion, I guess. You
> could try converting it back and doing that instead, but that'll take
> forever - the process described above should only take a few minutes
> - assuming it doesn't blow everything away. ;-)
>
> Good luck, again!
>
> -g
>
>
> Sead Dzelil (Student) wrote:
> > OK. Here is the output of these commands:
> >
> > [root at ip023-8 ~]# fdisk -l
> >
> > Disk /dev/sda: 146.5 GB, 146548981760 bytes
> > 255 heads, 63 sectors/track, 17816 cylinders
> > Units = cylinders of 16065 * 512 = 8225280 bytes
> >
> > Device Boot Start End Blocks Id System
> > /dev/sda1 1 9 72261 de Dell Utility
> > /dev/sda2 * 10 270 2096482+ 6 FAT16
> > /dev/sda3 271 283 104422+ 83 Linux
> > /dev/sda4 284 8908 69280312+ 5 Extended
> > /dev/sda5 284 8908 69280281 8e Linux LVM
> >
> > [root at ip023-8 ~]# pvscan
> > /dev/cdrom: open failed: No medium found
> > PV /dev/sda5 VG VolGroup00 lvm2 [66.06 GB / 64.00 MB free]
> > Total: 1 [66.06 GB] / in use: 1 [66.06 GB] / in no VG: 0 [0 ]
> >
> > [root at ip023-8 ~]# pvdisplay
> > --- Physical volume ---
> > PV Name /dev/sda5
> > VG Name VolGroup00
> > PV Size 66.06 GB / not usable 0
> > Allocatable yes
> > PE Size (KByte) 32768
> > Total PE 2114
> > Free PE 2
> > Allocated PE 2112
> > PV UUID akfgOT-b3oe-juDZ-0QGR-Y5Be-RezY-xjIZ8k
> >
> > [root at ip023-8 ~]# lvdisplay
> > --- Logical volume ---
> > LV Name /dev/VolGroup00/LogVol00
> > VG Name VolGroup00
> > LV UUID gBNQtN-YtVB-6cTR-nnYs-ayxt-9Xwo-qpKVNy
> > LV Write Access read/write
> > LV Status available
> > # open 1
> > LV Size 64.06 GB
> > Current LE 2050
> > Segments 1
> > Allocation inherit
> > Read ahead sectors 0
> > Block device 253:0
> >
> > --- Logical volume ---
> > LV Name /dev/VolGroup00/LogVol01
> > VG Name VolGroup00
> > LV UUID Pnep7s-BlfT-VUED-9dHi-0yH9-z1Wa-59cM0L
> > LV Write Access read/write
> > LV Status available
> > # open 1
> > LV Size 1.94 GB
> > Current LE 62
> > Segments 1
> > Allocation inherit
> > Read ahead sectors 0
> > Block device 253:1
> >
> > [root at ip023-8 ~]# vgdisplay
> > --- Volume group ---
> > VG Name VolGroup00
> > System ID
> > Format lvm2
> > Metadata Areas 1
> > Metadata Sequence No 3
> > VG Access read/write
> > VG Status resizable
> > MAX LV 0
> > Cur LV 2
> > Open LV 2
> > Max PV 0
> > Cur PV 1
> > Act PV 1
> > VG Size 66.06 GB
> > PE Size 32.00 MB
> > Total PE 2114
> > Alloc PE / Size 2112 / 66.00 GB
> > Free PE / Size 2 / 64.00 MB
> > VG UUID x6mvEM-VYuO-muql-HqN7-jrTz-lr3n-a6d8k6
> >
> > I hope you guys can help. Thanks in advance!
> >
> > Sead
> >
> >
> >
> >
> > On Mon, 26 Jun 2006 18:29:18 -0400
> > "Lamon, Frank III" <Frank_LaMon at csx.com> wrote:
> >> Lots of red flags all over the place here - converting a mirrored
> >> set to a striped set on the fly, sort of (it sounds like you
> >> haven't reloaded the OS)?
> >> But let's see what you have now. Can you give us the output of the
> >> following commands?
> >> fdisk -l
> >> pvscan
> >> pvdisplay
> >> lvdisplay
> >> vgdisplay
> >>
> >>
> >>
> >>
> >> -----Original Message-----
> >> From: rhn-users-bounces at redhat.com
> >> [mailto:rhn-users-bounces at redhat.com] On Behalf Of Sead Dzelil (Student)
> >> Sent: Monday, June 26, 2006 6:16 PM
> >> To: gforte at udel.edu; Red Hat Network Users List
> >> Subject: Re: [rhn-users] LVM
> >>
> >>
> >> Thank you very much for taking the time to help me. I only have
> >> two 73GB hard drives right now and I need 100+GB of storage. I am
> >> not concerned about redundancy because the server is used for
> >> computations, not for important storage. Please help me out if you
> >> know your LVM. The computer sees the whole 146GB but the volume
> >> group is on only 73GB. What can I do to resize it and make the OS
> >> see the whole disk? Please help.
> >>
> >> Thank You
> >>
> >> On Mon, 26 Jun 2006 04:58:19 -0400
> >> Greg Forte <gforte at leopard.us.udel.edu> wrote:
> >>> Wow, where to start ...
> >>>
> >>> First of all, Travers: he's already got hardware RAID, he said as
> >>> much: "... went into the RAID BIOS ...". It's built in to the
> >>> 6800 series.
> >>> Sead: your foremost problem is that you don't have enough disk
> >>> space for any kind of meaningful redundancy if you need 100+ GB.
> >>> RAID0 isn't really RAID at all (unless you replace "redundant"
> >>> with "risky") - RAID0 stripes the data across N of N disks with no
> >>> parity data, which means if one disk fails the whole system is
> >>> gone. Instantly. It's basically JBOD with a performance boost due
> >>> to multiplexing reads and writes. To put it bluntly, no one in
> >>> their right mind runs the OS off of a RAID0 volume.
> >>> Beyond that, I'm surprised (impressed?) that the OS even still
> >>> boots - after the conversion any data on the disks should be
> >>> scrap. Maybe the newer Dell RAID controllers are able to convert
> >>> non-destructively. I'll assume that's true, in which case the
> >>> reason the OS doesn't see the difference is that you still need to
> >>> change both the partition size (in this case, the logical volume
> >>> extent size) and the filesystem itself. In which case you COULD
> >>> theoretically use lvextend to enlarge the LVM volume, and then
> >>> resize2fs to grow the filesystem (assuming it's ext2/3, which it
> >>> almost definitely is). BUT, there's still the problem I mentioned
> >>> above.
> >>> The first thing you need to do is fix the physical disk problem.
> >>> Depending on how the machine is configured, this may be easy or
> >>> hard.
> >>> A 6800 has 10 drive slots on the main storage backplane (the bays
> >>> on the right), and if the two existing drives are on that
> >>> backplane then it _should_ be a simple matter of buying a third
> >>> 73GB disk, installing it, going into the RAID BIOS and converting
> >>> again to RAID5 (assuming it can also do that conversion without
> >>> trashing the disks - I'm guessing it can if it did RAID1 to
> >>> RAID0), and then doing lvextend and resize2fs as described above
> >>> (I know, you want more detail, but you need the disk first ;-)
> >>> BUT ... I'm gonna go out on a limb and guess that the machine was
> >>> configured with the 1x2 secondary backplane in the peripheral bay
> >>> area on the left. If that's the case, then you're not going to be
> >>> able to add a third disk in that area, and I don't think you can
> >>> configure a RAID with disk members on different backplanes - and
> >>> even if you can, I'd guess the 10 bays in the main storage area
> >>> are all filled, or it wouldn't be configured with the extra
> >>> backplane to begin with. You'd have to check with Dell tech
> >>> support about that, to be sure. But assuming all of my guesses
> >>> are right, the only option left is going to be to buy two larger
> >>> disks and configure them for RAID1, just like the two 73s you've
> >>> got now. The other bad news in that situation is that you're
> >>> probably going to have to reinstall from scratch - you could
> >>> probably manage to image from the existing volume to the new one,
> >>> but it's also almost surely going to end up being more effort (if
> >>> you've never done that sort of thing before) than simply
> >>> re-installing.
> >>>
> >>> Good luck! Once you do get the disk situation worked out, let us
> >>> know and I (or someone else) can help you through the
> >>> lvextend+resize2fs, if necessary. I suspect you won't end up
> >>> needing that, though.
> >>> -g
> >>>
> >>> Travers Hogan wrote:
> >>>> It looks as if you have software RAID 1. You cannot change this -
> >>>> you must rebuild your system. I would also suggest getting a
> >>>> hardware RAID controller.
> >>>> rgds
> >>>> Trav
> >>>>
> >>>> ________________________________
> >>>>
> >>>> From: rhn-users-bounces at redhat.com on behalf of Sead Dzelil (Student)
> >>>> Sent: Sun 25/06/2006 03:10
> >>>> To: rhn-users at redhat.com
> >>>> Subject: [rhn-users] LVM
> >>>>
> >>>>
> >>>>
> >>>> I am a system administrator with no experience with LVM. I have
> >>>> used fdisk in the past and I was very comfortable with that. I
> >>>> have a very important question. I have a Dell PowerEdge 6800
> >>>> server that came with two 73GB hard drives in a RAID 1
> >>>> configuration. The order was placed wrongly, because we need
> >>>> 100+ GB of storage. I went into the RAID BIOS and changed it
> >>>> from RAID 1 to RAID 0. Now the RAID BIOS displays the logical
> >>>> volume with the full 146GB of storage.
> >>>>
> >>>> The problem is that in the OS (Red Hat Enterprise) nothing has
> >>>> changed. It still only sees the 73GB of storage. What can I do
> >>>> to get the system to see the whole 146GB? I need as detailed
> >>>> info as possible because I have never used LVM before. Thank you
> >>>> in advance.
> >>>>
> >>>> Sead
> >>>>
> >>>> _______________________________________________
> >>>> rhn-users mailing list
> >>>> rhn-users at redhat.com
> >>>> https://www.redhat.com/mailman/listinfo/rhn-users
> >>>>
> >>>>
> >>>>
> >>>>
> >>>>
> >>
> >>
> >
> >
>