[rhn-users] mount a lvm/ext3 fs?

Lamon, Frank III Frank_LaMon at csx.com
Wed Jul 6 10:22:38 UTC 2005


Make sure the /db01 mount point exists, then try "mount /dev/db01_vg/db01_lv /db01".
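
If /db01 isn't there yet, something along these lines should do it (an
untested sketch; the device path is the one your lvdisplay output shows):

    mkdir -p /db01                      # create the mount point if it is missing
    mount /dev/db01_vg/db01_lv /db01    # mount by the LV's device node
    df -h /db01                         # confirm the filesystem is mounted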

Frank

-----Original Message-----
From: rhn-users-bounces at redhat.com
[mailto:rhn-users-bounces at redhat.com] On Behalf Of Ray Stell
Sent: Wednesday, July 06, 2005 5:43 AM
To: rhn-users at redhat.com
Subject: [rhn-users] mount a lvm/ext3 fs?



Why won't this lv ext3 fs mount?

[root at pecan db01_vg]# vgdisplay
  Found duplicate PV vGyeWZJKrm05UabdUDDp6hqbllRa7vFZ: using /dev/sdc not /dev/sda
  Found duplicate PV E93K30gmhfabsD9MtPxI8Tyu9WCC0tze: using /dev/sdd1 not /dev/sdb1
  Found duplicate PV yiYJWZ0YwFjJhQp3Cs2m2U7qzN0FO763: using /dev/sdg not /dev/sde
  Found duplicate PV MQJygtK5A6N1PiP2mBfII65E6njJFGm3: using /dev/sdh not /dev/sdf
  --- Volume group ---
  VG Name               db01_vg
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               39.11 GB
  PE Size               4.00 MB
  Total PE              10012
  Alloc PE / Size       8960 / 35.00 GB
  Free  PE / Size       1052 / 4.11 GB
  VG UUID               e0EKtU-gf6R-Wwiv-3Cvv-BTVl-rPPl-RHLN4X
   
[root at pecan db01_vg]# lvdisplay
  Found duplicate PV vGyeWZJKrm05UabdUDDp6hqbllRa7vFZ: using /dev/sdc not /dev/sda
  Found duplicate PV E93K30gmhfabsD9MtPxI8Tyu9WCC0tze: using /dev/sdd1 not /dev/sdb1
  Found duplicate PV yiYJWZ0YwFjJhQp3Cs2m2U7qzN0FO763: using /dev/sdg not /dev/sde
  Found duplicate PV MQJygtK5A6N1PiP2mBfII65E6njJFGm3: using /dev/sdh not /dev/sdf
  --- Logical volume ---
  LV Name                /dev/db01_vg/db01_lv
  VG Name                db01_vg
  LV UUID                4DnGRP-j26y-dHw0-3VER-mF5i-0z1x-N7d1Mv
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                35.00 GB
  Current LE             8960
  Segments               1
  Allocation             inherit
  Read ahead sectors     0
  Block device           253:0
   
[root at pecan db01_vg]# mkfs.ext3 /dev/db01_vg/db01_lv 
mke2fs 1.35 (28-Feb-2004)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
4587520 inodes, 9175040 blocks
458752 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=12582912
280 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks: 
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632,
        2654208, 4096000, 7962624

Writing inode tables: done                            
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 39 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

[root at pecan db01_vg]# mount -t ext3 /dev/mapper/db01_vg-db01_lv /db01
mount: wrong fs type, bad option, bad superblock on /dev/mapper/db01_vg-db01_lv,
       or too many mounted file systems
============================================================
Ray Stell  stellr at vt.edu  (540) 231-4109  Tempus fugit  28^D

