disk partitions that aren't recognized by mkfs + parted

raj sourabh rajsourabh1 at gmail.com
Thu May 5 20:43:59 UTC 2011


Hi,

The output below indicates one of two things:

 The disk is in use by some process/software.  Make sure of this by running:
   # fuser -v /dev/sdi1
   (This should not display anything if the partition is not in use.)

 The disk could be part of a RAID array or an LVM volume group.
      Check whether /dev/sdi1 shows up in:
      # lvdisplay

      And finally, the following should display something if it was part of
      any software RAID:
      # mdadm --examine /dev/sdi1   (or check /proc/mdstat)
      If it was, stop the array that holds it, e.g.:
      # mdadm --stop /dev/mdX   (replace mdX with the array name from /proc/mdstat)
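
Putting those checks in one place, with pvs and dmsetup added as extra checks
for LVM or device-mapper/multipath holders (/dev/sdi1 is taken from your
output; this is only a sketch of what to look at, not something already run):

   # fuser -v /dev/sdi1          <- any process holding the partition open?
   # cat /proc/mdstat            <- is it assembled into a software RAID array?
   # mdadm --examine /dev/sdi1   <- does it carry an md superblock?
   # pvs                         <- is the partition (or whole disk) an LVM PV?
   # dmsetup ls                  <- any device-mapper maps (LVM/multipath) on top of it?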


Regards,

Raj







On Thu, May 5, 2011 at 10:53 PM, Doll, Margaret Ann <margaret_doll at brown.edu> wrote:

> fdisk /dev/sdi
>
> The number of cylinders for this disk is set to 243201.
> There is nothing wrong with that, but this is larger than 1024,
> and could in certain setups cause problems with:
> 1) software that runs at boot time (e.g., old versions of LILO)
> 2) booting and partitioning software from other OSs
>   (e.g., DOS FDISK, OS/2 FDISK)
>
> Command (m for help): p
>
> Disk /dev/sdi: 2000.3 GB, 2000398934016 bytes
> 255 heads, 63 sectors/track, 243201 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
>
>   Device Boot      Start         End      Blocks   Id  System
> /dev/sdi1               1      243201  1953512001   83  Linux
>
> Command (m for help): d
> Selected partition 1
>
> Command (m for help): p
>
> Disk /dev/sdi: 2000.3 GB, 2000398934016 bytes
> 255 heads, 63 sectors/track, 243201 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
>
>   Device Boot      Start         End      Blocks   Id  System
>
> Command (m for help): n
> Command action
>   e   extended
>   p   primary partition (1-4)
> p
> Partition number (1-4): 1
> First cylinder (1-243201, default 1):
> Using default value 1
> Last cylinder or +size or +sizeM or +sizeK (1-243201, default 243201):
> Using default value 243201
>
> Command (m for help): w
> The partition table has been altered!
>
> Calling ioctl() to re-read partition table.
> Syncing disks.
> [root at m3science ~]# partprobe
> Warning: Unable to open /dev/hda read-write (Read-only file system).
> /dev/hda has been opened read-only.
> Warning: /dev/sdh contains GPT signatures, indicating that it has a GPT
> table.  However, it does not have a valid fake msdos partition table, as it
> should.  Perhaps it was corrupted -- possibly by a program that doesn't
> understand GPT partition tables.  Or perhaps you deleted the GPT table, and
> are now using an msdos partition table.  Is this a GPT partition table?
> [root at m3science ~]# mke2fs -j /dev/sdi1
> mke2fs 1.39 (29-May-2006)
> /dev/sdi1 is apparently in use by the system; will not make a filesystem
> here!
>
>
>  On Thu, May 5, 2011 at 2:23 PM, raj sourabh <rajsourabh1 at gmail.com>
> wrote:
>
> > OK, so things look fine up to the point where you created the partitions
> > sdh1, sdi1, sdj1 and sdk1.  If you still get the same error even after
> > running partprobe, then try the following:
> >
> > # delete one of the partitions through fdisk, e.g. sdi1
> > # after deletion, run fdisk /dev/sdi again and list the partitions
> >   (you should not see any)
> > # recreate the partition as primary
> > # run partprobe
> > # then try mke2fs -j /dev/sdi1
> >
> > Hope this gives some useful results; a command-level sketch of the same
> > sequence is below.
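> >
> > To spell it out (sdi is taken from your listing; the /proc/partitions
> > check is an extra verification step, not part of the original advice):
> >
> >   # fdisk /dev/sdi         <- 'd' to delete partition 1, 'p' to confirm the
> >                               table is empty, 'n' / 'p' / '1' to recreate
> >                               it, then 'w' to write and exit
> >   # partprobe /dev/sdi     <- ask the kernel to re-read the new table
> >   # cat /proc/partitions   <- sdi1 should be listed here before continuing
> >   # mke2fs -j /dev/sdi1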
> >
> > Regards,
> >
> > Raj
> >
> >
> >
> > On Thu, May 5, 2011 at 8:05 PM, Doll, Margaret Ann
> > <margaret_doll at brown.edu>wrote:
> >
> > > On Thu, May 5, 2011 at 11:33 AM, raj sourabh <rajsourabh1 at gmail.com>
> > > wrote:
> > >
> > > > Hi,
> > > >
> > > > Please provide the output of following:
> > > >
> > > > #fdisk -l
> > > >
> > >
> > > for the four disks in question
> > >
> > > WARNING: GPT (GUID Partition Table) detected on '/dev/sdh'! The util fdisk
> > > doesn't support GPT. Use GNU Parted.
> > >
> > >
> > > Disk /dev/sdh: 2000.3 GB, 2000398934016 bytes
> > > 255 heads, 63 sectors/track, 243201 cylinders
> > > Units = cylinders of 16065 * 512 = 8225280 bytes
> > >
> > >   Device Boot      Start         End      Blocks   Id  System
> > > /dev/sdh1               1      243201  1953512001   83  Linux
> > >
> > > Disk /dev/sdi: 2000.3 GB, 2000398934016 bytes
> > > 255 heads, 63 sectors/track, 243201 cylinders
> > > Units = cylinders of 16065 * 512 = 8225280 bytes
> > >
> > >   Device Boot      Start         End      Blocks   Id  System
> > > /dev/sdi1               1      243201  1953512001   83  Linux
> > >
> > > Disk /dev/sdj: 2000.3 GB, 2000398934016 bytes
> > > 255 heads, 63 sectors/track, 243201 cylinders
> > > Units = cylinders of 16065 * 512 = 8225280 bytes
> > >
> > >   Device Boot      Start         End      Blocks   Id  System
> > > /dev/sdj1               1      243201  1953512001   83  Linux
> > >
> > > Disk /dev/sdk: 2000.3 GB, 2000398934016 bytes
> > > 255 heads, 63 sectors/track, 243201 cylinders
> > > Units = cylinders of 16065 * 512 = 8225280 bytes
> > >
> > >   Device Boot      Start         End      Blocks   Id  System
> > > /dev/sdk1               1      243201  1953512001   83  Linux
> > >
> > >
> > > > #df -h
> > > >
> > >
> > > Filesystem            Size  Used Avail Use% Mounted on
> > > /dev/sda3             1.6G  982M  489M  67% /
> > > tmpfs                 1.8G     0  1.8G   0% /dev/shm
> > > /dev/sda10            883G  449G  389G  54% /home
> > > /dev/sdb1             4.1G  569M  3.4G  15% /var
> > > /dev/sdb2             913G  245G  622G  29% /home2
> > > /dev/sda9             730M  519M  173M  76% /oldvar
> > > /dev/sda8             1.1G   34M  976M   4% /tmp
> > > /dev/sda6             2.1G   72M  2.0G   4% /opt
> > > /dev/sda2             8.1G  3.6G  4.2G  46% /usr
> > > /dev/sda5             3.1G  2.3G  671M  78% /usr/local
> > > /dev/sda1             1.1G  120M  889M  12% /boot
> > > /dev/sdc               12T   12T  183G  99% /m3team
> > > /dev/mapper/vg1-lv1   7.1T  1.6T  5.2T  24% /m3team3
> > > quahog2:/LVM2/crism13
> > >                      4.9T  191G  4.5T   5% /m3team2
> > > porter2:/m3_usb1      1.8T   96K  1.7T   1% /m3_usb1
> > > porter2:/m3_usb2      1.8T  274G  1.5T  16% /m3_usb2
> > > none                  1.8G  104K  1.8G   1% /var/lib/xenstored
> > >
> > >
> > > Eight disks were purchased and added to the system at the same time.  I
> > > successfully created a volume group and logical volume out of the first
> > > four; it is mounted on /m3team3.
> > >
> > > I used parted to create a GPT label on the disks.  Then I used fdisk to
> > > create one partition taking up all the space on each disk.  I then ran
> > > "mkfs -t ext3 /dev/sdg1" (etc.) on all the partitions before I used
> > > pvcreate, vgcreate and lvcreate.
> > >
> > > The process worked on the first four disks.
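> > >
> > > For reference, since fdisk warns that it does not understand GPT on these
> > > disks, the equivalent all-parted sequence would look roughly like the
> > > sketch below (sdi is only an example device, and the exact mkpart syntax
> > > varies a little between parted versions):
> > >
> > >   # parted /dev/sdi
> > >   (parted) mklabel gpt
> > >   (parted) mkpart primary ext3 0 -1    <- whole-disk partition; "ext3" is
> > >                                           only a type hint, no filesystem
> > >                                           is created; newer parted also
> > >                                           accepts 0% 100% for start/end
> > >   (parted) quit
> > >   # partprobe /dev/sdi
> > >   # mke2fs -j /dev/sdi1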
> > >
> > > Thanks for your help
> > >
> > > >
> > > > Regards,
> > > >
> > > > Raj
> > > >
> > > > On Thu, May 5, 2011 at 5:00 PM, Doll, Margaret Ann
> > > > <margaret_doll at brown.edu>wrote:
> > > >
> > > > > I get the same error with mke2fs -j /dev/sdi1
> > > > >
> > > > > mke2fs 1.39 (29-May-2006)
> > > > > /dev/sdi1 is apparently in use by the system; will not make a
> > > > > filesystem here!
> > > > >
> > > > >
> > > > >  On Thu, May 5, 2011 at 8:50 AM, raj sourabh <rajsourabh1 at gmail.com>
> > > > > wrote:
> > > > >
> > > > > > Hi,
> > > > > >
> > > > > > Did you try using fdisk to create the partition, and then running partprobe?
> > > > > >
> > > > > > eg. #fdisk /dev/sdi
> > > > > >     # partprobe
> > > > > >     #mke2fs -j /dev/sdiX
> > > > > >
> > > > > > I hope this helps.
> > > > > >
> > > > > > Regards,
> > > > > >
> > > > > > Raj
> > > > > >
> > > > > >
> > > > > >
> > > > > > On Thu, May 5, 2011 at 4:29 PM, Doll, Margaret Ann
> > > > > > <margaret_doll at brown.edu>wrote:
> > > > > >
> > > > > > > In this particular case, I have rebooted the system many times and am
> > > > > > > unable to get mkfs to work.  The disk partitions are also not on the
> > > > > > > same disk as /.  How do I get the disk partitions to work with mkfs?
> > > > > > >
> > > > > > > On Thu, May 5, 2011 at 8:25 AM, Corey Kovacs <corey.kovacs at gmail.com>
> > > > > > > wrote:
> > > > > > >
> > > > > > > > Important to note
> > > > > > > >
> > > > > > > > 1. It's not often that / is repartitioned.
> > > > > > > > 2. This isn't a problem unique to RHEL.
> > > > > > > >
> > > > > > > > C
> > > > > > > >
> > > > > > > > Sent from my iPod
> > > > > > > >
> > > > > > > > > On May 5, 2011, at 8:14 AM, "Marti, Robert" <RJM002 at shsu.edu>
> > > > > > > > > wrote:
> > > > > > > >
> > > > > > > > > A reboot is required if you change partitions on the same disk
> > > > > > > > > that houses /.
> > > > > > > > >
> > > > > > > > > On May 5, 2011, at 6:41, "Stainforth, Matthew (SD/DS)"
> > > > > > > > > <Matthew.Stainforth at gnb.ca> wrote:
> > > > > > > > >
> > > > > > > > >>> the default behavior for RHEL6 but I am not sure when or IF it
> > > > > > > > >>> actually hit RHEL5. Sounds like it might have. In RHEL6 a reboot
> > > > > > > > >>> is simply a requirement, full stop.
> > > > > > > > >>
> > > > > > > > >> In RHEL6 a reboot is required between repartitioning and
> > > > > > > > >> mkfs'ing?  What a sad thing if true.