Solved: Trying to mount a 13 TB disk on a RedHat system

Margaret Doll Margaret_Doll at brown.edu
Fri Aug 14 14:26:03 UTC 2009


I went back to parted and created one large partition
taking up the whole disk.  Inside parted I then used

mkfs 1 ext2

on the large partition and exited parted.
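For the record, the parted session amounted to something like the
following (the /dev/sdc device and the GPT label come from the earlier
messages below; the mkpart start/end values are an assumption, since
the accepted size syntax varies between parted versions):

	parted /dev/sdc
	(parted) mklabel gpt
	(parted) mkpart primary ext2 0 13.0TB
	(parted) mkfs 1 ext2
	(parted) quit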

I then mounted the ext2 partition on the system
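In practice that was just an fstab entry and a mount, roughly (the
/m3team mount point is taken from the df output below; the "defaults"
options and fsck fields are assumptions, as the actual fstab line was
not posted):

	/dev/sdc1   /m3team   ext2   defaults   1 2

	mount /m3team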

df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sdc1              12T   52K   12T   1% /m3team

Thanks to everyone who gave me suggestions.

On Aug 14, 2009, at 9:02 AM, Margaret Doll wrote:

> Progress so far
>
> Used parted
> 	to set the label to GPT
> 	to create one 12 TB partition
> 	to set the file system type to ext3
>
>
> dumpe2fs of /dev/sdc1 looks good
>
> pvcreate /dev/sdc1
> pvdisplay /dev/sdc1
> 	shows an 11.82 TB physical volume
> vgcreate /dev/vg0 /dev/sdc1
> vgdisplay
> 	shows an 11.82 TB volume group
> lvcreate -L 11.82T -n lv0 /dev/vg0
> lvdisplay
> 	shows an 11.82 TB logical volume
>
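> As a plain-command recap of the steps above (a sketch only: the VG
> and LV names match those used, but -l 100%FREE is substituted here
> for -L 11.82T, since allocating by free extents avoids size-rounding
> trouble; an lvm2 without %FREE support can take the extent count
> reported by vgdisplay instead):
>
> 	pvcreate /dev/sdc1
> 	vgcreate vg0 /dev/sdc1
> 	lvcreate -l 100%FREE -n lv0 vg0
> 	lvdisplay /dev/vg0/lv0
>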
> The problem comes when creating a filesystem on the logical volume:
>
> mkfs.ext3 /dev/vg0/lv0
> mke2fs 1.39 (29-May-2006)
> mkfs.ext3: Filesystem too large. No more than 2**31-1 blocks
> 	 (8TB using a blocksize of 4k) are currently supported.
>
> Is 8 TB the largest filesystem you can mount on a RedHat system?
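> (That 8 TB figure is just the 2**31-1 block limit multiplied by the
> 4k block size:
>
> 	echo $(( (2**31 - 1) * 4096 ))	# 8796093018112 bytes, just under 8 TiB
>
> so an ext3 filesystem much beyond that needs either bigger blocks or
> a different filesystem type.)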
>
> Our system is 2.6.18-128.4.1.el5xen ... x86_64 x86_64 x86_64 GNU/Linux
>
> Thanks so far for everyone's help.
>
>
>
> On Aug 14, 2009, at 8:22 AM, Margaret Doll wrote:
>
>>
>> On Aug 14, 2009, at 4:50 AM, Nigel Wade wrote:
>>
>>> Margaret Doll wrote:
>>>> We managed to create the 13 TB partition on the aux disk on the
>>>> RedHat system by using parted, but mkfs.ext3 doesn't work on
>>>> any partition larger than 2 TB.
>>>
>>> It should do. I have 6 and 7.8TB filesystems. What version of RH
>>> are you running? I suppose there may be limitations on some of the
>>> kernels; I am running 64-bit RHEL 4. However, I think there is a
>>> limitation of 8TB for ext2/3 on systems which have a max. block/
>>> page size of 4kB. You need 8kB blocks/pages to get to 16TB, or a
>>> different filesystem type, and it's unlikely your system can handle
>>> that. Do you know what the max. page size is for your system?
>>
>> How would I find this?
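>> (For what it's worth, one way to check is
>>
>> 	getconf PAGESIZE
>>
>> which prints the page size in bytes; on x86_64 it reports 4096.)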
>>>
>>>> If we split the aux disk into 2 TB partitions, I understand from http://www.linuxnix.com/2009/04/logical-volume-manager-lvm-in-redhat.html
>>>> that we use fdisk to change the partition type to 8e (Linux
>>>> LVM).  Unfortunately fdisk will only see the first 2 TB partition,
>>>> so we can't create an LVM
>>>> of the partitions.
>>>
>>> If you want to go this way, create individual LUNs on your RAID.
>>> There is nothing to be gained by partitioning the RAID. You can
>>> join the LUNs using LVM. That would enable you to create an LVM of
>>> 13TB, but you've gained nothing over using the bare device. You
>>> don't need a partition table on a device; you can create a
>>> filesystem on the entire device, /dev/sdc for example, rather
>>> than /dev/sdc1 etc.
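>>>
>>> As a rough sketch of that approach (the device names here are
>>> hypothetical, and the ext3 size limit discussed above still applies
>>> to whatever filesystem ends up on the result):
>>>
>>> 	pvcreate /dev/sdc /dev/sdd
>>> 	vgcreate bigvg /dev/sdc /dev/sdd
>>> 	lvcreate -l 100%FREE -n biglv bigvg
>>>
>>> or, with no partition table and no LVM at all:
>>>
>>> 	mkfs.ext3 /dev/sdc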
>>
>> Using parted, I think we did create an ext3 filesystem on the
>> entire disk.
>>>
>>>> We went back to parted and created one 13 TB partition.  Then,
>>>> inside parted, we used
>>>> mkfs 0 ext3
>>>> The program said that ext3 was not supported in this version of
>>>> parted, but ext2 was, so I accepted ext2.
>>>> Inside parted
>>>> (parted) print
>>>> Model: IFT A16F-G2430 (scsi)
>>>> Disk /dev/sdc: 13.0TB
>>>> Sector size (logical/physical): 512B/512B
>>>> Partition Table: gpt
>>>> Number  Start   End     Size    File system  Name     Flags
>>>> 1      17.4kB  13.0TB  13.0TB  ext3         primary
>>>> The partition is labeled as an ext3 file system.
>>>> Our 13 TB partition was added to /etc/fstab as an ext3 filesystem
>>>> and mounted on the system.
>>>> "df -h", though, lists it as a 1.8 TB filesystem:
>>>> /dev/sdc1             1.8T  196M  1.7T   1% /m3team
>>>
>>>
>>> Did you actually create a filesystem on that 13TB partition? Or  
>>> did you just use the filesystem which you had already created  
>>> previously when the partition was only 2TB? I'd be interested to  
>>> know what mkfs said when you asked it to create a 13TB filesystem,  
>>> for example, what block size it set.
>>>
>>> What does 'dumpe2fs /dev/sdc1' show?
>> dumpe2fs /dev/sdc1 > dump.rpt
>>
>> Where dump.rpt shows
>>
>> Filesystem volume name:   <none>
>> Last mounted on:          <not available>
>> Filesystem UUID:          2e6d48bb-5cde-4be0-af94-edf95ad64801
>> Filesystem magic number:  0xEF53
>> Filesystem revision #:    1 (dynamic)
>> Filesystem features:      has_journal resize_inode dir_index  
>> filetype needs_recovery sparse_super large_file
>> Default mount options:    (none)
>> Filesystem state:         clean
>> Errors behavior:          Continue
>> Filesystem OS type:       Linux
>> Inode count:              244154368
>> Block count:              488281245
>> Reserved block count:     24414062
>> Free blocks:              480569334
>> Free inodes:              244154357
>> First block:              0
>> Block size:               4096
>> Fragment size:            4096
>> Reserved GDT blocks:      907
>> Blocks per group:         32768
>> Fragments per group:      32768
>> Inodes per group:         16384
>> Inode blocks per group:   512
>> Filesystem created:       Thu Aug 13 15:26:09 2009
>> Last mount time:          Thu Aug 13 15:59:28 2009
>> Last write time:          Thu Aug 13 15:59:28 2009
>> Mount count:              1
>> Maximum mount count:      33
>> Last checked:             Thu Aug 13 15:26:09 2009
>> Check interval:           15552000 (6 months)
>> Next check after:         Tue Feb  9 14:26:09 2010
>> Reserved blocks uid:      0 (user root)
>> Reserved blocks gid:      0 (group root)
>> First inode:              11
>> Inode size:               128
>> Journal inode:            8
>> Default directory hash:   tea
>> Directory Hash Seed:      149a19fb-3302-49ff-8314-3a02abd591d1
>> Journal backup:           inode blocks
>> Journal size:             128M
>>
>>
>> Group 0: (Blocks 0-32767)
>> Primary superblock at 0, Group descriptors at 1-117
>> ...
>> ...
>>
>> Group 14900: (Blocks 488243200-488275967)
>> Block bitmap at 488243200 (+0), Inode bitmap at 488243201 (+1)
>> Inode table at 488243202-488243713 (+2)
>> 32254 free blocks, 16384 free inodes, 0 directories
>> Free blocks: 488243714-488275967
>> Free inodes: 244121601-244137984
>> Group 14901: (Blocks 488275968-488281244)
>> Block bitmap at 488275968 (+0), Inode bitmap at 488275969 (+1)
>> Inode table at 488275970-488276481 (+2)
>> 4763 free blocks, 16384 free inodes, 0 directories
>> Free blocks: 488276482-488281244
>> Free inodes: 244137985-244154368
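>>
>> (Block count times block size gives this filesystem's real size:
>>
>> 	echo $(( 488281245 * 4096 ))	# 1999999979520 bytes, about 1.8 TiB
>>
>> which matches the 1.8T that df reported, so this looks like the
>> filesystem created for the earlier 2 TB partition rather than a new
>> 13 TB one.)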
>>
>>
>>>
>>> -- 
>>> Nigel Wade, System Administrator, Space Plasma Physics Group,
>>>          University of Leicester, Leicester, LE1 7RH, UK
>>> E-mail :    nmw at ion.le.ac.uk
>>> Phone :     +44 (0)116 2523548, Fax : +44 (0)116 2523555
>>>