df, lvm and 6TB arrays oh my!

Mark Haney mhaney at ercbroadband.org
Mon Feb 13 22:36:18 UTC 2006


Robert Locke wrote:
> On Mon, 2006-02-13 at 16:27 -0500, Mark Haney wrote:
>   
>> Robert Locke wrote:
>>     
>>> On Mon, 2006-02-13 at 08:22 -0500, Mark Haney wrote:
>>>> Robert Locke wrote:
>>>>> OK, so lvdisplay showing the full amount is good, so the underlying
>>>>> virtual partition is 6.36TB....  Now the tune2fs I was suggesting was
>>>>> simply the -l option that would dump the ext2/3 superblock.  We should
>>>>> be able to see the "block count" and the "block size" which when
>>>>> multiplied should be 6.36TB.  Since you are at least on a 2.6 kernel, I
>>>>> don't think we have the 2TB filesystem limit, though I don't remember
>>>>> when that got bumped up to 32....
>>>>>
>>>>> --Rob
>>>>>
>>>> I hate to sound like a complete idiot, but I can't get tune2fs to work on the 
>>>> LV; I must be doing something wrong.  Can you use tune2fs on an LVM volume?
>>>>
>>> <big grin>
>>>
>>> Should just be "tune2fs -l /dev/vgname/lvname".  So try:
>>>
>>> tune2fs -l /dev/Volume02/Volume02lv
>>> tune2fs -l /dev/Volume03/Volume03lv
>>>
>>> That should generate about a page of each....
>>>
>>> --Rob
>>>
>> I just got time to actually figure that out.  I'm not a complete 
>> imbecile, I swear.  Here's the output of one 6.36TB array:
>>
>> Filesystem volume name:   <none>
>> Last mounted on:          <not available>
>> Filesystem UUID:          d775f424-2a12-44ac-9260-c00013f963ee
>> Filesystem magic number:  0xEF53
>> Filesystem revision #:    1 (dynamic)
>> Filesystem features:      has_journal filetype needs_recovery sparse_super large_file
>> Default mount options:    (none)
>> Filesystem state:         clean
>> Errors behavior:          Continue
>> Filesystem OS type:       Linux
>> Inode count:              316325888
>> Block count:              632648704
>> Reserved block count:     31632435
>> Free blocks:              622713507
>> Free inodes:              316325877
>> First block:              0
>> Block size:               4096
>> Fragment size:            4096
>> Blocks per group:         32768
>> Fragments per group:      32768
>> Inodes per group:         16384
>> Inode blocks per group:   512
>> Filesystem created:       Thu Jan  5 13:41:11 2006
>> Last mount time:          Mon Feb 13 14:06:06 2006
>> Last write time:          Mon Feb 13 14:06:06 2006
>> Mount count:              2
>> Maximum mount count:      31
>> Last checked:             Thu Jan  5 13:41:11 2006
>> Check interval:           15552000 (6 months)
>> Next check after:         Tue Jul  4 14:41:11 2006
>> Reserved blocks uid:      0 (user root)
>> Reserved blocks gid:      0 (group root)
>> First inode:              11
>> Inode size:               128
>> Journal inode:            8
>> Default directory hash:   tea
>> Directory Hash Seed:      62d0ad33-5300-481b-8620-bbdb6fd30a5b
>> Journal backup:           inode blocks
>>
>
> Well, now we get to the not-so-good news....
>
> Just doing the math....
>
> (Block count * Block size) / 1024^4
> (632648704 * 4096) / 1024^4 = 2.35 TB
>
> So, what you are seeing in df is accurate as far as the filesystem is
> concerned....
>
> So, my thoughts are that perhaps someone originally created the logical
> volumes at 2.4TB, created the filesystems, and then tried to extend the
> logical volumes with lvextend?  Of course, while lvextend will grow the
> underlying logical volume, it does not grow the filesystem.  That is done
> with the ext2online command if the filesystem is mounted, and with
> resize2fs if it is unmounted.
>
> Can you check which of those two utilities you have available in FC2,
> i.e. whether you have both resize2fs and ext2online?  Of course,
> resizing the filesystem is potentially dangerous and presumes good-quality
> backups; insert lots of disclaimers at this point about eating your data
> for lunch......
>
> --Rob
>
>
>   
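For the record, the same arithmetic can be pulled straight out of the tune2fs
output with a quick awk one-liner.  A rough sketch, using the volume path from
your earlier mail (substitute whichever of the two arrays the dump above came
from; the field positions assume the tune2fs -l format shown above):

tune2fs -l /dev/Volume02/Volume02lv | awk '
    /^Block count:/ { blocks = $3 }   # 632648704 in the dump above
    /^Block size:/  { bsize  = $3 }   # 4096 in the dump above
    END { printf "%.2f TB\n", blocks * bsize / 1024^4 }'

With the block count and block size above, that lines up with the 2.35TB figure,
so the filesystem really is only that big.
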
I created the volumes myself, all at once, and verified the size of the 
volume at each step.  I had thought it possible that ext3 only formatted 
the 2.4TB extent and that resizing it might work.  Fortunately I have one 
of these volumes not in use, so I can test that and see how it goes.  
Thanks for all your help on this.
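
In case it helps anyone else who hits this, here is roughly what I plan to try
on the spare volume.  Treat it as a sketch rather than a recipe: I'm assuming
the spare one is Volume03 (the device path is the one from earlier in the
thread), the mount point below is made up, and all of Rob's eat-your-data
caveats apply:

# see which of the two resize tools this FC2 box actually has
which resize2fs ext2online

# offline route: unmount, force a check, then grow the fs to fill the LV
umount /mnt/volume03                  # mount point is hypothetical
e2fsck -f /dev/Volume03/Volume03lv
resize2fs /dev/Volume03/Volume03lv    # with no size given it grows to the device size

# online route, if ext2online is present and the fs has to stay mounted:
# ext2online /dev/Volume03/Volume03lv

If the spare volume comes out at the full 6.36TB afterward, that would confirm
the filesystem just needs growing.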


-- 
Interdum feror cupidine partium magnarum Europae vincendarum

Mark Haney
Sr. Systems Administrator
ERC Broadband
(828) 350-2415



