[rhn-users] 2GB Filesize Limit

Todd Warner taw at redhat.com
Fri Sep 3 15:56:48 UTC 2004


On Fri, 3 Sep 2004, Michael Gargiullo wrote:

> On Fri, 2004-09-03 at 10:04, Charith Perera wrote:
> > Thanks for the responses.
> > 
> > We're setting up Sybase, which can use either raw devices or files. So 
> > if we use files, then the 2GB limit is a concern. As Todd Warner suggested, 
> > it's really outside the scope of this mailing list, so I don't want to bring 
> > that into the discussion here.
> > 
> > Thanks
> > 
> > Charith.
> <snip>
> 
> Use XFS. 
> 
> To quote their site " ...a theoretical 8388608TB file size. Large
> enough?"
> 
> You'll have to track down the packages yourself, but from the SGI
> website...

[taw at chimchim taw]$ ls -lh bigfile.txt
-rw-rw-r--    1 taw      taw          3.0G Sep  3 11:29 bigfile.txt

That's on a standard RHEL 3 AS box; I just dd'ed /dev/zero. I believe
ext3 supports up to 2^63, but I'm not sure off the top of my head.
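
If anyone wants to reproduce that without dd, something like the following
rough C sketch should do it (untested here; the filename and the ~3GB size
are just placeholders). The important part is building with large-file
support, e.g. gcc -D_FILE_OFFSET_BITS=64 -o bigfile bigfile.c on a glibc
2.2+ box:

#include <stdio.h>
#include <sys/types.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    /* "bigfile.txt" and the ~3GB offset below are just placeholders */
    int fd = open("bigfile.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    /* with _FILE_OFFSET_BITS=64, off_t is 64 bits, so a >2GB seek works */
    off_t target = (off_t)3 * 1024 * 1024 * 1024;
    if (lseek(fd, target, SEEK_SET) == (off_t)-1) { perror("lseek"); return 1; }

    /* one byte written past the 3GB mark -> a sparse file ls shows as ~3.0G */
    if (write(fd, "x", 1) != 1) { perror("write"); return 1; }

    close(fd);
    return 0;
}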

Have a nice weekend, y'all...
/me heads to the mountains.

-taw

> : Does XFS support large files (bigger than 2GB)?
> Yes, XFS supports files larger than 2GB. Large file support (LFS) is
> largely dependent on the C library of your system. Glibc 2.2 and
> higher has full LFS support. If your C library does not support it you will
> get errors that the value is too large for the defined data type.
> Userland software needs to be compiled against the LFS-compliant C library
> in order to work. You will be able to create 2GB+ files on non-LFS
> systems, but the tools will not be able to stat them.
> Distributions based on Glibc 2.2.x and higher will function normally.
> Note that some userspace programs, like tcsh, do not behave correctly even
> if they are compiled against glibc 2.2.x.
> You may need to contact your vendor/developer if this is the case.
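
To make the stat() point above concrete, here is a rough illustration in C
(the filename is just a placeholder). Built plain against glibc 2.2+, stat()
on a >2GB file fails with "Value too large for defined data type"; built
with gcc -D_FILE_OFFSET_BITS=64 the same call succeeds:

#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>

int main(void)
{
    struct stat st;

    /* "bigfile.txt" stands in for any file larger than 2GB */
    if (stat("bigfile.txt", &st) != 0) {
        /* without LFS this prints the "value too large" error noted above */
        perror("stat");
        return 1;
    }
    printf("size: %lld bytes\n", (long long)st.st_size);
    return 0;
}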
> 
> Here is a snippet of an email conversation with Steve Lord on the topic of
> the maximum file size of XFS under Linux.
> 
> I would challenge any filesystem running on Linux on ia32 and using
> the page cache to get past the practical limit of 16 Tbytes with buffered
> I/O. At that point you run out of space to address pages in the cache, since
> the core kernel code uses a 32-bit number as the index of a page in the
> cache.
> 
> As for XFS itself, this is a constant definition from the code:
> 
> #define XFS_MAX_FILE_OFFSET ((long long)((1ULL<<63)-1ULL))
> 
> So 2^63 bytes is theoretically possible.
> 
> All of this is ignoring the current limitation of 2 Tbytes of address
> space for block devices (including logical volumes). The only way to
> get a file bigger than this, of course, is to have large holes in it.
> And to get past 16 Tbytes you have to use direct I/O,
> which would mean a theoretical 8388608TB file size. Large enough?
> 
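(If I am doing the arithmetic right: the ia32 page cache uses a 32-bit page
index and 4K pages, so 2^32 pages * 4K = 2^44 bytes = 16 Tbytes, which is
where that buffered-I/O ceiling comes from; and XFS's 2^63 - 1 byte maximum
offset works out to 2^23 Tbytes = 8388608 Tbytes, the figure quoted above.)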
> 
> _______________________________________________
> rhn-users mailing list
> rhn-users at redhat.com
> https://www.redhat.com/mailman/listinfo/rhn-users
> 

-- 
____________
 /odd Warner                                    <taw@{redhat,pobox}.com>
                Head Geek, QA/Sust.Eng. - Red Hat Network
---------------------gpg info in the message headers--------------------
"When the going gets tough, you're halfway through a cliche" -Greg Dean(?)




