[rhn-users] 2GB Filesize Limit

Charith Perera cperera at intertechmedia.com
Fri Sep 3 15:02:28 UTC 2004


Good info. Thanks.

Charith




On Friday 03 September 2004 10:16 am, Michael Gargiullo wrote:
> On Fri, 2004-09-03 at 10:04, Charith Perera wrote:
> > Thanks for the responses.
> >
> > We're setting up Sybase, which can use either raw devices or plain
> > files. If we use files, the 2GB limit is a concern. As Todd Warner
> > suggested, though, that's really outside the scope of this mailing
> > list, so I won't pursue it further here.
> >
> > Thanks
> >
> > Charith.
>
> <snip>
>
> Use XFS.
>
> To quote their site " ...a theoretical 8388608TB file size. Large
> enough?"
>
> You'll have to track down the packages yourself but, from the SGI
> website...
>
> : Does XFS support large files (bigger than 2GB)?
>
> Yes, XFS supports files larger than 2GB. Large file support (LFS)
> depends largely on the C library on your system. Glibc versions 2.2
> and higher have full LFS support. If your C library does not support
> it, you will get "Value too large for defined data type" errors.
> Userland software needs to be compiled against an LFS-compliant C
> library in order to work. You will be able to create 2GB+ files on
> non-LFS systems, but the tools will not be able to stat them.
> Distributions based on Glibc 2.2.x and higher will function normally.
> Note that some userspace programs, such as tcsh, do not behave
> correctly even when compiled against glibc 2.2.x; you may need to
> contact your vendor/developer if this is the case.
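>
> For illustration, here is a minimal C sketch of what "compiled against
> an LFS-compliant C library" means in practice (this is not from the
> original thread, and "bigfile" is a placeholder for any 2GB+ file).
> Building with -D_FILE_OFFSET_BITS=64 makes glibc use a 64-bit off_t;
> without it, on a 32-bit system, open() or fstat() on a 2GB+ file fails
> with EOVERFLOW ("Value too large for defined data type"):
>
> /* lfs_test.c -- build: gcc -D_FILE_OFFSET_BITS=64 lfs_test.c -o lfs_test */
> #include <stdio.h>
> #include <fcntl.h>
> #include <unistd.h>
> #include <sys/stat.h>
>
> int main(void)
> {
>     int fd = open("bigfile", O_RDONLY); /* fails with EOVERFLOW if non-LFS */
>     if (fd < 0) { perror("open"); return 1; }
>
>     struct stat sb;                     /* st_size is 64-bit with LFS */
>     if (fstat(fd, &sb) < 0) { perror("fstat"); return 1; }
>     printf("size: %lld bytes\n", (long long)sb.st_size);
>
>     /* Seeking to the 3GB mark is only representable with a 64-bit off_t. */
>     if (lseek(fd, 3LL * 1024 * 1024 * 1024, SEEK_SET) == (off_t)-1)
>         perror("lseek");
>
>     close(fd);
>     return 0;
> }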
>
> Here is a snippet of email conversation with Steve Lord on the topic of
> the maximum filesize of XFS under linux.
>
> I would challenge any filesystem running on Linux on ia32 and using
> the page cache to get past the practical limit of 16 Tbytes with
> buffered I/O. At that point you run out of space to address pages in
> the cache, since the core kernel code uses a 32-bit number as the
> index of a page in the cache.
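>
> (The 16 Tbyte figure follows directly: a 32-bit index addresses 2^32
> pages, and at the ia32 default of 4096 (2^12) bytes per page that is
> 2^32 * 2^12 = 2^44 bytes = 16 Tbytes.)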
>
> As for XFS itself, this is a constant definition from the code:
>
> #define XFS_MAX_FILE_OFFSET ((long long)((1ULL<<63)-1ULL))
>
> So 2^63 bytes is theoretically possible.
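>
> (2^63 bytes works out to 2^63 / 2^40 = 2^23 Tbytes, i.e. the 8388608TB
> figure quoted at the top of this message.)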
>
> All of this ignores the current limitation of 2 Tbytes of address
> space for block devices (including logical volumes). The only way to
> get a file bigger than this, of course, is to have large holes in it.
> And to get past 16 Tbytes you have to use direct I/O, which would
> mean a theoretical 8388608TB file size. Large enough?
>
>
> _______________________________________________
> rhn-users mailing list
> rhn-users at redhat.com
> https://www.redhat.com/mailman/listinfo/rhn-users




