Ext4 and large (>8TB) files

Arun Nair arun at bvinetworks.com
Fri Mar 26 20:50:55 UTC 2010


Eric,

Thanks for the quick reply... see my responses inline...

On Fri, Mar 26, 2010 at 12:16 PM, Eric Sandeen <sandeen at redhat.com> wrote:

> On 03/26/2010 01:52 PM, Arun Nair wrote:
> > Hi -
> >
> > (I apologize for the ext4 question in an ext3 mailer, but I couldn't
> > find a user list for ext4.)
>
> linux-ext4 at vger.kernel.org :)  but that's ok.
>

Saw that but thought it was a dev-only list, sorry. Next time :)


>
> > Per my understanding, ext4 can support file sizes up to 16 TiB if you use
> > 4k blocks. I have a logical volume which uses ext4 with a 4k block size
> > but I am unable to create files that are 8 TiB (8796093022208 bytes) or
> > larger.
> >
> > [root@camanoe] ls -l
> > total 8589935388
> > -rw-rw---- 1 root root 8796093022207 2010-03-26 11:43 bigfile
> >
> > [root@camanoe] echo x >> bigfile
> > -bash: echo: write error: File too large
>
> Perhaps echo isn't using largefile semantics?  Is this the first
> test you did, or is echo the simple testcase, and something else
> failed?
>

It's the simple test case. We found the problem when MySQL failed to expand
its ibdata file beyond 8 TB. I also tried dd with conv=notrunc, as you
suggested; same error:

[root@camanoe]# dd oflag=append conv=notrunc if=/dev/zero of=bigfile bs=1 count=1
dd: writing `bigfile': File too large
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.000234712 s, 0.0 kB/s


> It works for me on rawhide x86_64:
>
> create a file with blocks past 8T:
> # xfs_io -F -f -c "pwrite 8T 1M" bigfile
> wrote 1048576/1048576 bytes at offset 8796093022208
> 1 MiB, 256 ops; 0.0000 sec (206.313 MiB/sec and 52816.1750 ops/sec)
>
> echo more into it:
> # echo x >> bigfile
>
> it really is that big:
> # ls -lh bigfile
> -rw-------. 1 root root 8.1T Mar 26 14:13 bigfile
>
> I don't have an x86 box to test quickly; try something besides echo,
> is what I'd suggest - xfs_io would work, or probably dd (with
> conv=notrunc if you want to append)
>

dd fails as mentioned above, and xfs_io errors out too:
[root@camanoe]# xfs_io -F -f -c "pwrite 8T 1M" bigfile2
pwrite64: File too large
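To pin down the exact boundary, here's a quick sparse-file probe I can run (the file name "probe" is just an example; the file takes almost no real disk space). On a working 64-bit setup the write below the 8 TiB mark succeeds, while one byte further is exactly where my box reports "File too large":

```shell
limit=$((8 * 2**40))    # 8 TiB = 8796093022208 bytes
# Write the last byte that still fits below the 8 TiB mark...
dd if=/dev/zero of=probe bs=1 count=1 seek=$((limit - 1)) conv=notrunc
stat -c %s probe        # size is now exactly 8796093022208
# ...while seek=$limit (one byte further) is what fails here.
```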


> -Eric
>
>
BTW, my system is NOT 64-bit, but my guess is that this shouldn't affect the
max file size?
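The numbers do line up suspiciously well with a 32-bit limit, though. If the page cache addresses a file by a signed 32-bit page index (an assumption on my part about the kernel's per-arch file-size cap on 32-bit builds), 4 KiB pages would put the ceiling exactly where my writes start failing:

```shell
# Assumption: on a 32-bit kernel, the highest reachable offset is
# 2^31 pages * 4096 bytes/page, with the last writable byte one below that.
echo $((2**31 * 4096))       # 8796093022208 -- the offset where writes fail
echo $((2**31 * 4096 - 1))   # 8796093022207 -- the size ls reports for bigfile
```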


More information about the Ext3-users mailing list