Ext4 and large (>8TB) files

Eric Sandeen sandeen at redhat.com
Fri Mar 26 19:16:24 UTC 2010


On 03/26/2010 01:52 PM, Arun Nair wrote:
> Hi -
> 
> (I apologize for the ext4 question in an ext3 mailer, but I couldn't
> find a user list for ext4.)

linux-ext4 at vger.kernel.org :)  but that's ok.

> Per my understanding, ext4 can support file sizes up to 16 TiB if you use
> 4k blocks. I have a logical volume which uses ext4 with a 4k block size,
> but I am unable to create files that are 8 TiB (8796093022208 bytes) or
> larger.
> 
> [root at camanoe] ls -l
> total 8589935388
> -rw-rw---- 1 root root 8796093022207 2010-03-26 11:43 bigfile
> 
> [root at camanoe] echo x >> bigfile
> -bash: echo: write error: File too large

Perhaps echo isn't using largefile semantics?  Is this the first
test you did, or is echo just the simple test case you tried after
something else failed?
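
If you want to check that, something like this (untested here, and
assuming strace is available) should show the flags the shell passes
to open() for the >> redirection:

# strace -f -e trace=open bash -c 'echo x >> bigfile' 2>&1 | grep bigfile

On a 32-bit userspace, if O_LARGEFILE isn't in those flags the process
is limited to a 31-bit file offset, so operations on a file this big are
going to fail one way or another.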

It works for me on rawhide x86_64:

create a file with blocks past 8T:
# xfs_io -F -f -c "pwrite 8T 1M" bigfile
wrote 1048576/1048576 bytes at offset 8796093022208
1 MiB, 256 ops; 0.0000 sec (206.313 MiB/sec and 52816.1750 ops/sec)

echo more into it:
# echo x >> bigfile

it really is that big:
# ls -lh bigfile
-rw-------. 1 root root 8.1T Mar 26 14:13 bigfile
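
(The pwrite only wrote 1M at an 8T offset, so the file is sparse; if you
want to confirm the allocated size is still tiny, something like this
should do it:)

# du -h bigfile
# stat -c "size=%s  blocks=%b" bigfile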

I don't have a 32-bit x86 box handy to test on quickly; what I'd suggest
is trying something besides echo - xfs_io would work, or probably dd
(with conv=notrunc if you want to append without truncating the file).
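
Untested off the top of my head, but with GNU dd the two steps would look
roughly like this - seek= creates the sparse file with blocks past 8T, and
conv=notrunc plus oflag=append tacks one more byte (a NUL from /dev/zero
rather than an 'x') onto the end without truncating it:

# dd if=/dev/zero of=bigfile bs=1M count=1 seek=8388608
# dd if=/dev/zero of=bigfile bs=1 count=1 conv=notrunc oflag=append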

-Eric



