
Re: Ext4 and large (>8TB) files


Thanks for the quick reply... see my responses inline...

On Fri, Mar 26, 2010 at 12:16 PM, Eric Sandeen <sandeen redhat com> wrote:
On 03/26/2010 01:52 PM, Arun Nair wrote:
> Hi -
> (I apologize for the ext4 question in an ext3 mailer, but I couldn't
> find a user list for ext4.)

linux-ext4 vger kernel org :)  but that's ok.

Saw that but thought it was a dev-only list, sorry. Next time :)

> Per my understanding, ext4 can support file sizes up to 16 TiB if you use
> 4k blocks. I have a logical volume which uses ext4 with a 4k block size
> but I am unable to create files that are 8TiB (8796093022208 bytes) or
> larger.
> [root camanoe] ls -l
> total 8589935388
> -rw-rw---- 1 root root 8796093022207 2010-03-26 11:43 bigfile
> [root camanoe] echo x >> bigfile
> -bash: echo: write error: File too large

Perhaps echo isn't using largefile semantics?  Is this the first
test you did, or is echo the simple testcase after something else
failed?
It's the simple test case. We found the problem when MySQL failed to expand its ibdata file beyond 8 TB. I then tried dd as well, with conv=notrunc as you mentioned, and got the same error:

[root camanoe]# dd oflag=append conv=notrunc if=/dev/zero of=bigfile bs=1 count=1
dd: writing `bigfile': File too large
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.000234712 s, 0.0 kB/s
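Since any one-byte append will hit the same ceiling, one way to separate a largefile-semantics problem in the tool from a real filesystem limit is to probe explicit offsets with dd's seek=. A sketch (the path /mnt/test/probe is hypothetical; substitute a file on the affected ext4 volume):

```shell
# Compute the probe offsets: 8 TiB exactly, and 1 MiB below it.
limit=$((8 * 1024 ** 4))             # 8 TiB = 8796093022208 bytes
below=$(( limit - 1024 * 1024 ))     # 1 MiB below the 8 TiB mark
echo "limit=${limit} below=${below}"

# A 1 MiB write ending just below the ceiling should succeed; the same
# write ending just past it should fail with "File too large" if the
# limit is real rather than a tool artifact (seek= counts bs-sized blocks):
# dd if=/dev/zero of=/mnt/test/probe bs=1M count=1 seek=$(( below / (1024*1024) )) conv=notrunc
# dd if=/dev/zero of=/mnt/test/probe bs=1M count=1 seek=$(( limit / (1024*1024) )) conv=notrunc
```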

It works for me on rawhide x86_64:

create a file with blocks past 8T:
# xfs_io -F -f -c "pwrite 8T 1M" bigfile
wrote 1048576/1048576 bytes at offset 8796093022208
1 MiB, 256 ops; 0.0000 sec (206.313 MiB/sec and 52816.1750 ops/sec)

echo more into it:
# echo x >> bigfile

it really is that big:
# ls -lh bigfile
-rw-------. 1 root root 8.1T Mar 26 14:13 bigfile

I don't have an x86 box to test quickly; try something besides echo,
is what I'd suggest - xfs_io would work, or probably dd (with
conv=notrunc if you want to append)

dd fails as mentioned above. xfs_io errors too:
[root camanoe]# xfs_io -F -f -c "pwrite 8T 1M" bigfile2
pwrite64: File too large


BTW, my system is NOT 64-bit, but my guess is that this doesn't affect the max file size?
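A back-of-the-envelope check suggests 32-bitness may be exactly the issue. Assuming the kernel caps file size at the 32-bit page-cache limit, commonly defined as (PAGE_SIZE << (BITS_PER_LONG-1)) - 1 in include/linux/fs.h of that era, the ceiling with 4k pages lands on the same byte count that ls reported for bigfile above:

```shell
# Assumed 32-bit page-cache file-size ceiling: (PAGE_SIZE << 31) - 1.
page_size=4096
bits_per_long=32
max_lfs=$(( (page_size << (bits_per_long - 1)) - 1 ))
echo "$max_lfs"    # -> 8796093022207, matching bigfile's size exactly
```

That bigfile stopped growing at precisely this byte count is consistent with a 32-bit kernel limit rather than an ext4 one.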
