How to generate a large file allocating space

Andreas Dilger adilger.kernel at dilger.ca
Tue Nov 2 03:21:10 UTC 2010


On 2010-11-01, at 16:58, Alex Bligh wrote:
> --On 1 Nov 2010 15:45:12 Andreas Dilger <adilger.kernel at dilger.ca> wrote:
>> What is it you really want to do in the end?  Shared concurrent writers
>> to the same file?  High-bandwidth IO to the underlying disk?
> 
> High-bandwidth I/O to the underlying disk is part of it - only one
> reader/writer per file. We're really using ext4 just for its extents
> capability, i.e. allocating space, plus the convenience of directory
> lookup to find the set of extents.
> 
> It's easier to do this than to write this bit from scratch, and the
> files are pretty static in size (i.e. they only grow, and grow
> infrequently by large amounts). The files on ext4 correspond to large
> chunks of disks we are combining together using a device-mapper
> type thing (but different), and on top of that live arbitrary real
> filing systems. Because our device-mapper type thing already
> understands what blocks have been written to, we already have a layer
> that prevents the data that was on the disk before the file's
> creation from being exposed. That's why I don't need ext4 to zero
> them out. I suppose in that sense it is like the swap file case.
> 
> Oh, and because these files are allocated infrequently, I am not
> /that/ concerned about performance (famous last words). The
> performance-critical stuff is done via direct writes to the SAN and
> doesn't even pass through ext4 (or indeed through any single host).
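
For reference, the "allocating space" part above corresponds to the
fallocate(2) system call, which on ext4 allocates unwritten extents
without writing any data. A minimal sketch, assuming Linux with
glibc's fallocate() wrapper (the path and file size below are
illustrative, not taken from the thread):

#define _GNU_SOURCE             /* for fallocate() */
#define _FILE_OFFSET_BITS 64    /* 64-bit off_t on 32-bit systems */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        const char *path = "/mnt/ext4/chunk0";  /* illustrative path */
        off_t size = (off_t)16 << 30;           /* 16 GiB, illustrative */

        int fd = open(path, O_CREAT | O_WRONLY, 0644);
        if (fd < 0) {
                perror("open");
                return 1;
        }

        /* mode 0: allocate blocks for [0, size) without writing
         * them; ext4 marks the extents unwritten, so reads return
         * zeros rather than whatever was on the disk before. */
        if (fallocate(fd, 0, 0, size) != 0) {
                perror("fallocate");
                close(fd);
                return 1;
        }

        close(fd);
        return 0;
}

Note that the allocation is cheap precisely because nothing is zeroed
on disk: the unwritten-extent flag is what makes reads return zeros.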

Actually, I think Ceph has a network block-device feature (recently submitted to the mainline kernel), and Lustre has a prototype block-device feature as well.

Cheers, Andreas