[rhelv6-list] Your suggestion for 'big' filesystem management Best Practice?

Greg Swift greg at nytefyre.net
Fri Oct 28 19:29:58 UTC 2011


On Fri, Oct 28, 2011 at 12:32, Peter Ruprecht
<peter.ruprecht at jila.colorado.edu> wrote:

> Greg Swift wrote:
>
>> On Fri, Oct 28, 2011 at 11:30, Masopust, Christian
>> <christian.masopust at siemens.com> wrote:
>>
>>
>>     > Götz Reinicke wrote:
>>     > > Hi,
>>     > >
>>     > > we plan to set up a big file storage for media files like
>>     > > uncompressed movies from student film projects, dvd images, etc.
>>     > >
>>     > > It should be some sort of archive and will not be accessed
>>     > > by more than maybe 5 people at the same time.
>>     > >
>>     > > The iSCSI RAID we have is about 26TB net, and I'm again
>>     > > faced with the question: how many partitions, which filesystem,
>>     > > which mount options, etc.
>>     > >
>>     > > For the user the simplest thing would be to have one big
>>     > > filesystem that she/he could fill with all the data, without
>>     > > having to search across multiple volumes.
>>     > >
>>     > > On the other hand, if one big filesystem crashes or has to
>>     > > be checked, a lot of data could be lost, or the check will
>>     > > take hours ...
>>     > >
>>     > >
>>     > > Any suggestions, pro or con, are welcome! :-)
>>     > >
>>     > > My favourite for now is 3 to 4 filesystems with the default ext4
>>     > > settings. (Red Hat EL 5.7, maybe soon 6.1.)
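>>     > >
>>     > > I'd probably carve the RAID up with LVM, roughly like this
>>     > > (just a sketch; /dev/sdb and the sizes are placeholders):
>>     > >
>>     > >   pvcreate /dev/sdb
>>     > >   vgcreate media /dev/sdb
>>     > >   lvcreate -L 6T -n archive1 media
>>     > >   mkfs.ext4 /dev/media/archive1
>>     > >   # ... repeat lvcreate/mkfs.ext4 for archive2 through archive4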
>>     > >
>>     > > Thanks and best regards. Götz
>>     >
>>     > If you decide to go with RHEL6, xfs is a good bet for making one
>>     > big filesystem.  We have a setup similar to what you're describing
>>     > and have had very solid stability and performance using xfs
>>     > (default filesystem and mount settings).  As far as I can see
>>     > (and knocking on wood), xfs is now a lot less flaky than it seemed
>>     > to be in the past.
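>>     >
>>     > For the one-big-filesystem route the setup is about as simple as it
>>     > gets; a minimal sketch, assuming the iSCSI LUN shows up as /dev/sdb:
>>     >
>>     >   mkfs.xfs /dev/sdb
>>     >   mkdir -p /srv/archive
>>     >   mount /dev/sdb /srv/archive
>>     >   # /etc/fstab line to make it permanent:
>>     >   # /dev/sdb  /srv/archive  xfs  defaults  0 0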
>>     >
>>     >   -Peter
>>
>>    I can confirm what Peter mentioned. I've been using xfs on my
>>    CentOS 5 system with two 16TB arrays (each holding a single
>>    filesystem) for several years with absolutely no issues!
>>
>>
>> So in his initial request he mentioned concern about fsck times.  How has
>> this been for you guys (Christian and Peter)?
>>
>> FWIW, I'm actually mixing xfs (30+TB total filesystem) with gluster in
>> a different use case...  I just haven't had to fsck a system yet, so I
>> am very curious how that is performing for others.
>>
>> -greg
>>
>
> In testing, I purposely crashed the system while under light-to-moderate
> I/O load, and the xfs fs didn't need any manual recovery when it was
> remounted (xfs replays its journal automatically at mount time).  I
> don't have any real-world experience with how long it would take to
> xfs_check and xfs_repair a fs of that size that had gotten corrupted,
> sorry.  Though I will not be disappointed if I manage to avoid gaining
> that experience!
>
>
That's good to hear.  Now that I think about it, we've actually survived
several system crashes (a firmware bug on the CPU hardware) and I don't
think any of them have had to fsck.  Hmm... maybe we'll force a check one
of these days to experiment.
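
If we do, the first pass would be read-only, something like this (the
device path is a placeholder for the actual volume):

  umount /archive
  xfs_repair -n /dev/mapper/vg0-archive   # -n = no-modify, just report
  mount /archive

xfs_check is also available on these releases, but xfs_repair -n covers
the same ground and copes better with large filesystems.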

