From articpenguin3800 at gmail.com Tue Apr 1 22:46:47 2008
From: articpenguin3800 at gmail.com (John Nelson)
Date: Tue, 01 Apr 2008 18:46:47 -0400
Subject: allocation
Message-ID: <47F2BB57.5090701@gmail.com>

does ext3 allocate space for files anywhere on the disk where there is
free space or does it try to keep them all in one area like how ntfs
or fat do?

From htmldeveloper at gmail.com Wed Apr 2 05:18:53 2008
From: htmldeveloper at gmail.com (Peter Teoh)
Date: Wed, 02 Apr 2008 13:18:53 +0800
Subject: allocation
In-Reply-To: <47F2BB57.5090701@gmail.com>
References: <47F2BB57.5090701@gmail.com>
Message-ID: <47F3173D.40807@gmail.com>

John Nelson wrote:
> does ext3 allocate space for files anywhere on the disk where there is
> free space or does it try to keep them all in one area like how ntfs
> or fat do?
>
> _______________________________________________
> Ext3-users mailing list
> Ext3-users at redhat.com
> https://www.redhat.com/mailman/listinfo/ext3-users
>
>

In fs/ext3/ialloc.c:

/*
 * There are two policies for allocating an inode. If the new inode is
 * a directory, then a forward search is made for a block group with both
 * free space and a low directory-to-inode ratio; if that fails, then of
 * the groups with above-average free space, that group with the fewest
 * directories already is chosen.
 *
 * For other inodes, search forward from the parent directory's block
 * group to find a free inode.
 */
static int find_group_dir(struct super_block *sb, struct inode *parent)
{
	int ngroups = EXT3_SB(sb)->s_groups_count;
	unsigned int freei, avefreei;
	struct ext3_group_desc *desc, *best_desc = NULL;
	int group, best_group = -1;

And this:

/*
 * There are two policies for allocating an inode. If the new inode is
 * a directory, then a forward search is made for a block group with both
 * free space and a low directory-to-inode ratio; if that fails, then of
 * the groups with above-average free space, that group with the fewest
 * directories already is chosen.
 *
 * For other inodes, search forward from the parent directory's block
 * group to find a free inode.
 */
struct inode *ext3_new_inode(handle_t *handle, struct inode * dir, int mode)
{
	struct super_block *sb;
	struct buffer_head *bitmap_bh = NULL;
	struct buffer_head *bh2;
	int group;
	unsigned long ino = 0;

Possibly you can look further from here; it is new to me too.

From ling at fnal.gov Thu Apr 3 16:21:19 2008
From: ling at fnal.gov (Ling C. Ho)
Date: Thu, 03 Apr 2008 11:21:19 -0500
Subject: Shrink ext3 filesystem , running out of inode questions
Message-ID: <47F503FF.9030609@fnal.gov>

Hi,

I have an ext3 file system created with the -T largefile4 option. Now it
is running out of inodes but it's only about 10% full.

- Is there a way now to increase the number of inodes without making a
new file system?
- If not, I am thinking about shrinking the file system, and then using
the freed-up space to create a new file system with more inodes, and
moving the data over. Since I am already running out of inodes, would I
still be able to shrink the file system?

Thanks,
...
ling

From ross at biostat.ucsf.edu Fri Apr 4 16:24:27 2008
From: ross at biostat.ucsf.edu (Ross Boylan)
Date: Fri, 04 Apr 2008 09:24:27 -0700
Subject: with dir_index ls is slower than without?
In-Reply-To: <20080331111807.6A293D148D@smtp.l00-bugdead-prods.de> References: <20080331111807.6A293D148D@smtp.l00-bugdead-prods.de> Message-ID: <1207326267.15549.5.camel@iron.psg.net> On Mon, 2008-03-31 at 13:18 +0200, Sebastian Reitenbach wrote: > Hi Nicolas, > > Nicolas KOWALSKI wrote: > > "Sebastian Reitenbach" writes: > > > > > installhost2:~ # time ls -la /mnt/index/ | wc -l > > > 500005 > > > > > > real 2m41.015s > > > user 0m4.568s > > > sys 0m6.520s > > > > > > > > > installhost2:~ # time ls -la /mnt/noindex/ | wc -l > > > 500005 > > > > > > real 0m10.792s > > > user 0m3.172s > > > sys 0m6.000s > > > > > > I expected the dir_index should speedup this a little bit? > > > I assume I'm still missing sth? > > > > I think the point of dir_index is "only" to quickly find in a large > > directory a file when you _already_ have its name. > > > > The performance of listing is not its purpose, and as you noted it, > > even makes performance worse. > > ah, that would explain what I've seen here. > > after reading your answer, I found this older mail in the archives: > http://osdir.com/ml/file-systems.ext3.user/2004-09/msg00029.html See also the https://www.redhat.com/archives/ext3-users/2007-October/msg00011.html thread I started about slow directory traversal. That includes reference to a library one can load to speed things up sometimes; I was never clear on exactly how to build and use it (I would need to get a daemon to use the library) and my only test failed. I later learned that tar, my test program, doesn't use the right system calls to benefit. > > So everything seems to depend on how the application is using the > filesystem. > Picking a single given file might be faster than with a plain ext3, but > scanning and opening all files in a directory might become slower. I wanted > to use the dir_index for some partitions, like for cyrus imap server, and Careful: it was problems backing up a cyrus imap spool that prompted my question. I just ran a cyrus backup and it took 35 hours. Incremental backups take 3. > for some other applications. I think I have to benchmark the applications, > to see whether they get a speed gain of the dir_index or not. > > kind regards > Sebastian > > > > _______________________________________________ > Ext3-users mailing list > Ext3-users at redhat.com > https://www.redhat.com/mailman/listinfo/ext3-users -- Ross Boylan wk: (415) 514-8146 185 Berry St #5700 ross at biostat.ucsf.edu Dept of Epidemiology and Biostatistics fax: (415) 514-8150 University of California, San Francisco San Francisco, CA 94107-1739 hm: (415) 550-1062 From carlo at alinoe.com Mon Apr 7 19:54:46 2008 From: carlo at alinoe.com (Carlo Wood) Date: Mon, 7 Apr 2008 21:54:46 +0200 Subject: [ext3grep] Re: Error compiling on Cent OS 4 In-Reply-To: <5675d6a00804070811x3ddb4c89y91addae5539541c@mail.gmail.com> References: <927bad90-e044-46ea-9357-4d096fc37d00@h1g2000prh.googlegroups.com> <20080407144824.GC585@alinoe.com> <5675d6a00804070811x3ddb4c89y91addae5539541c@mail.gmail.com> Message-ID: <20080407195446.GF585@alinoe.com> On Mon, Apr 07, 2008 at 12:11:18PM -0300, Ranieri Oliveira wrote: > /usr/include/ext2fs/bitops.h:440: error: invalid conversion from `unsigned > char*' to `char*' What is on line 440? Please keep ext3-users at redhat.com in the CC. 
-- 
Carlo Wood

From carlo at alinoe.com Mon Apr 7 23:00:36 2008
From: carlo at alinoe.com (Carlo Wood)
Date: Tue, 8 Apr 2008 01:00:36 +0200
Subject: [ext3grep] Re: Error compiling on Cent OS 4
In-Reply-To: <5675d6a00804071329p1ae4136dnc29d629f36cd740a@mail.gmail.com>
References: <927bad90-e044-46ea-9357-4d096fc37d00@h1g2000prh.googlegroups.com>
	<20080407144824.GC585@alinoe.com>
	<5675d6a00804070811x3ddb4c89y91addae5539541c@mail.gmail.com>
	<20080407195446.GF585@alinoe.com>
	<5675d6a00804071329p1ae4136dnc29d629f36cd740a@mail.gmail.com>
Message-ID: <20080407230036.GA24525@alinoe.com>

On Mon, Apr 07, 2008 at 05:29:46PM -0300, Ranieri Oliveira wrote:
> 437 _INLINE_ int ext2fs_find_first_bit_set(void * addr, unsigned size)
> 439 {
> 440        char *cp = (unsigned char *) addr;
> 441        int res = 0, d0;
> 442
> 443        if (!size)
> 444                return 0;
> 445
> 446        while ((size > res) && (*cp == 0)) {
> 447                cp++;
> 448                res += 8;
> 449        }
> 450        d0 = ffs(*cp);
> 451        if (d0 == 0)
> 452                return size;
> 453
> 454        return res + d0 - 1;
> 455 }

That is an error in e2fsprogs. You can work around the problem by changing

	char *cp = (unsigned char *) addr;

into

	char *cp = (char *) addr;

I think it is better to upgrade your e2fsprogs devel package. The
current version has this right.

-- 
Carlo Wood

From ranieri85 at gmail.com Mon Apr 7 20:29:46 2008
From: ranieri85 at gmail.com (Ranieri Oliveira)
Date: Mon, 7 Apr 2008 17:29:46 -0300
Subject: [ext3grep] Re: Error compiling on Cent OS 4
In-Reply-To: <20080407195446.GF585@alinoe.com>
References: <927bad90-e044-46ea-9357-4d096fc37d00@h1g2000prh.googlegroups.com>
	<20080407144824.GC585@alinoe.com>
	<5675d6a00804070811x3ddb4c89y91addae5539541c@mail.gmail.com>
	<20080407195446.GF585@alinoe.com>
Message-ID: <5675d6a00804071329p1ae4136dnc29d629f36cd740a@mail.gmail.com>

437 _INLINE_ int ext2fs_find_first_bit_set(void * addr, unsigned size)
439 {
440        char *cp = (unsigned char *) addr;
441        int res = 0, d0;
442
443        if (!size)
444                return 0;
445
446        while ((size > res) && (*cp == 0)) {
447                cp++;
448                res += 8;
449        }
450        d0 = ffs(*cp);
451        if (d0 == 0)
452                return size;
453
454        return res + d0 - 1;
455 }

On Mon, Apr 7, 2008 at 4:54 PM, Carlo Wood wrote:

>
> On Mon, Apr 07, 2008 at 12:11:18PM -0300, Ranieri Oliveira wrote:
> > /usr/include/ext2fs/bitops.h:440: error: invalid conversion from
> `unsigned
> > char*' to `char*'
>
> What is on line 440?
>
> Please keep ext3-users at redhat.com in the CC.
>
> --
> Carlo Wood
>
> --~--~---------~--~----~------------~-------~--~----~
> To post to this group, send email to ext3grep at googlegroups.com
> To unsubscribe from this group, send email to
> ext3grep-unsubscribe at googlegroups.com
> For more options, visit this group at
> http://groups.google.com/group/ext3grep?hl=en
> -~----------~----~----~----~------~----~------~--~---
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jhahn at rbmtechnologies.com Tue Apr 8 21:15:58 2008
From: jhahn at rbmtechnologies.com (Justin Hahn)
Date: Tue, 8 Apr 2008 17:15:58 -0400
Subject: Extremely long FSCK. (>24 hours)
Message-ID: 

Hello all,

I recently encountered a problem that I thought I should bring to the
ext3 devs. I've seen some evidence of similar issues in the past, but it
wasn't clear that anyone had experienced it at quite this scale.

The short summary is that I let 'e2fsck -C 0 -y -f' run for more than 24
hours on a 4.25Tb filesystem before having to kill it. It had been stuck
at "70.1%" in Pass 2 (checking directory structure) for about 10 hours.
e2fsck was using about 4.4Gb of RAM and was maxing out 1 CPU core (out of 8). This filesystem is used for disk-to-disk backups with dirvish[1] The volume was 4.25Gb large, and about 90% full. I was doing an fsck prior to running resize2fs, as required by said tool. (I ended up switching to ext2online, which worked fine.) I suspect the large # of hard links and the large file system size are what did me in. Fortunately, my filesystem is clean for now. What I'm worried about is the day when it actually needs a proper fsck to correct problems. I have no idea how long the fsck would have taken had I not cancelled it. I fear it would have been more than 48hours. Any suggestions (including undocumented command line options) I can try to accelerate this in the future would be welcome. As this system is for backups and is idle for about 12-16 hours a day, I can un-mount the volume and perform some (non-destructive!!) tests if there is interest. Unfortunately, I cannot provide remote access to the system for security reasons as this is our backup archive. I'm using CentOS 4.5 as my distro. 'uname -a' reports: Linux backups-00.dc-00.rbm.local 2.6.9-55.0.12.ELsmp #1 SMP Fri Nov 2 12:38:56 EDT 2007 x86_64 x86_64 x86_64 GNU/Linux The underlying hardware is a Dell PE 2950, with a PERC 5i RAID controller and 6x 1Tb SATA drives and 8Gb of RAM. I/O performance has been fine for my purposes, but I have not benchmarked, tuned or tweaked it in any way. Thanks! --jeh [1] Dirvish is an rsync/hardlink based set of perl scripts -- see http://www.dirvish.org/ for more details. From articpenguin3800 at gmail.com Fri Apr 11 17:48:02 2008 From: articpenguin3800 at gmail.com (John Nelson) Date: Fri, 11 Apr 2008 13:48:02 -0400 Subject: copying Message-ID: <47FFA452.5090501@gmail.com> hi I got a very fragmented video file that is in about 3200 extents. I copied the file about 10 times to see if can get a lower amount of extents. Some times i got 30 and sometimes 1000. My drive is 90% free. Does ext3 have trouble finding free space for files that are being copied? The video file is about 600 Megabytes. From balu.manyam at gmail.com Sat Apr 12 01:42:08 2008 From: balu.manyam at gmail.com (Balu manyam) Date: Sat, 12 Apr 2008 07:12:08 +0530 Subject: Extremely long FSCK. (>24 hours) In-Reply-To: References: Message-ID: <995392220804111842p35804b49v5792fe17da15101c@mail.gmail.com> justin - you may wish to refer the email ...with sub:forced fsck (again?) in the archives .... HTH Manyam On Wed, Apr 9, 2008 at 2:45 AM, Justin Hahn wrote: > Hello all, > > I recently encountered a problem that I thought I should bring to the ext3 > devs. I've seen some evidence of similar issues in the past, but it wasn't > clear that anyone had experienced it at quite this scale. > > The short summary is that I let 'e2fsck -C 0 -y -f' run for more than 24 > hours on a 4.25Tb filesystem before having to kill it. It had been stuck at > "70.1%" in Pass 2 (checking directory structure) for about 10 hours. e2fsck > was using about 4.4Gb of RAM and was maxing out 1 CPU core (out of 8). > > This filesystem is used for disk-to-disk backups with dirvish[1] The > volume was 4.25Gb large, and about 90% full. I was doing an fsck prior to > running resize2fs, as required by said tool. (I ended up switching to > ext2online, which worked fine.) > > I suspect the large # of hard links and the large file system size are > what did me in. Fortunately, my filesystem is clean for now. 
What I'm > worried about is the day when it actually needs a proper fsck to correct > problems. I have no idea how long the fsck would have taken had I not > cancelled it. I fear it would have been more than 48hours. > > Any suggestions (including undocumented command line options) I can try to > accelerate this in the future would be welcome. As this system is for > backups and is idle for about 12-16 hours a day, I can un-mount the volume > and perform some (non-destructive!!) tests if there is interest. > Unfortunately, I cannot provide remote access to the system for security > reasons as this is our backup archive. > > I'm using CentOS 4.5 as my distro. > > 'uname -a' reports: > Linux backups-00.dc-00.rbm.local 2.6.9-55.0.12.ELsmp #1 SMP Fri Nov 2 > 12:38:56 EDT 2007 x86_64 x86_64 x86_64 GNU/Linux > > The underlying hardware is a Dell PE 2950, with a PERC 5i RAID controller > and 6x 1Tb SATA drives and 8Gb of RAM. I/O performance has been fine for my > purposes, but I have not benchmarked, tuned or tweaked it in any way. > > Thanks! > > --jeh > > [1] Dirvish is an rsync/hardlink based set of perl scripts -- see > http://www.dirvish.org/ for more details. > > _______________________________________________ > Ext3-users mailing list > Ext3-users at redhat.com > https://www.redhat.com/mailman/listinfo/ext3-users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aronmalache at 163.com Sat Apr 12 10:03:04 2008 From: aronmalache at 163.com (Aron) Date: Sat, 12 Apr 2008 18:03:04 +0800 (CST) Subject: What can I do to speed up my file system copy Message-ID: <6783351.376031207994584730.JavaMail.coremail@bj163app106.163.com> Just like the title,what can I do to speed up my file system copy.I heard about that using cache can speed up big file's copy,but I don't know how to do that.If possible,I prefer that when I copy some small files, the system don't use cache but write on the harddisk immediately,but when I copy a file that it is bigger than a specific size,the system use the cache to speed up the operate.I also heard that NFS server have some feature like this,but I am not sure,because I really know little about this. -------------- next part -------------- An HTML attachment was scrubbed... URL: From defconoii at gmail.com Sat Apr 12 11:09:17 2008 From: defconoii at gmail.com (defcon) Date: Sat, 12 Apr 2008 04:09:17 -0700 Subject: wipe freespace on ext3? Message-ID: Hey all, I have failed at wiping my freespace on my hard drive, what I have done is: [dd] if=/dev/urandom of=bigfile and I also have tried sfill from the secure delete package from thc. I have successfully recovered all deleted files after wiping the hard drive with foremost, since this is a journaling filesystem is there any way around this, is anyone experienced in this and have tried to wipe/recover files on ext3? Thanks defcon From sandeen at redhat.com Sat Apr 12 15:27:09 2008 From: sandeen at redhat.com (Eric Sandeen) Date: Sat, 12 Apr 2008 10:27:09 -0500 Subject: wipe freespace on ext3? In-Reply-To: References: Message-ID: <4800D4CD.3040403@redhat.com> defcon wrote: > Hey all, I have failed at wiping my freespace on my hard drive, what I > have done is: > [dd] if=/dev/urandom of=bigfile > and I also have tried sfill from the secure delete package from thc. 
> I have successfully recovered all deleted files after wiping the hard > drive with foremost, since this is a journaling filesystem is there > any way around this, is anyone experienced in this and have tried to > wipe/recover files on ext3? > Thanks > defcon If you think journaling is the issue, tune2fs -O ^has_journal, mount it as ext2, and run your wiper again. However, in almost all cases I would not expect that you'll be able to recover "all deleted files" from a properly wiped filesystem, even with a journal. (the exception might be something like only a few files deleted, and journaled data mode, and a very smart recovery tool, although I'd expect the wiper to wrap the log anyway, depending on how it runs). -Eric From forest at alittletooquiet.net Sat Apr 12 15:36:14 2008 From: forest at alittletooquiet.net (Forest Bond) Date: Sat, 12 Apr 2008 11:36:14 -0400 Subject: wipe freespace on ext3? In-Reply-To: References: Message-ID: <20080412153614.GC3954@storm.local.network> Hi, On Sat, Apr 12, 2008 at 04:09:17AM -0700, defcon wrote: > Hey all, I have failed at wiping my freespace on my hard drive, what I > have done is: > [dd] if=/dev/urandom of=bigfile > and I also have tried sfill from the secure delete package from thc. > I have successfully recovered all deleted files after wiping the hard > drive with foremost, since this is a journaling filesystem is there > any way around this, is anyone experienced in this and have tried to > wipe/recover files on ext3? If you wiped so little data that the kernel cached it all and you removed bigfile before the cache could be flushed, perhaps the disk never got touched. Maybe try `sync' after running dd. -Forest -- Forest Bond http://www.alittletooquiet.net http://www.pytagsfs.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 189 bytes Desc: Digital signature URL: From sandeen at redhat.com Wed Apr 16 14:47:33 2008 From: sandeen at redhat.com (Eric Sandeen) Date: Wed, 16 Apr 2008 09:47:33 -0500 Subject: Shrink ext3 filesystem , running out of inode questions In-Reply-To: <47F503FF.9030609@fnal.gov> References: <47F503FF.9030609@fnal.gov> Message-ID: <48061185.9030808@redhat.com> Ling C. Ho wrote: > Hi, > > I have an ext3 file system created with -T largefile4 option. Now it is > running out of inode but it's only about 10% full. > > - Is there a way now to increase the number of inode without making a > new file system? Growing the filesystem would add more inodes, but it's still not probably what you want. You'd still have the same inode::block ratio. > - If not, I am thinking about shrinking the file system, and then use > the free up space to create a new file system with more inodes, and move > the data over. Since I am already running out of inode, would I still be > able to shrink the file system? Shrinking should not require more inodes; that sounds like a decent plan. -Eric > Thanks, > ... > ling From ling at fnal.gov Wed Apr 16 14:57:03 2008 From: ling at fnal.gov (Ling C. Ho) Date: Wed, 16 Apr 2008 09:57:03 -0500 Subject: Shrink ext3 filesystem , running out of inode questions In-Reply-To: <48061185.9030808@redhat.com> References: <47F503FF.9030609@fnal.gov> <48061185.9030808@redhat.com> Message-ID: <480613BF.5040905@fnal.gov> Hi, I tried creating a new file system and shrinking it. The inode count reduced after the shrink. So, would shrinking the one filesystem I have with all inode already used create problem? ie. 
where would the inodes be reallocated after the number of block groups get reduced? Thanks, ... ling Eric Sandeen wrote: > Ling C. Ho wrote: > >> Hi, >> >> I have an ext3 file system created with -T largefile4 option. Now it is >> running out of inode but it's only about 10% full. >> >> - Is there a way now to increase the number of inode without making a >> new file system? >> > > Growing the filesystem would add more inodes, but it's still not > probably what you want. You'd still have the same inode::block ratio. > > >> - If not, I am thinking about shrinking the file system, and then use >> the free up space to create a new file system with more inodes, and move >> the data over. Since I am already running out of inode, would I still be >> able to shrink the file system? >> > > Shrinking should not require more inodes; that sounds like a decent plan. > > -Eric > > >> Thanks, >> ... >> ling >> > > From sandeen at redhat.com Wed Apr 16 15:17:09 2008 From: sandeen at redhat.com (Eric Sandeen) Date: Wed, 16 Apr 2008 10:17:09 -0500 Subject: Shrink ext3 filesystem , running out of inode questions In-Reply-To: <480613BF.5040905@fnal.gov> References: <47F503FF.9030609@fnal.gov> <48061185.9030808@redhat.com> <480613BF.5040905@fnal.gov> Message-ID: <48061875.8030804@redhat.com> Ling C. Ho wrote: > Hi, > > I tried creating a new file system and shrinking it. The inode count > reduced after the shrink. So, would shrinking the one filesystem I have > with all inode already used create problem? ie. where would the inodes > be reallocated after the number of block groups get reduced? Ok, now that I've had coffee this morning and am thinking more clearly... sorry. Well, you will probably find that you can't shrink the fs much if at all, due to the inodes being in use, you're right. Dump and restore might be the best plan. -Eric From ulf at openlane.com Wed Apr 16 19:43:38 2008 From: ulf at openlane.com (Ulf Zimmermann) Date: Wed, 16 Apr 2008 12:43:38 -0700 Subject: EXT3 and SAN Snap Shot, Best practice? Message-ID: <5DE4B7D3E79067418154C49A739C125104C4A0C4@msmpk01.corp.autc.com> As RedHat has a limited choice of file systems it supports, I have a need to use EXT3 together with Oracle and a SAN SnapShot (3Par Snapclone). I was wondering if anyone could give me some feed back as to the "best" method to do that. So far I am thinking: Put Oracle into Backup mode Run sync (or multiple times) Execute Snapshot command on SAN (takes less then 1 second). Take Oracle out of Backup mode Then on the system to be refreshed: Shutdown oracle Umount file system Execute Snapshot update (3Par updatevv) Run fsck.ext3 on the file system, which in my tries so far will just recover journals Mount file system Go through work to recover Oracle Start Oracle. Anyone care to add to this? Thanks, Ulf. From bdavids1 at gmu.edu Wed Apr 16 20:47:28 2008 From: bdavids1 at gmu.edu (Brian Davidson) Date: Wed, 16 Apr 2008 16:47:28 -0400 Subject: Extremely long FSCK. (>24 hours) In-Reply-To: References: Message-ID: <4F7E8540-9FFA-42F2-A6F2-6A0132ABECB9@gmu.edu> That sounds a lot like the floating point rounding error I encountered last year. > On Mar 20, 2007, at 6:59 PM, Theodore Tso wrote: > >> Well, keep in mind that the float is just as an optimization to doing >> a simple binary search. So it doesn't have to be precise; an >> approximation is fine, except when mid ends up being larger than >> high. >> But it's simple enough to catch that particular case where the >> division going to 1 instead of 0.99999 as we might expect. 
Catching >> that should be enough, I expect. >> >> - Ted > > With a float, you're still trying to cram 32 bits into a 24 bit > mantissa (23 bits + implicit bit). If nothing else, the float > should get changed to a double which has a 53 bit mantissa (52 + > implicit bit). Just catching the case where division goes to one > causes it to do a linear search. Given that this only occurs on > really big filesystems, that's probably not what you want to do... > > Brian Here's the patch I applied to e2fsck to get around the issue: > This patch does the trick. > >> --- e2fsprogs-1.39/lib/ext2fs/icount.c 2005-09-06 >> 05:40:14.000000000 -0400 >> +++ e2fsprogs-1.39-test/lib/ext2fs/icount.c 2007-03-13 >> 10:56:19.000000000 -0400 >> @@ -251,6 +251,10 @@ >> range = ((float) (ino - lowval)) / >> (highval - lowval); >> mid = low + ((int) (range * (high-low))); >> + if (mid > high) >> + mid = high; >> + if (mid < low) >> + mid = low; >> } >> #endif >> if (ino == icount->list[mid].ino) { > > Our inode count is 732,577,792 on a 5.4 TB filesystem with 5.0 TB in > use (94% use). It took about 9 hours to run, and used of 4GB of > memory. Hope this helps. On Apr 8, 2008, at 5:15 PM, Justin Hahn wrote: > Hello all, > > I recently encountered a problem that I thought I should bring to > the ext3 devs. I've seen some evidence of similar issues in the > past, but it wasn't clear that anyone had experienced it at quite > this scale. > > The short summary is that I let 'e2fsck -C 0 -y -f' run for more > than 24 hours on a 4.25Tb filesystem before having to kill it. It > had been stuck at "70.1%" in Pass 2 (checking directory structure) > for about 10 hours. e2fsck was using about 4.4Gb of RAM and was > maxing out 1 CPU core (out of 8). > > This filesystem is used for disk-to-disk backups with dirvish[1] > The volume was 4.25Gb large, and about 90% full. I was doing an fsck > prior to running resize2fs, as required by said tool. (I ended up > switching to ext2online, which worked fine.) > > I suspect the large # of hard links and the large file system size > are what did me in. Fortunately, my filesystem is clean for now. > What I'm worried about is the day when it actually needs a proper > fsck to correct problems. I have no idea how long the fsck would > have taken had I not cancelled it. I fear it would have been more > than 48hours. > > Any suggestions (including undocumented command line options) I can > try to accelerate this in the future would be welcome. As this > system is for backups and is idle for about 12-16 hours a day, I can > un-mount the volume and perform some (non-destructive!!) tests if > there is interest. Unfortunately, I cannot provide remote access to > the system for security reasons as this is our backup archive. > > I'm using CentOS 4.5 as my distro. > > 'uname -a' reports: > Linux backups-00.dc-00.rbm.local 2.6.9-55.0.12.ELsmp #1 SMP Fri Nov > 2 12:38:56 EDT 2007 x86_64 x86_64 x86_64 GNU/Linux > > The underlying hardware is a Dell PE 2950, with a PERC 5i RAID > controller and 6x 1Tb SATA drives and 8Gb of RAM. I/O performance > has been fine for my purposes, but I have not benchmarked, tuned or > tweaked it in any way. > > Thanks! > > --jeh > > [1] Dirvish is an rsync/hardlink based set of perl scripts -- see http://www.dirvish.org/ > for more details. 
>
> _______________________________________________
> Ext3-users mailing list
> Ext3-users at redhat.com
> https://www.redhat.com/mailman/listinfo/ext3-users

From lizhaosg at yahoo.com.sg Tue Apr 22 07:38:59 2008
From: lizhaosg at yahoo.com.sg (Zhao Li)
Date: Tue, 22 Apr 2008 15:38:59 +0800 (SGT)
Subject: Assertion failure at commit.c: J_ASSERT_JH(h, commit_transaction != cp_transaction)
Message-ID: <385011.67706.qm@web76311.mail.sg1.yahoo.com>

Hi all,

I recently got an assertion failure on J_ASSERT_JH(h, commit_transaction
!= cp_transaction). Does anyone have clues as to what could cause the
assert to fail? How can I debug and fix it then?

Processor: ARM926EJ-S
Hardware: Freescale MX2ADS
Kernel: 2.4.20

Thanks very much for your help!

Best Regards,
Li Zhao

__________________________________________________________________
Yahoo! Singapore Answers
Real people. Real questions. Real answers. Share what you know at
http://answers.yahoo.com.sg

From jprats at cesca.es Mon Apr 28 10:49:45 2008
From: jprats at cesca.es (Jordi Prats)
Date: Mon, 28 Apr 2008 12:49:45 +0200
Subject: ext3 limits?
Message-ID: <4815ABC9.1060209@cesca.es>

Hi all,

I have a 4246GB ext3 filesystem exported by NFS on a 32-bit
architecture. Some applications are generating strange errors, so maybe
I'm facing an ext3 limit?

Thanks!
Jordi

-- 
......................................................................
        __
       / /       Jordi Prats
  C E / S / C A  Dept. de Sistemes
      /_/        Centre de Supercomputació de Catalunya

  Gran Capità, 2-4 (Edifici Nexus) · 08034 Barcelona
  T. 93 205 6464 · F. 93 205 6979 · jprats at cesca.es
......................................................................

From lists at nerdbynature.de Mon Apr 28 13:42:46 2008
From: lists at nerdbynature.de (Christian Kujau)
Date: Mon, 28 Apr 2008 15:42:46 +0200 (CEST)
Subject: ext3 limits?
In-Reply-To: <4815ABC9.1060209@cesca.es>
References: <4815ABC9.1060209@cesca.es>
Message-ID: <0d4f9140d1599a445f4caab3cda3ea97.squirrel@housecafe.dyndns.org>

On Mon, April 28, 2008 12:49, Jordi Prats wrote:
> I have a 4246GB ext3 filesystem exported by NFS on a 32-bit
> architecture. Some applications are generating strange errors, so maybe
> I'm facing an ext3 limit?

http://batleth.sapienti-sat.org/projects/FAQs/ext3-faq.html says:

Ext3 can support files up to 1TB. With a 2.4 kernel the filesystem size
is limited by the maximal block device size, which is 2TB. In 2.6 the
maximum (32-bit CPU) limit of block devices is 16TB, but ext3
supports only up to 4TB.

HTH,
C.
-- 
make bzImage, not war

From anhad.shemal at gmail.com Mon Apr 28 16:05:03 2008
From: anhad.shemal at gmail.com (ashutosh dubey)
Date: Mon, 28 Apr 2008 21:35:03 +0530
Subject: Functionalities of some functions
Message-ID: 

Hi all,

Could anybody tell me what the functionalities of the following functions
in the ext3 code are? The functions are defined in fs/ext3/inode.c:

ext3_get_blocks_handle()
ext3_get_block()
ext3_getblk()
ext3_bread()
walk_page_buffers()

Thanks in advance.

-- 
Ashutosh Dubey
CSE-IDD 5th year
IIT Roorkee
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jprats at cesca.es Tue Apr 29 14:47:51 2008
From: jprats at cesca.es (Jordi Prats)
Date: Tue, 29 Apr 2008 16:47:51 +0200
Subject: ext3 limits?
In-Reply-To: <0d4f9140d1599a445f4caab3cda3ea97.squirrel@housecafe.dyndns.org>
References: <4815ABC9.1060209@cesca.es>
	<0d4f9140d1599a445f4caab3cda3ea97.squirrel@housecafe.dyndns.org>
Message-ID: <48173517.8080703@cesca.es>

Just 4TB?
Can anyone give any recommendation on which file system to use for large
disks?

Thanks,
Jordi

Christian Kujau wrote:
> On Mon, April 28, 2008 12:49, Jordi Prats wrote:
>
>> I have a 4246GB ext3 filesystem exported by NFS on a 32-bit
>> architecture. Some applications are generating strange errors, so maybe
>> I'm facing an ext3 limit?
>>
>
> http://batleth.sapienti-sat.org/projects/FAQs/ext3-faq.html says:
>
> Ext3 can support files up to 1TB. With a 2.4 kernel the filesystem size
> is limited by the maximal block device size, which is 2TB. In 2.6 the
> maximum (32-bit CPU) limit of block devices is 16TB, but ext3
> supports only up to 4TB.
>
>
> HTH,
> C.
>

-- 
......................................................................
        __
       / /       Jordi Prats
  C E / S / C A  Dept. de Sistemes
      /_/        Centre de Supercomputació de Catalunya

  Gran Capità, 2-4 (Edifici Nexus) · 08034 Barcelona
  T. 93 205 6464 · F. 93 205 6979 · jprats at cesca.es
......................................................................

From sandeen at redhat.com Tue Apr 29 14:58:39 2008
From: sandeen at redhat.com (Eric Sandeen)
Date: Tue, 29 Apr 2008 09:58:39 -0500
Subject: ext3 limits?
In-Reply-To: <0d4f9140d1599a445f4caab3cda3ea97.squirrel@housecafe.dyndns.org>
References: <4815ABC9.1060209@cesca.es>
	<0d4f9140d1599a445f4caab3cda3ea97.squirrel@housecafe.dyndns.org>
Message-ID: <4817379F.2080003@redhat.com>

Christian Kujau wrote:
> On Mon, April 28, 2008 12:49, Jordi Prats wrote:
>> I have a 4246GB ext3 filesystem exported by NFS on a 32-bit
>> architecture. Some applications are generating strange errors, so maybe
>> I'm facing an ext3 limit?
>
> http://batleth.sapienti-sat.org/projects/FAQs/ext3-faq.html says:
>
> Ext3 can support files up to 1TB. With a 2.4 kernel the filesystem size
> is limited by the maximal block device size, which is 2TB. In 2.6 the
> maximum (32-bit CPU) limit of block devices is 16TB, but ext3
> supports only up to 4TB.

Actually as of 2.6.18 (or is it .19...), ext3 kernel code should support
the full 16T, at least in terms of being able to address that many
blocks w/o corruption. I did a fair amount of work in that time frame
to root out all the sign overflows etc to allow ext3 to get to 16T.

-Eric

From sandeen at redhat.com Tue Apr 29 15:03:36 2008
From: sandeen at redhat.com (Eric Sandeen)
Date: Tue, 29 Apr 2008 10:03:36 -0500
Subject: ext3 limits?
In-Reply-To: <4817379F.2080003@redhat.com>
References: <4815ABC9.1060209@cesca.es>
	<0d4f9140d1599a445f4caab3cda3ea97.squirrel@housecafe.dyndns.org>
	<4817379F.2080003@redhat.com>
Message-ID: <481738C8.1090901@redhat.com>

Eric Sandeen wrote:
> Christian Kujau wrote:
>> On Mon, April 28, 2008 12:49, Jordi Prats wrote:
>>> I have a 4246GB ext3 filesystem exported by NFS on a 32-bit
>>> architecture. Some applications are generating strange errors, so maybe
>>> I'm facing an ext3 limit?
>> http://batleth.sapienti-sat.org/projects/FAQs/ext3-faq.html says:
>>
>> Ext3 can support files up to 1TB. With a 2.4 kernel the filesystem size
>> is limited by the maximal block device size, which is 2TB. In 2.6 the
>> maximum (32-bit CPU) limit of block devices is 16TB, but ext3
>> supports only up to 4TB.
>
> Actually as of 2.6.18 (or is it .19...), ext3 kernel code should support
> the full 16T, at least in terms of being able to address that many
> blocks w/o corruption. I did a fair amount of work in that time frame
> to root out all the sign overflows etc to allow ext3 to get to 16T.

Oh, and prior to that, really 8T should be fine.
-Eric

From lists at nerdbynature.de Tue Apr 29 15:10:10 2008
From: lists at nerdbynature.de (Christian Kujau)
Date: Tue, 29 Apr 2008 17:10:10 +0200 (CEST)
Subject: ext3 limits?
In-Reply-To: <4817379F.2080003@redhat.com>
References: <4815ABC9.1060209@cesca.es>
	<0d4f9140d1599a445f4caab3cda3ea97.squirrel@housecafe.dyndns.org>
	<4817379F.2080003@redhat.com>
Message-ID: 

On Tue, April 29, 2008 16:58, Eric Sandeen wrote:
> Actually as of 2.6.18 (or is it .19...), ext3 kernel code should support
> the full 16T, at least in terms of being able to address that many blocks
> w/o corruption. I did a fair amount of work in that time frame to root
> out all the sign overflows etc to allow ext3 to get to 16T.

That's great, thanks! Then someone should either update the FAQ or better
yet put the FAQ on e2fsprogs.sf.net (or wherever the Ext2 homepage resides).

Thanks,
C.
-- 
BOFH excuse #442: Trojan horse ran out of hay

From jprats at cesca.es Wed Apr 30 08:52:05 2008
From: jprats at cesca.es (Jordi Prats)
Date: Wed, 30 Apr 2008 10:52:05 +0200
Subject: ext3 limits?
In-Reply-To: <4817379F.2080003@redhat.com>
References: <4815ABC9.1060209@cesca.es>
	<0d4f9140d1599a445f4caab3cda3ea97.squirrel@housecafe.dyndns.org>
	<4817379F.2080003@redhat.com>
Message-ID: <48183335.4060000@cesca.es>

Eric Sandeen wrote:
> Actually as of 2.6.18 (or is it .19...), ext3 kernel code should support
> the full 16T, at least in terms of being able to address that many
> blocks w/o corruption. I did a fair amount of work in that time frame
> to root out all the sign overflows etc to allow ext3 to get to 16T.
>

Thanks Eric! I'm going to update my kernel right now!

Jordi

-- 
......................................................................
        __
       / /       Jordi Prats
  C E / S / C A  Dept. de Sistemes
      /_/        Centre de Supercomputació de Catalunya

  Gran Capità, 2-4 (Edifici Nexus) · 08034 Barcelona
  T. 93 205 6464 · F. 93 205 6979 · jprats at cesca.es
......................................................................
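
As a closing aside tying together two recurring questions in this archive
(Ling's inode exhaustion and the 8T/16T size ceiling), here is a minimal
sketch of how to check both from userspace. It is illustrative only, not
taken from any of the posts above: the device and mount point names are
placeholders, and the arithmetic assumes the common 4 KiB block size.

# Inode exhaustion: compare used vs. available inodes.
df -i /srv/data                              # IUse% near 100% means new files fail with ENOSPC
dumpe2fs -h /dev/sdb1 | grep -i -e 'inode count' -e 'free inodes'

# When recreating the filesystem, choose a smaller bytes-per-inode ratio (-i)
# or an explicit inode count (-N); -T largefile4 creates only one inode per
# 4 MiB, which is what runs out first on a filesystem full of small files.
# (Destroys existing data; run only on the freshly created replacement.)
mkfs.ext3 -i 16384 /dev/sdb1                 # one inode per 16 KiB of space

# Size ceiling: ext3 block numbers are 32-bit, so with 4 KiB blocks the limit
# is 2^32 * 4 KiB = 16 TiB, or 2^31 * 4 KiB = 8 TiB where older code still
# treated block numbers as signed.
echo $(( (1 << 32) * 4 / 1024 / 1024 / 1024 ))   # 16 (TiB)
echo $(( (1 << 31) * 4 / 1024 / 1024 / 1024 ))   # 8  (TiB)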