From ross at biostat.ucsf.edu Sun May 2 06:57:57 2010 From: ross at biostat.ucsf.edu (Ross Boylan) Date: Sat, 01 May 2010 23:57:57 -0700 Subject: ext3_dx_add_entry: Directory index full! Message-ID: <1272783477.11624.53.camel@corn.betterworld.us> My log is showing errors like May 1 05:14:17 corn kernel: [6822807.017625] EXT3-fs warning (device dm-7): ext3_dx_add_entry: Directory index full! Judging from the minor device numbers in /dev/mapper, that corresponds to my mail spool. My searches suggest that the problem indicates an individual directory has too many files in it. There does not seem to be a general shortage of inodes or space. df -i shows IUse of 11% for that partition and df says 88% of the ~37G is in use. First question: what happens in these circumstances? Are files lost? Search and indexing are inefficient? The program trying to write the file gets an error (it's cyrus. 10 seconds after the errors shown above, the log has "cyrus/master[8178]: process 11850 exited, status 98". Usually the status is 0.)? Second: is there a way to find what directory is causing the problem? Third: How can I fix this? I'm running a stock Debian Lenny 2.6.26-2-686 kernel. From adilger at dilger.ca Mon May 3 07:01:57 2010 From: adilger at dilger.ca (Andreas Dilger) Date: Mon, 3 May 2010 03:01:57 -0400 Subject: ext3_dx_add_entry: Directory index full! In-Reply-To: <1272783477.11624.53.camel@corn.betterworld.us> References: <1272783477.11624.53.camel@corn.betterworld.us> Message-ID: <8763852E-EEF4-4EE8-8265-38B0395528ED@dilger.ca> On 2010-05-02, at 02:57, Ross Boylan wrote: My log is showing errors like > May 1 05:14:17 corn kernel: [6822807.017625] EXT3-fs warning (device dm-7): ext3_dx_add_entry: Directory index full! > > Judging from the minor device numbers in /dev/mapper, that corresponds to my mail spool. > > My searches suggest that the problem indicates an individual directory > has too many files in it. 
> There does not seem to be a general shortage > of inodes or space. df -i shows IUse of 11% for that partition and df > says 88% of the ~37G is in use. The directory is probably at least 10M files, though it might also suffer from the random create/delete cycle of the mail spool directory. > First question: what happens in these circumstances? Are files lost? > Search and indexing are inefficient? It _should_ be the latter, though I haven't actually looked into it closely. > Second: is there a way to find what directory is causing the problem? Patch the error message to print the inode number and dentry name, and submit it here. > Third: How can I fix this? e2fsck -fD on the filesystem (unmounted, of course) Cheers, Andreas From magawake at gmail.com Tue May 11 13:33:46 2010 From: magawake at gmail.com (Mag Gam) Date: Tue, 11 May 2010 09:33:46 -0400 Subject: 2**31-1 blocks question Message-ID: We need to create very large filesystems. We prefer to have a filesystem which is 12TB, but it seems ext3 does not support that. Every time we do mkfs.ext3 on a 12TB LV we get mke2fs: Filesystem too large. No more than 2**31-1 blocks (8TB using a blocksize of 4K) are currently supported. We can override that by doing mkfs.ext3 -b 8192 But what is the downside of doing this? By using a larger blocksize, what are the consequences? From sandeen at redhat.com Tue May 11 13:51:42 2010 From: sandeen at redhat.com (Eric Sandeen) Date: Tue, 11 May 2010 08:51:42 -0500 Subject: 2**31-1 blocks question In-Reply-To: References: Message-ID: <4BE960EE.9010701@redhat.com> Mag Gam wrote: > We need to create very large filesystems. We prefer to have a > filesystem which is 12TB but it seems ext3 does not suppor that. Most recent ext3 kernelspace and userspace should technically make it to 16T. > Everytime, we do mkfs.ext3 on a 12TB LV we get > > mke2fs: Filesystem too large. No more than 2**31-1 blocks > > (8TB using a blocksize of 4K) are currently supported.
Newer e2fsprogs should have lifted this restriction. Note, however, that a filesystem this large will probably be almost impossible - at least very slow - to run fsck on. > > We can override that by doing, > > mkfs.ext3 -b 8192 > > But what is the downside for doing this? By using a larger blocksize > what are the consequences? The downside is you probably can't mount it, because it's block size > page size on most architectures (like x86 and x86_64) -Eric > _______________________________________________ > Ext3-users mailing list > Ext3-users at redhat.com > https://www.redhat.com/mailman/listinfo/ext3-users From bothie at gmx.de Tue May 11 23:26:57 2010 From: bothie at gmx.de (Bodo Thiesen) Date: Wed, 12 May 2010 01:26:57 +0200 Subject: 2**31-1 blocks question In-Reply-To: <4BE960EE.9010701@redhat.com> References: <4BE960EE.9010701@redhat.com> Message-ID: <20100512012657.1d6ac260@gmx.de> * Eric Sandeen wrote: > Mag Gam wrote: >> We need to create very large filesystems. We prefer to have a >> filesystem which is 12TB but it seems ext3 does not suppor that. > Most recent ext3 kernelspace and userspace should technically > make it to 16T. [...] > The downside is you probably can't mount it, because it's > block size > page size on most architectures (like x86 and x86_64) Contradiction? Anyone? @Mag Gam: In other words: No, you can't, sorry. However, depending on what you *really* need, you could create two or more smaller file systems and mount them in the places where many files are stored. Yes, I know this is sub-optimal - but better a sub-optimal solution than no solution ;) Regards, Bodo From magawake at gmail.com Wed May 12 01:42:28 2010 From: magawake at gmail.com (Mag Gam) Date: Tue, 11 May 2010 21:42:28 -0400 Subject: 2**31-1 blocks question In-Reply-To: <20100512012657.1d6ac260@gmx.de> References: <4BE960EE.9010701@redhat.com> <20100512012657.1d6ac260@gmx.de> Message-ID: Thanks.
Basically, I should avoid creating such large filesystems. On Tue, May 11, 2010 at 7:26 PM, Bodo Thiesen wrote: > * Eric Sandeen hat geschrieben: > >> Mag Gam wrote: >>> We need to create very large filesystems. We prefer to have a >>> filesystem which is 12TB but it seems ext3 does not suppor that. >> Most recent ext3 kernelspace and userspace should technically >> make it to 16T. > > [...] > >> The downside is you probably can't mount it, because it's >> block size > page size on most architectures (like x86 and x86_64) > > Contradiction? Anyone? > > @Mag Gam: In other words: No, you can't, sorry. > > However, in dependence of what you *really* need, you could create two or > more file systems of lower size and mount some in places, where many files > are stored. Yes, I know, that this is sub-optimal - but better take a > sub-optimal solution than no solution ;) > > Regards, Bodo > > _______________________________________________ > Ext3-users mailing list > Ext3-users at redhat.com > https://www.redhat.com/mailman/listinfo/ext3-users > From sandeen at redhat.com Wed May 12 01:57:31 2010 From: sandeen at redhat.com (Eric Sandeen) Date: Tue, 11 May 2010 20:57:31 -0500 Subject: 2**31-1 blocks question In-Reply-To: References: <4BE960EE.9010701@redhat.com> <20100512012657.1d6ac260@gmx.de> Message-ID: <4BEA0B0B.4000600@redhat.com> Mag Gam wrote: > Thanks. > > Basically, I should avoid creating such a large filesystems. ... on ext3. Other filesystems can handle this better; ext4 should be quite usable up to 16T, others can go larger still. -Eric From samuel at bcgreen.com Wed May 12 02:31:07 2010 From: samuel at bcgreen.com (Stephen Samuel) Date: Tue, 11 May 2010 19:31:07 -0700 Subject: 2**31-1 blocks question In-Reply-To: <4BEA0B0B.4000600@redhat.com> References: <4BE960EE.9010701@redhat.com> <20100512012657.1d6ac260@gmx.de> <4BEA0B0B.4000600@redhat.com> Message-ID: It seems to me that Mag is running a somewhat older system.
That would explain the problems with expanding ext3 past 8TB. Perhaps this would be a good excuse to plan an upgrade to the OS, and maybe also the hardware. On Tue, May 11, 2010 at 6:57 PM, Eric Sandeen wrote: > Mag Gam wrote: > > Thanks. > > > > Basically, I should avoid creating such a large filesystems. > > ... on ext3. Other filesystems can handle this better; ext4 should > be quite useable up to 16T, others can go larger still. > > -Eric > > _______________________________________________ > Ext3-users mailing list > Ext3-users at redhat.com > https://www.redhat.com/mailman/listinfo/ext3-users > -- Stephen Samuel http://www.bcgreen.com Software, like love, 778-861-7641 grows when you give it away -------------- next part -------------- An HTML attachment was scrubbed... URL: From magawake at gmail.com Wed May 12 02:43:56 2010 From: magawake at gmail.com (Mag Gam) Date: Tue, 11 May 2010 22:43:56 -0400 Subject: 2**31-1 blocks question In-Reply-To: References: <4BE960EE.9010701@redhat.com> <20100512012657.1d6ac260@gmx.de> <4BEA0B0B.4000600@redhat.com> Message-ID: Running centos 5.2 on Intel Xeon. Any advice? On Tue, May 11, 2010 at 10:31 PM, Stephen Samuel wrote: > It seems to me that Mag is running a somewhat older system. That would > explain the problems > with expanding ext3 past 8TB. Perhaps this would be a good excuse to plan > an upgrade to the OS, and maybe also the hardware. > > On Tue, May 11, 2010 at 6:57 PM, Eric Sandeen wrote: >> >> Mag Gam wrote: >> > Thanks. >> > >> > Basically, I should avoid creating such a large filesystems. >> >> ... on ext3. Other filesystems can handle this better; ext4 should >> be quite useable up to 16T, others can go larger still. >> >> -Eric >> >> _______________________________________________ >> Ext3-users mailing list >> Ext3-users at redhat.com >> https://www.redhat.com/mailman/listinfo/ext3-users > > > > -- > Stephen Samuel http://www.bcgreen.com Software, like love, > 778-861-7641
grows when you give it away > From lists at nerdbynature.de Thu May 13 00:49:20 2010 From: lists at nerdbynature.de (Christian Kujau) Date: Wed, 12 May 2010 17:49:20 -0700 (PDT) Subject: 2**31-1 blocks question In-Reply-To: References: <4BE960EE.9010701@redhat.com> <20100512012657.1d6ac260@gmx.de> <4BEA0B0B.4000600@redhat.com> Message-ID: On Tue, 11 May 2010 at 22:43, Mag Gam wrote: > Running centos 5.2 on Intel Xeon . So, this would be Linux 2.6.18 (plus various patches, I suppose). Ext4 won't run there (or did CentOS backport ext4?). 16TB should be possible with ext3 (and 4KB blocksize); upgrading e2fsprogs would seem the easiest step to begin with. Christian. -- BOFH excuse #54: Evil dogs hypnotised the night shift From samuel at bcgreen.com Thu May 13 07:40:29 2010 From: samuel at bcgreen.com (Stephen Samuel) Date: Thu, 13 May 2010 00:40:29 -0700 Subject: 2**31-1 blocks question In-Reply-To: References: <4BE960EE.9010701@redhat.com> <20100512012657.1d6ac260@gmx.de> <4BEA0B0B.4000600@redhat.com> Message-ID: Your core problem, as I see it, is that you're running at the boundary of what ext3 is capable of, in any event. This means that, even if you do manage to get it working, you're going to be running into other boundary-related conditions (like your first fsck taking longer than an upgrade would have, the inability to expand the filesystem much past its current size, and god-only-knows what else). In other words, if you need to stay with the current version of centos for other reasons, then continue on this path; otherwise, an upgrade is likely to make life easier in the long run. ... and if you can swing an upgrade to 64-bit, you may avoid other side effects of working with a filesystem this large. On Tue, May 11, 2010 at 7:43 PM, Mag Gam wrote: > Running centos 5.2 on Intel Xeon . > > Any advice? > > On Tue, May 11, 2010 at 10:31 PM, Stephen Samuel > wrote: > > It seems to me that Mag is running a somewhat older system.
That would > > explain the problems > > with expanding ext3 past 8TB. Perhaps this would be a good excuse to > plan > > an upgrade to the OS, and maybe also the hardware. > > > > On Tue, May 11, 2010 at 6:57 PM, Eric Sandeen > wrote: > >> > >> Mag Gam wrote: > >> > Thanks. > >> > > >> > Basically, I should avoid creating such a large filesystems. > >> > >> ... on ext3. Other filesystems can handle this better; ext4 should > >> be quite useable up to 16T, others can go larger still. > >> > -- Stephen Samuel http://www.bcgreen.com Software, like love, 778-861-7641 grows when you give it away -------------- next part -------------- An HTML attachment was scrubbed... URL: From rgillette at napc.com Thu May 13 15:13:51 2010 From: rgillette at napc.com (Russell Gillette) Date: Thu, 13 May 2010 11:13:51 -0400 Subject: 2**31-1 blocks question In-Reply-To: <20100512012657.1d6ac260@gmx.de> References: <4BE960EE.9010701@redhat.com> <20100512012657.1d6ac260@gmx.de> Message-ID: <4BEC172F.8030403@napc.com> On 5/11/10 7:26 PM, Bodo Thiesen wrote: > * Eric Sandeen hat geschrieben: > >> > Mag Gam wrote: >>> >> We need to create very large filesystems. We prefer to have a >>> >> filesystem which is 12TB but it seems ext3 does not suppor that. >> > Most recent ext3 kernelspace and userspace should technically >> > make it to 16T. > [...] > >> > The downside is you probably can't mount it, because it's >> > block size> page size on most architectures (like x86 and x86_64) > Contradiction? Anyone? Eric's comment about not being able to mount referenced altering the FS block size to 8k from 4k. Intel Xeon only supports 4k pages. He is correct that newer e2fsprogs will allow creation of ext3 filesystems up to 16T _without_ altering block size, as I frequently make 10T+ filesystems on RHEL 5.3 and 5.4. 
--russellg From criley at erad.com Thu May 13 15:30:16 2010 From: criley at erad.com (Charles Riley) Date: Thu, 13 May 2010 11:30:16 -0400 (EDT) Subject: 2**31-1 blocks question In-Reply-To: <4BEC172F.8030403@napc.com> Message-ID: <15390666.44071273764616773.JavaMail.root@boardwalk2.erad.com> ----- "Russell Gillette" wrote: > On 5/11/10 7:26 PM, Bodo Thiesen wrote: > > * Eric Sandeen hat geschrieben: > > > >> > Mag Gam wrote: > >>> >> We need to create very large filesystems. We prefer to have a > >>> >> filesystem which is 12TB but it seems ext3 does not suppor > that. > >> > Most recent ext3 kernelspace and userspace should technically > >> > make it to 16T. > > [...] > > > >> > The downside is you probably can't mount it, because it's > >> > block size> page size on most architectures (like x86 and > x86_64) > > Contradiction? Anyone? > > Eric's comment about not being able to mount referenced altering the > FS > block size to 8k from 4k. Intel Xeon only supports 4k pages. > > He is correct that newer e2fsprogs will allow creation of ext3 > filesystems up to 16T _without_ altering block size, as I frequently > make 10T+ filesystems on RHEL 5.3 and 5.4. > > --russellg > Out of curiosity, how long does it take to fsck a 10TB filesystem? Charles From lists at nerdbynature.de Tue May 25 11:07:45 2010 From: lists at nerdbynature.de (Christian Kujau) Date: Tue, 25 May 2010 04:07:45 -0700 (PDT) Subject: ext3_clear_journal_err: Filesystem error recorded from previous mount Message-ID: Hi, this MacMini (x86, 2.6.24-24-xen) has an external disk attached via Firewire. Earlier today, the disk had a problem (might be the disk, but could've been the cabling, I suspect the latter) and the kernel rightfully complained about it: sd 4:0:0:0: [sdb] Result: hostbyte=DID_BUS_BUSY driverbyte=DRIVER_OK,SUGGEST_OK end_request: I/O error, dev sdb, sector 366464538 This sdb holds LVM volumes and one volume (/dev/mapper/vault) is usually mounted 'ro'. 
When the backup script tried to mount it 'rw', this happened: -------------------------------------------------------- kjournald starting. Commit interval 5 seconds EXT3-fs: mounted filesystem with ordered data mode. __journal_remove_journal_head: freeing b_committed_data __journal_remove_journal_head: freeing b_committed_data WARNING: at /data/Scratch/scm/hardy-git/debian/build/custom-source-xen/fs/buffer.c:1169 mark_buffer_dirty() Pid: 20378, comm: umount Not tainted 2.6.24-24-xen #1 Call Trace: [] mark_buffer_dirty+0x87/0xa0 [] :jbd:journal_update_superblock+0x82/0x100 [] :jbd:journal_destroy+0x18c/0x1f0 [] autoremove_wake_function+0x0/0x30 [] :ext3:ext3_put_super+0x29/0x210 [] generic_shutdown_super+0x6a/0x120 [] kill_block_super+0xd/0x20 [] deactivate_super+0x74/0xb0 [] sys_umount+0x6b/0x2f0 [] sys_newstat+0x27/0x50 [] do_munmap+0x2de/0x340 [] __up_write+0x21/0x150 [] system_call+0x68/0x6d [] system_call+0x0/0x6d EXT3-fs: INFO: recovery required on readonly filesystem. EXT3-fs: write access will be enabled during recovery. kjournald starting. Commit interval 5 seconds EXT3-fs warning (device dm-31): ext3_clear_journal_err: Filesystem error recorded from previous mount: IO failure EXT3-fs warning (device dm-31): ext3_clear_journal_err: Marking fs in need of filesystem check. EXT3-fs: recovery complete. EXT3-fs: mounted filesystem with ordered data mode. -------------------------------------------------------- However, even after unmounting, running e2fsck on the filesystem in question the "Filesystem error recorded from previous mount: IO failure" message persists. Is this expected behaviour or could the WARNING somehow have confused the kernel (and the ext3 module)? I can still mount the fs, even 'rw', but the errors in the log are kinda disturbing.... I've put the full log on: http://nerdbynature.de/bits/2.6.24-24-xen/e2fsck-20100525.txt Thanks, Christian. -- BOFH excuse #114: electro-magnetic pulses from French above ground nuke testing. 
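[Editor's note: for anyone who hits the same persistent "error recorded from previous mount" warning, it is worth checking what the disk actually records. The error flag lives on disk in the filesystem superblock, and the jbd journal keeps its own copy, which is what ext3_clear_journal_err reads back at mount time. A minimal sketch with dumpe2fs follows; it builds a scratch image so the commands run without root, and all paths are examples only - on a real system you would point dumpe2fs at the device itself (e.g. /dev/mapper/vault above).]

```shell
# Build a small scratch ext3 image (stands in for the real device;
# /tmp/scratch.img is an example path).
dd if=/dev/zero of=/tmp/scratch.img bs=1M count=64 2>/dev/null
mke2fs -q -F -j /tmp/scratch.img     # -j adds a journal, i.e. ext3

# -h prints only the superblock. A healthy filesystem reports
# "Filesystem state: clean"; one carrying a recorded error reports
# "clean with errors" (or "not clean").
dumpe2fs -h /tmp/scratch.img 2>/dev/null | grep -i 'Filesystem state'
```

[If the real device still reports an error state after e2fsck, the flag is genuinely on disk; if it reports clean yet the warning reappears at mount time, the copy kept in the journal superblock is the more likely culprit.]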
From jbn at forestfield.org Fri May 28 05:22:58 2010 From: jbn at forestfield.org (J.B. Nicholson-Owens) Date: Fri, 28 May 2010 00:22:58 -0500 Subject: e2fsck: aborted In-Reply-To: <20100402220859.GA18666@red-sonja> References: <20100402220859.GA18666@red-sonja> Message-ID: <4BFF5332.4020505@forestfield.org> Morty wrote: > Google says the error relates to process memory size required for > large FSs. The FS here is a 1TB FS, created before I started using > largefile and largefile4 for large FSs. When I mount it, some data > seems to be lost. Anything I can do other than recover from backup? I too just experienced this with a 1TB EXT3 filesystem I can't mount. I'm using Fedora GNU/Linux 13 on a 64-bit AMD system with 4GB RAM (around 3.6GiB of RAM is visible according to the system monitor program one runs via System -> About This Computer). I'm running Linux kernel 2.6.33.4-95.fc13.x86_64 (a Fedora kernel package). I'm using the fsck that came with Fedora 13 (plus all of its updates): $ rpm -qi e2fsprogs Name : e2fsprogs Relocations: (not relocatable) Version : 1.41.10 Vendor: Fedora Project Release : 6.fc13 Build Date: Mon 15 Mar 2010 10:53:30 AM CDT Install Date: Wed 07 Apr 2010 02:17:41 PM CDT Build Host: xb-01.phx2.fedoraproject.org Group : System Environment/Base Source RPM: e2fsprogs-1.41.10-6.fc13.src.rpm Size : 1943069 License: GPLv2 Signature : RSA/8, Mon 15 Mar 2010 11:17:10 AM CDT, Key ID 7edc6ad6e8e40fde I tried to run $ sudo fsck.ext3 -y -C0 /dev/sdc1 (-C0 because I wanted to see how far this would go and -y because I got tired of answering "y"es to all the questions) fsck aborted itself. I tried looking up this error response and reading e2fsck.config manpage and then I added a config file: $ cat /etc/e2fsck.conf [scratch_files] numdirs_threshold = 2 directory = /var/cache/e2fsck dirinfo = true icount = true followed by re-running the fsck command above. 
There's over 780GiB free on / (where the scratch directory is mounted), plenty of room to let fsck avoid using RAM. Both times the fsck process gets to 70% completion and starts a long process of relocating data. That ends with: [...thousands of lines like the following...] Relocating group 7451's block bitmap from 244154368 to 244154626... Relocating group 7451's inode bitmap from 244154369 to 244154627... Relocating group 7451's inode table from 244154370 to 244154628... Relocating group 7452's block bitmap from 244187136 to 244187394... Relocating group 7452's inode bitmap from 244187137 to 244187395... Relocating group 7452's inode table from 244187138 to 244187396... e2fsck: aborted and I'm still left with a volume I can't mount. I was surprised that even though I specified I wanted the fsck to use the scratch directory the files in the scratch directory aren't very large and all of my remaining RAM is still used by fsck. It's as if using the scratch directory only made the process run slower but didn't change anything to do with (what I'm reading) is the main problem--not enough RAM to hold the data fsck needs while it runs. I can't add more RAM to the system, 4GB is its max. Has there been any improvement on doing fsck on large volumes (where "large" means larger than what fsck can work with in available system RAM)? I'd gladly trade repair time and disk space for an fsck that worked. Any ideas on how I can get fsck to run and actually fix this volume would be welcome. Thanks. From adilger at dilger.ca Fri May 28 23:22:53 2010 From: adilger at dilger.ca (Andreas Dilger) Date: Fri, 28 May 2010 17:22:53 -0600 Subject: e2fsck: aborted In-Reply-To: <4BFF5332.4020505@forestfield.org> References: <20100402220859.GA18666@red-sonja> <4BFF5332.4020505@forestfield.org> Message-ID: <726C34C9-2015-4322-8812-BD825545B004@dilger.ca> On 2010-05-27, at 23:22, J.B. 
Nicholson-Owens wrote: > Morty wrote: >> Google says the error relates to process memory size required for >> large FSs. The FS here is a 1TB FS, created before I started using >> largefile and largefile4 for large FSs. When I mount it, some data >> seems to be lost. Anything I can do other than recover from backup? > > I too just experienced this with a 1TB EXT3 filesystem I can't mount. I'm using Fedora GNU/Linux 13 on a 64-bit AMD system with 4GB RAM (around 3.6GiB of RAM is visible according to the system monitor program one runs via System -> About This Computer). I'm running Linux kernel 2.6.33.4-95.fc13.x86_64 (a Fedora kernel package). I can't imagine that there is a shortage of RAM for a 1TB filesystem. We run e2fsck on 3x 8TB filesystems with only 2GB of RAM. > Both times the fsck process gets to 70% completion and starts a long process of relocating data. That ends with: > > [...thousands of lines like the following...] > Relocating group 7451's block bitmap from 244154368 to 244154626... > Relocating group 7451's inode bitmap from 244154369 to 244154627... > Relocating group 7451's inode table from 244154370 to 244154628... > Relocating group 7452's block bitmap from 244187136 to 244187394... > Relocating group 7452's inode bitmap from 244187137 to 244187395... > Relocating group 7452's inode table from 244187138 to 244187396... > e2fsck: aborted What is more important to know is why it thinks the block/inode bitmaps and inode table need to be relocated in the first place. That is a pretty serious/significant problem that should normally never be seen, since the bitmaps never move, and there are backups of all the group descriptors (that say where the bitmaps are located). > and I'm still left with a volume I can't mount. Did you do something like resize your filesystem before having this problem? Cheers, Andreas From jbn at forestfield.org Fri May 28 23:49:10 2010 From: jbn at forestfield.org (J.B.
Nicholson-Owens) Date: Fri, 28 May 2010 18:49:10 -0500 Subject: e2fsck: aborted In-Reply-To: <726C34C9-2015-4322-8812-BD825545B004@dilger.ca> References: <20100402220859.GA18666@red-sonja> <4BFF5332.4020505@forestfield.org> <726C34C9-2015-4322-8812-BD825545B004@dilger.ca> Message-ID: <4C005676.6060100@forestfield.org> Andreas Dilger wrote: > What is more important to know is why it thinks the block/inode > bitmaps and inode table need to be relocated in the first place. That > is a pretty serious/significant problem that should normally never > been seen, since the bitmaps never move, and there are backups of all > the group descriptors (that say where the bitmaps are located). I was unaware this was such a serious issue. Unfortunately I have no helpful information to offer. > Did you do something like resize your filesystem before having this > problem? No resizing at all; this drive has always had one volume on it at max size (1TB minus whatever ext3 needs for its own bookkeeping). I used to have this drive mounted in another computer running gNewSense (latest + updates) but I thought I'd detach it and put the drive on this 64-bit 4GB machine. Does it matter that the other system was a 32-bit system? Would it be wise to attempt fsck on a 32-bit machine? Thanks for your input. From adilger at dilger.ca Sat May 29 02:47:24 2010 From: adilger at dilger.ca (Andreas Dilger) Date: Fri, 28 May 2010 20:47:24 -0600 Subject: e2fsck: aborted In-Reply-To: <4C005676.6060100@forestfield.org> References: <20100402220859.GA18666@red-sonja> <4BFF5332.4020505@forestfield.org> <726C34C9-2015-4322-8812-BD825545B004@dilger.ca> <4C005676.6060100@forestfield.org> Message-ID: <6B61ECF8-BDF6-4C10-9F5E-05C507A332F6@dilger.ca> On 2010-05-28, at 17:49, J.B. Nicholson-Owens wrote: > Andreas Dilger wrote: >> What is more important to know is why it thinks the block/inode >> bitmaps and inode table need to be relocated in the first place. 
That >> is a pretty serious/significant problem that should normally never >> been seen, since the bitmaps never move, and there are backups of all >> the group descriptors (that say where the bitmaps are located). > > I was unaware this was such a serious issue. Unfortunately I have no > helpful information to offer. Unless you have the original output from the first e2fsck run, it will be quite difficult to determine what actually went wrong here. >> Did you do something like resize your filesystem before having this >> problem? > > No resizing at all; this drive has always had one volume on it at max size (1TB minus whatever ext3 needs for its own bookkeeping). > > I used to have this drive mounted in another computer running gNewSense > (latest + updates) but I thought I'd detach it and put the drive on this > 64-bit 4GB machine. > > Does it matter that the other system was a 32-bit system? Would it be > wise to attempt fsck on a 32-bit machine? It shouldn't make any difference, but it isn't impossible that there is some kind of bug involved, since I doubt this gets tested all too often. That said, it seems unlikely, since I'm sure it _does_ happen often enough that it would be reported. It wouldn't hurt to give it a try on the original 32-bit system, but I fear that even if that were the cause the later e2fsck runs may have changed the filesystem enough that it will be hard to recover. Cheers, Andreas
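[Editor's note: a practical coda to this exchange, since Andreas points out that the original e2fsck output is what would be needed to diagnose the relocations. A read-only pass can be captured without changing the filesystem any further, and backup superblock locations can be listed before trying one. The sketch below uses example device names and the common backup-superblock location for 4K-block filesystems; both are assumptions to adjust for the actual volume.]

```shell
# 1. Forced (-f) read-only check: -n answers "no" to every repair
#    prompt, so nothing on disk changes; tee keeps a log of what
#    e2fsck wants to do before it is allowed to do it.
e2fsck -fn /dev/sdc1 2>&1 | tee e2fsck-readonly.log

# 2. mke2fs -n prints where the backup superblocks would live for this
#    device WITHOUT writing anything. Pass the same options used by the
#    original mkfs, or the reported locations will be wrong.
mke2fs -n /dev/sdc1

# 3. If the primary superblock or group descriptors look damaged, retry
#    the check from a backup superblock (-b; block 32768 is the usual
#    first backup for a 4K block size, given with -B).
e2fsck -b 32768 -B 4096 /dev/sdc1
```

[Capturing the read-only log first costs nothing and preserves exactly the evidence this thread ended up missing.]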