
Assertion failure in ext3_sync_file() at fs/ext3/fsync.c:50: "ext3_journal_current_handle() == 0"

------------[ cut here ]------------
kernel BUG at fs/ext3/fsync.c:50!
invalid operand: 0000 [#1]
CPU:    0
EIP:    0060:[<b0187d38>]    Not tainted VLI
EFLAGS: 00010296
EIP is at ext3_sync_file+0x58/0xf0
eax: 00000068   ebx: bf4a479c   ecx: b03cffac   edx: b03cffac
esi: b0398cfc   edi: b2b8f1c8   ebp: c13bcf60   esp: c13bcf18
ds: 007b   es: 007b   ss: 0068
Process aptitude (pid: 26952, threadinfo=c13bc000 task=d99cca80)
Stack: b0398afc b0383f40 b0395746 00000032 b0398cfc 00000000 00000000 e84824c0
       ca281dc0 bf4a483c c13bcf60 b01317c2 bf4a483c 00000000 00000000 e84824c0
       ffffffe4 bf4a483c c13bcf80 b01458ce e84824c0 dce74d9c 00000001 a7004000
Call Trace:
 [<b01032cb>] show_stack+0xab/0xf0
 [<b0103494>] show_registers+0x164/0x200
 [<b01036a8>] die+0xc8/0x150
 [<b01037b9>] do_trap+0x89/0xd0
 [<b0103b1a>] do_invalid_op+0xaa/0xc0
 [<b0102eef>] error_code+0x4f/0x54
 [<b01458ce>] msync_interval+0x8e/0xd0
 [<b0145a6f>] sys_msync+0x15f/0x171
 [<b0102c69>] syscall_call+0x7/0xb
Code: ba 46 57 39 b0 be fc 8c 39 b0 b8 40 3f 38 b0 89 74 24 10 89 4c 24 0c 89 54 24 08 89 44 24 04 c7 04 24 fc 8a 39 b0 e8 08 10 f9 ff <0f> 0b 32 00 46 57 39 b0 0f b7 43 28 25 00 f0 00 00 3d 00 80 00
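Per the call trace, the BUG fires on the msync path (sys_msync -> msync_interval -> ext3_sync_file). A minimal user-space sketch of that syscall sequence is below; it only exercises the same route into the filesystem's fsync method, and does not reproduce the full-disk condition that apparently triggered the assertion:

```python
# Illustrative only: dirty a shared file-backed mapping, then
# msync(MS_SYNC) it, which is the path shown in the oops trace
# (sys_msync -> msync_interval -> the filesystem's fsync method).
import mmap
import os
import tempfile

fd, path = tempfile.mkstemp()
try:
    os.ftruncate(fd, 4096)
    m = mmap.mmap(fd, 4096, mmap.MAP_SHARED,
                  mmap.PROT_READ | mmap.PROT_WRITE)
    try:
        m[:] = b"x" * 4096   # dirty the mapped page
        m.flush()            # msync(MS_SYNC) on the whole mapping
    finally:
        m.close()
finally:
    os.close(fd)
    os.unlink(path)
```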

x86, uniprocessor, ext3 file system, data=ordered, 6-way RAID-1.
Kernel is stock except for ppskit-lite patches.

This is the usually-not-mounted emergency rescue partition which
contains disaster recovery tools.  Thus, the somewhat paranoid
data integrity settings.

The FS just filled up as I was doing the every-few-months update.

Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/md1                432312    425732         0 100% /boot
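The 0 blocks available at 100% use, even though df's totals leave a little slack, is consistent with the reserved-block count in the tune2fs output further down (reserved blocks are usable by root only, so df reports them as unavailable). A quick check of the arithmetic, with the figures copied from the df line and the tune2fs output:

```python
# Figures from the df line above and the tune2fs -l output below,
# all in 1K blocks.
total_df = 432312    # df "1K-blocks"
used_df  = 425732    # df "Used"
reserved = 21968     # tune2fs "Reserved block count" (root-only)

free_blocks = total_df - used_df            # slack df can see
available = max(free_blocks - reserved, 0)  # what non-root can use
print(free_blocks, available)               # 6580 0
```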

I'm currently copying a raw device snapshot which I can make available to
anyone who promises not to go grepping for secrets on it.  I don't think
there are any, but hunting through the whole image and maybe zeroing a
few data blocks is a bit of a PITA.

Anyway, thanks for what has usually been a very reliable file system!
I hope there's enough info here to find the problem.

Here's the tune2fs -l output.  No idea why it says "clean"; it is still
mounted read/write.

tune2fs 1.38 (30-Jun-2005)
Filesystem volume name:   <none>
Last mounted on:          /boot
Filesystem UUID:          ad036960-f1df-4c5e-9240-4e917527f20c
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal filetype needs_recovery sparse_super
Default mount options:    (none)
Filesystem state:         clean
Errors behavior:          Remount read-only
Filesystem OS type:       Linux
Inode count:              55296
Block count:              439360
Reserved block count:     21968
Free blocks:              91538
Free inodes:              30975
First block:              1
Block size:               1024
Fragment size:            1024
Blocks per group:         8192
Fragments per group:      8192
Inodes per group:         1024
Inode blocks per group:   128
Last mount time:          Thu Nov 24 20:52:16 2005
Last write time:          Thu Nov 24 20:52:16 2005
Mount count:              6
Maximum mount count:      34
Last checked:             Sat Aug  6 04:39:08 2005
Check interval:           15552000 (6 months)
Next check after:         Thu Feb  2 04:39:08 2006
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               128
Journal inode:            8
Journal backup:           inode blocks
