[Linux-cluster] gfs2 mount: No space left on device

Bart Verwilst lists at verwilst.be
Wed Aug 29 20:58:16 UTC 2012


Hi Steven, I wrote a very simple script to reproduce it: it writes small
random text files on one node while deleting them on another (it's the first
test I did, so I'm not sure yet whether the deleting is actually needed).
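
Roughly, the writer side amounts to this (a sketch rather than the literal
script; the file contents and the $RANDOM naming are my approximation of it):

  #!/bin/bash
  # Sketch of write-files.sh: keep creating small files on the GFS2
  # mount and print a dot per file. The redirect below corresponds to
  # the "line 5" that bash reports failing with ENOSPC further down.
  MNT=/etc/libvirt/qemu
  while true; do
      date > "$MNT/gfs2-test.$RANDOM"
      echo -n .
  done

The other node runs a loop that just removes the files again, something like:

  while true; do
      rm -f /etc/libvirt/qemu/gfs2-test.*
      sleep 1
  done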

root@vm02-test:~/gfs2-test# ./write-files.sh
....................................................................... <snip: many more dots>
./write-files.sh: line 5: /etc/libvirt/qemu/gfs2-test.9540: No space left on device
../write-files.sh: line 5: /etc/libvirt/qemu/gfs2-test.26494: No space left on device
../write-files.sh: line 5: /etc/libvirt/qemu/gfs2-test.6601: No space left on device
................................................................................./write-files.sh: line 5: /etc/libvirt/qemu/gfs2-test.2420: No space left on device
../write-files.sh: line 5: /etc/libvirt/qemu/gfs2-test.25833: No space left on device
../write-files.sh: line 5: /etc/libvirt/qemu/gfs2-test.2901: No space left on device
../write-files.sh: line 5: /etc/libvirt/qemu/gfs2-test.25261: No space left on device
../write-files.sh: line 5: /etc/libvirt/qemu/gfs2-test.25212: No space left on device
../write-files.sh: line 5: /etc/libvirt/qemu/gfs2-test.13118: No space left on device
../write-files.sh: line 5: /etc/libvirt/qemu/gfs2-test.832: No space left on device
../write-files.sh: line 5: /etc/libvirt/qemu/gfs2-test.21241: No space left on device
<snip>

I had over 75% disk space free when this happened. I unmounted the
filesystem, but remounting made it hang, and it also hung on both of the
other nodes. The other gfs2 mount ( /var/lib/libvirt/sanlock ) kept working
just fine. Rebooting this node ( echo b > /proc/sysrq-trigger ) brought the
others back to life for a while, but I keep having stability issues with the
mounts now, so I think a full reboot is in order.
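
Once the filesystem is usable again I'll do the gfs2_edit check you
suggested; I assume it amounts to something like this (my reading of the
gfs2_edit man page, so the exact syntax may need adjusting):

  # List the resource group index, then dump the rgrp block that fsck
  # complained about (131090 / 0x20012) to see whether its allocation
  # bitmaps are actually full:
  gfs2_edit -p rindex /dev/mapper/iscsi_cluster_qemu
  gfs2_edit -p 131090 /dev/mapper/iscsi_cluster_qemu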

Any ideas?

Kind regards,

Bart


Steven Whitehouse wrote on 27.08.2012 14:53:
> Hi,
>
> On Thu, 2012-08-23 at 22:35 +0200, Bart Verwilst wrote:
>> Unmounting and remounting made the filesystem writeable again.
>>
>> I then ran gfs2_fsck on the device, which gave me:
>>
>
> The output from fsck doesn't really give any clues as to the cause.
>
> The reclaiming of unlinked inodes is a fairly normal thing to see,
> particularly if there has been some kind of crash just before running
> fsck, and it is nothing to worry about.
>
> The real issue is why you got this out of space error in the first
> place, when there appears to be plenty of free blocks left. It would be
> worth checking with gfs2_edit just to be sure that the allocation
> bitmaps are not full, even if the summary information says otherwise.
>
> Can you easily reproduce this issue, or is this something that has just
> occurred once?
>
> Steve.
>
>
>> root@vm01-test:~# gfs2_fsck /dev/mapper/iscsi_cluster_qemu
>> Initializing fsck
>> Validating Resource Group index.
>> Level 1 rgrp check: Checking if all rgrp and rindex values are good.
>> (level 1 passed)
>> Okay to reclaim unlinked inodes in resource group 131090 (0x20012)?
>> (y/n)y
>> Error: resource group 131090 (0x20012): free space (65527) does not
>> match bitmap (65528)
>> (1 blocks were reclaimed)
>> Fix the rgrp free blocks count? (y/n)y
>> The rgrp was fixed.
>> RGs: Consistent: 7   Inconsistent: 1   Fixed: 1   Total: 8
>> Starting pass1
>> Pass1 complete
>> Starting pass1b
>> Pass1b complete
>> Starting pass1c
>> Pass1c complete
>> Starting pass2
>> Pass2 complete
>> Starting pass3
>> Pass3 complete
>> Starting pass4
>> Pass4 complete
>> Starting pass5
>> RG #131090 (0x20012) Inode count inconsistent: is 1 should be 0
>> Update resource group counts? (y/n) y
>> Resource group counts updated
>> Pass5 complete
>> The statfs file is wrong:
>>
>> Current statfs values:
>> blocks:  524228 (0x7ffc4)
>> free:    424937 (0x67be9)
>> dinodes: 24 (0x18)
>>
>> Calculated statfs values:
>> blocks:  524228 (0x7ffc4)
>> free:    424938 (0x67bea)
>> dinodes: 23 (0x17)
>> Okay to fix the master statfs file? (y/n)y
>> The statfs file was fixed.
>> Writing changes to disk
>> gfs2_fsck complete
>>
>>
>> root@vm01-test:~# gfs2_fsck /dev/mapper/iscsi_cluster_qemu
>> Initializing fsck
>> Validating Resource Group index.
>> Level 1 rgrp check: Checking if all rgrp and rindex values are good.
>> (level 1 passed)
>> Okay to reclaim unlinked inodes in resource group 131090 (0x20012)?
>> (y/n)y
>> Error: resource group 131090 (0x20012): free space (65527) does not
>> match bitmap (65528)
>> (1 blocks were reclaimed)
>> Fix the rgrp free blocks count? (y/n)y
>> The rgrp was fixed.
>> RGs: Consistent: 7   Inconsistent: 1   Fixed: 1   Total: 8
>> Starting pass1
>> Pass1 complete
>> Starting pass1b
>> Pass1b complete
>> Starting pass1c
>> Pass1c complete
>> Starting pass2
>> Pass2 complete
>> Starting pass3
>> Pass3 complete
>> Starting pass4
>> Pass4 complete
>> Starting pass5
>> RG #131090 (0x20012) Inode count inconsistent: is 1 should be 0
>> Update resource group counts? (y/n) y
>> Resource group counts updated
>> Pass5 complete
>> The statfs file is wrong:
>>
>> Current statfs values:
>> blocks:  524228 (0x7ffc4)
>> free:    424937 (0x67be9)
>> dinodes: 24 (0x18)
>>
>> Calculated statfs values:
>> blocks:  524228 (0x7ffc4)
>> free:    424938 (0x67bea)
>> dinodes: 23 (0x17)
>> Okay to fix the master statfs file? (y/n)y
>> The statfs file was fixed.
>> Writing changes to disk
>> gfs2_fsck complete
>>
>> Could this be the same issue as bug
>> https://bugzilla.redhat.com/show_bug.cgi?id=666080 ?
>>
>> Bart
>>
>> Bart Verwilst wrote on 23.08.2012 22:16:
>> > Hello,
>> >
>> > One problem fixed, up to the next one :) While everything seemed to
>> > work fine for a while, now I'm seeing this:
>> >
>> > root@vm02-test:~# df -h | grep libvirt
>> > /dev/mapper/iscsi_cluster_qemu     2.0G  388M  1.7G  19% /etc/libvirt/qemu
>> > /dev/mapper/iscsi_cluster_sanlock  5.0G  393M  4.7G   8% /var/lib/libvirt/sanlock
>> >
>> > root@vm02-test:~# ls -al /etc/libvirt/qemu
>> > total 16
>> > drwxr-xr-x 2 root root 3864 Aug 23 13:54 .
>> > drwxr-xr-x 6 root root 4096 Aug 14 15:09 ..
>> > -rw------- 1 root root 2566 Aug 23 13:51 firewall.xml
>> > -rw------- 1 root root 2390 Aug 23 13:54 zabbix.xml
>> >
>> > root@vm02-test:~# gfs2_tool journals /etc/libvirt/qemu
>> > journal2 - 128MB
>> > journal1 - 128MB
>> > journal0 - 128MB
>> > 3 journal(s) found.
>> >
>> >
>> > root@vm02-test:~# touch /etc/libvirt/qemu/test
>> > touch: cannot touch `/etc/libvirt/qemu/test': No space left on device
>> >
>> >
>> >
>> > Anything I can do to debug this further?
>> >
>> > Kind regards,
>> >
>> > Bart Verwilst
>>
>> --
>> Linux-cluster mailing list
>> Linux-cluster at redhat.com
>> https://www.redhat.com/mailman/listinfo/linux-cluster



