[Linux-cluster] R: nfs cluster, problem with delete file in the failover case
gianpietro.sella at unipd.it
Thu May 21 19:05:36 UTC 2015
> On Wed, May 13, 2015 at 07:38:03PM +0000, gianpietro sella wrote:
>> J. Bruce Fields <bfields <at> fieldses.org> writes:
>>
>> >
>> > On Wed, May 13, 2015 at 01:06:17PM +0200, sella gianpietro wrote:
>> > > this is the inode count in the exported folder of the volume
>> > > on the server, before writing the file from the client:
>> > >
>> > > [root <at> cld-blu-13 nova]# du --inodes
>> > > 2 .
>> > >
>> > > these are the used blocks:
>> > >
>> > > [root <at> cld-blu-13 nova]# df -T
>> > > Filesystem                            Type 1K-blocks  Used  Available  Use% Mounted on
>> > > /dev/mapper/nfsclustervg-nfsclusterlv xfs  1152878588 33000 1152845588 1%   /nfscluster
>> > >
>> > > after writing the file from the client, with an umount/mount during the write:
>> > >
>> > > [root <at> cld-blu-13 nova]# du --inodes
>> > > 3 .
>> > >
>> > > [root <at> cld-blu-13 nova]# df -T
>> > > Filesystem                            Type 1K-blocks  Used     Available  Use% Mounted on
>> > > /dev/mapper/nfsclustervg-nfsclusterlv xfs  1152878588 21004520 1131874068 2%   /nfscluster
>> > >
>> > > this is correct.
>> > > now delete the file:
>> > >
>> > > [root <at> cld-blu-13 nova]# du --inodes
>> > > 2 .
>> > >
>> > > the number of the inodes is correct (from 3 to 2).
>> > >
>> > > [root <at> cld-blu-13 nova]# df -T
>> > > Filesystem                            Type 1K-blocks  Used     Available  Use% Mounted on
>> > > /dev/mapper/nfsclustervg-nfsclusterlv xfs  1152878588 21004520 1131874068 2%   /nfscluster
>> > >
>> > > the number of used blocks is not correct.
>> > > It does not return to the initial value of 33000.
>> >
>> > If you try "df -i", you'll probably also find that it gives the
>> "wrong"
>> > result. (So, probably 3 inodes, though "du --inodes" is still only
>> > finding 2).
>> >
>> > --b.
>> >
>>
>>
>> the problem is that after deleting the file the inode goes into the
>> orphaned state:
>
> Yeah, that's consistent with everything else--we're not removing a
> dentry when we should for some reason, so the inode's staying
> referenced.
>
> --b.
>
Thanks, Bruce.
Yes, this is true.
I use an NFS cluster on 2 nodes for Nova instances in OpenStack (the
instances are stored in an NFS folder).
The probability that I create a file before a failover and then delete
that file after the failover is very low.
In that case I can execute a "mount -o remount" after the failover and
the delete; the orphaned inode is then removed and the free disk space
is correct again.
I do not understand what is still using the file after the failover and
the delete.
After I delete the file I do not see any process using the deleted file.
This is very strange.
But this is just a curiosity on my part.
I think that the cause is the unmount operation on the failover node.
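For what it's worth, the general behaviour (du and df disagreeing after a
delete) can be reproduced locally without NFS. This is a minimal sketch on
plain Linux, using a hypothetical temp file, of how an unlinked file's inode
stays allocated while something still holds a reference to it:

```shell
# Create a file and hold it open on fd 3.
tmp=$(mktemp)
printf 'some data\n' > "$tmp"
exec 3<"$tmp"

# Unlink it: "du --inodes" stops counting it immediately...
rm -f "$tmp"

# ...but the inode (and its blocks, as seen by df) stays allocated,
# because fd 3 still references it. The /proc symlink shows this:
readlink "/proc/$$/fd/3"    # path ends in " (deleted)"

# Dropping the last reference is what finally frees the space.
exec 3<&-
```

Normally `lsof +L1` lists such open-but-unlinked files, which is why it is
strange that no process shows up here; in the failover case the surviving
reference is presumably held inside the kernel/NFS server rather than by a
visible process, which would explain why a remount, letting the filesystem
clean up its orphan list, releases the space.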
>>
>> [root at cld-blu-13 nova]# tune2fs -l /dev/nfsclustervg/nfsclusterlv |grep
>> -i inode
>> Filesystem features: has_journal ext_attr resize_inode dir_index
>> filetype needs_recovery sparse_super large_file
>> Inode count: 72097792
>> Free inodes: 72097754
>> Inodes per group: 8192
>> Inode blocks per group: 512
>> First inode: 11
>> Inode size: 256
>> Journal inode: 8
>> First orphan inode: 53067783
>> Journal backup: inode blocks
>>
>>
>>
>>
>> --
>> Linux-cluster mailing list
>> Linux-cluster at redhat.com
>> https://www.redhat.com/mailman/listinfo/linux-cluster
>
>