[Linux-cluster] nfs cluster, problem with delete file in the failover case

gianpietro.sella at unipd.it gianpietro.sella at unipd.it
Sun May 10 09:28:25 UTC 2015


Hi, sorry for my bad English.
I am testing an active/passive NFS cluster (2 nodes).
I followed these instructions for NFS:

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/High_Availability_Add-On_Administration/s1-resourcegroupcreatenfs-HAAA.html
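For reference, the resource group from that guide is created roughly like
this (the group name, resource names, directories, network and IP below are
placeholders for my real values; in my case the Filesystem device is the
shared iSCSI volume):

pcs resource create my_lvm LVM volgrpname=my_vg exclusive=true --group nfsgroup
pcs resource create nfsshare Filesystem device=/dev/my_vg/nfs_lv directory=/nfsshare fstype=ext4 --group nfsgroup
pcs resource create nfs-daemon nfsserver nfs_shared_infodir=/nfsshare/nfsinfo nfs_no_notify=true --group nfsgroup
pcs resource create nfs-root exportfs clientspec=192.168.1.0/24 options=rw,sync,no_root_squash directory=/nfsshare/exports fsid=0 --group nfsgroup
pcs resource create nfs_ip IPaddr2 ip=192.168.1.200 cidr_netmask=24 --group nfsgroup
pcs resource create nfs-notify nfsnotify source_host=192.168.1.200 --group nfsgroup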

I use CentOS 7.1 on the nodes.
The two nodes of the cluster share the same iSCSI volume.
The NFS cluster works very well.
I have only one problem.
I mount the folder exported by the NFS cluster on my client node (NFSv3 protocol).
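The client mount looks like this (the virtual IP and the paths are
placeholders for my real ones):

mount -t nfs -o vers=3 192.168.1.200:/nfsshare/exports /Instances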
I write a big data file (70GB) to the NFS folder:
dd if=/dev/zero of=/Instances/output.dat bs=1M count=70000
Before the write finishes, I put the active node into standby;
the resources then migrate to the other node.
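The standby step is done with pcs (the node name is a placeholder):

pcs cluster standby node1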
When the dd write finishes, the file is OK.
I then delete the file output.dat.
The file output.dat is no longer present in the NFS folder; it is
correctly erased.
But the space on the NFS volume is not freed.
If I run a df command on the client (and on the new active node), I
still see 70GB of used space on the exported volume.
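To illustrate the check (mount points are placeholders; the lsof line is
just one way to see whether a deleted file is still held open by some
process on the server):

df -h /Instances        # on the client: 70GB still shown as used
df -h /nfsshare         # on the new active node: same used space
lsof +L1 /nfsshare      # on the new active node: list deleted files still held open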
Now if I put the new active node into standby (migrating the resources
back to the first node, where the write started), so that the first node
is the active node again, the space of the deleted output.dat file is
finally freed.
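The failback steps, again with placeholder node names:

pcs cluster unstandby node1
pcs cluster standby node2

Only after this does df show the 70GB freed, on both the client and the
server.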
It is very strange.




