[Linux-cluster] nfs cluster, problem with delete file in the failover case

J. Bruce Fields bfields at fieldses.org
Mon May 11 21:20:15 UTC 2015

On Sun, May 10, 2015 at 11:28:25AM +0200, gianpietro.sella at unipd.it wrote:
> Hi, sorry for my bad English.
> I am testing an active/passive NFS cluster (2 nodes).
> I followed these instructions for NFS:
> https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/High_Availability_Add-On_Administration/s1-resourcegroupcreatenfs-HAAA.html
> I use CentOS 7.1 on the nodes.
> The 2 nodes of the cluster share the same iSCSI volume.
> The NFS cluster works very well.
> I have only one problem.
> I mount the folder exported by the NFS cluster on my client node (NFSv3 protocol).
> I write a big data file (70 GB) to the NFS folder:
> dd if=/dev/zero bs=1M count=70000 > /Instances/output.dat
> Before the write finishes, I put the active node into standby status;
> the resource then migrates to the other node.
> When the dd write finishes, the file is OK.
> I delete the file output.dat.

So, the dd and the later rm are both run on the client, and the rm happens
after the dd has completed and exited?  And the rm doesn't happen till after
the first migration is completely finished?  What version of NFS are you
using?

It sounds like a sillyrename problem, but I don't see the explanation.
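With sillyrename, a client that removes a file some local process still has open renames it to a hidden ".nfsXXXX…" name instead, and only removes that name after the last close; if such a rename were stranded by the failover, it would show up in a listing of the export. One way to check (assuming /Instances is the client mount point from the report above):

```shell
# Look for silly-renamed leftovers on the export; these keep holding
# disk space until the NFS client finally removes them on last close.
find /Instances -name '.nfs*' -ls
```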


> Now the file output.dat is no longer present in the NFS folder; it was
> correctly erased.
> But the space on the NFS volume is not freed.
> If I run a df command on the client (and on the new active node), I
> see 70 GB of used space on the exported volume.
> Now, if I put the new active node into standby status (migrating the
> resource back to the first node, where the write started) so that the
> other node is active again, the space of the deleted output.dat file is
> freed.
> It is very strange.
> -- 
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
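This matches how Unix frees disk space in general: unlinking removes the name, but the blocks are reclaimed only when the last open handle on the inode goes away. A quick local demonstration of that deleted-but-still-open behavior, no NFS required (/tmp/held.dat is just an example path):

```shell
# Open a file descriptor, write through it, then unlink the file.
exec 3> /tmp/held.dat
dd if=/dev/zero bs=1M count=10 >&3 2>/dev/null
rm /tmp/held.dat                # the name is gone...
ls -l /proc/$$/fd/3             # ...but fd 3 still points at the (deleted) inode
# On a server, `lsof +L1` lists such deleted-but-open files.
exec 3>&-                       # closing the last fd finally frees the space
```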
