[Linux-cachefs] Adventures in NFS re-exporting

bfields bfields at fieldses.org
Tue Nov 24 21:15:22 UTC 2020


On Tue, Nov 24, 2020 at 08:35:06PM +0000, Daire Byrne wrote:
> Sometimes I have seen clusters of 16 GETATTRs for the same file on the
> wire with nothing else in between. So if the re-export server is the
> only "client" writing these files to the originating server, why do we
> need to do so many repeat GETATTR calls when using nconnect>1? And why
> are the COMMIT calls required when the writes are coming via nfsd but
> not from userspace on the re-export server? Is that due to some sort
> of memory pressure or locking?
> 
> I picked the NFSv3 originating server case because my head starts to
> hurt tracking the equivalent packets, stateids and compound calls with
> NFSv4. But I think it's mostly the same for NFSv4. The writes through
> the re-export server lead to lots of COMMITs and (double) GETATTRs but
> using nconnect>1 at least doesn't seem to make it any worse like it
> does for NFSv3.
> 
> But maybe you actually want all the extra COMMITs to help better
> guarantee your writes when putting a re-export server in the way?
> Perhaps all of this is by design...

Maybe that's close-to-open combined with the server's tendency to
open/close on every IO operation?  (Though the file cache should have
helped with that, I thought; as would using version >=4.0 on the final
client.)
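One way to check would be to compare per-op counts on the re-export server
before and after a test write. A sketch (assumes nfs-utils is installed and
a standard Linux /proc layout; nothing here is specific to your setup):

```shell
# Cumulative client-side NFSv3 op counts issued by the re-export server
# to the originating server (look at the GETATTR and COMMIT columns).
nfsstat -c -3

# Per-mount op counts, including GETATTR and COMMIT, for every NFS mount
# visible to this process.
grep -E 'GETATTR|COMMIT' /proc/self/mountstats
```

Running these before and after a write through the re-export path should
show how many extra GETATTRs and COMMITs each written file costs.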

Might be interesting to know whether the nocto mount option makes a
difference.  (So, add "nocto" to the mount options for the NFS mount
that you're re-exporting on the re-export server.)
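For concreteness, a sketch of what that mount might look like on the
re-export server (the hostname "origin", the export path, the mount point,
and the other options shown are all placeholders, not a recommendation):

```shell
# Mount the originating server's export with nocto on the re-export
# server; "origin:/export" and /srv/nfs_src are hypothetical names.
mount -t nfs -o vers=3,nocto,nconnect=4 origin:/export /srv/nfs_src
```

nocto relaxes close-to-open cache consistency on that mount, so if the
extra GETATTRs disappear with it set, that would point at close-to-open
as the cause.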

By the way I made a start at a list of issues at

	http://wiki.linux-nfs.org/wiki/index.php/NFS_re-export

but I was a little vague on which of your issues remained and didn't
take much time over it.

(By the way, if you want an account on that wiki, I seem to recall you just
have to ask Trond; account creation is restricted for anti-spam reasons.)

--b.



