
Re: unsuccessful in trying to increase NFS r/w block sizes to 32k

Jim Laverty wrote:
Check your rmem_max and wmem_max buffer sizes in /proc/sys/net/core (try
setting them to either 128k or 256k); also check rmem_default/wmem_default
on the Linux box.

echo 262143 > /proc/sys/net/core/rmem_max
echo 262143 > /proc/sys/net/core/wmem_max
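For anyone following along, the echoes above only last until reboot. On distributions of this vintage the same values can be made persistent via /etc/sysctl.conf (sysctl key names here are assumed to mirror the /proc/sys paths):

```shell
# Persistent equivalents of the echo commands above;
# key names mirror the /proc/sys/net/core paths.
cat >> /etc/sysctl.conf <<'EOF'
net.core.rmem_max = 262143
net.core.wmem_max = 262143
net.core.rmem_default = 262143
net.core.wmem_default = 262143
EOF
sysctl -p   # reload the settings without rebooting
```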

They are set to 128k at the moment (I upgraded to nfs-utils-1.0.3 to see how it compared to the 0.3.3-5 shipped with RHL 7.3; I guess 128k is the default for 1.0.3). I was going to vary those settings in just a minute, after I measured the effects of the block size changes.

I still want to know whether I made all the changes needed to enable 32k block sizes on the server side. The data shows no discernible difference between 8k and 32k block sizes, which raises a red flag.
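For what it's worth, one way to check what block sizes were actually negotiated (the client silently falls back to a smaller transfer size if the server caps it) is to look at the live mount options; the server, export, and mount point names below are placeholders:

```shell
# Mount explicitly requesting 32k blocks (server:/export is a placeholder)
mount -t nfs -o rsize=32768,wsize=32768 server:/export /mnt/nfs

# /proc/mounts shows the options actually in effect after negotiation;
# if the server caps the transfer size, a smaller rsize/wsize shows up here
grep nfs /proc/mounts
```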

Run nfsstat -r (RPC stats) or nfsstat -o rpc to see if you are getting retransmissions.

None to report. I don't believe the network infrastructure is causing the poor performance. The Linux and HP-UX clients both do the untar in less than half the time the Solaris client takes, and they are all on the same network/subnet/etc.

How well do the Linux clients perform with NFS compared to the Sun client?

The Linux clients vastly outperform the Solaris clients: 0m54.675s to untar the same 51MB file that took Solaris 5m40.673s.

Have you tried 4k r/w sizes for the Sun boxes?

I've tried 1k, 4k, 8k and 32k:

  1k:   28m17.027s
  4k:    9m21.745s
  8k:    5m21.860s
  32k:   5m25.239s
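Those timings were collected by hand; a loop along these lines (the server, export, mount point, and tarball names are all placeholders) makes it easy to rerun the matrix:

```shell
# Re-run the untar benchmark across block sizes (all names are placeholders)
for bs in 1024 4096 8192 32768; do
    umount /mnt/nfs 2>/dev/null
    mount -t nfs -o rsize=$bs,wsize=$bs server:/export /mnt/nfs
    cd /mnt/nfs
    echo "block size $bs:"
    time tar xf /tmp/test-51MB.tar
    cd /
done
```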

Have you tested exporting a
Solaris volume and mounting it from a Linux box?

This works fine and with good performance.

I have a lot of data (more than I ever thought I'd have on this topic) that I intend to post to this list once I've got it all together. The bottom line appears to be that Solaris clients with Linux NFS servers are a bad combination (unless I've got my system configured like a dog).
