11) How many nodes can mount a single NFS server at once?

  • 24% >= 512 nodes
  • 20% 65-128 nodes
  • 16% 1-16 nodes
  • 12% 17-32 nodes
  • 12% 257-512 nodes
  • 12% 129-256 nodes
  • 4% 33-64 nodes

12) How many NFSd daemons do you run per NFS server?

  • 45.0% 1-16
  • 13.6% 129-256
  • 13.6% 65-128
  • 9.1% 33-64
  • 4.5% 17-32
  • 4.5% 257-512
  • 4.5% 513-1024
  • 4.5% 2049-4096

13) Do you use kernel NFSd or a user-space NFS server?

  • 81.0% Kernel NFSd
  • 14.3% User space
  • 4.8% Both

14) What interconnect do you use with NFS?

  • 38.5% 10G
  • 26.9% GigE
  • 23.1% IB
  • 11.5% Other

15) If IB, what transport? (10 responses)

  • 100% IPoIB
  • 0% Other

16) If IB, do you use connected mode? (8 responses)

  • 62.5% Connected mode
  • 37.5% Don't use connected mode

17) Do you use UDP or TCP? (25 responses)

  • 84% TCP
  • 12% UDP
  • 4% Other

18) Which other network file systems do you use? (24 responses)

  • 0% pNFS
  • 58.3% Lustre
  • 16.7% Ceph
  • 12.5% BeeGFS
  • 12.5% GlusterFS
  • 8.3% None (Panasas, GPFS, HSM/SAM/QFS, or more than one of the above)

19) Are the other network file systems more or less reliable than NFS?

  • 58.3% Similar
  • 16.7% I use only NFS
  • 12.5% Much more reliable
  • 4.2% Much less reliable
  • 4.2% Somewhat less reliable
  • 4.2% Somewhat more reliable

20) Do you support MPI-IO (not just MPI)?

  • 70.8% No
  • 20.8% Yes
  • 8.3% Yes, but nobody uses it

21) Any tips for making NFS perform better or more reliably?

  • We start with the underlying block (RAID/disk) setup that will serve the data and plumb up from there. The key thing is choosing your RAID stride/chunk sizes and making your file system as aware of the RAID layout as you can, for good alignment. We follow the ESnet host tuning found at http://fasterdata.es.net/host-tuning/linux/ on both client and server systems. We also bump up the rpc.mountd thread count to help ensure successful mounts, since we use autofs to mount a number of the NFS spaces; when a larger HPC job started up on many nodes, there was a time when not all of them could mount successfully if the server was under load, and increasing the rpc.mountd count helped. We also set async and wdelay on our exports on the servers (see the sketch after this list).
  • Kernel settings
  • I've heard that configuring NFS over RDMA on IB boosts performance
  • We don't use NFS for high performance cluster data. That's Lustre's world. Where NFS is used for scientific data, it's in places where there are modest numbers of concurrent clients.
  • more disks
  • RPCMOUNTDOPTS="--num-threads=64"
  • Try to optimize /etc/sysconfig/nfs as much as possible.
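
Pulling the server-side tips together, here is a minimal sketch of what that tuning could look like on a Red Hat-style server. The export path, hostname pattern, and thread counts are illustrative assumptions, not survey data:

    # /etc/exports -- async and wdelay as mentioned above; note that async
    # trades crash safety for speed, so weigh that before using it
    /export/data  node*.cluster.example.com(rw,async,wdelay,no_subtree_check)

    # /etc/sysconfig/nfs -- more mountd threads to survive autofs mount storms,
    # more nfsd threads for many concurrent clients
    RPCMOUNTDOPTS="--num-threads=64"
    RPCNFSDCOUNT=128

    # apply the export changes without a restart
    exportfs -ra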

22) Any tips for making NFS clients perform better or more reliably?

  • Follow the above-mentioned ESnet info at http://fasterdata.es.net/host-tuning/linux/. I should note that on both clients and servers using IPoIB we use connected mode and set the MTU to 64k (see the sketch after this list).
  • Reducing the size of the kernel dirty buffer on the clients makes performance much more consistent.
  • Use reliable interconnect hardware
  • We've tried scripting NFS mounts w/o much success.
  • Educate users on using the right filesystem for the right task
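
Several of these client tips map to a handful of sysctls and link settings. A hedged sketch, with values that are illustrative assumptions in the spirit of the ESnet host-tuning page rather than its exact numbers:

    # /etc/sysctl.d/90-nfs-client.conf
    vm.dirty_ratio = 10                # smaller dirty buffer -> steadier writeback
    vm.dirty_background_ratio = 5
    net.core.rmem_max = 67108864       # larger socket buffers, ESnet-style
    net.core.wmem_max = 67108864

    # IPoIB connected mode with a large MTU, as in the first tip
    echo connected > /sys/class/net/ib0/mode
    ip link set ib0 mtu 65520          # the "64k" MTU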

23) Anything you would like to add:

  • We have also seen input from others that they see gains with the client option 'nocto'. The man pages suggest this has some risks, so while we have tested it and can see that certain loads gain from it, we have not yet deployed the option in our general setup; we are in the process of testing our apps to ensure we do not create other issues if we use this flag. Another thing we have been looking at is cachefilesd, to see how well it helps for data that can easily be cached. For things like our application trees, the OS (we are NFSRoot booted), and even some user reference data sets, this looks quite promising, but we have not gone live with it yet either. (A sketch of both options follows this list.)
  • We're always looking to improve our environment as well. We don't always have TIME to do so, of course.
  • Horses for courses. NFS is great for shared software and home directories. It's pretty useless for high performance access from hundreds of compute nodes.
  • Every storage system / file system I've ever seen or used has had its problems. There is no silver bullet (afaik). Use that which you have the competence to handle.
  • We are currently struggling with NFS mounts, which we use extensively throughout our department. The problems are that the mounts hang constantly and that one person using a share heavily slows down other machines. We've done lots of research into optimizing NFS but always come back to the same issues (hanging mounts that don't recover without admin intervention). We would love to know what other people are doing. We are experimenting with Ceph at the moment for future large storage needs.
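
For the 'nocto' and cachefilesd experiments described in the first item above, a hypothetical client setup might look like the following. The server name and export are made up for illustration, and nocto relaxes close-to-open cache consistency, so read nfs(5) carefully before deploying it:

    # /etc/fstab -- read-mostly application tree; fsc enables FS-Cache
    fileserver:/export/apps  /apps  nfs  ro,tcp,nocto,fsc  0 0

    # fsc only caches if the cachefilesd daemon is running
    systemctl enable --now cachefilesd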
