<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
		<id>https://www.clustermonkey.net/cdp/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Laytonjb</id>
		<title>Cluster Documentation Project - User contributions [en]</title>
		<link rel="self" type="application/atom+xml" href="https://www.clustermonkey.net/cdp/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Laytonjb"/>
		<link rel="alternate" type="text/html" href="https://www.clustermonkey.net/cdp/index.php/Special:Contributions/Laytonjb"/>
		<updated>2026-05-09T07:42:25Z</updated>
		<subtitle>User contributions</subtitle>
		<generator>MediaWiki 1.27.4</generator>

	<entry>
		<id>https://www.clustermonkey.net/cdp/index.php?title=NFS&amp;diff=2158</id>
		<title>NFS</title>
		<link rel="alternate" type="text/html" href="https://www.clustermonkey.net/cdp/index.php?title=NFS&amp;diff=2158"/>
				<updated>2009-01-28T14:27:49Z</updated>
		
		<summary type="html">&lt;p&gt;Laytonjb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page covers some tweak ideas for running NFS on your cluster. It covers both the server and the client. There is no guarantee that these tweaks will work for your particular case. They are tweaks that either people have tried or have been floating around the net for a while. They may help some workloads and not others. So your mileage may vary.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Our advice is to try the tweaks on your system '''for your workloads''' and then decide for yourself if the tweaks are worth it or not.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Equally important, if you do any tweaks and have success, please add your comments to this wiki so other people can learn from what you did (and you can become famous in the annals of Cluster-Tweaking-lore). We can't beg you enough to add your comments, observations, and success stories with as much detail as you can provide.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
With these brief introductory concepts, let's dive into the subtopics for NFS tweaking.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[1. Server Tweaks]]&lt;br /&gt;
:[[1.1 Number of Threads]]&lt;br /&gt;
:[[1.2 Second Topic]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[2. Client Tweaks]]&lt;br /&gt;
:[[2.1 Mount options]]&lt;br /&gt;
:[[2.2 Second Topic]]&lt;br /&gt;
:[[2.3 CacheFS]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[3. Network Tweaks]]&lt;br /&gt;
:[[3.1 UDP vs. TCP]]&lt;br /&gt;
:[[3.2 Jumbo Frames]] (Changing the frame size)&lt;br /&gt;
:[[3.3 Third Topic]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[4. Success Stories]] (Put your success stories here)&lt;/div&gt;</summary>
		<author><name>Laytonjb</name></author>	</entry>

	<entry>
		<id>https://www.clustermonkey.net/cdp/index.php?title=2.2_biod_daemons_on_clients&amp;diff=2155</id>
		<title>2.2 biod daemons on clients</title>
		<link rel="alternate" type="text/html" href="https://www.clustermonkey.net/cdp/index.php?title=2.2_biod_daemons_on_clients&amp;diff=2155"/>
				<updated>2009-01-21T00:25:34Z</updated>
		
		<summary type="html">&lt;p&gt;Laytonjb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Client tuning is a bit more difficult (more interesting?) than server tuning :) One thing you can fairly easily tune is the number of biod daemons on the client nodes.&lt;br /&gt;
&lt;br /&gt;
On the clients, biods are the daemons that handle block IO. They perform read-ahead and write-behind on remote file systems in order to improve performance. You can actually run without any biods if you want and still use NFS, but running them can improve IO performance on the clients.&lt;br /&gt;
&lt;br /&gt;
The link [http://osr507doc.sco.com/en/PERFORM/NFS_tuning.html#tuning_biods] below discusses biods on the clients. The link is for SCO, but the general concepts are applicable to Linux. From the link,&lt;br /&gt;
&lt;br /&gt;
''On an NFS client system, you do not need to run any biod processes for applications to access remote filesystems. The biods handle read-ahead and write-behind on remote filesystems in order to improve performance. When reading, they send requests to read disk blocks ahead of that currently requested. When writing, they take over responsibility for handling writing the block to the remote disk from the application. The biod processes visible using ps(C) are merely convenient handles used by the process scheduler to control NFS client operation -- the majority of the work dealing with the read and write requests is dealt with inside the kernel.&lt;br /&gt;
&lt;br /&gt;
''If no biods are running, the application's performance will suffer as a result. When it writes to the remote filesystem, the write system call will block until the data has been written to the disk on the server. When it reads from the remote filesystem, it is unlikely to find the blocks in the buffer cache.&lt;br /&gt;
&lt;br /&gt;
''From this, you might deduce that running an extra copy of biod will always enhance NFS performance on the client. For example, if four biods are running, each of these can perform asynchronous writes without applications programs having to wait for these to complete. If an application requires access to the remote filesystem while the biods are busy, it performs this itself. The limit to performance enhancement comes from the fact that each biod's disk requests impose a load on the server. nfsd daemons, the buffer cache, and disk I/O on the server will all come under more pressure if more biod daemons are run on the clients. Network traffic will also increase as will the activity of the networking protocol stacks on both the server and its clients. The default number of biod processes run on a client is four. To see if the number running on your system is adequate, use the ps -ef command and examine the elapsed CPU time used by the biods under the TIME column. Note that the results are only meaningful if your system has been operating under normal conditions for several hours.&lt;br /&gt;
&lt;br /&gt;
''If nfsstat -c on the client shows a wait for client handle value of zero and if the TIME value for at least one of the biods is substantially less than the others, then there are probably enough daemons running. If several biods show low TIME values, it should be safe to reduce their number to one more than the number showing high TIME values.&lt;br /&gt;
&lt;br /&gt;
''If all the TIME values are high, increase the number of biods by two, and continue to monitor the situation.&lt;br /&gt;
&lt;br /&gt;
''If you are root, you can reduce the number of biods running by killing them with kill(C). You can also start extra biods running using the command /etc/biod.&lt;br /&gt;
&lt;br /&gt;
''To change the number of biods that are configured to run, edit the following lines in the file /etc/nfs on each client:&lt;br /&gt;
&lt;br /&gt;
   [ -x /etc/biod ] &amp;amp;&amp;amp; {&lt;br /&gt;
           echo &amp;quot; biod(xnumber)\c&amp;quot;&lt;br /&gt;
           biod number &amp;amp;&lt;br /&gt;
   }&lt;br /&gt;
&lt;br /&gt;
''When NFS is next started on the client, number biods will run.'' &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
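As a minimal sketch, assuming your system shows biod processes in the process table (as the SCO system above does; the daemon name can differ on other systems), the monitoring part of the procedure looks like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Count the biod processes and inspect their accumulated CPU time (TIME column)&lt;br /&gt;
ps -ef | grep biod&lt;br /&gt;
&lt;br /&gt;
# Check the client-side NFS/RPC statistics&lt;br /&gt;
nfsstat -c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;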
The procedure above can be a little complicated, but try to follow the steps (ask questions if you need to). In addition, don't forget to use your applications if possible so that you can get a better understanding of the impact of biods on your application performance.&lt;/div&gt;</summary>
		<author><name>Laytonjb</name></author>	</entry>

	<entry>
		<id>https://www.clustermonkey.net/cdp/index.php?title=2.2_biod_daemons_on_clients&amp;diff=2154</id>
		<title>2.2 biod daemons on clients</title>
		<link rel="alternate" type="text/html" href="https://www.clustermonkey.net/cdp/index.php?title=2.2_biod_daemons_on_clients&amp;diff=2154"/>
				<updated>2009-01-21T00:19:39Z</updated>
		
		<summary type="html">&lt;p&gt;Laytonjb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Client tuning is a bit more difficult (more interesting?) than server tuning :) One thing you can fairly easily tune is the number of biod daemons on the client nodes.&lt;br /&gt;
&lt;br /&gt;
On the clients, biods are the daemons that handle block IO. They perform read-ahead and write-behind on remote file systems in order to improve performance. You can actually run without any biods if you want and still use NFS, but running them can improve IO performance on the clients.&lt;br /&gt;
&lt;br /&gt;
The link [http://osr507doc.sco.com/en/PERFORM/NFS_tuning.html#tuning_biods] below discusses biods on the clients. From the link,&lt;br /&gt;
&lt;br /&gt;
''On an NFS client system, you do not need to run any biod processes for applications to access remote filesystems. The biods handle read-ahead and write-behind on remote filesystems in order to improve performance. When reading, they send requests to read disk blocks ahead of that currently requested. When writing, they take over responsibility for handling writing the block to the remote disk from the application. The biod processes visible using ps(C) are merely convenient handles used by the process scheduler to control NFS client operation -- the majority of the work dealing with the read and write requests is dealt with inside the kernel.&lt;br /&gt;
&lt;br /&gt;
''If no biods are running, the application's performance will suffer as a result. When it writes to the remote filesystem, the write system call will block until the data has been written to the disk on the server. When it reads from the remote filesystem, it is unlikely to find the blocks in the buffer cache.&lt;br /&gt;
&lt;br /&gt;
''From this, you might deduce that running an extra copy of biod will always enhance NFS performance on the client. For example, if four biods are running, each of these can perform asynchronous writes without applications programs having to wait for these to complete. If an application requires access to the remote filesystem while the biods are busy, it performs this itself. The limit to performance enhancement comes from the fact that each biod's disk requests impose a load on the server. nfsd daemons, the buffer cache, and disk I/O on the server will all come under more pressure if more biod daemons are run on the clients. Network traffic will also increase as will the activity of the networking protocol stacks on both the server and its clients. The default number of biod processes run on a client is four. To see if the number running on your system is adequate, use the ps -ef command and examine the elapsed CPU time used by the biods under the TIME column. Note that the results are only meaningful if your system has been operating under normal conditions for several hours.&lt;br /&gt;
&lt;br /&gt;
''If nfsstat -c on the client shows a wait for client handle value of zero and if the TIME value for at least one of the biods is substantially less than the others, then there are probably enough daemons running. If several biods show low TIME values, it should be safe to reduce their number to one more than the number showing high TIME values.&lt;br /&gt;
&lt;br /&gt;
''If all the TIME values are high, increase the number of biods by two, and continue to monitor the situation.&lt;br /&gt;
&lt;br /&gt;
''If you are root, you can reduce the number of biods running by killing them with kill(C). You can also start extra biods running using the command /etc/biod.&lt;br /&gt;
&lt;br /&gt;
''To change the number of biods that are configured to run, edit the following lines in the file /etc/nfs on each client:&lt;br /&gt;
&lt;br /&gt;
   [ -x /etc/biod ] &amp;amp;&amp;amp; {&lt;br /&gt;
           echo &amp;quot; biod(xnumber)\c&amp;quot;&lt;br /&gt;
           biod number &amp;amp;&lt;br /&gt;
   }&lt;br /&gt;
&lt;br /&gt;
''When NFS is next started on the client, number biods will run.'' &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The procedure above can be a little complicated, but try to follow the steps (ask questions if you need to). In addition, don't forget to use your applications if possible so that you can get a better understanding of the impact of biods on your application performance.&lt;/div&gt;</summary>
		<author><name>Laytonjb</name></author>	</entry>

	<entry>
		<id>https://www.clustermonkey.net/cdp/index.php?title=2.2_biod_daemons_on_clients&amp;diff=2153</id>
		<title>2.2 biod daemons on clients</title>
		<link rel="alternate" type="text/html" href="https://www.clustermonkey.net/cdp/index.php?title=2.2_biod_daemons_on_clients&amp;diff=2153"/>
				<updated>2009-01-21T00:18:43Z</updated>
		
		<summary type="html">&lt;p&gt;Laytonjb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Client tuning is a bit more difficult (more interesting?) than server tuning :) One thing you can fairly easily tune is the number of biod daemons on the client nodes.&lt;br /&gt;
&lt;br /&gt;
On the clients, biods are the daemons that handle block IO. They perform read-ahead and write-behind on remote file systems in order to improve performance. You can actually run without any biods if you want and still use NFS, but running them can improve IO performance on the clients.&lt;br /&gt;
&lt;br /&gt;
The link [http://osr507doc.sco.com/en/PERFORM/NFS_tuning.html#tuning_biods] below discusses biods on the clients. From the link,&lt;br /&gt;
&lt;br /&gt;
''On an NFS client system, you do not need to run any biod processes for applications to access remote filesystems. The biods handle read-ahead and write-behind on remote filesystems in order to improve performance. When reading, they send requests to read disk blocks ahead of that currently requested. When writing, they take over responsibility for handling writing the block to the remote disk from the application. The biod processes visible using ps(C) are merely convenient handles used by the process scheduler to control NFS client operation -- the majority of the work dealing with the read and write requests is dealt with inside the kernel.&lt;br /&gt;
&lt;br /&gt;
If no biods are running, the application's performance will suffer as a result. When it writes to the remote filesystem, the write system call will block until the data has been written to the disk on the server. When it reads from the remote filesystem, it is unlikely to find the blocks in the buffer cache.&lt;br /&gt;
&lt;br /&gt;
From this, you might deduce that running an extra copy of biod will always enhance NFS performance on the client. For example, if four biods are running, each of these can perform asynchronous writes without applications programs having to wait for these to complete. If an application requires access to the remote filesystem while the biods are busy, it performs this itself. The limit to performance enhancement comes from the fact that each biod's disk requests impose a load on the server. nfsd daemons, the buffer cache, and disk I/O on the server will all come under more pressure if more biod daemons are run on the clients. Network traffic will also increase as will the activity of the networking protocol stacks on both the server and its clients. The default number of biod processes run on a client is four. To see if the number running on your system is adequate, use the ps -ef command and examine the elapsed CPU time used by the biods under the TIME column. Note that the results are only meaningful if your system has been operating under normal conditions for several hours.&lt;br /&gt;
&lt;br /&gt;
If nfsstat -c on the client shows a wait for client handle value of zero and if the TIME value for at least one of the biods is substantially less than the others, then there are probably enough daemons running. If several biods show low TIME values, it should be safe to reduce their number to one more than the number showing high TIME values.&lt;br /&gt;
&lt;br /&gt;
If all the TIME values are high, increase the number of biods by two, and continue to monitor the situation.&lt;br /&gt;
&lt;br /&gt;
If you are root, you can reduce the number of biods running by killing them with kill(C). You can also start extra biods running using the command /etc/biod.&lt;br /&gt;
&lt;br /&gt;
To change the number of biods that are configured to run, edit the following lines in the file /etc/nfs on each client:&lt;br /&gt;
&lt;br /&gt;
   [ -x /etc/biod ] &amp;amp;&amp;amp; {&lt;br /&gt;
           echo &amp;quot; biod(xnumber)\c&amp;quot;&lt;br /&gt;
           biod number &amp;amp;&lt;br /&gt;
   }&lt;br /&gt;
&lt;br /&gt;
When NFS is next started on the client, number biods will run.'' &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The procedure above can be a little complicated, but try to follow the steps (ask questions if you need to). In addition, don't forget to use your applications if possible so that you can get a better understanding of the impact of biods on your application performance.&lt;/div&gt;</summary>
		<author><name>Laytonjb</name></author>	</entry>

	<entry>
		<id>https://www.clustermonkey.net/cdp/index.php?title=NFS&amp;diff=2152</id>
		<title>NFS</title>
		<link rel="alternate" type="text/html" href="https://www.clustermonkey.net/cdp/index.php?title=NFS&amp;diff=2152"/>
				<updated>2009-01-21T00:08:25Z</updated>
		
		<summary type="html">&lt;p&gt;Laytonjb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page covers some tweak ideas for running NFS on your cluster. It covers both the server and the client. There is no guarantee that these tweaks will work for your particular case. They are tweaks that either people have tried or have been floating around the net for a while. They may help some workloads and not others. So your mileage may vary.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Our advice is to try the tweaks on your system '''for your workloads''' and then decide for yourself if the tweaks are worth it or not.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Equally important, if you do any tweaks and have success, please add your comments to this wiki so other people can learn from what you did (and you can become famous in the annals of Cluster-Tweaking-lore). We can't beg you enough to add your comments, observations, and success stories with as much detail as you can provide.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
With these brief introductory concepts, let's dive into the subtopics for NFS tweaking.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[1. Server Tweaks]]&lt;br /&gt;
:[[1.1 Number of Threads]]&lt;br /&gt;
:[[1.2 Second Topic]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[2. Client Tweaks]]&lt;br /&gt;
:[[2.1 Mount options]]&lt;br /&gt;
:[[2.2 biod daemons on clients]]&lt;br /&gt;
:[[2.3 CacheFS]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[3. Network Tweaks]]&lt;br /&gt;
:[[3.1 UDP vs. TCP]]&lt;br /&gt;
:[[3.2 Jumbo Frames]] (Changing the frame size)&lt;br /&gt;
:[[3.3 Third Topic]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[4. Success Stories]] (Put your success stories here)&lt;/div&gt;</summary>
		<author><name>Laytonjb</name></author>	</entry>

	<entry>
		<id>https://www.clustermonkey.net/cdp/index.php?title=Local_File_Systems&amp;diff=2150</id>
		<title>Local File Systems</title>
		<link rel="alternate" type="text/html" href="https://www.clustermonkey.net/cdp/index.php?title=Local_File_Systems&amp;diff=2150"/>
				<updated>2009-01-11T20:23:42Z</updated>
		
		<summary type="html">&lt;p&gt;Laytonjb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;1.0 Local File System Tweaks:&lt;br /&gt;
&lt;br /&gt;
[[1.1 ext2]]&lt;br /&gt;
&lt;br /&gt;
[[1.2 ext3]]&lt;br /&gt;
&lt;br /&gt;
[[1.3 ext4]]&lt;br /&gt;
&lt;br /&gt;
[[1.4 ReiserFS]]&lt;br /&gt;
&lt;br /&gt;
[[1.5 ReiserFS 4]]&lt;br /&gt;
&lt;br /&gt;
[[1.6 JFS]]&lt;br /&gt;
&lt;br /&gt;
[[1.7 XFS]]&lt;br /&gt;
&lt;br /&gt;
[[1.8 BTRFS]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[2.0 Benchmarking Local File Systems]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[3.0 Benchmark Results]]&lt;/div&gt;</summary>
		<author><name>Laytonjb</name></author>	</entry>

	<entry>
		<id>https://www.clustermonkey.net/cdp/index.php?title=Local_File_Systems&amp;diff=2149</id>
		<title>Local File Systems</title>
		<link rel="alternate" type="text/html" href="https://www.clustermonkey.net/cdp/index.php?title=Local_File_Systems&amp;diff=2149"/>
				<updated>2009-01-11T20:23:31Z</updated>
		
		<summary type="html">&lt;p&gt;Laytonjb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;1.0 Local File System Tweaks:&lt;br /&gt;
&lt;br /&gt;
[[1.1 ext2]]&lt;br /&gt;
&lt;br /&gt;
[[1.2 ext3]]&lt;br /&gt;
&lt;br /&gt;
[[1.3 ext4]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[1.4 ReiserFS]]&lt;br /&gt;
&lt;br /&gt;
[[1.5 ReiserFS 4]]&lt;br /&gt;
&lt;br /&gt;
[[1.6 JFS]]&lt;br /&gt;
&lt;br /&gt;
[[1.7 XFS]]&lt;br /&gt;
&lt;br /&gt;
[[1.8 BTRFS]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[2.0 Benchmarking Local File Systems]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[3.0 Benchmark Results]]&lt;/div&gt;</summary>
		<author><name>Laytonjb</name></author>	</entry>

	<entry>
		<id>https://www.clustermonkey.net/cdp/index.php?title=Finite_Element_Analysis&amp;diff=2148</id>
		<title>Finite Element Analysis</title>
		<link rel="alternate" type="text/html" href="https://www.clustermonkey.net/cdp/index.php?title=Finite_Element_Analysis&amp;diff=2148"/>
				<updated>2009-01-11T20:21:49Z</updated>
		
		<summary type="html">&lt;p&gt;Laytonjb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://en.wikipedia.org/wiki/Finite_element Finite Element] Analysis is a very computationally intensive technique for solving a wide range of problems. This page will give you some suggestions for improving FEM performance of ISV codes such as [http://www.simulia.com/products/abaqus_fea.html Abaqus], [http://www.ansys.com/products/default.asp ANSYS], and [http://www.mscsoftware.com/products/nastran.cfm?Q=131&amp;amp;Z=457&amp;amp;Y=401 Nastran], as well as open-source codes such as [http://opensees.berkeley.edu/index.php OpenSees], [http://tahoe.ca.sandia.gov/ Tahoe], [http://www.oofem.org/en/oofem.html OOFEM], [http://www.calculix.de/ Calculix], [http://impact.sourceforge.net/ Impact], [http://www.csc.fi/english/pages/elmer Elmer], [http://cern49.cee.uiuc.edu/cfm/warp3d.html Warp3D], [http://mechsys.nongnu.org/index.shtml MechSysNG], [http://www.cimne.com/kratos/ Kratos], [http://sokocalo.engr.ucdavis.edu/~jeremic/PDD/ PDD], [http://adventure.sys.t.u-tokyo.ac.jp/ Adventure], [http://www.dealii.org/ deal.ii], [http://geofem.tokyo.rist.or.jp/ GeoFEM], and [http://www.rcs.manchester.ac.uk/research/parafem ParaFEM] that are focused on solving solid mechanics problems with FEA techniques.&lt;br /&gt;
&lt;br /&gt;
== Non-FEM Solver tips ==&lt;br /&gt;
The following items are not related to the actual FEM solver itself. But since solvers range so widely, these tips are somewhat generic and may or may not help your specific application.&lt;br /&gt;
&lt;br /&gt;
=== IO ===&lt;br /&gt;
FEM codes, for the most part, do a great deal of local IO. This can have several causes, including the solver itself (an out-of-core solver) or the retention of intermediate results to make post-processing, including stress recovery, much faster.&lt;br /&gt;
&lt;br /&gt;
Based on some testing of commercial FEM codes, here are some tips for improving performance.&lt;br /&gt;
&lt;br /&gt;
* Use RAID-0 for the local scratch space.&lt;br /&gt;
* If possible, use EXT2 or XFS for the local IO; a minimal setup sketch follows.&lt;br /&gt;
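&lt;br /&gt;
As an illustration only, here is one way to build a two-disk RAID-0 scratch space with XFS (the device names and mount point are hypothetical; substitute your own):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Build a two-disk RAID-0 array (example devices /dev/sdb and /dev/sdc)&lt;br /&gt;
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc&lt;br /&gt;
&lt;br /&gt;
# Create an XFS file system on it and mount it as local scratch&lt;br /&gt;
mkfs.xfs /dev/md0&lt;br /&gt;
mkdir -p /scratch&lt;br /&gt;
mount /dev/md0 /scratch&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>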
		<author><name>Laytonjb</name></author>	</entry>

	<entry>
		<id>https://www.clustermonkey.net/cdp/index.php?title=3.0_Benchmark_Results&amp;diff=2147</id>
		<title>3.0 Benchmark Results</title>
		<link rel="alternate" type="text/html" href="https://www.clustermonkey.net/cdp/index.php?title=3.0_Benchmark_Results&amp;diff=2147"/>
				<updated>2009-01-11T19:55:01Z</updated>
		
		<summary type="html">&lt;p&gt;Laytonjb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This part of the wiki presents posted benchmark results for various file systems on Linux. Be sure to look very carefully at the conditions of the test, the actual benchmarks used, and the results. &lt;br /&gt;
&lt;br /&gt;
If you want to comment on the links and even posted benchmarks, please do so. If you would like to post your own benchmarks, you are highly encouraged to do so. But please do everyone a favor and give us lots of details: the system you tested, the Linux distribution and any changes you may have made, the benchmark used, how you ran the benchmarks, how many times you ran them, and how you computed the results. Please be as detailed as possible, even to the point of giving too much detail. While labor intensive, this information will help people understand the conditions of the benchmark and help readers judge whether the results apply to their own situation. &lt;br /&gt;
&lt;br /&gt;
== Links to Benchmarks ==&lt;br /&gt;
&lt;br /&gt;
=== (1) Comparison of JFS, XFS, ReiserFS, EXT3, and EXT4 on Ubuntu 9.04 (1/11/09) ===&lt;br /&gt;
[http://www.phoronix.com/scan.php?page=article&amp;amp;item=ubuntu_ext4&amp;amp;num=1 Link to Benchmarks]&lt;br /&gt;
&lt;br /&gt;
[[Comments on Benchmark (1) ]]&lt;br /&gt;
&lt;br /&gt;
=== (2) Comparison of EXT3, EXT4, ReiserFS, and XFS (12/3/2008) ===&lt;br /&gt;
[http://www.phoronix.com/scan.php?page=article&amp;amp;item=ext4_benchmarks&amp;amp;num=1 Link to Benchmarks]&lt;br /&gt;
&lt;br /&gt;
[[Comments on Benchmark (2) ]]&lt;br /&gt;
&lt;br /&gt;
=== (3) Initial BTRFS results  ===&lt;br /&gt;
[http://scalability.org/?p=1133 Link]&lt;br /&gt;
&lt;br /&gt;
[[ Comments on Benchmark (3) ]]&lt;/div&gt;</summary>
		<author><name>Laytonjb</name></author>	</entry>

	<entry>
		<id>https://www.clustermonkey.net/cdp/index.php?title=1.8_BTRFS&amp;diff=2146</id>
		<title>1.8 BTRFS</title>
		<link rel="alternate" type="text/html" href="https://www.clustermonkey.net/cdp/index.php?title=1.8_BTRFS&amp;diff=2146"/>
				<updated>2009-01-11T19:47:46Z</updated>
		
		<summary type="html">&lt;p&gt;Laytonjb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This section discusses tuning tips for BTRFS.&lt;/div&gt;</summary>
		<author><name>Laytonjb</name></author>	</entry>

	<entry>
		<id>https://www.clustermonkey.net/cdp/index.php?title=1.7_XFS&amp;diff=2145</id>
		<title>1.7 XFS</title>
		<link rel="alternate" type="text/html" href="https://www.clustermonkey.net/cdp/index.php?title=1.7_XFS&amp;diff=2145"/>
				<updated>2009-01-11T19:46:55Z</updated>
		
		<summary type="html">&lt;p&gt;Laytonjb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This section discusses and presents XFS tuning tips.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Links to information ==&lt;br /&gt;
[http://everything2.com/index.pl?node_id=1479435 Link 1]&lt;/div&gt;</summary>
		<author><name>Laytonjb</name></author>	</entry>

	<entry>
		<id>https://www.clustermonkey.net/cdp/index.php?title=Comments_on_Benchmark_(1)&amp;diff=2144</id>
		<title>Comments on Benchmark (1)</title>
		<link rel="alternate" type="text/html" href="https://www.clustermonkey.net/cdp/index.php?title=Comments_on_Benchmark_(1)&amp;diff=2144"/>
				<updated>2009-01-11T19:44:51Z</updated>
		
		<summary type="html">&lt;p&gt;Laytonjb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Ubuntu 9.04 is intended to support the 2.6.28 kernel. With the 2.6.28 kernel, the experimental label was removed from ext4, and the Ubuntu developers have added the ability to install to an ext4 file system. A Samsung NC10 netbook running the pre-Alpha 3 release of Ubuntu 9.04 was used for the hardware. The netbook used an Intel Atom N270 processor, 2GB of DDR2 memory, a 32GB OCZ Core Series V2 SSD, and integrated Intel graphics.&lt;br /&gt;
&lt;br /&gt;
The benchmarks are not really HPC related, but it's another data point comparing a number of file systems.&lt;/div&gt;</summary>
		<author><name>Laytonjb</name></author>	</entry>

	<entry>
		<id>https://www.clustermonkey.net/cdp/index.php?title=3.0_Benchmark_Results&amp;diff=2143</id>
		<title>3.0 Benchmark Results</title>
		<link rel="alternate" type="text/html" href="https://www.clustermonkey.net/cdp/index.php?title=3.0_Benchmark_Results&amp;diff=2143"/>
				<updated>2009-01-11T19:44:44Z</updated>
		
		<summary type="html">&lt;p&gt;Laytonjb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This part of the wiki presents posted benchmark results for various file systems on Linux. Be sure to look very carefully at the conditions of the test, the actual benchmarks used, and the results. &lt;br /&gt;
&lt;br /&gt;
If you want to comment on the links and even posted benchmarks, please do so. If you would like to post your own benchmarks, you are highly encouraged to do so. But please do everyone a favor and give us lots of details: the system you tested, the Linux distribution and any changes you may have made, the benchmark used, how you ran the benchmarks, how many times you ran them, and how you computed the results. Please be as detailed as possible, even to the point of giving too much detail. While labor intensive, this information will help people understand the conditions of the benchmark and help readers judge whether the results apply to their own situation. &lt;br /&gt;
&lt;br /&gt;
== Links to Benchmarks ==&lt;br /&gt;
&lt;br /&gt;
=== (1) Comparison of JFS, XFS, ReiserFS, EXT3, and EXT4 on Ubuntu 9.04 (1/11/09) ===&lt;br /&gt;
[http://www.phoronix.com/scan.php?page=article&amp;amp;item=ubuntu_ext4&amp;amp;num=1 Link to Benchmarks]&lt;br /&gt;
&lt;br /&gt;
[[Comments on Benchmark (1) ]]&lt;br /&gt;
&lt;br /&gt;
=== (2) Comparison of EXT3, EXT4, ReiserFS, and XFS (12/3/2008) ===&lt;br /&gt;
[http://www.phoronix.com/scan.php?page=article&amp;amp;item=ext4_benchmarks&amp;amp;num=1 Link to Benchmarks]&lt;br /&gt;
&lt;br /&gt;
[[Comments on Benchmark (2) ]]&lt;br /&gt;
&lt;br /&gt;
=== (3) Next benchmark ===&lt;br /&gt;
Link&lt;br /&gt;
Comments&lt;/div&gt;</summary>
		<author><name>Laytonjb</name></author>	</entry>

	<entry>
		<id>https://www.clustermonkey.net/cdp/index.php?title=3.0_Benchmark_Results&amp;diff=2142</id>
		<title>3.0 Benchmark Results</title>
		<link rel="alternate" type="text/html" href="https://www.clustermonkey.net/cdp/index.php?title=3.0_Benchmark_Results&amp;diff=2142"/>
				<updated>2009-01-11T19:44:06Z</updated>
		
		<summary type="html">&lt;p&gt;Laytonjb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This part of the wiki presents posted benchmark results for various file systems on Linux. Be sure to look very carefully at the conditions of the test, the actual benchmarks used, and the results. &lt;br /&gt;
&lt;br /&gt;
If you want to comment on the links and even posted benchmarks, please do so. If you would like to post your own benchmarks, you are highly encouraged to do so. But please do everyone a favor and give us lots of details: the system you tested, the Linux distribution and any changes you may have made, the benchmark used, how you ran the benchmarks, how many times you ran them, and how you computed the results. Please be as detailed as possible, even to the point of giving too much detail. While labor intensive, this information will help people understand the conditions of the benchmark and help readers judge whether the results apply to their own situation. &lt;br /&gt;
&lt;br /&gt;
== Links to Benchmarks ==&lt;br /&gt;
&lt;br /&gt;
=== (1) Comparison of JFS, XFS, ReiserFS, EXT3, and EXT4 on Ubuntu 9.04 (1/11/09) ===&lt;br /&gt;
[http://www.phoronix.com/scan.php?page=article&amp;amp;item=ubuntu_ext4&amp;amp;num=1 Link to Benchmarks]&lt;br /&gt;
&lt;br /&gt;
[[Comments on Benchmark]]&lt;br /&gt;
&lt;br /&gt;
=== (2) Comparison of EXT3, EXT4, ReiserFS, and XFS (12/3/2008) ===&lt;br /&gt;
[http://www.phoronix.com/scan.php?page=article&amp;amp;item=ext4_benchmarks&amp;amp;num=1 Link to Benchmarks]&lt;br /&gt;
&lt;br /&gt;
[[Comments on Benchmark (2) ]]&lt;br /&gt;
&lt;br /&gt;
=== (3) Next benchmark ===&lt;br /&gt;
Link&lt;br /&gt;
Comments&lt;/div&gt;</summary>
		<author><name>Laytonjb</name></author>	</entry>

	<entry>
		<id>https://www.clustermonkey.net/cdp/index.php?title=Comments_on_Benchmark&amp;diff=2141</id>
		<title>Comments on Benchmark</title>
		<link rel="alternate" type="text/html" href="https://www.clustermonkey.net/cdp/index.php?title=Comments_on_Benchmark&amp;diff=2141"/>
				<updated>2009-01-11T19:41:27Z</updated>
		
		<summary type="html">&lt;p&gt;Laytonjb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Ubuntu 9.04 is intended to support the 2.6.28 kernel. With the 2.6.28 kernel, the experimental label was removed from ext4, and the Ubuntu developers have added the ability to install to an ext4 file system. A Samsung NC10 netbook running the pre-Alpha 3 release of Ubuntu 9.04 was used for the hardware. The netbook used an Intel Atom N270 processor, 2GB of DDR2 memory, a 32GB OCZ Core Series V2 SSD, and integrated Intel graphics.&lt;br /&gt;
&lt;br /&gt;
The benchmarks are not really HPC related, but it's another data point comparing a number of file systems.&lt;/div&gt;</summary>
		<author><name>Laytonjb</name></author>	</entry>

	<entry>
		<id>https://www.clustermonkey.net/cdp/index.php?title=3.0_Benchmark_Results&amp;diff=2140</id>
		<title>3.0 Benchmark Results</title>
		<link rel="alternate" type="text/html" href="https://www.clustermonkey.net/cdp/index.php?title=3.0_Benchmark_Results&amp;diff=2140"/>
				<updated>2009-01-11T19:41:18Z</updated>
		
		<summary type="html">&lt;p&gt;Laytonjb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This part of the wiki presents posted benchmark results for various file systems on Linux. Be sure to look very carefully at the conditions of the test, the actual benchmarks used, and the results. &lt;br /&gt;
&lt;br /&gt;
If you want to comment on the links and even posted benchmarks, please do so. If you would like to post your own benchmarks, you are highly encouraged to do so. But please do everyone a favor and give us lots of details: the system you tested, the Linux distribution and any changes you may have made, the benchmark used, how you ran the benchmarks, how many times you ran them, and how you computed the results. Please be as detailed as possible, even to the point of giving too much detail. While labor intensive, this information will help people understand the conditions of the benchmark and help readers judge whether the results apply to their own situation. &lt;br /&gt;
&lt;br /&gt;
== Links to Benchmarks ==&lt;br /&gt;
&lt;br /&gt;
=== (1) Comparison of JFS, XFS, ReiserFS, EXT3, and EXT4 on Ubuntu 9.04 (1/11/09) ===&lt;br /&gt;
[http://www.phoronix.com/scan.php?page=article&amp;amp;item=ubuntu_ext4&amp;amp;num=1 Link to Benchmarks]&lt;br /&gt;
&lt;br /&gt;
[[Comments on Benchmark]]&lt;br /&gt;
&lt;br /&gt;
=== (2) Second benchmark ===&lt;/div&gt;</summary>
		<author><name>Laytonjb</name></author>	</entry>

	<entry>
		<id>https://www.clustermonkey.net/cdp/index.php?title=3.0_Benchmark_Results&amp;diff=2139</id>
		<title>3.0 Benchmark Results</title>
		<link rel="alternate" type="text/html" href="https://www.clustermonkey.net/cdp/index.php?title=3.0_Benchmark_Results&amp;diff=2139"/>
				<updated>2009-01-11T19:34:34Z</updated>
		
		<summary type="html">&lt;p&gt;Laytonjb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This part of the wiki presents posted benchmark results for various file systems on Linux. Be sure to look very carefully at the conditions of the test, the actual benchmarks used, and the results. &lt;br /&gt;
&lt;br /&gt;
If you want to comment on the links and even posted benchmarks, please do so. If you would like to post your own benchmarks, you are highly encouraged to do so. But please do everyone a favor and give us lots of details: the system you tested, the Linux distribution and any changes you may have made, the benchmark used, how you ran the benchmarks, how many times you ran them, and how you computed the results. Please be as detailed as possible, even to the point of giving too much detail. While labor intensive, this information will help people understand the conditions of the benchmark and help readers judge whether the results apply to their own situation. &lt;br /&gt;
&lt;br /&gt;
== Links to Benchmarks ==&lt;br /&gt;
&lt;br /&gt;
=== Comparison of JFS, XFS, ReiserFS, EXT3, and EXT4 on Ubuntu 9.04 ===&lt;br /&gt;
[http://www.phoronix.com/scan.php?page=article&amp;amp;item=ubuntu_ext4&amp;amp;num=1 Link to Benchmarks]&lt;br /&gt;
&lt;br /&gt;
Comments on Benchmark&lt;/div&gt;</summary>
		<author><name>Laytonjb</name></author>	</entry>

	<entry>
		<id>https://www.clustermonkey.net/cdp/index.php?title=Local_File_Systems&amp;diff=2138</id>
		<title>Local File Systems</title>
		<link rel="alternate" type="text/html" href="https://www.clustermonkey.net/cdp/index.php?title=Local_File_Systems&amp;diff=2138"/>
				<updated>2009-01-11T19:22:25Z</updated>
		
		<summary type="html">&lt;p&gt;Laytonjb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;1.0 Local File System Tweaks:&lt;br /&gt;
&lt;br /&gt;
[[1.1 ext2]]&lt;br /&gt;
&lt;br /&gt;
[[1.2 ext3]]&lt;br /&gt;
&lt;br /&gt;
[[1.3 ext4]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[1.4 ReiserFS]]&lt;br /&gt;
&lt;br /&gt;
[[1.5 ReiserFS 4]]&lt;br /&gt;
&lt;br /&gt;
[[1.6 JFS]]&lt;br /&gt;
&lt;br /&gt;
[[1.7 XFS]]&lt;br /&gt;
&lt;br /&gt;
[[1.8 BTRFS]]&lt;br /&gt;
&lt;br /&gt;
[[2.0 Benchmarking Local File Systems]]&lt;br /&gt;
&lt;br /&gt;
[[3.0 Benchmark Results]]&lt;/div&gt;</summary>
		<author><name>Laytonjb</name></author>	</entry>

	<entry>
		<id>https://www.clustermonkey.net/cdp/index.php?title=Local_File_Systems&amp;diff=2137</id>
		<title>Local File Systems</title>
		<link rel="alternate" type="text/html" href="https://www.clustermonkey.net/cdp/index.php?title=Local_File_Systems&amp;diff=2137"/>
				<updated>2009-01-11T19:22:04Z</updated>
		
		<summary type="html">&lt;p&gt;Laytonjb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;1.0 Local File System Tweaks:&lt;br /&gt;
[[1.1 ext2]]&lt;br /&gt;
[[1.2 ext3]]&lt;br /&gt;
[[1.3 ext4]]&lt;br /&gt;
&lt;br /&gt;
[[1.4 ReiserFS]]&lt;br /&gt;
[[1.5 ReiserFS 4]]&lt;br /&gt;
&lt;br /&gt;
[[1.6 JFS]]&lt;br /&gt;
[[1.7 XFS]]&lt;br /&gt;
[[1.8 BTRFS]]&lt;br /&gt;
&lt;br /&gt;
[[2.0 Benchmarking Local File Systems]]&lt;br /&gt;
&lt;br /&gt;
[[3.0 Benchmark Results]]&lt;/div&gt;</summary>
		<author><name>Laytonjb</name></author>	</entry>

	<entry>
		<id>https://www.clustermonkey.net/cdp/index.php?title=3.1_UDP_vs._TCP&amp;diff=2136</id>
		<title>3.1 UDP vs. TCP</title>
		<link rel="alternate" type="text/html" href="https://www.clustermonkey.net/cdp/index.php?title=3.1_UDP_vs._TCP&amp;diff=2136"/>
				<updated>2009-01-11T14:23:41Z</updated>
		
		<summary type="html">&lt;p&gt;Laytonjb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Background ==&lt;br /&gt;
&lt;br /&gt;
The [http://en.wikipedia.org/wiki/Network_File_System_(protocol) original NFS] used [http://en.wikipedia.org/wiki/User_Datagram_Protocol UDP] for transmitting data. UDP was chosen because it is very simple: packets are just transmitted, and if any get lost, everything is sent again. In addition, there is no load on the server when the connection to the network (or really the client) is not active. But UDP does have the problem that if the network has any congestion and packets are lost, all of the packets are retransmitted, increasing the load on the server and the network.&lt;br /&gt;
&lt;br /&gt;
NFSv2, the original NFS that had widespread use, used UDP. A few companies put TCP into NFSv2, but it was not part of the NFS standard. With the advent of NFSv3, which was released around the mid-1990s, TCP was added to the protocol. Today, we can set our Linux NFS clients to transfer data via UDP (if we really want to) or to use TCP. The advantage of using TCP is that if a packet is lost, which can happen under reasonably heavy network congestion (or with a bad switch), then only the lost packet is retransmitted, limiting the amount of data that has to be resent.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Recommendations ==&lt;br /&gt;
Getting the best performance using either UDP or TCP really depends upon the situation. In general, people just use TCP because it's the default in most distributions (pretty much all distributions using a 2.6.x kernel). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Steps for Testing UDP and TCP Performance ===&lt;br /&gt;
But if you want to experiment, it is recommended that you start with UDP, run the cluster under load (i.e. run applications), and watch the load on the server as well as time the performance of your applications.&lt;br /&gt;
&lt;br /&gt;
To test UDP, you have to modify the file &amp;lt;tt&amp;gt;/etc/fstab&amp;lt;/tt&amp;gt; on all of your clients (compute nodes). It is beyond the scope of this wiki to tell you how to modify the file on all of the compute nodes. Please consult the manual for whatever tool you are using. The key change is to add the &amp;lt;tt&amp;gt;udp&amp;lt;/tt&amp;gt; option to the nfs mount points on the client, so an entry should look something like this (the server name and paths are just examples):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
nfs-server:/home  /home  nfs  rw,hard,intr,udp  0 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Don't forget to keep any other options on the nfs mount points that you had before.&lt;br /&gt;
&lt;br /&gt;
To watch the load, just use &amp;lt;tt&amp;gt;top&amp;lt;/tt&amp;gt; (&amp;lt;tt&amp;gt;man top&amp;lt;/tt&amp;gt; if you need help). Open a separate terminal on the node that is the NFS server and enter the command &amp;lt;tt&amp;gt;top&amp;lt;/tt&amp;gt;. Then in another terminal window, run your application(s) on the cluster. Watch the load as the applications start - you can even record the peak load during the run.&lt;br /&gt;
&lt;br /&gt;
When you run your applications, be sure you time how long they take to run. You can use the &amp;lt;tt&amp;gt;time&amp;lt;/tt&amp;gt; command in front of your application. If you run your application as &amp;lt;tt&amp;gt;./foo&amp;lt;/tt&amp;gt; you would run it as &amp;lt;tt&amp;gt;time ./foo&amp;lt;/tt&amp;gt;. When the application is done, it will print out the real time, the user time, and the system time. You are interested in the real time.&lt;br /&gt;
&lt;br /&gt;
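As a minimal sketch, one test cycle might look like the following (&amp;lt;tt&amp;gt;./foo&amp;lt;/tt&amp;gt; stands in for your own application):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Terminal 1, on the NFS server: watch the load&lt;br /&gt;
top&lt;br /&gt;
&lt;br /&gt;
# Terminal 2, on a client node: time the application&lt;br /&gt;
time ./foo&lt;br /&gt;
# note the &amp;quot;real&amp;quot; (wall clock) time printed when it finishes&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Record the real time and the peak server load for each run so the two protocols can be compared.&lt;br /&gt;
&lt;br /&gt;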
After you test your cluster with UDP, test it with TCP.&lt;/div&gt;</summary>
		<author><name>Laytonjb</name></author>	</entry>

	<entry>
		<id>https://www.clustermonkey.net/cdp/index.php?title=3.1_UDP_vs._TCP&amp;diff=2135</id>
		<title>3.1 UDP vs. TCP</title>
		<link rel="alternate" type="text/html" href="https://www.clustermonkey.net/cdp/index.php?title=3.1_UDP_vs._TCP&amp;diff=2135"/>
				<updated>2009-01-11T13:57:10Z</updated>
		
		<summary type="html">&lt;p&gt;Laytonjb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The [http://en.wikipedia.org/wiki/Network_File_System_(protocol) original NFS] used [http://en.wikipedia.org/wiki/User_Datagram_Protocol UDP] for transmitting data.&lt;/div&gt;</summary>
		<author><name>Laytonjb</name></author>	</entry>

	<entry>
		<id>https://www.clustermonkey.net/cdp/index.php?title=NFS&amp;diff=2102</id>
		<title>NFS</title>
		<link rel="alternate" type="text/html" href="https://www.clustermonkey.net/cdp/index.php?title=NFS&amp;diff=2102"/>
				<updated>2008-12-24T13:36:09Z</updated>
		
		<summary type="html">&lt;p&gt;Laytonjb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page covers some tweak ideas for running NFS on your cluster. It covers both the server and the client. There is no guarantee that these tweaks will work for your particular case. They are tweaks that either people have tried or have been floating around the net for a while. They may help some workloads and not others. So your mileage may vary.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Our advice is to try the tweaks on your system '''for your workloads''' and then decide for yourself if the tweaks are worth it or not.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Equally important, if you do any tweaks and have success, please add your comments to this wiki so other people can learn from what you did (and you can become famous in the annals of Cluster-Tweaking-lore). We can't beg you enough to add your comments, observations, and success stories with as much detail as you can provide.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
With these brief introductory concepts, let's dive into the subtopics for NFS tweaking.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[1. Server Tweaks]]&lt;br /&gt;
:[[1.1 Number of Threads]]&lt;br /&gt;
:[[1.2 Second Topic]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[2. Client Tweaks]]&lt;br /&gt;
:[[2.1 Mount options]]&lt;br /&gt;
:[[2.2 Second Topic]]&lt;br /&gt;
:[[2.3 CacheFS]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[3. Network Tweaks]]&lt;br /&gt;
:[[3.1 UDP vs. TCP]]&lt;br /&gt;
:[[3.2 Jumbo Frames]] (Changing the frame size)&lt;br /&gt;
:[[3.3 Third Topic]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[4. Success Stories]] (Put your success stories here)&lt;/div&gt;</summary>
		<author><name>Laytonjb</name></author>	</entry>

	<entry>
		<id>https://www.clustermonkey.net/cdp/index.php?title=NFS&amp;diff=2101</id>
		<title>NFS</title>
		<link rel="alternate" type="text/html" href="https://www.clustermonkey.net/cdp/index.php?title=NFS&amp;diff=2101"/>
				<updated>2008-12-24T13:35:53Z</updated>
		
		<summary type="html">&lt;p&gt;Laytonjb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page covers some tweak ideas for running NFS on your cluster. It covers both the server and the client. There is no guarantee that these tweaks will work for your particular case. They are tweaks that either people have tried or have been floating around the net for a while. They may help some workloads and not others. So your mileage may vary.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Our advice is to try the tweaks on your system '''for your workloads''' and then decide for yourself if the tweaks are worth it or not.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Equally important, if you do any tweaks and have success, please add your comments to this wiki so other people can learn from what you did (and you can become famous in the annals of Cluster-Tweaking-lore). We can't beg you enough to add your comments, observations, and success stories with as much detail as you can provide.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
With these brief introductory concepts, let's dive into the subtopics for NFS tweaking.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[1. Server Tweaks]]&lt;br /&gt;
:[[1.1 Number of Threads]]&lt;br /&gt;
:[[1.2 Second Topic]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[2. Client Tweaks]]&lt;br /&gt;
:[[2.1 Mount options]]&lt;br /&gt;
:[[2.2 Second Topic]]&lt;br /&gt;
:[[2.3 CacheFS]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[3. Network Tweaks]]&lt;br /&gt;
:[[3.1 UDP vs. TCP]]&lt;br /&gt;
:[[3.2 Jumbo Frames]] (Changing the frame size)&lt;br /&gt;
:[[3.3 Third Topic]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[4. Success Stories]] (Put your success stories here)&lt;/div&gt;</summary>
		<author><name>Laytonjb</name></author>	</entry>

	<entry>
		<id>https://www.clustermonkey.net/cdp/index.php?title=NFS&amp;diff=2100</id>
		<title>NFS</title>
		<link rel="alternate" type="text/html" href="https://www.clustermonkey.net/cdp/index.php?title=NFS&amp;diff=2100"/>
				<updated>2008-12-24T13:35:30Z</updated>
		
		<summary type="html">&lt;p&gt;Laytonjb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page covers some tweak ideas for running NFS on your cluster. It covers both the server and the client. There is no guarantee that these tweaks will work for your particular case. They are tweaks that either people have tried or have been floating around the net for a while. They may help some workloads and not others. So your mileage may vary.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Our advice is to try the tweaks on your system '''for your workloads''' and then decide for yourself if the tweaks are worth it or not.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Equally important, if you do any tweaks and have success, please add your comments to this wiki so other people can learn from what you did (and you can become famous in the annals of Cluster-Tweaking-lore). We can't beg you enough to add your comments, observations, and success stories with as much detail as you can provide.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
With these brief introductory concepts, let's dive into the subtopics for NFS tweaking.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[1. Server Tweaks]]&lt;br /&gt;
:[[1.1 Number of Threads]]&lt;br /&gt;
:[[1.2 Second Topic]]&lt;br /&gt;
&lt;br /&gt;
[[2. Client Tweaks]]&lt;br /&gt;
:[[2.1 Mount options]]&lt;br /&gt;
:[[2.2 Second Topic]]&lt;br /&gt;
:[[2.3 CacheFS]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[3. Network Tweaks]]&lt;br /&gt;
:[[3.1 UDP vs. TCP]]&lt;br /&gt;
:[[3.2 Jumbo Frames]] (Changing the frame size)&lt;br /&gt;
:[[3.3 Third Topic]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[4. Success Stories]] (Put your success stories here)&lt;/div&gt;</summary>
		<author><name>Laytonjb</name></author>	</entry>

	<entry>
		<id>https://www.clustermonkey.net/cdp/index.php?title=NFS&amp;diff=2099</id>
		<title>NFS</title>
		<link rel="alternate" type="text/html" href="https://www.clustermonkey.net/cdp/index.php?title=NFS&amp;diff=2099"/>
				<updated>2008-12-24T13:35:14Z</updated>
		
		<summary type="html">&lt;p&gt;Laytonjb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page covers some tweak ideas for running NFS on your cluster. It covers both the server and the client. There is no guarantee that these tweaks will work for your particular case. They are tweaks that either people have tried or have been floating around the net for a while. They may help some workloads and not others. So your mileage may vary.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Our advice is to try the tweaks on your system '''for your workloads''' and then decide for yourself if the tweaks are worth it or not.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Equally important, if you do any tweaks and have success, please add your comments to this wiki so other people can learn from what you did (and you can become famous in the annals of Cluster-Tweaking-lore). We can't beg you enough to add your comments, observations, and success stories with as much detail as you can provide.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
With these brief introductory concepts, let's dive into the subtopics for NFS tweaking.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[1. Server Tweaks]]&lt;br /&gt;
&lt;br /&gt;
:[[1.1 Number of Threads]]&lt;br /&gt;
:[[1.2 Second Topic]]&lt;br /&gt;
&lt;br /&gt;
[[2. Client Tweaks]]&lt;br /&gt;
:[[2.1 Mount options]]&lt;br /&gt;
:[[2.2 Second Topic]]&lt;br /&gt;
:[[2.3 CacheFS]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[3. Network Tweaks]]&lt;br /&gt;
:[[3.1 UDP vs. TCP]]&lt;br /&gt;
:[[3.2 Jumbo Frames]] (Changing the frame size)&lt;br /&gt;
:[[3.3 Third Topic]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[4. Success Stories]] (Put your success stories here)&lt;/div&gt;</summary>
		<author><name>Laytonjb</name></author>	</entry>

	<entry>
		<id>https://www.clustermonkey.net/cdp/index.php?title=NFS&amp;diff=2098</id>
		<title>NFS</title>
		<link rel="alternate" type="text/html" href="https://www.clustermonkey.net/cdp/index.php?title=NFS&amp;diff=2098"/>
				<updated>2008-12-24T13:34:18Z</updated>
		
		<summary type="html">&lt;p&gt;Laytonjb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page covers some tweak ideas for running NFS on your cluster. It covers both the server and the client. There is no guarantee that these tweaks will work for your particular case. They are tweaks that either people have tried or have been floating around the net for a while. They may help some workloads and not others. So your mileage may vary.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Our advice is to try the tweaks on your system '''for your workloads''' and then decide for yourself if the tweaks are worth it or not.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Equally important, if you do any tweaks and have success, please add your comments to this wiki so other people can learn from what you did (and you can become famous in the annals of Cluster-Tweaking-lore). We can't beg you enough to add your comments, observations, and success stories with as much detail as you can provide.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
With these brief introductory concepts, let's dive into the subtopics for NFS tweaking.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[1. Server Tweaks]]&lt;br /&gt;
&lt;br /&gt;
[[1.1 Number of Threads]]&lt;br /&gt;
[[1.2 Second Topic]]&lt;br /&gt;
&lt;br /&gt;
[[2. Client Tweaks]]&lt;br /&gt;
[[2.1 Mount options]]&lt;br /&gt;
[[2.2 Second Topic]]&lt;br /&gt;
[[2.3 CacheFS]]&lt;br /&gt;
&lt;br /&gt;
[[3. Network Tweaks]]&lt;br /&gt;
[[3.1 UDP vs. TCP]]&lt;br /&gt;
[[3.2 Jumbo Frames]] (Changing the frame size)&lt;br /&gt;
[[3.3 Third Topic]]&lt;br /&gt;
&lt;br /&gt;
[[4. Success Stories]] (Put your success stories here)&lt;/div&gt;</summary>
		<author><name>Laytonjb</name></author>	</entry>

	<entry>
		<id>https://www.clustermonkey.net/cdp/index.php?title=NFS&amp;diff=2097</id>
		<title>NFS</title>
		<link rel="alternate" type="text/html" href="https://www.clustermonkey.net/cdp/index.php?title=NFS&amp;diff=2097"/>
				<updated>2008-12-24T13:34:10Z</updated>
		
		<summary type="html">&lt;p&gt;Laytonjb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page covers some tweak ideas for running NFS on your cluster. It covers both the server and the client. There is no guarantee that these tweaks will work for your particular case. They are tweaks that either people have tried or have been floating around the net for a while. They may help some workloads and not others. So your mileage may vary.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Our advice is to try the tweaks on your system '''for your workloads''' and then decide for yourself if the tweaks are worth it or not.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Equally important, if you do any tweaks and have success, please add your comments to this wiki so other people can learn from what you did (and you can become famous in the annals of Cluster-Tweaking-lore). We can't beg you enough to add your comments, observations, and success stories with as much detail as you can provide.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
With these brief introductory concepts, let's dive into the subtopics for NFS tweaking.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[1. Server Tweaks]]&lt;br /&gt;
[[1.1 Number of Threads]]&lt;br /&gt;
[[1.2 Second Topic]]&lt;br /&gt;
&lt;br /&gt;
[[2. Client Tweaks]]&lt;br /&gt;
[[2.1 Mount options]]&lt;br /&gt;
[[2.2 Second Topic]]&lt;br /&gt;
[[2.3 CacheFS]]&lt;br /&gt;
&lt;br /&gt;
[[3. Network Tweaks]]&lt;br /&gt;
[[3.1 UDP vs. TCP]]&lt;br /&gt;
[[3.2 Jumbo Frames]] (Changing the frame size)&lt;br /&gt;
[[3.3 Third Topic]]&lt;br /&gt;
&lt;br /&gt;
[[4. Success Stories]] (Put your success stories here)&lt;/div&gt;</summary>
		<author><name>Laytonjb</name></author>	</entry>

	<entry>
		<id>https://www.clustermonkey.net/cdp/index.php?title=NFS&amp;diff=2096</id>
		<title>NFS</title>
		<link rel="alternate" type="text/html" href="https://www.clustermonkey.net/cdp/index.php?title=NFS&amp;diff=2096"/>
				<updated>2008-12-24T13:31:35Z</updated>
		
		<summary type="html">&lt;p&gt;Laytonjb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page covers some tweak ideas for running NFS on your cluster. It covers both the server and the client. There is no guarantee that these tweaks will work for your particular case. They are tweaks that either people have tried or have been floating around the net for a while. They may help some workloads and not others. So your mileage may vary.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Our advice is to try the tweaks on your system '''for your workloads''' and then decide for yourself if the tweaks are worth it or not.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Equally important, if you do any tweaks and have success, please add your comments to this wiki so other people can learn from what you did (and you can become famous in the annals of Cluster-Tweaking-lore). We can't beg you enough to add your comments, observations, and success stories with as much detail as you can provide.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
With these brief introductory concepts, let's dive into the subtopics for NFS tweaking.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[1. Server Tweaks]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[2. Client Tweaks]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[3. Network Tweaks]]&lt;/div&gt;</summary>
		<author><name>Laytonjb</name></author>	</entry>

	<entry>
		<id>https://www.clustermonkey.net/cdp/index.php?title=NFS&amp;diff=2095</id>
		<title>NFS</title>
		<link rel="alternate" type="text/html" href="https://www.clustermonkey.net/cdp/index.php?title=NFS&amp;diff=2095"/>
				<updated>2008-12-24T13:27:52Z</updated>
		
		<summary type="html">&lt;p&gt;Laytonjb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page covers some tweak ideas for running NFS on your cluster. It covers both the server and the client. There is no guarantee that these tweaks will work for your particular case. They are tweaks that people have tried, or that have been floating around the net for a while. They may help some workloads and not others. So your mileage may vary.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Our advice is to try the tweaks on your system '''for your workloads''' and then decide for yourself if the tweaks are worth it or not.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Equally important, if you do any tweaks and have success, please add your comments to this wiki so other people can learn from what you did (and you can become famous in the annals of Cluster-Tweaking-lore). We can't beg you enough to add your comments, observations, and success stories with as much detail as you can provide.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
With these brief introductory concepts, let's dive into the subtopics for NFS tweaking.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===1. Server Tweaks===&lt;br /&gt;
&lt;br /&gt;
===2. Client Tweaks===&lt;br /&gt;
&lt;br /&gt;
===3. Network Tweaks===&lt;/div&gt;</summary>
		<author><name>Laytonjb</name></author>	</entry>

	<entry>
		<id>https://www.clustermonkey.net/cdp/index.php?title=NFS&amp;diff=2094</id>
		<title>NFS</title>
		<link rel="alternate" type="text/html" href="https://www.clustermonkey.net/cdp/index.php?title=NFS&amp;diff=2094"/>
				<updated>2008-12-24T13:27:31Z</updated>
		
		<summary type="html">&lt;p&gt;Laytonjb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page covers some tweak ideas for running NFS on your cluster. It covers both the server and the client. There is no guarantee that these tweaks will work for your particular case. They are tweaks that people have tried, or that have been floating around the net for a while. They may help some workloads and not others. So your mileage may vary.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Our advice is to try the tweaks on your system '''for your workloads''' and then decide for yourself if the tweaks are worth it or not.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Equally important, if you do any tweaks and have success, please add your comments to this wiki so other people can learn from what you did (and you can become famous in the annals of Cluster-Tweaking-lore). We can't beg you enough to add your comments, observations, and success stories with as much detail as you can provide.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
With these brief introductory concepts, let's dive into the subtopics for NFS tweaking.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==1. Server Tweaks==&lt;br /&gt;
&lt;br /&gt;
==2. Client Tweaks==&lt;br /&gt;
&lt;br /&gt;
==3. Network Tweaks==&lt;/div&gt;</summary>
		<author><name>Laytonjb</name></author>	</entry>

	<entry>
		<id>https://www.clustermonkey.net/cdp/index.php?title=NFS&amp;diff=2093</id>
		<title>NFS</title>
		<link rel="alternate" type="text/html" href="https://www.clustermonkey.net/cdp/index.php?title=NFS&amp;diff=2093"/>
				<updated>2008-12-24T13:27:11Z</updated>
		
		<summary type="html">&lt;p&gt;Laytonjb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page covers some tweak ideas for running NFS on your cluster. It covers both the server and the client. There is no guarantee that these tweaks will work for your particular case. They are tweaks that people have tried, or that have been floating around the net for a while. They may help some workloads and not others. So your mileage may vary.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Our advice is to try the tweaks on your system '''for your workloads''' and then decide for yourself if the tweaks are worth it or not.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Equally important, if you do any tweaks and have success, please add your comments to this wiki so other people can learn from what you did (and you can become famous in the annals of Cluster-Tweaking-lore). We can't beg you enough to add your comments, observations, and success stories with as much detail as you can provide.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
With these brief introductory concepts, let's dive into the subtopics for NFS tweaking.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==1. Server Tweaks==&lt;br /&gt;
&lt;br /&gt;
==2. Client Tweaks==&lt;br /&gt;
&lt;br /&gt;
==3. Network Tweaks==&lt;/div&gt;</summary>
		<author><name>Laytonjb</name></author>	</entry>

	<entry>
		<id>https://www.clustermonkey.net/cdp/index.php?title=NFS&amp;diff=2092</id>
		<title>NFS</title>
		<link rel="alternate" type="text/html" href="https://www.clustermonkey.net/cdp/index.php?title=NFS&amp;diff=2092"/>
				<updated>2008-12-24T13:26:30Z</updated>
		
		<summary type="html">&lt;p&gt;Laytonjb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page covers some tweak ideas for running NFS on your cluster. It covers both the server and the client. There is no guarantee that these tweaks will work for your particular case. They are tweaks that people have tried, or that have been floating around the net for a while. They may help some workloads and not others. So your mileage may vary.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Our advice is to try the tweaks on your system '''for your workloads''' and then decide for yourself if the tweaks are worth it or not.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Equally important, if you do any tweaks and have success, please add your comments to this wiki so other people can learn from what you did (and you can become famous in the annals of Cluster-Tweaking-lore). We can't beg you enough to add your comments, observations, and success stories with as much detail as you can provide.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
With these brief introductory concepts, let's dive into the subtopics for NFS tweaking.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==1. Server Tweaks==&lt;br /&gt;
&lt;br /&gt;
==2. Client Tweaks==&lt;br /&gt;
&lt;br /&gt;
==3. Network Tweaks==&lt;/div&gt;</summary>
		<author><name>Laytonjb</name></author>	</entry>

	<entry>
		<id>https://www.clustermonkey.net/cdp/index.php?title=NFS&amp;diff=2091</id>
		<title>NFS</title>
		<link rel="alternate" type="text/html" href="https://www.clustermonkey.net/cdp/index.php?title=NFS&amp;diff=2091"/>
				<updated>2008-12-24T13:23:39Z</updated>
		
		<summary type="html">&lt;p&gt;Laytonjb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page covers some tweak ideas for running NFS on your cluster. It covers both the server and the client. There is no guarantee that these tweaks will work for your particular case. They are tweaks that people have tried, or that have been floating around the net for a while. They may help some workloads and not others. So your mileage may vary.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Our advice is to try the tweaks on your system '''for your workloads''' and then decide for yourself if the tweaks are worth it or not.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Equally important, if you do any tweaks and have success, please add your comments to this wiki so other people can learn from what you did (and you can become famous in the annals of Cluster-Tweaking-lore). We can't beg you enough to add your comments, observations, and success stories with as much detail as you can provide.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
With these brief introductory concepts, let's dive into the subtopics for NFS tweaking.&lt;/div&gt;</summary>
		<author><name>Laytonjb</name></author>	</entry>

	<entry>
		<id>https://www.clustermonkey.net/cdp/index.php?title=NFS&amp;diff=2090</id>
		<title>NFS</title>
		<link rel="alternate" type="text/html" href="https://www.clustermonkey.net/cdp/index.php?title=NFS&amp;diff=2090"/>
				<updated>2008-12-24T13:23:31Z</updated>
		
		<summary type="html">&lt;p&gt;Laytonjb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page covers some tweak ideas for running NFS on your cluster. It covers both the server and the client. There is no guarantee that these tweaks will work for your particular case. They are tweaks that people have tried, or that have been floating around the net for a while. They may help some workloads and not others. So your mileage may vary.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Our advice is to try the tweaks on your system '''for your workloads''' and then decide for yourself if the tweaks are worth it or not.&lt;br /&gt;
&lt;br /&gt;
Equally important, if you do any tweaks and have success, please add your comments to this wiki so other people can learn from what you did (and you can become famous in the annals of Cluster-Tweaking-lore). We can't beg you enough to add your comments, observations, and success stories with as much detail as you can provide.&lt;br /&gt;
&lt;br /&gt;
With these brief introductory concepts, let's dive into the subtopics for NFS tweaking.&lt;/div&gt;</summary>
		<author><name>Laytonjb</name></author>	</entry>

	<entry>
		<id>https://www.clustermonkey.net/cdp/index.php?title=NFS&amp;diff=2089</id>
		<title>NFS</title>
		<link rel="alternate" type="text/html" href="https://www.clustermonkey.net/cdp/index.php?title=NFS&amp;diff=2089"/>
				<updated>2008-12-24T13:23:20Z</updated>
		
		<summary type="html">&lt;p&gt;Laytonjb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page covers some tweak ideas for running NFS on your cluster. It covers both the server and the client. There is no guarantee that these tweaks will work for your particular case. They are tweaks that people have tried, or that have been floating around the net for a while. They may help some workloads and not others. So your mileage may vary.&lt;br /&gt;
&lt;br /&gt;
Our advice is to try the tweaks on your system '''for your workloads''' and then decide for yourself if the tweaks are worth it or not.&lt;br /&gt;
&lt;br /&gt;
Equally important, if you do any tweaks and have success, please add your comments to this wiki so other people can learn from what you did (and you can become famous in the annals of Cluster-Tweaking-lore). We can't beg you enough to add your comments, observations, and success stories with as much detail as you can provide.&lt;br /&gt;
&lt;br /&gt;
With these brief introductory concepts, let's dive into the subtopics for NFS tweaking.&lt;/div&gt;</summary>
		<author><name>Laytonjb</name></author>	</entry>

	<entry>
		<id>https://www.clustermonkey.net/cdp/index.php?title=File_Systems&amp;diff=2088</id>
		<title>File Systems</title>
		<link rel="alternate" type="text/html" href="https://www.clustermonkey.net/cdp/index.php?title=File_Systems&amp;diff=2088"/>
				<updated>2008-12-24T13:18:00Z</updated>
		
		<summary type="html">&lt;p&gt;Laytonjb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;*  [[Local File Systems]] (ext3, XFS, etc.)&lt;br /&gt;
*  [[NFS]]&lt;br /&gt;
*  [[Parallel File Systems]]&lt;/div&gt;</summary>
		<author><name>Laytonjb</name></author>	</entry>

	<entry>
		<id>https://www.clustermonkey.net/cdp/index.php?title=File_Systems&amp;diff=2087</id>
		<title>File Systems</title>
		<link rel="alternate" type="text/html" href="https://www.clustermonkey.net/cdp/index.php?title=File_Systems&amp;diff=2087"/>
				<updated>2008-12-24T13:17:42Z</updated>
		
		<summary type="html">&lt;p&gt;Laytonjb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;*  [[Local File Systems (ext3, XFS, etc.)]]&lt;br /&gt;
*  [[NFS]]&lt;br /&gt;
*  [[Parallel File Systems]]&lt;/div&gt;</summary>
		<author><name>Laytonjb</name></author>	</entry>

	<entry>
		<id>https://www.clustermonkey.net/cdp/index.php?title=Cluster_Topics&amp;diff=2086</id>
		<title>Cluster Topics</title>
		<link rel="alternate" type="text/html" href="https://www.clustermonkey.net/cdp/index.php?title=Cluster_Topics&amp;diff=2086"/>
				<updated>2008-12-24T13:16:02Z</updated>
		
		<summary type="html">&lt;p&gt;Laytonjb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Main Categories==&lt;br /&gt;
&lt;br /&gt;
*  [[Cluster Application Areas/Markets]]&lt;br /&gt;
*  [[Cluster Concepts]]&lt;br /&gt;
*  [[Cluster Questions]]&lt;br /&gt;
*  [[Cluster Design]]&lt;br /&gt;
*  [[Cluster How-To's]]&lt;br /&gt;
*  [[Cluster Benchmarks]]&lt;br /&gt;
*  [[Hardware]]&lt;br /&gt;
*  [[Software]]&lt;br /&gt;
*  [[Services]]&lt;br /&gt;
*  [[Resources]]&lt;br /&gt;
*  [[File Systems]]&lt;/div&gt;</summary>
		<author><name>Laytonjb</name></author>	</entry>

	<entry>
		<id>https://www.clustermonkey.net/cdp/index.php?title=Cluster_Benchmarking_Packages&amp;diff=2083</id>
		<title>Cluster Benchmarking Packages</title>
		<link rel="alternate" type="text/html" href="https://www.clustermonkey.net/cdp/index.php?title=Cluster_Benchmarking_Packages&amp;diff=2083"/>
				<updated>2007-06-18T19:03:19Z</updated>
		
		<summary type="html">&lt;p&gt;Laytonjb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here are some links to benchmarks:&lt;br /&gt;
&lt;br /&gt;
'''Benchmark Suites'''&lt;br /&gt;
* [http://icl.cs.utk.edu/hpcc/ HPCC Challenge Benchmark] - uses seven tests to measure several different performance parameters.&lt;br /&gt;
* [http://www.clustermonkey.net//content/view/38/27/ Beowulf Performance Suite (BPS)] - Example results are [http://clustermonkey.net/download/kronos/bps-logs/ here]. &lt;br /&gt;
* [http://cmbp.clustermonkey.net/ Cluster Monkey Benchmark Project] - just started. It is based on the BPS.&lt;br /&gt;
* [http://www.nas.nasa.gov/Resources/Software/npb.html NAS Parallel Benchmarks] - One of the most, if not the most, widely used sets of benchmarks for clusters.&lt;br /&gt;
* [http://www.intel.com/cd/software/products/asmo-na/eng/cluster/clustertoolkit/219848.htm Intel MPI Benchmarks] - freely available (an example invocation appears after this list).&lt;br /&gt;
* [http://perfbase.tigris.org/ perfbase] - a set of front end tools using a PostgreSQL database as backend, which together form a system for the management and analysis of the output of tests and experiments.&lt;br /&gt;
* [http://liinwww.ira.uka.de/~skampi/ SKaMPI] - a suite of tests designed to measure the performance of MPI. &lt;br /&gt;
&lt;br /&gt;
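To give a flavor of running one of these suites, here is an illustrative invocation of the Intel MPI Benchmarks ping-pong test. The launcher flags, hostfile name, and rank count are assumptions about your site, not part of the package:&lt;br /&gt;
&lt;pre&gt;
# Run the PingPong test between 2 ranks (ideally placed on different nodes)
mpirun -np 2 -machinefile hosts ./IMB-MPI1 PingPong
&lt;/pre&gt;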
&lt;br /&gt;
'''File Systems I/O'''&lt;br /&gt;
&lt;br /&gt;
* [http://www.nus.edu.sg/comcen/svu/publications/hpc_nus/sep_2005/Performance.pdf GPFS vs. NFS evaluation]&lt;br /&gt;
* EXT3, Reiser, JFS, XFS benchmarks [http://linuxgazette.net/102/piszcz.html Part 1] and [http://linuxgazette.net/122/piszcz.html Part 2]&lt;br /&gt;
* [http://www.clustermonkey.net/ Cluster Monkey] Series on Parallel File System Benchmarking&lt;br /&gt;
** [http://www.clustermonkey.net//content/view/62/28/ Benchmarking Parallel File Systems]&lt;br /&gt;
** [http://www.clustermonkey.net//content/view/87/32/  A Benchmark for Parallel File Systems]&lt;br /&gt;
** [http://www.clustermonkey.net//content/view/117/32/ Using the PIO Benchmark ]  &lt;br /&gt;
* [http://stromberg.dnsalias.org/~strombrg/nfs-test.html nfs-test] is a program that tries a bunch of different rsizes, wsizes, protocols (tcp vs udp) and NFS versions to optimize performance&lt;br /&gt;
* [http://www.iozone.org IOZONE] Filesystem Benchmark (an example invocation appears after this list)&lt;br /&gt;
* [http://public.lanl.gov/jnunez/benchmarks/mpiiotest.htm MPI-IO Test] LANL's MPI-IO Test (The best test for MPI-IO codes)&lt;br /&gt;
* [http://www.llnl.gov/icc/lc/siop/downloads/download.html Livermore File System Tests] A collection of tests from Lawrence Livermore for testing parallel IO file systems. This includes IOR (a great test for parallel file systems) and a bunch of metadata tests&lt;br /&gt;
&lt;br /&gt;
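As an example of the IOZONE entry above, a basic run against an NFS mount might look like the following. The mount point and sizes are assumptions; make the maximum file size comfortably larger than client RAM so caching does not hide the network:&lt;br /&gt;
&lt;pre&gt;
# Automatic mode, file sizes up to 4 GB, spreadsheet-friendly output file
iozone -a -g 4G -f /mnt/nfs/iozone.tmp -b results.xls
&lt;/pre&gt;
&lt;br /&gt;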
'''Individual Codes for Benchmarks'''&lt;br /&gt;
&lt;br /&gt;
These are codes that you can download (freely) and use for benchmarking. Some of them have benchmark results available and some don't.&lt;br /&gt;
* [http://www.cnn.com CNN] The CNN benchmark (just kidding)&lt;br /&gt;
* [http://www.gromacs.org/ Gromacs] - A great benchmark code. GROMACS is a versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles. It is primarily designed for biochemical molecules like proteins and lipids that have a lot of complicated bonded interactions.&lt;br /&gt;
* [http://hmmer.janelia.org/ HMMER] HMMER uses profile hidden Markov models (profile HMMs) to do sensitive database searching using statistical descriptions of a sequence family's consensus. It is a freely distributable implementation of profile HMM software for protein sequence analysis.&lt;br /&gt;
* [http://gfs.sourceforge.net/ Gerris] - Gerris is an Open Source Free Software library for the solution of the partial differential equations describing fluid flow.&lt;br /&gt;
* [http://www.hlrs.de/people/resch/PROJECTS/PARACFD.html ParaCFD] While this is an old CFD benchmark, I think it is still useful. It contains both an OpenMP version and an MPI version.&lt;br /&gt;
* [http://www.mgnet.org/mgnet-codes.html MGNet] - A collection of Multi-Grid codes that you can use for benchmarking. There are no published results, but there are some MPI codes that you can use for benchmarks.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Network Benchmarks'''&lt;br /&gt;
&lt;br /&gt;
These benchmarks can be used to test the networking aspects of cluster codes. Some of them can test the entire fabric and some test just a pair of nodes.&lt;br /&gt;
* [http://www.scl.ameslab.gov/netpipe/ NetPIPE] - One of the best network benchmarks. It tests a range of message sizes and measures the transfer time, giving both latency and bandwidth. It can use MPI as the message-passing mechanism, so it also lets you test MPI implementations.&lt;br /&gt;
* [http://www.netperf.org/netperf/ Netperf] - Another good networking benchmark that is used frequently.&lt;/div&gt;</summary>
		<author><name>Laytonjb</name></author>	</entry>

	<entry>
		<id>https://www.clustermonkey.net/cdp/index.php?title=Cluster_Benchmarking_Packages&amp;diff=2082</id>
		<title>Cluster Benchmarking Packages</title>
		<link rel="alternate" type="text/html" href="https://www.clustermonkey.net/cdp/index.php?title=Cluster_Benchmarking_Packages&amp;diff=2082"/>
				<updated>2007-06-18T18:55:26Z</updated>
		
		<summary type="html">&lt;p&gt;Laytonjb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here are some links to benchmarks:&lt;br /&gt;
&lt;br /&gt;
'''Benchmark Suites'''&lt;br /&gt;
* [http://icl.cs.utk.edu/hpcc/ HPCC Challenge Benchmark] - uses seven tests to measure several different performance parameters.&lt;br /&gt;
* [http://www.clustermonkey.net//content/view/38/27/ Beowulf Performance Suite (BPS)] - Example results are [http://clustermonkey.net/download/kronos/bps-logs/ here]. &lt;br /&gt;
* [http://cmbp.clustermonkey.net/ Cluster Monkey Benchmark Project] - just started. It is based on the BPS.&lt;br /&gt;
* [http://www.nas.nasa.gov/Resources/Software/npb.html NAS Parallel Benchmarks] - One of the most, if not the most, widely used sets of benchmarks for clusters.&lt;br /&gt;
* [http://www.intel.com/cd/software/products/asmo-na/eng/cluster/clustertoolkit/219848.htm Intel MPI Benchmarks] - freely available.&lt;br /&gt;
* [http://perfbase.tigris.org/ perfbase] - a set of front end tools using a PostgreSQL database as backend, which together form a system for the management and analysis of the output of tests and experiments.&lt;br /&gt;
* [http://liinwww.ira.uka.de/~skampi/ SKaMPI] - a suite of tests designed to measure the performance of MPI. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''File Systems I/O'''&lt;br /&gt;
&lt;br /&gt;
* [http://www.nus.edu.sg/comcen/svu/publications/hpc_nus/sep_2005/Performance.pdf GPFS vs. NFS evaluation]&lt;br /&gt;
* EXT3, Reiser, JFS, XFS benchmarks [http://linuxgazette.net/102/piszcz.html Part 1] and [http://linuxgazette.net/122/piszcz.html Part 2]&lt;br /&gt;
* [http://www.clustermonkey.net/ Cluster Monkey] Series on Parallel File System Benchmarking&lt;br /&gt;
** [http://www.clustermonkey.net//content/view/62/28/ Benchmarking Parallel File Systems]&lt;br /&gt;
** [http://www.clustermonkey.net//content/view/87/32/  A Benchmark for Parallel File Systems]&lt;br /&gt;
** [http://www.clustermonkey.net//content/view/117/32/ Using the PIO Benchmark ]  &lt;br /&gt;
* [http://stromberg.dnsalias.org/~strombrg/nfs-test.html nfs-test] is a program that tries a bunch of different rsizes, wsizes, protocols (tcp vs udp) and NFS versions to optimize performance&lt;br /&gt;
* [http://www.iozone.org IOZONE] Filesystem Benchmark&lt;br /&gt;
* [http://public.lanl.gov/jnunez/benchmarks/mpiiotest.htm MPI-IO Test] LANL's MPI-IO Test (The best test for MPI-IO codes)&lt;br /&gt;
* [http://www.llnl.gov/icc/lc/siop/downloads/download.html Livermore File System Tests] A collection of tests from Lawrence Livermore for testing parallel IO file systems. This includes IOR (a great test for parallel file systems) and a bunch of metadata tests&lt;br /&gt;
&lt;br /&gt;
'''Individual Codes for Benchmarks'''&lt;br /&gt;
&lt;br /&gt;
These are codes that you can download (freely) and use for benchmarking. Some of them have benchmark results available and some don't.&lt;br /&gt;
* [http://www.cnn.com CNN] The CNN benchmark (just kidding)&lt;br /&gt;
* [http://www.gromacs.org/ Gromacs] - A great benchmark code. GROMACS is a versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles. It is primarily designed for biochemical molecules like proteins and lipids that have a lot of complicated bonded interactions.&lt;br /&gt;
* [http://hmmer.janelia.org/ HMMER] HMMER uses profile hidden Markov models (profile HMMs) to do sensitive database searching using statistical descriptions of a sequence family's consensus. It is a freely distributable implementation of profile HMM software for protein sequence analysis.&lt;br /&gt;
* [http://gfs.sourceforge.net/ Gerris] - Gerris is an Open Source Free Software library for the solution of the partial differential equations describing fluid flow.&lt;br /&gt;
&lt;br /&gt;
'''Network Benchmarks'''&lt;br /&gt;
&lt;br /&gt;
These benchmarks can be used to test the networking aspects of cluster codes. Some of them can test the entire fabric and some test just a pair of nodes.&lt;br /&gt;
* [http://www.scl.ameslab.gov/netpipe/ NetPIPE] - One of the best network benchmarks. It tests a range of message sizes and measures the transfer time, giving both latency and bandwidth. It can use MPI as the message-passing mechanism, so it also lets you test MPI implementations.&lt;br /&gt;
* [http://www.netperf.org/netperf/ Netperf] - Another good networking benchmark that is used frequently.&lt;/div&gt;</summary>
		<author><name>Laytonjb</name></author>	</entry>

	<entry>
		<id>https://www.clustermonkey.net/cdp/index.php?title=Cluster_Benchmarking_Packages&amp;diff=2081</id>
		<title>Cluster Benchmarking Packages</title>
		<link rel="alternate" type="text/html" href="https://www.clustermonkey.net/cdp/index.php?title=Cluster_Benchmarking_Packages&amp;diff=2081"/>
				<updated>2007-06-18T18:50:01Z</updated>
		
		<summary type="html">&lt;p&gt;Laytonjb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here are some links to benchmarks:&lt;br /&gt;
&lt;br /&gt;
'''Benchmark Suites'''&lt;br /&gt;
* [http://icl.cs.utk.edu/hpcc/ HPCC Challenge Benchmark] - uses seven tests to measure several different performance parameters.&lt;br /&gt;
* [http://www.clustermonkey.net//content/view/38/27/ Beowulf Performance Suite (BPS)] - Example results are [http://clustermonkey.net/download/kronos/bps-logs/ here]. &lt;br /&gt;
* [http://cmbp.clustermonkey.net/ Cluster Monkey Benchmark Project] - just started. It is based on the BPS.&lt;br /&gt;
* [http://www.nas.nasa.gov/Resources/Software/npb.html NAS Parallel Benchmarks] - One of the most, if not the most, widely used sets of benchmarks for clusters.&lt;br /&gt;
* [http://www.intel.com/cd/software/products/asmo-na/eng/cluster/clustertoolkit/219848.htm Intel MPI Benchmarks] - freely available.&lt;br /&gt;
* [http://perfbase.tigris.org/ perfbase] - a set of front end tools using a PostgreSQL database as backend, which together form a system for the management and analysis of the output of tests and experiments.&lt;br /&gt;
* [http://liinwww.ira.uka.de/~skampi/ SKaMPI] - a suite of tests designed to measure the performance of MPI. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''File Systems I/O'''&lt;br /&gt;
&lt;br /&gt;
* [http://www.nus.edu.sg/comcen/svu/publications/hpc_nus/sep_2005/Performance.pdf GPFS vs. NFS evaluation]&lt;br /&gt;
* EXT3, Reiser, JFS, XFS benchmarks [http://linuxgazette.net/102/piszcz.html Part 1] and [http://linuxgazette.net/122/piszcz.html Part 2]&lt;br /&gt;
* [http://www.clustermonkey.net/ Cluster Monkey] Series on Parallel File System Benchmarking&lt;br /&gt;
** [http://www.clustermonkey.net//content/view/62/28/ Benchmarking Parallel File Systems]&lt;br /&gt;
** [http://www.clustermonkey.net//content/view/87/32/  A Benchmark for Parallel File Systems]&lt;br /&gt;
** [http://www.clustermonkey.net//content/view/117/32/ Using the PIO Benchmark ]  &lt;br /&gt;
* [http://stromberg.dnsalias.org/~strombrg/nfs-test.html nfs-test] is a program that tries a bunch of different rsizes, wsizes, protocols (tcp vs udp) and NFS versions to optimize performance&lt;br /&gt;
* [http://www.iozone.org IOZONE] Filesystem Benchmark&lt;br /&gt;
* [http://public.lanl.gov/jnunez/benchmarks/mpiiotest.htm MPI-IO Test] LANL's MPI-IO Test (The best test for MPI-IO codes)&lt;br /&gt;
* [http://www.llnl.gov/icc/lc/siop/downloads/download.html Livermore File System Tests] A collection of tests from Lawrence Livermore for testing parallel IO file systems. This includes IOR (a great test for parallel file systems) and a bunch of metadata tests&lt;br /&gt;
&lt;br /&gt;
'''Individual Codes for Benchmarks'''&lt;br /&gt;
&lt;br /&gt;
These are codes that you can download (freely) and use for benchmarking. Some of them have benchmark results available and some don't.&lt;br /&gt;
* [http://www.cnn.com CNN] The CNN benchmark (just kidding)&lt;br /&gt;
* [http://www.gromacs.org/ Gromacs] - A great benchmark code. GROMACS is a versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles. It is primarily designed for biochemical molecules like proteins and lipids that have a lot of complicated bonded interactions.&lt;br /&gt;
* [http://hmmer.janelia.org/ HMMER] HMMER uses profile hidden Markov models (profile HMMs) to do sensitive database searching using statistical descriptions of a sequence family's consensus. It is a freely distributable implementation of profile HMM software for protein sequence analysis.&lt;br /&gt;
* [http://gfs.sourceforge.net/ Gerris] - Gerris is an Open Source Free Software library for the solution of the partial differential equations describing fluid flow.&lt;/div&gt;</summary>
		<author><name>Laytonjb</name></author>	</entry>

	<entry>
		<id>https://www.clustermonkey.net/cdp/index.php?title=Cluster_Benchmarking_Packages&amp;diff=2080</id>
		<title>Cluster Benchmarking Packages</title>
		<link rel="alternate" type="text/html" href="https://www.clustermonkey.net/cdp/index.php?title=Cluster_Benchmarking_Packages&amp;diff=2080"/>
				<updated>2007-06-18T18:49:33Z</updated>
		
		<summary type="html">&lt;p&gt;Laytonjb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here are some links to benchmarks:&lt;br /&gt;
&lt;br /&gt;
Benchmark Suites&lt;br /&gt;
* [http://icl.cs.utk.edu/hpcc/ HPCC Challenge Benchmark] - uses seven tests to measure several different performance parameters.&lt;br /&gt;
* [http://www.clustermonkey.net//content/view/38/27/ Beowulf Performance Suite (BPS)] - Example results are [http://clustermonkey.net/download/kronos/bps-logs/ here]. &lt;br /&gt;
* [http://cmbp.clustermonkey.net/ Cluster Monkey Benchmark Project] - just started. It is based on the BPS.&lt;br /&gt;
* [http://www.nas.nasa.gov/Resources/Software/npb.html NAS Parallel Benchmarks] - One of the most, if not the most, widely used sets of benchmarks for clusters.&lt;br /&gt;
* [http://www.intel.com/cd/software/products/asmo-na/eng/cluster/clustertoolkit/219848.htm Intel MPI Benchmarks] - freely available.&lt;br /&gt;
* [http://perfbase.tigris.org/ perfbase] - a set of front end tools using a PostgreSQL database as backend, which together form a system for the management and analysis of the output of tests and experiments.&lt;br /&gt;
* [http://liinwww.ira.uka.de/~skampi/ SKaMPI] - a suite of tests designed to measure the performance of MPI. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
File Systems I/O&lt;br /&gt;
&lt;br /&gt;
* [http://www.nus.edu.sg/comcen/svu/publications/hpc_nus/sep_2005/Performance.pdf GPFS vs. NFS evaluation]&lt;br /&gt;
* EXT3, Reiser, JFS, XFS benchmarks [http://linuxgazette.net/102/piszcz.html Part 1] and [http://linuxgazette.net/122/piszcz.html Part 2]&lt;br /&gt;
* [http://www.clustermonkey.net/ Cluster Monkey] Series on Parallel File System Benchmarking&lt;br /&gt;
** [http://www.clustermonkey.net//content/view/62/28/ Benchmarking Parallel File Systems]&lt;br /&gt;
** [http://www.clustermonkey.net//content/view/87/32/  A Benchmark for Parallel File Systems]&lt;br /&gt;
** [http://www.clustermonkey.net//content/view/117/32/ Using the PIO Benchmark ]  &lt;br /&gt;
* [http://stromberg.dnsalias.org/~strombrg/nfs-test.html nfs-test] is a program that tries a bunch of different rsizes, wsizes, protocols (tcp vs udp) and NFS versions to optimize performance&lt;br /&gt;
* [http://www.iozone.org IOZONE] Filesystem Benchmark&lt;br /&gt;
* [http://public.lanl.gov/jnunez/benchmarks/mpiiotest.htm MPI-IO Test] LANL's MPI-IO Test (The best test for MPI-IO codes)&lt;br /&gt;
* [http://www.llnl.gov/icc/lc/siop/downloads/download.html Livermore File System Tests] A collection of tests from Lawrence Livermore for testing parallel IO file systems. This includes IOR (a great test for parallel file systems) and a bunch of metadata tests&lt;br /&gt;
&lt;br /&gt;
Individual Codes for Benchmarks&lt;br /&gt;
&lt;br /&gt;
These are codes that you can download (freely) and use for benchmarking. Some of them have benchmark results available and some don't.&lt;br /&gt;
* [http://www.cnn.com CNN] The CNN benchmark (just kidding)&lt;br /&gt;
* [http://www.gromacs.org/ Gromacs] - A great benchmark code. GROMACS is a versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles. It is primarily designed for biochemical molecules like proteins and lipids that have a lot of complicated bonded interactions.&lt;br /&gt;
* [http://hmmer.janelia.org/ HMMER] HMMER uses profile hidden Markov models (profile HMMs) to do sensitive database searching using statistical descriptions of a sequence family's consensus. It is a freely distributable implementation of profile HMM software for protein sequence analysis.&lt;br /&gt;
* [http://gfs.sourceforge.net/ Gerris] - Gerris is an Open Source Free Software library for the solution of the partial differential equations describing fluid flow.&lt;/div&gt;</summary>
		<author><name>Laytonjb</name></author>	</entry>

	<entry>
		<id>https://www.clustermonkey.net/cdp/index.php?title=Cluster_Benchmarking_Packages&amp;diff=2079</id>
		<title>Cluster Benchmarking Packages</title>
		<link rel="alternate" type="text/html" href="https://www.clustermonkey.net/cdp/index.php?title=Cluster_Benchmarking_Packages&amp;diff=2079"/>
				<updated>2007-06-18T18:49:18Z</updated>
		
		<summary type="html">&lt;p&gt;Laytonjb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here are some links to benchmarks:&lt;br /&gt;
&lt;br /&gt;
Benchmark Suites&lt;br /&gt;
* [http://icl.cs.utk.edu/hpcc/ HPCC Challenge Benchmark] - uses seven tests to measure several different performance parameters.&lt;br /&gt;
* [http://www.clustermonkey.net//content/view/38/27/ Beowulf Performance Suite (BPS)] - Example results are [http://clustermonkey.net/download/kronos/bps-logs/ here]. &lt;br /&gt;
* [http://cmbp.clustermonkey.net/ Cluster Monkey Benchmark Project] - just started. It is based on the BPS.&lt;br /&gt;
* [http://www.nas.nasa.gov/Resources/Software/npb.html NAS Parallel Benchmarks] - One of the most, if not the most, widely used sets of benchmarks for clusters.&lt;br /&gt;
* [http://www.intel.com/cd/software/products/asmo-na/eng/cluster/clustertoolkit/219848.htm Intel MPI Benchmarks] - freely available.&lt;br /&gt;
* [http://perfbase.tigris.org/ perfbase] - a set of front end tools using a PostgreSQL database as backend, which together form a system for the management and analysis of the output of tests and experiments.&lt;br /&gt;
* [http://liinwww.ira.uka.de/~skampi/ SKaMPI] - a suite of tests designed to measure the performance of MPI. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
File Systems I/O&lt;br /&gt;
&lt;br /&gt;
* [http://www.nus.edu.sg/comcen/svu/publications/hpc_nus/sep_2005/Performance.pdf GPFS vs. NFS evaluation]&lt;br /&gt;
* EXT3, Reiser, JFS, XFS benchmarks [http://linuxgazette.net/102/piszcz.html Part 1] and [http://linuxgazette.net/122/piszcz.html Part 2]&lt;br /&gt;
* [http://www.clustermonkey.net/ Cluster Monkey] Series on Parallel File System Benchmarking&lt;br /&gt;
** [http://www.clustermonkey.net//content/view/62/28/ Benchmarking Parallel File Systems]&lt;br /&gt;
** [http://www.clustermonkey.net//content/view/87/32/  A Benchmark for Parallel File Systems]&lt;br /&gt;
** [http://www.clustermonkey.net//content/view/117/32/ Using the PIO Benchmark ]  &lt;br /&gt;
* [http://stromberg.dnsalias.org/~strombrg/nfs-test.html nfs-test] is a program that tries a bunch of different rsizes, wsizes, protocols (tcp vs udp) and NFS versions to optimize performance&lt;br /&gt;
* [http://www.iozone.org IOZONE] Filesystem Benchmark&lt;br /&gt;
* [http://public.lanl.gov/jnunez/benchmarks/mpiiotest.htm MPI-IO Test] LANL's MPI-IO Test (The best test for MPI-IO codes)&lt;br /&gt;
* [http://www.llnl.gov/icc/lc/siop/downloads/download.html Livermore File System Tests] A collection of tests from Lawrence Livermore for testing parallel IO file systems. This includes IOR (a great test for parallel file systems) and a bunch of metadata tests&lt;br /&gt;
&lt;br /&gt;
Individual Codes for Benchmarks&lt;br /&gt;
These are codes that you can download (freely) and use for benchmarking. Some of them have benchmark results available and some don't.&lt;br /&gt;
* [http://www.cnn.com CNN] The CNN benchmark (just kidding)&lt;br /&gt;
* [http://www.gromacs.org/ Gromacs] - A great benchmark code. GROMACS is a versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles. It is primarily designed for biochemical molecules like proteins and lipids that have a lot of complicated bonded interactions.&lt;br /&gt;
* [http://hmmer.janelia.org/ HMMER] HMMER uses profile hidden Markov models (profile HMMs) to do sensitive database searching using statistical descriptions of a sequence family's consensus. It is a freely distributable implementation of profile HMM software for protein sequence analysis.&lt;br /&gt;
* [http://gfs.sourceforge.net/ Gerris] - Gerris is an Open Source Free Software library for the solution of the partial differential equations describing fluid flow.&lt;/div&gt;</summary>
		<author><name>Laytonjb</name></author>	</entry>

	<entry>
		<id>https://www.clustermonkey.net/cdp/index.php?title=Cluster_Benchmarking_Packages&amp;diff=2078</id>
		<title>Cluster Benchmarking Packages</title>
		<link rel="alternate" type="text/html" href="https://www.clustermonkey.net/cdp/index.php?title=Cluster_Benchmarking_Packages&amp;diff=2078"/>
				<updated>2007-06-18T18:43:02Z</updated>
		
		<summary type="html">&lt;p&gt;Laytonjb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here are some links to benchmarks:&lt;br /&gt;
&lt;br /&gt;
Benchmark Suites&lt;br /&gt;
* [http://icl.cs.utk.edu/hpcc/ HPCC Challenge Benchmark] - uses seven tests to measure several different performance parameters.&lt;br /&gt;
* [http://www.clustermonkey.net//content/view/38/27/ Beowulf Performance Suite (BPS)] - Example results are [http://clustermonkey.net/download/kronos/bps-logs/ here]. &lt;br /&gt;
* [http://cmbp.clustermonkey.net/ Cluster Monkey Benchmark Project] - just started. It is based on the BPS.&lt;br /&gt;
* [http://www.nas.nasa.gov/Resources/Software/npb.html NAS Parallel Benchmarks] - One of the most, if not the most, widely used sets of benchmarks for clusters.&lt;br /&gt;
* [http://www.intel.com/cd/software/products/asmo-na/eng/cluster/clustertoolkit/219848.htm Intel MPI Benchmarks] - freely available.&lt;br /&gt;
* [http://perfbase.tigris.org/ perfbase] - a set of front end tools using a PostgreSQL database as backend, which together form a system for the management and analysis of the output of tests and experiments.&lt;br /&gt;
* [http://liinwww.ira.uka.de/~skampi/ SKaMPI] - a suite of tests designed to measure the performance of MPI. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
File Systems I/O&lt;br /&gt;
&lt;br /&gt;
* [http://www.nus.edu.sg/comcen/svu/publications/hpc_nus/sep_2005/Performance.pdf GPFS vs. NFS evaluation]&lt;br /&gt;
* EXT3, Reiser, JFS, XFS benchmarks [http://linuxgazette.net/102/piszcz.html Part 1] and [http://linuxgazette.net/122/piszcz.html Part 2]&lt;br /&gt;
* [http://www.clustermonkey.net/ Cluster Monkey] Series on Parallel File System Benchmarking&lt;br /&gt;
** [http://www.clustermonkey.net//content/view/62/28/ Benchmarking Parallel File Systems]&lt;br /&gt;
** [http://www.clustermonkey.net//content/view/87/32/  A Benchmark for Parallel File Systems]&lt;br /&gt;
** [http://www.clustermonkey.net//content/view/117/32/ Using the PIO Benchmark ]  &lt;br /&gt;
* [http://stromberg.dnsalias.org/~strombrg/nfs-test.html nfs-test] is a program that tries a bunch of different rsizes, wsizes, protocols (tcp vs udp) and NFS versions to optimize performance&lt;br /&gt;
* [http://www.iozone.org IOZONE] Filesystem Benchmark&lt;br /&gt;
* [http://public.lanl.gov/jnunez/benchmarks/mpiiotest.htm MPI-IO Test] LANL's MPI-IO Test (The best test for MPI-IO codes)&lt;br /&gt;
* [http://www.llnl.gov/icc/lc/siop/downloads/download.html Livermore File System Tests] A collection of tests from Lawrence Livermore for testing parallel IO file systems. This includes IOR (a great test for parallel file systems) and a bunch of metadata tests&lt;br /&gt;
&lt;br /&gt;
Individual Codes for Benchmarks&lt;br /&gt;
These are codes that you can download (freely) and use for benchmarking. Some of them have benchmark results available and some don't.&lt;br /&gt;
* [http://www.cnn.com CNN] The CNN benchmark (just kidding)&lt;/div&gt;</summary>
		<author><name>Laytonjb</name></author>	</entry>

	<entry>
		<id>https://www.clustermonkey.net/cdp/index.php?title=Cluster_Benchmarking_Packages&amp;diff=2077</id>
		<title>Cluster Benchmarking Packages</title>
		<link rel="alternate" type="text/html" href="https://www.clustermonkey.net/cdp/index.php?title=Cluster_Benchmarking_Packages&amp;diff=2077"/>
				<updated>2007-06-18T18:41:23Z</updated>
		
		<summary type="html">&lt;p&gt;Laytonjb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here are some links to benchmarks:&lt;br /&gt;
&lt;br /&gt;
Benchmark Suites&lt;br /&gt;
* [http://icl.cs.utk.edu/hpcc/ HPCC Challenge Benchmark] - uses seven tests to measure several different performance parameters.&lt;br /&gt;
* [http://www.clustermonkey.net//content/view/38/27/ Beowulf Performance Suite (BPS)] - Example results are [http://clustermonkey.net/download/kronos/bps-logs/ here]. &lt;br /&gt;
* [http://cmbp.clustermonkey.net/ Cluster Monkey Benchmark Project] - just started. It is based on the BPS.&lt;br /&gt;
* [http://www.nas.nasa.gov/Resources/Software/npb.html NAS Parallel Benchmarks] - One of the most, if not the most, widely used sets of benchmarks for clusters.&lt;br /&gt;
* [http://www.intel.com/cd/software/products/asmo-na/eng/cluster/clustertoolkit/219848.htm Intel MPI Benchmarks] - freely available.&lt;br /&gt;
* [http://perfbase.tigris.org/ perfbase] - a set of front end tools using a PostgreSQL database as backend, which together form a system for the management and analysis of the output of tests and experiments.&lt;br /&gt;
* [http://liinwww.ira.uka.de/~skampi/ SKaMPI] - a suite of tests designed to measure the performance of MPI. &lt;br /&gt;
&lt;br /&gt;
File Systems I/O&lt;br /&gt;
&lt;br /&gt;
* [http://www.nus.edu.sg/comcen/svu/publications/hpc_nus/sep_2005/Performance.pdf GPFS vs. NFS evaluation]&lt;br /&gt;
* EXT3, Reiser, JFS, XFS benchmarks [http://linuxgazette.net/102/piszcz.html Part 1] and [http://linuxgazette.net/122/piszcz.html Part 2]&lt;br /&gt;
* [http://www.clustermonkey.net/ Cluster Monkey] Series on Parallel File System Benchmarking&lt;br /&gt;
** [http://www.clustermonkey.net//content/view/62/28/ Benchmarking Parallel File Systems]&lt;br /&gt;
** [http://www.clustermonkey.net//content/view/87/32/  A Benchmark for Parallel File Systems]&lt;br /&gt;
** [http://www.clustermonkey.net//content/view/117/32/ Using the PIO Benchmark ]  &lt;br /&gt;
* [http://stromberg.dnsalias.org/~strombrg/nfs-test.html nfs-test] is a program that tries a bunch of different rsizes, wsizes, protocols (tcp vs udp) and NFS versions to optimize performance&lt;br /&gt;
* [http://www.iozone.org IOZONE] Filesystem Benchmark&lt;br /&gt;
* [http://public.lanl.gov/jnunez/benchmarks/mpiiotest.htm MPI-IO Test] LANL's MPI-IO Test (The best test for MPI-IO codes)&lt;br /&gt;
* [http://www.llnl.gov/icc/lc/siop/downloads/download.html Livermore File System Tests] A collection of tests from Lawrence Livermore for testing parallel IO file systems. This includes IOR (a great test for parallel file systems) and a bunch of metadata tests&lt;/div&gt;</summary>
		<author><name>Laytonjb</name></author>	</entry>

	<entry>
		<id>https://www.clustermonkey.net/cdp/index.php?title=Cluster_Benchmarking_Packages&amp;diff=2076</id>
		<title>Cluster Benchmarking Packages</title>
		<link rel="alternate" type="text/html" href="https://www.clustermonkey.net/cdp/index.php?title=Cluster_Benchmarking_Packages&amp;diff=2076"/>
				<updated>2007-06-18T18:40:47Z</updated>
		
		<summary type="html">&lt;p&gt;Laytonjb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here are some links to benchmarks:&lt;br /&gt;
&lt;br /&gt;
Benchmark Suites&lt;br /&gt;
* [http://icl.cs.utk.edu/hpcc/ HPCC Challenge Benchmark] - uses seven tests to measure several different performance parameters.&lt;br /&gt;
* [http://www.clustermonkey.net//content/view/38/27/ Beowulf Performance Suite (BPS)] - Example results are [http://clustermonkey.net/download/kronos/bps-logs/ here]. &lt;br /&gt;
* [http://cmbp.clustermonkey.net/ Cluster Monkey Benchmark Project] - just started. It is based on the BPS.&lt;br /&gt;
* [http://www.nas.nasa.gov/Resources/Software/npb.html NAS Parallel Benchmarks] - One of the most, if not the most, widely used sets of benchmarks for clusters.&lt;br /&gt;
* [http://www.intel.com/cd/software/products/asmo-na/eng/cluster/clustertoolkit/219848.htm Intel MPI Benchmarks] - freely available.&lt;br /&gt;
* [http://perfbase.tigris.org/ perfbase] - a set of front end tools using a PostgreSQL database as backend, which together form a system for the management and analysis of the output of tests and experiments.&lt;br /&gt;
* [http://liinwww.ira.uka.de/~skampi/ SKaMPI] - a suite of tests designed to measure the performance of MPI. &lt;br /&gt;
&lt;br /&gt;
File Systems I/O&lt;br /&gt;
&lt;br /&gt;
* [http://www.nus.edu.sg/comcen/svu/publications/hpc_nus/sep_2005/Performance.pdf GPFS vs. NFS evaluation]&lt;br /&gt;
* EXT3, Reiser, JFS, XFS benchmarks [http://linuxgazette.net/102/piszcz.html Part 1] and [http://linuxgazette.net/122/piszcz.html Part 2]&lt;br /&gt;
* [http://www.clustermonkey.net/ Cluster Monkey] Series on Parallel File System Benchmarking&lt;br /&gt;
** [http://www.clustermonkey.net//content/view/62/28/ Benchmarking Parallel File Systems]&lt;br /&gt;
** [http://www.clustermonkey.net//content/view/87/32/  A Benchmark for Parallel File Systems]&lt;br /&gt;
** [http://www.clustermonkey.net//content/view/117/32/ Using the PIO Benchmark ]  &lt;br /&gt;
* [http://stromberg.dnsalias.org/~strombrg/nfs-test.html nfs-test] is a program that tries a bunch of different rsizes, wsizes, protocols (tcp vs udp) and NFS versions to optimize performance&lt;br /&gt;
* [http://www.iozone.org IOZONE] Filesystem Benchmark&lt;br /&gt;
* [http://public.lanl.gov/jnunez/benchmarks/mpiiotest.htm MPI-IO Test] LANL's MPI-IO Test (The best test for MPI-IO codes)&lt;br /&gt;
* [http://www.llnl.gov/icc/lc/siop/downloads/download.html Livermore File System Tests] A collection of tests from Lawrence Livermore for testing parallel IO file systems. This includes IOR (a great test for parallel file systems) and a bunch of metadata tests&lt;/div&gt;</summary>
		<author><name>Laytonjb</name></author>	</entry>

	<entry>
		<id>https://www.clustermonkey.net/cdp/index.php?title=Cluster_Benchmarking_Packages&amp;diff=2075</id>
		<title>Cluster Benchmarking Packages</title>
		<link rel="alternate" type="text/html" href="https://www.clustermonkey.net/cdp/index.php?title=Cluster_Benchmarking_Packages&amp;diff=2075"/>
				<updated>2007-06-18T18:39:50Z</updated>
		
		<summary type="html">&lt;p&gt;Laytonjb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here are some links to benchmarks:&lt;br /&gt;
&lt;br /&gt;
Benchmark Suites&lt;br /&gt;
* [http://icl.cs.utk.edu/hpcc/ HPCC Challenge Benchmark] - uses seven tests to measure several different performance parameters.&lt;br /&gt;
* [http://www.clustermonkey.net//content/view/38/27/ Beowulf Performance Suite (BPS)] - Example results are [http://clustermonkey.net/download/kronos/bps-logs/ here]. &lt;br /&gt;
* [http://cmbp.clustermonkey.net/ Cluster Monkey Benchmark Project] - just started. It is based on the BPS.&lt;br /&gt;
* [http://www.nas.nasa.gov/Resources/Software/npb.html NAS Parallel Benchmarks] - One of the most, if not the most, widely used sets of benchmarks for clusters.&lt;br /&gt;
* [http://www.intel.com/cd/software/products/asmo-na/eng/cluster/clustertoolkit/219848.htm Intel MPI Benchmarks] - freely available.&lt;br /&gt;
* [http://perfbase.tigris.org/ perfbase] - a set of front end tools using a PostgreSQL database as backend, which together form a system for the management and analysis of the output of tests and experiments.&lt;br /&gt;
* [http://liinwww.ira.uka.de/~skampi/ SKaMPI] - a suite of tests designed to measure the performance of MPI. &lt;br /&gt;
&lt;br /&gt;
File Systems I/O&lt;br /&gt;
&lt;br /&gt;
* [http://www.nus.edu.sg/comcen/svu/publications/hpc_nus/sep_2005/Performance.pdf GPFS vs. NFS evaluation]&lt;br /&gt;
* EXT3, Reiser, JFS, XFS benchmarks [http://linuxgazette.net/102/piszcz.html Part 1] and [http://linuxgazette.net/122/piszcz.html Part 2]&lt;br /&gt;
* [http://www.clustermonkey.net/ Cluster Monkey] Series on Parallel File System Benchmarking&lt;br /&gt;
** [http://www.clustermonkey.net//content/view/62/28/ Benchmarking Parallel File Systems]&lt;br /&gt;
** [http://www.clustermonkey.net//content/view/87/32/  A Benchmark for Parallel File Systems]&lt;br /&gt;
** [http://www.clustermonkey.net//content/view/117/32/ Using the PIO Benchmark ]  &lt;br /&gt;
* [http://stromberg.dnsalias.org/~strombrg/nfs-test.html nfs-test] is a program that tries a bunch of different rsizes, wsizes, protocols (tcp vs udp) and NFS versions to optimize performance&lt;br /&gt;
* [http://www.iozone.org IOZONE] Filesystem Benchmark&lt;br /&gt;
* [http://public.lanl.gov/jnunez/benchmarks/mpiiotest.htm MPI-IO Test] LANL's MPI-IO Test (The best test for MPI-IO codes)&lt;br /&gt;
* [http://www.llnl.gov/icc/lc/siop/downloads/download.html Livermore File System Tests] A collection of tests from Lawrence Livermore for testing parallel IO file systems. This includes IOR (a great test for parallel file systems) and a bunch of metadata tests&lt;/div&gt;</summary>
		<author><name>Laytonjb</name></author>	</entry>

	<entry>
		<id>https://www.clustermonkey.net/cdp/index.php?title=Cluster_Benchmarking_Packages&amp;diff=2074</id>
		<title>Cluster Benchmarking Packages</title>
		<link rel="alternate" type="text/html" href="https://www.clustermonkey.net/cdp/index.php?title=Cluster_Benchmarking_Packages&amp;diff=2074"/>
				<updated>2007-06-18T18:37:57Z</updated>
		
		<summary type="html">&lt;p&gt;Laytonjb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here are some links to benchmarks:&lt;br /&gt;
&lt;br /&gt;
Benchmark Suites&lt;br /&gt;
* [http://icl.cs.utk.edu/hpcc/ HPCC Challenge Benchmark] - uses seven tests to measure several different performance parameters.&lt;br /&gt;
* [http://www.clustermonkey.net//content/view/38/27/ Beowulf Performance Suite (BPS)] - Example results are [http://clustermonkey.net/download/kronos/bps-logs/ here]. &lt;br /&gt;
* [http://cmbp.clustermonkey.net/ Cluster Monkey Benchmark Project] - just started. It is based on the BPS.&lt;br /&gt;
* [http://www.intel.com/cd/software/products/asmo-na/eng/cluster/clustertoolkit/219848.htm Intel MPI Benchmarks] - freely available.&lt;br /&gt;
* [http://perfbase.tigris.org/ perfbase] - a set of front end tools using a PostgreSQL database as backend, which together form a system for the management and analysis of the output of tests and experiments.&lt;br /&gt;
* [http://liinwww.ira.uka.de/~skampi/ SKaMPI] - a suite of tests designed to measure the performance of MPI. &lt;br /&gt;
&lt;br /&gt;
File Systems I/O&lt;br /&gt;
&lt;br /&gt;
* [http://www.nus.edu.sg/comcen/svu/publications/hpc_nus/sep_2005/Performance.pdf GPFS vs. NFS evaluation]&lt;br /&gt;
* EXT3, Reiser, JFS, XFS benchmarks [http://linuxgazette.net/102/piszcz.html Part 1] and [http://linuxgazette.net/122/piszcz.html Part 2]&lt;br /&gt;
* [http://www.clustermonkey.net/ Cluster Monkey] Series on Parallel File System Benchmarking&lt;br /&gt;
** [http://www.clustermonkey.net//content/view/62/28/ Benchmarking Parallel File Systems]&lt;br /&gt;
** [http://www.clustermonkey.net//content/view/87/32/  A Benchmark for Parallel File Systems]&lt;br /&gt;
** [http://www.clustermonkey.net//content/view/117/32/ Using the PIO Benchmark ]  &lt;br /&gt;
* [http://stromberg.dnsalias.org/~strombrg/nfs-test.html nfs-test] is a program that tries a bunch of different rsizes, wsizes, protocols (tcp vs udp) and NFS versions to optimize performance&lt;br /&gt;
* [http://www.iozone.org IOZONE] Filesystem Benchmark&lt;br /&gt;
* [http://public.lanl.gov/jnunez/benchmarks/mpiiotest.htm MPI-IO Test] LANL's MPI-IO Test (The best test for MPI-IO codes)&lt;br /&gt;
* [http://www.llnl.gov/icc/lc/siop/downloads/download.html Livermore File System Tests] A collection of tests from Lawrence Livermore for testing parallel IO file systems. This includes IOR (a great test for parallel file systems) and a bunch of metadata tests&lt;/div&gt;</summary>
		<author><name>Laytonjb</name></author>	</entry>

	</feed>