
Sidebar One: Configuring An Interface
Adding a second Ethernet interface for testing purposes is not difficult. The most important thing is to use up-to-date kernels and drivers. The driver should be compiled as a module so that it can be easily added to or removed from the kernel. Assuming you have two nodes that can communicate through a network, the following steps, performed on each node, should allow you to easily bring up the test interface.

Enter the following command to load the Tigon 3 module (the module name may vary for the adapter under test).

# insmod tg3
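
If the driver depends on other modules, modprobe is usually more convenient than insmod because it loads any dependencies automatically:

# modprobe tg3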

The module should load successfully. Check the end of the kernel log with:

dmesg | tail

If you are using the tg3 module, you should see two lines similar to the following. Other adapter modules will produce a different message, but will still list the Ethernet port. You may also want to check whether the driver offers any tunable parameters, such as interrupt mitigation settings, which may affect latency.

eth1: Tigon3 [partno(AC91002A1) rev 0105 PHY(5701)] (PCI:33MHz:32-bit) \
10/100/1000BaseT Ethernet 00:09:5b:22:cd:bc

The dmesg output tells us, among other things, that the card is assigned to eth1 and that it sits in a 33 MHz, 32-bit PCI slot.
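
If the ethtool utility is installed (an assumption; it is not part of every distribution), you can confirm the negotiated speed, duplex, and link status with:

# ethtool eth1

and inspect the driver's interrupt mitigation (coalescing) settings, where the driver supports them, with:

# ethtool -c eth1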

Now that the driver is loaded and recognizes the card, we need to bring up the interface. Because we will be varying a parameter (the MTU, or Ethernet packet size), we will use ifconfig to assign the IP address (in this case 192.168.1.2) and start the interface.

# ifconfig eth1 inet 192.168.1.2 netmask 255.255.255.0 broadcast 192.168.1.255 mtu 1500

If this command was successful, issue

# ifconfig eth1

and you should see something similar to the following.

eth1      Link encap:Ethernet  HWaddr 00:09:5B:60:18:E5
          inet addr:192.168.1.2  Bcast:192.168.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:869
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:100
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
          Interrupt:18
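
For reference, the same configuration can be done with the newer iproute2 tools, if your distribution provides them. A minimal equivalent sketch:

# ip link set eth1 down
# ip addr add 192.168.1.2/24 broadcast 192.168.1.255 dev eth1
# ip link set eth1 mtu 1500 up

The examples in this sidebar stick with ifconfig, which is available everywhere.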

Once this process is performed on both nodes, using a different IP address on each node of course, you should be able to ping between the nodes. In our case, we used 192.168.1.1 and 192.168.1.2 as the IP addresses, so we know the interface is communicating if we issue the following:

# ping 192.168.1.1

from the node whose interface we assigned as 192.168.1.2. Once the interfaces are up, we can begin testing.

To vary any of the parameters (such as the IP address or MTU size), simply take the interface down with ifconfig and bring it back up with the new settings:

# ifconfig eth1 down
# ifconfig eth1 inet 192.168.1.2 netmask 255.255.255.0 broadcast 192.168.1.255 mtu 3000

The MTU for both adapters must be the same.
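
If the nodes also reach each other over a separate administrative network, a small script can keep the test interfaces in sync. This is only a sketch; the node names and the assumption of passwordless ssh are ours:

#!/bin/sh
# reconfigure the test interface on both nodes with a new MTU
# node1/node2 are hypothetical hostnames on the administrative network
MTU=3000
ssh node1 "ifconfig eth1 down; \
    ifconfig eth1 inet 192.168.1.1 netmask 255.255.255.0 broadcast 192.168.1.255 mtu $MTU"
ssh node2 "ifconfig eth1 down; \
    ifconfig eth1 inet 192.168.1.2 netmask 255.255.255.0 broadcast 192.168.1.255 mtu $MTU"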

Sidebar Two: Running Netpipe
From one of the test nodes, open a window and log in to the other node. Start Netpipe in receive mode (-r) by entering:

NPtcp -r

Open a second window on the first node and enter:

NPtcp -t -h 192.168.1.2 -P -o NP_output_file

using the IP address of the receiving machine (the -h option). The -t option tells Netpipe to run in transmit mode, the -P option prints results to the screen, and the -o option writes an output file for plotting. Netpipe has other options, but these provide a basic test of the interface. Once it is running, you should see something similar to the following:

Latency: 0.000035
Now starting main loop
  0:         1 bytes 7241 times -->    0.23 Mbps in 0.000033 sec
  1:         2 bytes 7511 times -->    0.46 Mbps in 0.000033 sec
  2:         3 bytes 7473 times -->    0.68 Mbps in 0.000034 sec
  3:         4 bytes 4956 times -->    0.91 Mbps in 0.000033 sec
  4:         6 bytes 5601 times -->    1.37 Mbps in 0.000033 sec
  5:         8 bytes 3734 times -->    1.81 Mbps in 0.000034 sec
  6:        12 bytes 4642 times -->    2.69 Mbps in 0.000034 sec
  (continues)

The default Netpipe test is self-limiting: the block size is increased from a single byte (by various non-uniform increments) until the transmission time exceeds one second.
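
Since the plots in Sidebar Three compare runs at different MTU sizes, it helps to name the output files systematically. A sketch of a driver script follows; the receiver must be restarted on the other node before each run, and the MTU changed on both nodes as described in Sidebar One:

#!/bin/sh
# run one Netpipe transmit test per MTU and save each data set
# file names follow the NP.<mtu>.<pci-speed>-<run> pattern used in Sidebar Three
for mtu in 1500 3000; do
    echo "Set the MTU to $mtu on both nodes (see Sidebar One) and restart"
    echo "'NPtcp -r' on 192.168.1.2, then press Enter to start the run."
    read dummy
    NPtcp -t -h 192.168.1.2 -P -o NP.$mtu.33-1
done

The 33 in the file name records the PCI bus speed; collecting the 66 MHz data sets requires moving the adapter to a 66 MHz slot and repeating the runs.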

Sidebar Three: Plotting Results
The Netpipe output file can be easily plotted with Gnuplot. The plotting file for Figure One, a standard plot of Throughput vs. Blocksize, is as follows.

# gnuplot file for plotting Netpipe data
#
set title "Netpipe TCP - Throughput vs. Blocksize \n Netgear GA302T Adapter"
set xlabel "Blocksize (Bytes)"
set ylabel "Throughput (Mbits/s)"
set logscale x
set key bottom right

#Uncomment to produce a png file
#set terminal png picsize 1200 896
#set output "netpipe.throughput_vs_blocksize.png"

# Uncomment these to produce an eps file
#set terminal postscript monochrome  "Helvetica" 10
#set pointsize .6
#set output "netpipe.throughput_vs_blocksize.eps"
#set size 0.6,0.6

plot [] [] \
 "NP.1500.33-1" using 4:2 title "1500 MTU 33 MHz PCI" w linespoints, \
 "NP.3000.33-1" using 4:2 title "3000 MTU 33 MHz PCI" w linespoints, \
 "NP.1500.66-1" using 4:2 title "1500 MTU 66 MHz PCI" w linespoints, \
 "NP.3000.66-1" using 4:2 title "3000 MTU 66 MHz PCI" w linespoints
                                                                                
# wait so we can view it! (comment out when making files)
pause -1

The output files produced by Netpipe are named in the plot lines (e.g., NP.1500.33-1). You can easily edit the Gnuplot file to view your own test results. To plot the data, simply enter:

gnuplot filename.gp

where filename.gp is the name of a Gnuplot file like the one shown above. You can also generate other views of the data; see the Netpipe and Gnuplot documentation for more information.
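
Before plotting, a quick way to compare runs is to pull the peak throughput out of each output file. In the Netpipe output files, column 2 is the throughput in Mbps and column 4 is the block size in bytes (hence the "using 4:2" in the plot commands above), so a one-line awk sketch does the job:

awk 'max < $2 { max = $2; size = $4 } END { print max " Mbps at " size " bytes" }' NP.1500.33-1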

To plot the "Netpipe Signature" graph shown in Figure Two, you can use this gnuplot file:

set title "Netpipe Data - Signature Graph (Throughput vs. Time)\n Netgear GA302T Adapter"
set xlabel "Time"
set ylabel "Throughput (Mbits/s)"
set logscale x
#Uncomment to produce a png file
#set terminal png picsize 1200 896
#set output "netpipe.network_signature_graph.png"

# Uncomment these to produce an eps file
#set terminal postscript monochrome  "Helvetica" 10
#set pointsize .6
#set output "netpipe.network_signature_graph.eps"
#set size 0.6,0.6

set key bottom right

plot [] [] \
 "NP.1500.33-1" using 1:2 title "1500 MTU 33 MHz PCI" w linespoints, \
 "NP.3000.33-1" using 1:2 title "3000 MTU 33 MHz PCI" w linespoints, \
 "NP.1500.66-1" using 1:2 title "1500 MTU 66 MHz PCI" w linespoints, \
 "NP.3000.66-1" using 1:2 title "3000 MTU 66 MHz PCI" w linespoints


# wait so we can view it! (comment out when making files)
pause -1

Sidebar Four: Resources

Netpipe

Gnuplot

This article was originally published in ClusterWorld Magazine. It has been updated and formatted for the web. If you want to read more about HPC clusters and Linux, you may wish to visit Linux Magazine.
