
LinuxNettuning


http://caia.swin.edu.au/tools/spp/documentation.html  -- passive link RTT determination


It looks like Linux NFS uses the configured MTU, regardless of any path discovery information.
As a result, the configured MTU on Linux NFS clients must be set to the smallest MTU acceptable to your environment.
If the NFS server is a ReadyNAS box configured for jumbo frames, the client MTU should be 7936 or less.
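As a rough sketch (assuming eth0 is the NFS client's interface; the interface name and MTU value are only examples), the MTU can be lowered at runtime and verified like this:

    # Lower the interface MTU for the running system (not persistent across reboots)
    ip link set dev eth0 mtu 7936
    # Confirm the mtu field now reads 7936
    ip link show eth0

Making the change persistent is distribution-specific (ifcfg files, /etc/network/interfaces, or NetworkManager profiles).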


http://wiki.maxgigapop.net/twiki/bin/view/MAX/PerformanceTuning
at 10G, you may not want your packets going through the 8021q module to send VLAN-tagged frames, for example (not enough CPU juice to run a trunk port; go back to an access port)
make sure iptables is not running: lsmod | grep -i ipt  (security or performance, where have I heard that before?)
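A quick host-side sketch for both checks above (module names and output vary by distribution; listing rules needs root):

    # Is the VLAN tagging module loaded?
    lsmod | grep 8021q
    # Is netfilter loaded, and are any filter rules active?
    lsmod | grep -i ipt
    iptables -L -n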
check interface statistics on the switch/router interfaces along the path
  • look for CRC errors, input/output queue drops, etc.: sh int gi1/0/2 (Cisco IOS syntax; most devices have an equivalent)
  • watch to see if the counters increase after a perf test; increases can indicate dirty fiber, not enough buffering, etc.
  • ideally these kinds of counters should stay at zero; the end hosts' NIC counters deserve the same check (see the sketch below)
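On the Linux end hosts, the analogous counters can be read with ip and ethtool (a sketch; eth0 is just an example, and the statistics ethtool reports depend on the NIC driver):

    # Per-interface packet, error, and drop counters
    ip -s link show eth0
    # Driver/NIC-level statistics, including buffer-related drops on many drivers
    ethtool -S eth0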





http://www.psc.edu/networking/projects/tcptune/#Linux


Tuning TCP for Linux 2.4 and 2.6

NB: Recent versions of Linux (version 2.6.17 and later) have full autotuning with 4 MB maximum buffer sizes. Except in some rare cases, manual tuning is unlikely to substantially improve the performance of these kernels over most network paths, and is not generally recommended.
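To see what autotuning is working with on a given box, the relevant sysctls can simply be read (a sketch; the values your system reports will differ):

    # min / default / max receive and send buffer sizes used by autotuning, in bytes
    sysctl net.ipv4.tcp_rmem
    sysctl net.ipv4.tcp_wmem
    # 1 means receive-buffer autotuning is enabled
    sysctl net.ipv4.tcp_moderate_rcvbuf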



http://www.pcausa.com/Utilities/pcattcp.htm

http://www.enterpriseitplanet.com/networking/features/article.php/3497796

Many people recommend setting the MTU of your network interface larger. This basically means telling the network card to send a larger Ethernet frame. While this may be useful when connecting two hosts directly together, it becomes less useful when connecting through a switch that doesn't support larger MTUs. At any rate, it isn't necessary: 900 Mb/s can be attained at the normal 1500-byte MTU setting.

For attaining maximum throughput, the most important options involve TCP window sizes. The TCP window controls the flow of data and is negotiated during the start of a TCP connection. Using too small a size will result in slowness, since TCP can only use the smaller of the two end systems' capabilities. It is quite a bit more complex than this, but here's the information you really need to know:
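The window that actually matters is roughly the bandwidth-delay product of the path; the numbers here are only an illustration:

    For a 1 Gb/s path with a 50 ms round-trip time, the window has to cover
    1,000,000,000 b/s x 0.050 s = 50,000,000 bits ≈ 6.25 MB of in-flight data,
    which is why the maximum buffer sizes below are in the megabyte range.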

For both Linux and FreeBSD we're using the sysctl utility. For all of the following options, entering the command 'sysctl variable=number' should do the trick. To view the current setting, use 'sysctl <variable name>'. (A consolidated, persistent version of the Linux settings is sketched after the list.)
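For example, the Linux maximum send-buffer value from the list below can be read and set like this (a sketch; -w writes the value into the running kernel and requires root):

    # Read the current value
    sysctl net.core.wmem_max
    # Set it; the change is lost at reboot unless also placed in /etc/sysctl.conf
    sysctl -w net.core.wmem_max=8388608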

  • Maximum window size:
    • FreeBSD:
      kern.ipc.maxsockbuf=262144
    • Linux:
      net.core.wmem_max=8388608

  • Default window size:
    • FreeBSD, sending and receiving:
      net.inet.tcp.sendspace=65536
      net.inet.tcp.recvspace=65536
    • Linux, sending and receiving:
      net.core.wmem_default=65536
      net.core.rmem_default=65536

  • RFC 1323:
    This enables the useful window scaling option defined in RFC 1323, which allows the window to grow dynamically beyond the sizes specified above.
    • FreeBSD:
      net.inet.tcp.rfc1323=1
    • Linux:
      net.ipv4.tcp_window_scaling=1

  • Buffers:
    When sending large amounts of data, the operating system can run out of buffers. These values should be set before attempting to use the settings above. To increase the amount of buffer memory available (FreeBSD "mbufs"; the Linux tcp_mem values are counted in pages):
    • FreeBSD:
      kern.ipc.nmbclusters=32768
    • Linux:
      net.ipv4.tcp_mem=98304 131072 196608
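To make the Linux values above survive a reboot, they can be collected in /etc/sysctl.conf (or a file under /etc/sysctl.d/) and loaded with sysctl -p; a sketch:

    # /etc/sysctl.conf (excerpt)
    net.core.wmem_max=8388608
    # net.core.rmem_max is usually raised to the same value, though only wmem_max is listed above
    net.core.wmem_default=65536
    net.core.rmem_default=65536
    net.ipv4.tcp_window_scaling=1
    net.ipv4.tcp_mem=98304 131072 196608

    # Apply the file to the running kernel without rebooting
    sysctl -p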

These quick changes will skyrocket TCP performance. Afterwards we were able to run ttcp and attain around 895 Mb/s every time – quite an impressive data rate. There are other options available for adjusting the UDP datagram sizes as well, but we're mainly focusing on TCP here.
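ttcp/PCATTCP (linked above) is one way to measure the result; iperf3 is the more common tool today, and a minimal TCP test looks like this (the server name is just a placeholder):

    # On the receiving host
    iperf3 -s
    # On the sending host: a 10-second TCP test toward the receiver
    iperf3 -c server.example.net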


