I had two Gigabit ethernet ports sitting on this motherboard and I was only using one. So I decided to take the path less travelled, as you never know where it can lead…
Yes, I decided to see if I could use both Gigabit ethernet ports for this fileserver. That sounds like a pretty crazy idea, considering that even a single Gigabit port should in theory allow transfer speeds across the network of up to about 100MBytes/second, while in practice it seems quite difficult to sustain much over 60MBytes/second. But as current disk speeds are in the same ballpark, using a second Gigabit port is not that crazy, and might even give faster transfers than one port alone.
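As a rough back-of-the-envelope check on those numbers (approximate, ignoring the finer points of protocol overhead):

1 Gbit/s / 8 bits per byte = 125 MBytes/s raw capacity
125 MBytes/s minus Ethernet framing and IP/TCP overhead = roughly 110-118 MBytes/s realistic ceiling per port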
Using two network connections joined into one interface also has the advantage of redundancy, although in a home environment this is unlikely to be useful. Anyway, let's throw all arguments out of the window and forge ahead to see what we can do here.
There are various terms given to this idea such as ethernet trunking, channel bonding, teaming, link aggregation etc.
First things first: I’m using Solaris SXCE build 85 (Solaris Express Community Edition) with a vanilla, out-of-the-box installation, so all the default settings. This means the Solaris installer configured automatic network settings for me. The service that controls this automatic network configuration is called ‘Network Auto-Magic’, or NWAM for short, and it is running by default.
NWAM works by assuming that you have a DHCP server running that will provide your NIC with an IP address based upon recognition of the NIC’s MAC address. Many people will have a cheap router box connected to their ADSL modem which will act as a firewall, use NAT to insulate your computers from the internet, and will also provide a DHCP server to provide IP addresses to any network devices on your network.
In order to achieve our fusion of two Gigabit ethernet NICs into one, we have to disable this NWAM service since, as far as I am aware, it only works with a single NIC. Also, if you have your current NIC’s MAC address programmed into your router box, you will probably need to remove it, as we will be specifying the IP address to use manually below.
To disable the NWAM service:
# svcadm disable /network/physical:nwam
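If you want to confirm that the service really has stopped, svcs should now report the nwam instance as disabled (output will look roughly like this):

# svcs network/physical:nwam
STATE          STIME    FMRI
disabled       14:45:02 svc:/network/physical:nwam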
My motherboard (Asus M2N-SLI Deluxe) provides its network capabilities through an NVIDIA chipset (with Marvell PHYs), giving two Gigabit ethernet NICs known to Solaris as nge0 and nge1. I had a category 6 network cable plugged into the port corresponding to nge0; nge1 was unused. So first I needed to ensure both ethernet devices were ‘unplumbed’, otherwise Solaris will consider them in use (at least nge0 in my case).
To see what happens if you don’t unplumb these links (nge0 and nge1) before attempting to aggregate them, see the following:
# dladm show-link
LINK        CLASS    MTU    STATE    OVER
nge0        phys     1500   up       --
nge1        phys     1500   up       --
# dladm create-aggr -d nge0 -d nge1 1
dladm: create operation failed: link busy
Solaris considers them in use (busy), so we really do have to unplumb them before we can aggregate them:
# ifconfig nge0 unplumb
# ifconfig nge1 unplumb
Now that they are not in use, we are free to create an aggregated link from the nge0 and nge1 Gigabit links:
# dladm create-aggr -d nge0 -d nge1 1
That has created a single link called ‘aggr1’, which is an aggregation of the two Gigabit links nge0 and nge1.
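As an aside, dladm lets you choose the policy used to spread outbound traffic across the ports (L2, L3 or L4, hashing on MAC addresses, IP addresses or transport ports respectively). I just took the default, which as you’ll see below is L4, but if you wanted to experiment it can presumably be changed later with modify-aggr, e.g.:

# dladm modify-aggr -P L3 1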
Now we need to plumb it and assign an interface IP address, specify the netmask to use, and finally, bring the interface up.
# ifconfig aggr1 plumb
# ifconfig aggr1 192.168.0.7 netmask 255.255.255.0 up
Now if you go to /etc you will find a file there called ‘hostname.linkname’; in my case, as I was using nge0 with NWAM, it was called ‘hostname.nge0’.
We now need to rename this file to ‘hostname.aggr1’, as this is the new name of the link we will use for network communications:
# cd /etc
# ls -l hostname*
-rw-r--r--   1 root     root          13 Apr  5 14:49 hostname.nge0
# mv hostname.nge0 hostname.aggr1
This file should contain the IP address of the interface that you want. You can check your file’s content for correctness.
Here, for this example, let’s assume I want to use the address 192.168.0.7 for the interface for this fileserver, so I’ll just fill the file with this address:
# echo 192.168.0.7 > hostname.aggr1
# cat hostname.aggr1
192.168.0.7
Now let’s check how our network setup looks after these changes:
# dladm show-dev
LINK         STATE      SPEED  DUPLEX
nge0         up         1000Mb full
nge1         up         1000Mb full
# dladm show-link
LINK        CLASS    MTU    STATE    OVER
nge0        phys     1500   up       --
nge1        phys     1500   up       --
aggr1       aggr     1500   up       nge0 nge1
# dladm show-aggr
LINK        POLICY   ADDRPOLICY    LACPACTIVITY  LACPTIMER  FLAGS
aggr1       L4       auto          off           short      -----
# ifconfig -a
aggr1: flags=xxxxxxxx mtu 1500 index 2
        inet 192.168.0.7 netmask ffffff00 broadcast 192.168.0.255
        ether xx:xx:xx:xx:xx:xx
As you can see, we have physical links nge0 and nge1 which are up, and a link that is an aggregation of nge0 and nge1 called ‘aggr1′.
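Note also in the ‘show-aggr’ output that LACPACTIVITY is ‘off’, i.e. this is a static aggregation rather than one negotiated via the 802.3ad LACP protocol. If your switch supports LACP, you should (I haven’t verified this on my own setup) be able to turn it on with something like the following, matching whatever LACP settings the switch is configured with:

# dladm modify-aggr -l active -T short 1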
As an aside, note that the ‘MTU’ is 1500, meaning the maximum transmission unit size is 1500 bytes. In theory we could bump this up to 9000 bytes to get what’s known as ‘Jumbo frames’, which should give faster transfer speeds, but according to someone who has already tried this with Solaris, in practice it made little difference, and changing things like the NFS block size would be more likely to increase speed. I’m using CIFS on this fileserver, however, so there are probably some equivalent CIFS settings that could be investigated at a later stage.
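For the record, if I did want to experiment with jumbo frames later, my understanding is that on builds with the new link-property framework it would go roughly like this, assuming the nge driver actually supports an MTU of 9000 (untested; the aggregation has to be torn down and recreated, and the switch must also be configured for jumbo frames):

# ifconfig aggr1 unplumb
# dladm delete-aggr 1
# dladm set-linkprop -p mtu=9000 nge0
# dladm set-linkprop -p mtu=9000 nge1
# dladm create-aggr -d nge0 -d nge1 1
# ifconfig aggr1 plumb 192.168.0.7 netmask 255.255.255.0 up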
The next thing we need to do is to provide a default gateway IP address. This is the address of the router on your network, which needs to be on the same subnet as the address we just assigned (so for the 192.168.0.0/24 network used in this example, something like 192.168.0.1):
# echo 192.168.0.1 > /etc/defaultrouter
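The /etc/defaultrouter file is only read at boot time, so if you want the default route immediately without rebooting, you can (I believe) add it by hand and then check the routing table:

# route add default 192.168.0.1
# netstat -rn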
After buying an IEEE 802.3ad-compliant switch and getting link aggregation working with it, I found the following additional steps were required.
First we need to add a line to the /etc/netmasks file. The line contains the fileserver’s IP ‘network number’ and the netmask used. The file is read only so we need to make it writable first:
# cd /etc
# ls -l netmasks
lrwxrwxrwx   1 root     root          15 Apr 27 11:06 netmasks -> ./inet/netmasks
# chmod u+w netmasks
# vi netmasks
add this at the end of the file (the network number and netmask for the 192.168.0.x network used in this example):

192.168.0.0     255.255.255.0
Now put the file back as read only:
# chmod u-w netmasks
Also we’ll no longer have our DNS working, so we’ll copy /etc/nsswitch.dns to /etc/nsswitch.conf:
# cd /etc
# mv nsswitch.conf nsswitch.conf.org
# cp nsswitch.dns nsswitch.conf
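Note that nsswitch.conf only tells the resolver to use DNS; the actual DNS server addresses live in /etc/resolv.conf, which NWAM had been looking after for us. It’s worth checking it exists and points somewhere sensible; on my network that’s just the router (the address below is only an example):

# cat /etc/resolv.conf
nameserver 192.168.0.1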
Earlier we disabled the NWAM service, and now we need to start the default network service instead of NWAM:
# svcadm enable /network/physical:default
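At this point svcs should show the two instances the right way round, nwam disabled and default online, with output along these lines:

# svcs network/physical
STATE          STIME    FMRI
disabled       14:45:02 svc:/network/physical:nwam
online         15:02:30 svc:/network/physical:default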
Check that the new settings persist after rebooting.
Voila, you should now have an aggregated ethernet connection capable of theoretical speeds of up to 2 Gigabits per second full-duplex — i.e. 2 Gbits/sec simultaneously in each direction! In real life situations, though, you will almost certainly see much lower speeds.
I haven’t yet managed to saturate the bandwidth of even one Gigabit link, as the computer connected to this fileserver only has a single disk, but this setup will probably come in handy occasionally when the fileserver is doing a backup while serving files to other machines, and so on. Also, later on when I upgrade to the new breed of fast terabyte disks like the Samsung Spinpoint F1, which can read and write at up to 120MBytes per second, the network shouldn’t get saturated in most situations.
So, there we have it, the conclusion of a crazy but fun experiment. I will do some experiments later on to try to saturate the bandwidth of one Gigabit link and see if the second one kicks in when the OS sees that it needs more bandwidth…
After a test transfer across the network with this new setup, I mostly only see one light flashing on the Gigabit switch, so it seems Solaris is not using both Gigabit links for the transfer. This may be partly because a single transfer isn’t saturating one GbE port, and partly because the aggregation’s load-spreading policy hashes each connection onto a particular port, so a single TCP stream will only ever use one of the two links.
Keep on trunking! Sorry, I had to slip that one in there somewhere.
After aggregating the links on both the ZFS fileserver and the Mac Pro, I managed to get transfer speeds of around 80 to 100 MBytes/sec over the CIFS share. This is just a rough ballpark figure, as I simply watched the Network tab of Activity Monitor in Mac OS X. Still, it’s around double the speed I was previously getting for transfers to and from the CIFS share, and it is now at the limit of the read/write speed of the single disk in the Mac Pro, a Western Digital WD5000AAKS, which I’ve seen benchmarked at a maximum write speed of around 87 MBytes/sec with burst speeds of over 100 MBytes/sec. So the numbers seem to tally up. It also begs the question of what speeds would be possible with a RAID 0 of two WD5000AAKS disks in the Mac Pro… but that will have to remain a mystery for another day.