I had two Gigabit ethernet ports sitting on this motherboard and I was only using one. So I decided to take the path less travelled, as you never know where it can lead…
Yes, I decided to see if I could use both Gigabit ethernet ports for this fileserver. That’s a pretty crazy idea considering that even a single Gigabit ethernet port offers theoretical transfer speeds across the network of up to about 125 MBytes/second, while in practice it seems quite difficult to get much over about 60 MBytes/second sustained. As disk speeds are in the same ballpark, using a second Gigabit port is not that crazy, and might even give faster speeds than with one.
Joining two network connections into one interface also gives you redundancy, although in a home environment that’s unlikely to matter much. Anyway, let’s throw caution to the wind and forge ahead to see what we can do here.
There are various terms given to this idea such as ethernet trunking, channel bonding, teaming, link aggregation etc.
First things first: I’m using Solaris SXCE build 85 (Solaris Express Community Edition) with a vanilla, out-of-the-box installation, so all the default settings. This means the Solaris installer configured automatic network settings for me. The service that controls this automatic network configuration is called ‘Network Automagic’, or NWAM for short, and it is running by default.
NWAM works by assuming that you have a DHCP server running that will provide your NIC with an IP address based upon recognition of the NIC’s MAC address. Many people will have a cheap router box connected to their ADSL modem which will act as a firewall, use NAT to insulate your computers from the internet, and will also provide a DHCP server to provide IP addresses to any network devices on your network.
In order to achieve our fusion of two Gigabit ethernet NICs into one, we have to disable this NWAM service as it works with a single NIC as far as I am aware. Also, if you have your current NIC’s MAC address programmed into your router box, you will probably need to remove this, as we will be specifying the IP address to use below.
To disable the NWAM service:
# svcadm disable /network/physical:nwam
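To confirm the change took effect, you can list the instances of the physical network service; the nwam instance should now be reported as disabled (a quick check, output omitted here as it varies):
# svcs network/physical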
My motherboard (Asus M2N-SLI Deluxe) provides its two Gigabit ethernet NICs through the NVidia chipset (paired with Marvell ethernet PHYs); Solaris sees them as nge0 and nge1. I had a category 6 network cable plugged into the port for nge0, while nge1 was unused. So first I need to ensure both ethernet devices are ‘unplumbed’, otherwise Solaris will consider them in use (at least nge0 in my case).
Here’s what happens if you don’t unplumb these links (nge0 and nge1) before attempting to aggregate them into a single aggregated link:
# dladm show-link
LINK        CLASS    MTU    STATE    OVER
nge0        phys     1500   up       --
nge1        phys     1500   up       --
# dladm create-aggr -d nge0 -d nge1 1
dladm: create operation failed: link busy
Solaris considers them in-use (busy) so we really do have to unplumb them so we can aggregate them later:
# ifconfig nge0 unplumb
# ifconfig nge1 unplumb
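If you want a quick sanity check that nothing is plumbed any more before creating the aggregation, list the plumbed interfaces; at this point only the loopback interface (lo0) should appear:
# ifconfig -a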
Now that they are not in use, we are free to create an aggregated link with nge0 and nge1 Gigabit links:
# dladm create-aggr -d nge0 -d nge1 1
That has created a single link called ‘aggr1’ which is an aggregation of the two Gigabit links nge0 and nge1.
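As an aside, if your switch speaks LACP you can enable it and choose a load-balancing policy at creation time. This is just a sketch using the same device-and-key dladm syntax as above; the LACP flag is -l on older builds and -L on newer ones, so check dladm(1M) on yours:
# dladm create-aggr -P L3,L4 -l active -d nge0 -d nge1 1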
Now we need to plumb it and assign an interface IP address, specify the netmask to use, and finally, bring the interface up.
# ifconfig aggr1 plumb
# ifconfig aggr1 192.168.0.7 netmask 255.255.255.0 up
Now if you go to /etc you will find a file there called ‘hostname.linkname’ — in my case as I was using nge0 with NWAM it was called ‘hostname.nge0’.
We need now to rename this file to ‘hostname.aggr1’ as this is the new name of the link we will use for network communications:
# cd /etc
# ls -l hostname*
-rw-r--r--   1 root     root          13 Apr  5 14:49 hostname.nge0
# mv hostname.nge0 hostname.aggr1
This file should contain the IP address of the interface that you want. You can check your file’s content for correctness.
Here, for this example, let’s assume I want to use the address 192.168.0.7 for the interface for this fileserver, so I’ll just fill the file with this address:
# echo 192.168.0.7 > hostname.aggr1
# cat hostname.aggr1
192.168.0.7
Now let’s check how our network setup looks after these changes:
# dladm show-dev
LINK        STATE      SPEED        DUPLEX
nge0        up         1000Mb       full
nge1        up         1000Mb       full
# dladm show-link
LINK        CLASS    MTU    STATE    OVER
nge0        phys     1500   up       --
nge1        phys     1500   up       --
aggr1       aggr     1500   up       nge0 nge1
# dladm show-aggr
LINK        POLICY   ADDRPOLICY   LACPACTIVITY   LACPTIMER   FLAGS
aggr1       L4       auto         off            short       -----
# ifconfig -a
aggr1: flags=xxxxxxxx mtu 1500 index 2
        inet 192.168.0.7 netmask ffffff00 broadcast 192.168.0.255
        ether xx:xx:xx:xx:xx:xx
As you can see, we have physical links nge0 and nge1 which are up, and a link that is an aggregation of nge0 and nge1 called ‘aggr1’.
As an aside, note that the MTU is 1500, meaning the maximum transmission unit is 1500 bytes. In theory we could bump this up to 9000 bytes to get what’s known as ‘jumbo frames’, which should give faster transfer speeds. However, according to someone who has already tried this with Solaris, in practice it made little difference, and changing things like the NFS block size is more likely to increase speed. I’m using CIFS on this fileserver, but there are probably equivalent settings for CIFS that could be investigated at a later stage.
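For reference, if you did want to experiment with jumbo frames later, the change would be along these lines, assuming the nge driver has first been configured to allow the larger frame size (it usually needs its own max-frame setting before the MTU can be raised), so treat this as a sketch only:
# ifconfig aggr1 mtu 9000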
The next thing we need to do is to provide a default gateway IP address. This is the address of the router on your network, which needs to be on the same subnet as the address you gave the interface (192.168.0.x in this example):
# echo 192.168.0.1 > /etc/defaultrouter
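The /etc/defaultrouter file is only read at boot, so if you want the default route in place immediately without rebooting you can also add it by hand (same address as above; adjust it for your own router):
# route add default 192.168.0.1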
Update: 27/04/2008
After buying an IEEE 802.3ad compliant switch, and getting Link Aggregation to work, I found the following additional steps are required.
First we need to add a line to the /etc/netmasks file. The line contains the fileserver’s IP ‘network number’ and the netmask used. The file is read only so we need to make it writable first:
# cd /etc
# ls -l netmasks
lrwxrwxrwx   1 root     root          15 Apr 27 11:06 netmasks -> ./inet/netmasks
# chmod u+w netmasks
# vi netmasks
add this at the end of the file:
192.168.0.0 255.255.255.0
Now put the file back as read only:
# chmod u-w netmasks
Also we’ll no longer have our DNS working, so we’ll copy /etc/nsswitch.dns to /etc/nsswitch.conf:
# cd /etc
# mv nsswitch.conf nsswitch.conf.org
# cp nsswitch.dns nsswitch.conf
Earlier we disabled the NWAM service, and now we need to start the default network service instead of NWAM:
# svcadm enable /network/physical:default
Check that the new settings persist after rebooting.
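A quick way to verify everything came back after the reboot is to re-run the status commands used earlier (indicative only; your output will differ):
# svcs network/physical
# dladm show-aggr
# ifconfig aggr1
# netstat -rn
The default instance of the network service should be online, aggr1 should still be up with its address, and the routing table should show your default gateway.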
Voila, you should now have an aggregated ethernet connection capable of theoretical speeds of up to 2 Gigabits per second full-duplex — i.e. 2 Gbits/sec simultaneously in each direction! In real life situations, though, you will almost certainly see much lower speeds.
I haven’t managed to saturate the bandwidth of even one Gigabit link yet, as the computer connected to this fileserver has only a single disk, but this setup will probably come in handy occasionally when the fileserver is doing a backup while serving files to other machines, and so on. Also, later on when I upgrade to the new breed of fast terabyte disks like the Samsung Spinpoint F1, which can read and write at up to 120 MBytes per second, the aggregated network link shouldn’t get saturated in most situations.
So, there we have it, the conclusion of a crazy but fun experiment. I will do some experiments later on to try to saturate the bandwidth of one Gigabit link and see if the second one kicks in when the OS sees that it needs more bandwidth…
After a test transfer across the network with this new setup, I mostly only see one light on the Gigabit switch flashing, so it seems like Solaris is not using both Gigabit links when I transfer stuff, but this may be due to the fact that it’s not saturating the bandwidth of one GbE port.
Keep on trunking! Sorry, I had to slip that one in there somewhere 🙂
Update: 27/04/2008
After aggregating both the ZFS fileserver and the Mac Pro, I managed to get transfer speeds of around 80 to 100 MBytes/sec using CIFS sharing. This is just a rough ballpark figure, as I just looked at the Network tab of Activity Monitor in Mac OS X. It’s around double the speed I was previously getting with transfers to and from the CIFS share, and it is now at the limit of the read/write speed of the single disk in the Mac Pro, a Western Digital WD5000AAKS, which I’ve seen benchmarked with a maximum write speed of around 87 MBytes/sec and a burst speed of over 100 MBytes/sec. So this seems to tally up. It also begs the question of what speeds would be possible using a RAID 0 of two WD5000AAKS disks in the Mac Pro… but that will have to remain a mystery for another day 😉
For more ZFS Home Fileserver articles see here: A Home Fileserver using ZFS. Alternatively, see related articles in the following categories: ZFS, Storage, Fileservers, NAS.
I think that in order to get any use out of this, you need to have trunking enabled on your switch (or whatever it is on the other end of things).
Furthermore, you should be aware that there are two commonly-used types of link aggregation, and both ends should be using the same method: static, and dynamic (the 802.3ad/LACP protocol). I’m not sure which kind is in use here because I’m not yet familiar enough with Solaris. It’s worth pointing out that many cheaper switches which support trunking/aggregation only support static mode. I’ve been bitten by this when trying to create a trunked connection from some of my SAN storage units or Xserves, which only support 802.3ad dynamic trunking.
Thanks for the info Kamil !
I have a DLink DGS-1008D green ethernet 8-port Gigabit switch – see here:
ftp://ftp.dlink.eu/datasheets/DGS-1008D.pdf
Of relevance, perhaps, in this data sheet it says it supports “IEEE 802.3x Flow control”, which I assume covers IEEE 802.3ad, which appears to be what’s required for link aggregation / port trunking:
http://en.wikipedia.org/wiki/Link_aggregation
Does this look like this switch should support link aggregation to you?
Here’s additional info from their website:
* 8 10/100/1000 Mbps Gigabit ports on Cat. 5
* 16Gbps switching fabric
* Auto MDI/MDIX cross over for all ports
* Secure store-and-forward switching scheme
* Full/half-duplex for Ethernet/Fast Ethernet speeds
* Blazing 2000Mbps full duplex for Gigabit speed
* IEEE 802.3x Flow Control
* Plug-and-play installation
* Easily installable on desktop
Well I just did a bit more hunting and found this page:
http://en.wikipedia.org/wiki/IEEE_802.3
It has separate entries for 802.3x and 802.3ad:
* 802.3x: Full Duplex and flow control; also incorporates DIX framing, so there’s no longer a DIX/802.3 split
* 802.3ad: Link aggregation for parallel links
From this, it seems that my switch doesn’t support link aggregation, as it only mentions “802.3x Flow control” in the specs and not “802.3ad Link aggregation”. Oh well, never mind, aggregation was never a requirement when buying the switch, just something that occurred to me later. But it’s interesting to know these things for possible future projects.
But my question is, does link aggregation always rely on the hardware — i.e. the switch, or can it be done in software using the OS, in this case Solaris?
From these pages, it seems like the switch definitely needs to support link aggregation:
http://blogs.sun.com/droux/entry/link_aggregation_vs_ip_multipathing
http://blogs.sun.com/droux/entry/link_aggregation_plumbing
Well, I’m not 100% convinced that the aggregated link is *not* working, as I still have a working network connection if I pull either one of the two ethernet cables out of the switch:
with both ethernet cables attached to the switch:
# dladm show-link
LINK CLASS MTU STATE OVER
nge0 phys 1500 up --
nge1 phys 1500 up --
aggr1 aggr 1500 up nge0 nge1
now disconnect nge0 cable:
# dladm show-link
LINK CLASS MTU STATE OVER
nge0 phys 1500 down --
nge1 phys 1500 up --
aggr1 aggr 1500 up nge0 nge1
I still have a usable connection (I can surf), so it’s now using nge1 — or the ether 😉
now plug nge0 cable back in and pull out nge1 cable from the switch:
# dladm show-link
LINK CLASS MTU STATE OVER
nge0 phys 1500 up --
nge1 phys 1500 down --
aggr1 aggr 1500 up nge0 nge1
I can still surf, so it’s using nge0.
So the redundancy/fail-over aspect of the aggregated link is working.
When I think of a heavy enough test to really stress the link I will see if it’s using both links or not — will probably need some network monitoring tool to see the figures…
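One option that avoids a separate monitoring tool is dladm’s own statistics mode, which can print per-port counters for the aggregation while a big transfer is running. This is a sketch using the key-based form from earlier in the article; the exact flags vary between builds, so check dladm(1M):
# dladm show-aggr -s -i 5 1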
The type of trunking compatible with dladm is typically only found on high-end managed switches (Cisco, etc.). I would suspect that in the absence of this support, Solaris is falling back to IP multi-pathing (IPMP), or something similar, which doesn’t require any support on the switch. IPMP will still provide redundancy, and it will load-balance outgoing packets across the two NICs, but incoming packets will all go to whichever NIC Solaris considers to be active, since the switch can only have one port per MAC in its MAC address table, and other hosts can only have one MAC per IP address in their ARP tables. When you pull the cable on the active NIC, Solaris sends gratuitous ARP packets forcing hosts on its network to update their ARP tables with the MAC of the other NIC. Before Solaris 10, IPMP was the only type of failover available without purchasing additional software. Hope this helps.
@Jamie: Thanks a lot for the info — it helps explain why I got redundancy to work, but not proper trunking. When you say high-end, I assume you mean that switches that support trunking (i.e. IEEE 802.3ad) are typically quite expensive.
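For anyone curious what Jamie’s IPMP alternative looks like, a minimal link-based IPMP setup on Solaris 10/SXCE is roughly the sketch below, and it needs nothing from the switch. The group name is made up, and whether you want link-based or probe-based failure detection (with test addresses) is worth reading up on first. /etc/hostname.nge0 would contain:
192.168.0.7 netmask + broadcast + group homefs_ipmp up
and /etc/hostname.nge1 would contain:
group homefs_ipmp up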
I just stumbled upon your blog whilst WWILFing around the ZFS/nas arena. I’ve been putting together a similar nas box, based on the new Chenbro NAS case.
I too have two Gb ethernet ports and have looked at aggregating them. The switch I’m considering is the Netgear GS108T, which is a smart switch and claims to support link aggregation. For the prosumer it looks to be at the correct price point – unless you can get hold of a Cisco at a good price, that is!
@Andy: Thanks for the tip — I’ll check out this Netgear GS108T switch.
Also, around the same kind of price, I’ve seen a Linksys (now Cisco) switch: the SLM2008, which also appears to support IEEE 802.3ad for Link Aggregation. I don’t know how these 2 switches compare with each other, but I might do some reading.
I see also there is a Linksys SRW2008 switch, which seems about twice the price.
If I really want to check out Link Aggregation then I’ll need to get something like one of these switches that supports IEEE 802.3ad.
For now, I’ve destroyed the aggregated link and returned to NWAM with a single Gigabit ethernet link.
Now I’ve seen aggregation though, I am tempted to get one of those IEEE 802.3ad compliant switches as they’re quite inexpensive for a prosumer model like the first two mentioned above — something like $100 / €100.
Take into consideration that flow control is decisive for port trunking. I have had some bad experiences with switches that don’t support flow control (or don’t do it well), where trunking results in a performance decrease. The server sends too fast (2Gbps) compared to the 1Gbps receiving link. If that happens and there is no good flow control, the switch will start to drop packets, and performance quickly becomes dog-slow.
Hi bisho,
Thanks for the info — I think I saw the other day that one of the links on the Mac or Solaris box didn’t appear to have flow control enabled, so I will take a closer look at that. Cheers.
The GS108T does indeed support trunking, I use it myself in the office to dual-trunk to each of 3 computers, and a dual-trunk uplink to another GS108T. Works great, very fast, although currently I’m using Ubuntu Linux for most of the computers.
Thanks Kamilion, good to know. I got a Linksys SRW2008 and got dual trunks between the Mac Pro and the ZFS fileserver working — nice and fast too. But it needs Internet Explorer to use the web interface, which is less than ideal if you’re using UNIX; I used a virtualised Windows to run IE to set up the switch. If I get a different switch one day, I think I’d get an HP ProCurve 1800-8G, as user reports are very good.
I’m considering a Dell PowerConnect 2716 switch, it’s supposed to do 802.3ad.
Hi Simon.
Thanks for documenting your efforts. I’m just collecting hardware now, but have made almost the same choices as you. …Though I’m planning to install the OS on flash, and use magnetic media for storage only.
About your trunking efforts:
I’m not surprised that having link aggregation at one end (solaris), but not the other end (switch) of your link appears to work. Each transmitted frame is an atomic operation on each end of the link: The sender does what he will, and the receiver tries to make sense of it. My bet is that Solaris was trunking, and that the MAC address table on your switch was being constantly thrashed.
When you get trunking fully supported, I don’t think you’re going to see benefits between a single pair of computers. The trunking mechanisms go to great lengths to avoid mis-ordering frames within a flow. A flow is usually defined by the 5-tuple: src_ip, dst_ip, protocol, src_port, dst_port… but the particulars are configurable. Mis-ordering is avoided by assigning a flow to a particular link, and making sure that all frames in the flow are sent over the same link.
Note that the link chosen by the switch doesn’t have to match the link chosen by the server because they balance the links independently.
When your fileserver is performing bulk transfers to or from multiple systems you might see a throughput benefit. If not, then the benefit is limited to link redundancy.
/chris
Thanks for your posts about setting up OS, especially this post regarding trunking. I’m an OS newbie, but your instructions helped us get our test server up and running very quickly. Cheers!
Hi Chris,
You’re welcome — it’s nice to be able to help others.
Using CIFS sharing, with one GbE link I got around 40 MBytes/sec, and with trunking enabled using two GbE links I’m getting around 80 MBytes/sec.
Hi Matt, great to hear this helped you get it all working!
Why not try copying a file to /dev/null on the Mac for a proper speed test that ignores the destination HD speed.
Hi Joshka, not a bad idea. I will give it a try.
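For the record, the kind of test Joshka is suggesting would look something like this on the Mac, reading a large file from the mounted share straight into /dev/null so the destination disk drops out of the equation (the share path and file name here are made up):
# time dd if=/Volumes/tank/some-large-file.iso of=/dev/null bs=1m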
I got this working with a Cisco 2970 switch and Intel e1000 nics on an AMD box.
First, I configured the switch:
Switch# configure terminal
Switch(config)# interface range gigabitethernet0/17 -18
Switch(config-if-range)# spanning-tree portfast
Switch(config-if-range)# switchport mode access
Switch(config-if-range)# switchport access vlan 10
Switch(config-if-range)# channel-group 5 mode active
Switch(config-if-range)# end
Then I configured Open Solaris snv_97
# svcadm disable /network/physical:nwam
# ifconfig e1000g0 unplumb
# ifconfig e1000g1 unplumb
# dladm create-aggr -d e1000g0 -d e1000g1 1
# ifconfig aggr1 plumb
# ifconfig aggr1 10.1.1.3 netmask 255.255.255.224 up
# svcadm enable /network/physical:default
# dladm show-dev
# dladm show-link
# dladm show-aggr
# ifconfig aggr1
Unfortunately, this didn’t work. On the Cisco, the ports were ORANGE and I got the following log messages:
“LACP currently not enabled on the remote port”
Ok. So the ports are set to ACTIVE on the Cisco side, but not on the Solaris side. So I had to modify the aggregate interface with this command:
# dladm modify-aggr -L active aggr1
Now it works. Of course, I learned the 802.3ad standard does not allow round robin’ing of packets. I need to figure out a way to load balance the two interfaces. The Cisco supports the following:
Switch(config)#port-channel load-balance ?
dst-ip Dst IP Addr
dst-mac Dst Mac Addr
src-dst-ip Src XOR Dst IP Addr
src-dst-mac Src XOR Dst Mac Addr
src-ip Src IP Addr
src-mac Src Mac Addr
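For what it’s worth, the Solaris side has an equivalent knob: the aggregation policy, which decides whether MAC addresses (L2), IP addresses (L3) or transport ports (L4) feed the hash. A sketch, reusing the aggregation created above (some builds expect the numeric key rather than the name, so check dladm(1M)):
# dladm modify-aggr -P L3,L4 aggr1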
I just wanted to clarify, although many of you may know this already, that AFAIK there are no switches that do per packet load balancing, but rather rely on src+dest ip/mac hashes for load balancing. In practice this means that you can’t get above 1Gbps for a single connection without 10GbE hardware, so don’t expect any single CIFS/NFS connection to get above ~120MB/s performance. If you have multiple clients accessing the server, then you can go beyond 1Gbps as long as the connections get mapped to different ports.
I’m currently using a Solaris server with 2 aggregated 1Gbps links with a switch that supports trunking (but not LACP), and I can push the network traffic on the server up to 195-205MB/s with 2 clients, and up to 120MB/s with a single client. The clients also have dual aggregated 1Gbps links.
You might get more mileage using iSCSI with two NICs. If you create multiple targets (one per NIC on the server) and then use a load balancing algorithm in your iSCSI initiator… Just thinking aloud. (I use this with VMware ESX…)
Good website. 🙂
Ok, there is a little bit of confusion here.
Before I begin I would like to say a bit about my background. I used to work for Netgear as a tier-two senior support engineer for their ProSafe product line, which includes their managed switches. After that I worked for Juniper Networks on their ERX support team. So I have a bit of background on this subject.
Let’s cover terminology for managed/smart switches.
1) A trunk port is a port between two switches; it does not have to be more than one port. This is where we get the word ‘trunk’, like a tree trunk, because switches form the trunk of a network and this is the port that the switches communicate over. Because switches are the trunk of a network, these trunk ports generally need more bandwidth, as the port will be aggregating all the inter-switch traffic. Sometimes vendors provide special high-bandwidth ports, such as GigE on a Fast Ethernet switch, or 10GigE on a Gigabit switch, or even proprietary backplane designs. However, sometimes they do not, and on a managed switch that supports 802.3ad you are able to LAG multiple ports together to increase inter-switch bandwidth. This is probably where the misconception of the term ‘trunk’ came from. This is important to understand, as on some switches trunk ports carry special command information that allows multiple managed switches to operate as one large switch matrix, and these trunk ports are not allowed for general switching.
2) The term for aggregated ports is a LAG, a Link Aggregation Group. This was true when I was supporting $6k switches at Netgear and was still true when I was supporting million-dollar routers at Juniper.
On to 802.3ad…
802.3ad does not define a load balancing method; that is up to the vendor.
At Netgear we had two classes of ‘non-dumb’ switches: smart switches and managed switches. The funny thing is that the smart switches, which include the above-mentioned GS108T (G is for Gigabit, S is for smart, 8 means there are 8 ports), are not all that smart. They are way dumbed-down managed switches with less hardware. They feature things like web-GUI-only operation, reduced feature sets, and reduced granularity for the features they do support. For instance, the smart switches use IP-based round-robin distribution of packets across the aggregated ports: once a port has been assigned to a given IP it will always get that IP until the aggregation table is flushed. All of Netgear’s switches whose names start with two letters, the second being an S, are smart switches.
The actual managed switches are the ones with three letters followed by four numbers. Examples would be GSM7324 and GSM7224. In this example these would be ‘G’ for Gigabit, S for Switch, M for Managed, the second digit is the layer (2 is a layer-2 switch, 3 is a layer-3 switch), and the last numbers are the port count. Aside from the greatly increased feature set, much improved granularity, the addition of CLI control and very low latencies, one of the managed switch’s claims to fame is its hash-based packet load balancing. (This is a proprietary design, as the 802.3ad spec does not specifically address load balancing.) As packets traverse the switch their frames are hashed and tracked in a hash table; this allows the switch to ensure in-sequence layer-two communication, while the higher layers (TCP/IP can handle out-of-order processing, as there is no guarantee that packets traversing large networks, think the Internet, are going to arrive in the same order they were sent) are then load-balanced across the ports based on Netgear’s proprietary algorithm. This allows two hosts, for instance a Mac Pro and a Solaris server, to achieve 2Gbps speeds. However, you do have to have 802.3ad configured on the Mac, the switch ports facing the Mac, the Solaris box, and the switch ports facing the Solaris box.
Concerning bandwidth over a network: yes, you can see full Gigabit speeds over the network, assuming the following.
1) Your switch needs to be able to actually handle it. Just because you have a “Gigabit switch” doesn’t mean that the switch fabric can handle Gigabit rates on all ports at the same time. If you have 8 ports and they are full duplex, then you have 1Gbps * 8 * 2 (1Gbps for each direction), which means the switch fabric needs to be able to handle 16Gbps! Think about a 24-port or a 48-port…
2) Your cable needs to be able to handle the speeds. Things that are going to affect this are impedance, signal noise and distance. Make sure you are running CAT6 cable. If you are, distance shouldn’t be much of an issue in home implementations. Be mindful of things that will generate noise (lights, electric motors, speakers, etc.) and don’t run your cable past these. If you have to cross an electrical cable, do it at a 90-degree angle.
3) You have to remember that there is protocol overhead to account for. Gigabit = 1000Mbps = 125 MBytes/sec before any overhead. SMB, AFP and iSCSI are all TCP protocols, which means there is a 3-way handshake (SYN, SYN-ACK, ACK), so there are a lot of overhead packets for each conversation. Moreover, there is overhead in each packet: you have the layer-two frame, the IP header, the TCP header, the SMB/AFP/iSCSI header, and then finally the data. This is why jumbo packets are recommended for large-data-throughput networks: they lower the header-to-data ratio. However, this doesn’t affect TCP’s inherent overhead or the fact that it prefers reliability over speed. For instance, if your network becomes saturated and an ACK or SYN-ACK does not arrive within the expected response time, TCP scales down its window to ease the network burden and increase the chances that its data will get through. Remember, TCP was created back in the 1970s, alongside UNIX and C, when networks weren’t very reliable.
4) IEEE 802.3ab (Gigabit over UTP) compatibility is not enough to deliver Gigabit rates; the hardware has to be able to keep up with line speed and the driver has to be able to keep up with the buffers. The chipset and the driver make a HUGE difference to what the speeds will be. I personally have had good luck with Intel, and I have seen many issues with integrated chipsets.
In my testing with my Mac Pro and my MacBook Pro over a D-Link DGS-2208 with CAT6 cable I have achieved sustained speeds of 87MBps. I did this by setting both of the Macs to full duplex with jumbo frames. Then I created a RAM disk on both machines and copied a 2GB disk image back and forth across the network from one machine’s ramdisk to the other’s. This isolated the network component of the equation. The protocol I used was AFP running on top of 10.6.2 on both computers.
LOL, I meant jumbo frames. It helps if you proofread before you post!!! :p Please forgive my typos. 🙂
-James
So James for a home user what would you recommend in the $300 range?
I found these pages a good supplement to what you describe here:
http://blog.allanglesit.com/2011/03/solaris-11-network-configuration-basics/
http://blog.allanglesit.com/2011/03/solaris-11-network-configuration-advanced/
Cheers
Otto
Very nice tutorials! Very handy. BUT if you want persistent networking in Solaris 11, the configuration files in /etc won’t do the job anymore. Very nice Oracle documentation is available at http://docs.oracle.com/cd/E23824_01/html/E24456/gliyc.html#scrolltoc if you want to do a persistent manual config.
I have a few questions. I’m a newbie & all the terminology thrown around here has me a little dazed. I do have an M2N-SLI Deluxe mobo that I am planning on turning into a FreeNAS box (8.3.0). Will this switch – Cisco SLM2008T-NA – combined with the two gigabit ports on the Asus mobo be able to produce higher bandwidth than just one port alone? If that’s the wrong switch, please suggest one.
thanks.
This isn’t trunking though, it’s just port aggregation; trunking is when you run some kind of encapsulated VLANs (ISL or 802.1Q) down the link. The two are often combined, but they are two different things.