Ole,
Here are some notes from my last attempt to make this work on my servers.
I never did get it to work. Bottom line: my 3-year-old Dell 2300 server
was not compatible. The newer models are.
From my research I would strongly suggest a Cisco switch paired with
Intel NICs.
Hopefully these notes will give you some idea of what is involved. Watch
out for the issue of dual bridging chips on the same PCI bus. This is
specifically what KO'd my attempt as my Dell server PCI chipset would not
support dual bridging chips.
You will have to use NETMON and set up some serious testing to decide whether
this idea is actually working. Using 2 NIC cards with the best settings
from Intel yielded LOWER transfer rates than just using 1 card at 100 Mbps
full duplex. Vantage logon times went from 10-15 seconds to over a minute
when I first tried this. The best optimizing brought it down to 30 seconds.
In theory this should work just fine. In reality you need a server that can
handle it.
I did not run across any NT issues with this equipment. It was strictly an
issue between the PCI BUS, the NIC cards, and the switch. Sounds easy - it
wasn't.
Todd Anderson
----------------------------------------------------------------------------
Intel PRO/100+ Server NICs - Setup Instructions - As Of: 03-21-01
Reference:
Intel Pro/100+ Dual Port Server Adapter (or single port cards)
Intel Part # - 714303
Note: The NIC address is on a white label on the card and refers to the
primary 'A' port - the port nearest the PCI connector. The second port's
address is always one more than the primary's. Hence - NIC Address -
Port 'A' - 0003473BF5F6 - Port 'B' - 0003473BF5F7
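A minimal sketch in Python (purely illustrative - not anything Intel ships)
of that "plus one" rule, using the example address above:

def port_b_mac(port_a_mac: str) -> str:
    """Return the second port's MAC, given the primary port's MAC."""
    value = int(port_a_mac, 16) + 1      # treat the 12 hex digits as one number
    return format(value, "012X")         # back to a 12-digit hex string

print(port_b_mac("0003473BF5F6"))        # -> 0003473BF5F7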
To Install Drivers:
* Un-Install current NIC drivers
* Install New Drivers ( Download current drivers from web before
starting - currently V 5.0 )
o Click on Add Adapter
o Point to C:\INTEL for the drivers ( this is just where I exploded
the drivers downloaded from the web ).
o Take defaults
o On the Advanced Tab - Set "Adaptive Technology" to "ON" for both
ports.
o When done you will have to enter TCP/IP addresses for BOTH ports
- this is TEMPORARY!!!
o Re-boot the PC for the new drivers to take effect
* After Re-Booting
o Go to Control Panel / "Intel(R) PROSet II"
o Right-click on adapter #1 - Create Team - as "Fast EtherChannel/Link
Aggregation Mode"
o Right-click on adapter #2 - Add to Team
o When done you will have to enter ONE TCP/IP address for the new
"Virtual" adapter.
o Re-Boot to make active
o See detailed notes below for specific settings on the 'Advanced' tabs
* Link to the Cisco switch and create a "Group" using ports 11 and 12 (or
whichever ports you choose). Turn OFF spanning tree for the two ports. Don't
forget to save the changes from the System tab.
Notes:
* When you reboot the server you will see the ports on the Cisco switch go
green on startup, then change to orange about the same time you are signing
on, and then back to green about 20 seconds later. The first sequence is the
switch acknowledging the ports as being active. When they go orange, the
Intel team software is communicating with the switch to "port bind" the two
lines into one virtual link. When the ports go back to green, the team has
been established.
* For the Intel card, "teaming" is only relevant for routable protocols such
as IPX and TCP/IP. DLC and SNA traffic is only valid on the primary address.
So if you are setting up SNA on the AS/400 you need to link to the Ethernet
address of port 'A', and that port will carry all of the SNA traffic.
* One card = 2 ports = 2 x 100 Mbps x full duplex (2) = 400 Mbps of
theoretical bandwidth (a quick worked check appears after these notes).
* You can bind more than one card - the limit being the available ports on
the switch. Note, however, that the 2924XL Cisco switches only support four
port groups in total.
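A quick back-of-the-envelope check of the bandwidth figure above (Python,
illustrative only - this is the theoretical ceiling, not what you will
measure, as noted at the top of these notes):

ports_per_card  = 2      # dual-port PRO/100+ card
link_speed_mbps = 100    # each port forced to 100 Mbps
duplex_factor   = 2      # full duplex: transmit and receive at the same time

aggregate_mbps = ports_per_card * link_speed_mbps * duplex_factor
print(aggregate_mbps)    # -> 400 (theoretical ceiling only)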
Relevant Intel Documentation From Help Screens:
Note: I copied most of this from the help screens to this document for
readability. All comments that begin with "JR-NOTE:" are comments that I
added based on a conversation with an Intel engineer from second level
support.
* Intel Link Aggregation or Cisco Fast EtherChannel* (FEC) -- creates a team
of two to eight adapters to increase transmission and reception throughput.
Also includes adapter fault tolerance and load balancing. Requires a switch
with Link Aggregation or FEC capability. A team of 2-8 10/100 adapters which
simultaneously receive and transmit. Includes fault tolerance and load
balancing. You must match speed/duplex settings on all team members. Requires
a switch that supports Intel Link Aggregation or Cisco's FEC. Spanning Tree
Protocol must be turned off. Must match the switch's aggregation
requirements.
* Advanced Settings For "Teams" ( settings refer to team - not
individual cards )
o Check Time ( seconds )
? How often Fault Tolerance checks the status of the adapters in a
Fault Tolerance team.
? Default: 1 second
? Range: 1 - 3 seconds
? JR-Note: This should be set to 1 second. No advantage in setting
it to 3. (Currently set to 1)
o Load Balance Refresh Rate -- Set to 1
? Amount of time ALB waits before resetting or refreshing the
current load across the adapters in the load balancing team. Keep this
setting at the default or as low as possible for optimum performance.
? Default: 10 Check Time units
? Range: 1-50 Check Time units
? The number of seconds you specify for the Check Time setting is used as
the measure for each Check Time unit here. See the Check Time setting,
above, for more details.
? JR-Note: Set this to the lowest number possible. If CPU performance takes
a hit, then bump the number higher. (Currently set to 1; a quick worked
example of the timing and pool settings follows this section.)
o Locally Administered Address
? Allows you to override the MAC address that is automatically
specified for the adapter team. To do so, enter a different MAC address in
the Locally Administered Address field. This option is provided for those
who use the Windows NT Load Balancing Service (WLBS) or other drivers of its
class. In all other cases, this field is to be left blank.
? To enter a new network address, type a 12 digit hexadecimal number
in this box.
? The address entered should be in the range of: 0000 0000 0001 -
FFFF FFFF FFFD.
? Exceptions:
? Do not use a multicast address (LSB of the high byte = 1). For
example, in the address 0Y123456789A, "Y" cannot be an odd number (must be
2, 6, A, or E).
? Do not use all zeros or all F's.
? NOTE: To revert back to the MAC address that is automatically
specified for the adapter team, remove any entry in the Locally Administered
Address field.
? JR-Note: Leave blank unless you need to override. (Currently set
to blank)
o NumRxPackets
? Specifies the number of NDIS receive packets that the ANS driver
allocates for its receive pool, per ANS virtual adapter. If left at the
default (32) ANS automatically adapts to network traffic load. If this
setting is too low, packets may be dropped. If set too high, memory may be
wasted. When adapter is used in a team, it is recommended this value be set
to 150. In other cases, by modifying this value you could degrade
performance on your system. Use at your own risk.
? Default: 32
? Range: 32 - 512, step 16
? JR-Note: Here's the math: 512 x 1,600 bytes/packet = 819K ---
Not much of a memory waste in a system with 512 MEG of RAM. (Currently set
to 512)
o NumTxPackets
? Specifies the number of NDIS transmit packets that the ANS driver
will allocate for its transmit pool, per ANS virtual adapter. If left at
the default (32) ANS automatically adapts to network traffic load. If this
setting is too low, packets may be dropped. If set too high, memory may be
wasted. By modifying this value, you could degrade performance on your
system. Use at your own risk.
? Default: "Default" (Note: "Default" refers to the hardware
default. You can override the hardware default by changing the setting,
using the range listed below.)
? Default: 32
? Range: 16 - 512, step 16
? JR-Note: Here's the math: 512 x 1,600 bytes/packet = 819K --- not much of
a memory waste in a system with 512 MEG of RAM. (Currently set to 512; the
arithmetic for these pool settings is sketched after this section.)
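A small sketch (Python, illustrative only) of how the team-level numbers
above combine. The 1,600 bytes/packet figure is the one quoted in the
JR-Notes; treat the results as rough estimates, not exact driver memory
usage:

check_time_seconds = 1   # "Check Time" setting
refresh_rate_units = 1   # "Load Balance Refresh Rate", in Check Time units
refresh_interval_s = refresh_rate_units * check_time_seconds
print(refresh_interval_s)            # -> 1 second between load-balance refreshes

bytes_per_packet = 1600              # figure quoted by the Intel engineer
for name, packets in [("NumRxPackets", 512), ("NumTxPackets", 512)]:
    total = packets * bytes_per_packet
    print(name, f"{total:,} bytes")  # -> 819,200 bytes (~0.8 MB) per pool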
* Advanced Settings For Individual Cards: ( update both cards !!! )
o Adaptive Inter-Frame Spacing -- Set to 0
? This is a performance setting that compensates for excessive
Ethernet packet collisions on your network. The default setting works best
for most computers and networks by dynamically adapting to the network
traffic conditions. However, in some rare cases you may obtain better
performance by manually setting the spacing value. Setting a value forces a
static gap between packets.
? Increasing the value increases the delay between frames being
transmitted.
? Default: 0
? Range: 0 - 255
? JR-Note: Leave at ZERO when doing port binding using the CISCO
switch. (Currently set to 0)
o Adaptive Performance Tuning
? Sets the number of frames the adapter receives before triggering an
interrupt. Under normal operation, the adapter generates an interrupt every
time a frame is received. Reducing the number of interrupts improves CPU
utilization.
? Move the slider to Max Adapter Bandwidth to generate one interrupt
per frame. This increases adapter bandwidth, but may increase CPU
utilization, slowing your computer. Move the slider toward Max CPU
Utilization to increase the number of frames the adapter receives before
generating an interrupt. This improves CPU utilization, but may reduce
adapter bandwidth. The default setting works well for most computers.
? JR-Note: Set this number as low as you can without hurting CPU
utilization. This is a notched, sliding scale with 16 increments between
full adapter bandwidth on the far left and absolute minimum CPU usage on the
right. Or - trash the CPU performance on the left end and no impact on CPU
performance on the right end. (Currently set to # 9 from left - or middle
of the road)
o Adaptive Technology
? Default = Off
? This parameter either enables or disables the Adaptive Technology
performance enhancement feature. To enable the feature, click ON. To disable
the feature, click OFF. To adjust performance against CPU utilization, turn
this parameter ON.
? JR-Note: Turning this ON allows for Adaptive Performance Tuning.
(Currently set to ON)
o Adaptive Transmit Threshold
? Recommended value = 12 (Recommended value = 16 in Windows NT*)
? Specifies the number of bytes before the PCI adapter empties its
internal transmit FIFO onto the wire. The value is multiplied by 8 to
produce the number of bytes.
? For example, if Transmit Threshold = 200, the number of bytes is
1600. This is greater than the maximum packet size for Ethernet.
Consequently, the adapter won't attempt early transmits. Although this is
the safest setting, the best performance is achieved when the threshold
parameter is as low as possible (without producing underruns).
? To experiment, set the parameter to 16 and then incrementally
increase it if performance drops significantly.
? NOTE: Don't set the transmit threshold parameter below 200 for
computers with multiple busmastering cards, or computers with otherwise high
latency.
? JR-Note: Since a RAID controller card and this Intel card BOTH count as
busmastering cards, this setting should be set to 200. The translation of
this setting is whether you WANT your Ethernet packets sent as COMPLETE
packets or broken up into smaller packets. Having small packets can
adversely affect the performance of the PCI bus. (Currently set to 200; the
threshold-to-bytes arithmetic is sketched after this section.)
o Coalesce Buffers -- Set to 8
? Recommended value = 8
? Specifies the number of memory buffers available to the driver in
case the driver runs out of available Map Registers. This buffer area is
also used when a packet consists of many fragments.
? If no coalesce buffers or map registers are available, the driver
will be forced to queue the packet for later transmission. The preferred
method of transmitting data is to use map registers since it's the most
efficient method.
? Coalesce buffers range: 1-32
? JR-Note: Not relevant to teaming/port-binding. (Currently set to
8)
o Link Speed & Duplex -- Force to 100 Full Duplex to match the Cisco switch
? Recommended setting = Auto Detect (default)
? This parameter lets the PRO/100 S adapter know what speed to use on
the Ethernet wire, and how to send/receive packets, either full or half
duplex.
? Options available include:
* Auto Detect - The adapter detects whether its environment can
support 100 Mbps speed (and uses 100 Mbps if possible), and negotiates with
the switch on how to send/receive packets (either full or half duplex).
* NOTE: You must have an auto-negotiating switch to get full duplex
support with the Link Speed & Duplex option set to Auto Detect.
* 10Mbps/Half Duplex - The adapter uses 10 Mbps speed and performs
one operation at a time. It either sends or receives.
* 10Mbps/Full Duplex - The adapter uses 10 Mbps speed and sends and
receives packets at the same time. This improves the performance of your
adapter. Select this mode only if you have a full duplex switch.
* 100Mbps/Half Duplex - The adapter uses 100 Mbps speed and performs
one operation at a time. It either sends or receives.
* 100Mbps/Full Duplex - The adapter uses 100 Mbps speed and sends and
receives packets at the same time. This improves the performance of your
adapter. Select this mode only if you have a full duplex switch.
? JR-Note: Setting this to AUTO DETECT yielded a half-duplex/100Mbps
connection. Force this to 100Mbps/Full-Duplex or performance will be
drastically reduced. (Currently set to 100Mbps/Full-Duplex)
o Locally Administered Address
? You can optionally override the factory default network address of
the adapter. To enter a new network address, type a 12 digit hexadecimal
number in this box.
? The address entered should be in the range of: 0000 0000 0001 -
FFFF FFFF FFFD.
? Exceptions:
? Do not use a multicast address (LSB of the high byte = 1). For
example, in the address 0Y123456789A, "Y" cannot be an odd number (must be
2, 6, A, or E).
? Do not use all zeros or all F's.
? JR-Note: Leave this alone. (The address-validity rules are sketched in
code after this section.)
o PCI Bus Efficiency
? When enabled, causes all transmit packets to be coalesced into a
single buffer before being sent to the network card. Because the entire
frame requires only one PCI transaction, the PCI bus is more efficient but
transmit time is slightly longer.
? When disabled, the packets are not coalesced, and each packet
requires several PCI transactions. The PCI Bus is less efficient, but
transmit time is faster.
? JR-Note: This is directly related to Adaptive Transmit Threshold. Where
the Transmit Threshold refers to the size of the packets being sent "out" on
the network wire, this refers to the size of the packets being transmitted
over the PCI bus. (Currently set to ENABLE)
o QoS Packet Tagging
? Enables or disables IEEE 802.1p/802.1Q tagging for the priority
filters you have set up (via the Priority Packet utility) to send network
traffic with different priority levels. You must set this option to
'Enabled' in order for Priority Packet filters to function properly.
? If you have assigned filters using 802.1p/Q tagging but this
setting is 'Disabled', the corresponding packets will still be prioritized,
using Intel's High Priority Queue (HPQ).
? JR-Note: Not relevant as most switches, routers, etc do not yet
support this standard. (Currently set to DISABLE)
o Receive Buffers
? Default value = 32
? Specifies the number of buffers used by the driver when copying
data to the protocol memory.
? In high network load situations, increasing receive buffers can
increase performance. The tradeoff is that this also increases the amount of
system memory used by the driver. When adapter is used in a team, it is
recommended this value be set to 150. In other cases, by modifying this
value you could degrade performance on your system. Use at your own risk.
? Receive Buffers range: 1-1024
? JR-Note: Basic math: 1,024 x 1,600 bytes/packet = 1,638 K --- Not
much when you've got 512 Meg of RAM to play with. (Currently set to 1024)
o Respond to Flow Control
? When enabled, causes the adapter to pause the transmission of
packets according to the time specified in a received full duplex flow
control frame.
? Default: Disabled
? JR-Note: Not relevant given the CISCO switches. (Currently set to
DISABLE)
o Transmit Control Blocks
? Default value = 64
? Specifies how many transmit control blocks the driver allocates for
adapter use. This directly corresponds to how many outstanding packets the
driver can have in its "send" queue.
? If too few transmit control blocks are used, performance will
suffer. If too many transmit control blocks are used, the driver will
unnecessarily consume memory resources.
? Transmit Control Blocks range: 8 - 64
? JR-Note: Basic math: 64 x 1,600 bytes/packet = 102K. Ditto above
comments about memory usage. (Currently set to 64)
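A short sketch (Python, illustrative only - this is not Intel's driver
logic) of two of the per-adapter rules described above: the
transmit-threshold-to-bytes conversion and the locally-administered-address
restrictions:

MAX_ETHERNET_FRAME = 1518              # standard (untagged) maximum frame size

def threshold_bytes(setting: int) -> int:
    # The Adaptive Transmit Threshold value is multiplied by 8 to get bytes.
    return setting * 8

# At 200 the threshold (1,600 bytes) exceeds a maximum frame, so the adapter
# never starts transmitting a frame before it is fully buffered.
print(threshold_bytes(200), threshold_bytes(200) > MAX_ETHERNET_FRAME)  # 1600 True

def laa_is_acceptable(addr: str) -> bool:
    # Locally Administered Address rules from the help text above:
    # 12 hex digits in 000000000001..FFFFFFFFFFFD (which also rules out all
    # zeros and all F's), not multicast (low bit of the first octet clear),
    # and with the second hex digit 2, 6, A or E - which additionally sets
    # the "locally administered" bit of the first octet.
    if len(addr) != 12:
        return False
    try:
        value = int(addr, 16)
    except ValueError:
        return False
    if not (0x000000000001 <= value <= 0xFFFFFFFFFFFD):
        return False
    first_octet = int(addr[:2], 16)
    return (first_octet & 0x01) == 0 and (first_octet & 0x02) != 0

print(laa_is_acceptable("02123456789A"))  # True  ("Y" = 2 in the example above)
print(laa_is_acceptable("01123456789A"))  # False (multicast bit set)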
NOTE: Forcing all parameters to their max settings for RAM usage adds about
3.4 MB of memory usage. (This was according to the Intel engineer. In the
real world it appeared to consume a lot more memory. I suggest you play with
the settings and watch the results. Memory consumption may not be immediate
and may increase as the card adapts to real-world traffic.)
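A rough cross-check of that "about 3.4 MB" figure, using the same ~1,600
bytes/packet estimate from the JR-Notes (Python, illustrative only):

bytes_per_packet = 1600
max_settings = {
    "NumRxPackets":            512,
    "NumTxPackets":            512,
    "Receive Buffers":         1024,
    "Transmit Control Blocks": 64,
}
total = sum(count * bytes_per_packet for count in max_settings.values())
print(f"{total:,} bytes (~{total / 1e6:.1f} MB)")  # -> 3,379,200 bytes (~3.4 MB)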
NOTE: Setting "Priorities" on the various team members has no effect when
TEAMING as a FAST-ETHERNET channel. If this was strictly for failsafe
teaming then the priority code basically states which card is the default
and which is the backup.
NOTE: For an SNA connection you must specify in the AS/400 the specific NIC
address of the team. I was expecting it to be the MAC address of the primary
port --- WRONG. Start PROSet and click on the team --- it will show you the
MAC address of the "Team". In my case it was the primary PLUS one. I have my
suspicions that in some cases the primary might show up.
NOTE: In cases where you have a RAID controller on your PCI bus, Intel
recommends that you use -2- PRO/100+ server cards instead of -1- PRO/100+
dual-port card. It seems RAID controllers and dual-port cards both use a
'bridging' chip to allow two physical devices to be on one card, and the
presence of more than one card that uses a bridging chip on a PCI bus can
yield unpredictable results such as horrible performance, packets being
dropped, the color of the sky changing from blue to green, etc. (Wish I had
known this before purchasing the cards! Oh well, in some cases there are no
problems. Let's wait and see.)
----------------------------------------------------------------------------
-----Original Message-----
From: ake@... [mailto:ake@...]
Sent: Tuesday, August 28, 2001 3:43 PM
To: vantage@yahoogroups.com
Subject: [Vantage] Multiple NICs w/NT Server 4?
Hello Vantage gurus!
I have just ordered a new 10/100 switch for our network, and since
I'm speeding up the traffic with a switch I also considered adding a
second NIC to our NT 4 server to "widen the pipe" by allowing a
second (or third?) path from the switch to the server.
I was wondering if anyone has done this, or has enough experience
with NT Server 4 to judge whether it's possible. I know that it works
with Linux, but are there any problems with NT...
TIA for any help!
Ole Latham
Akron Equipment Co.
Akron, Ohio