
Checking traffic by bonding mode

This post summarizes the actual traffic figures measured for each bonding mode; modes 0, 5 and 6 were compared.
The test was run with 2 x 1G NICs to confirm the real maximum transmit/receive throughput and whether each mode actually behaves as described.
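For reference, a minimal bonding setup matching this test bed might look like the configuration below. This is only a sketch: the Ubuntu/ifenslave-style /etc/network/interfaces layout, the interface names and the address are assumptions, not the exact configuration used for the test.

# /etc/network/interfaces (sketch; assumes the ifenslave package is installed)
auto eth0
iface eth0 inet manual
    bond-master bond0

auto eth1
iface eth1 inet manual
    bond-master bond0

auto bond0
iface bond0 inet static
    address 192.168.0.10        # assumed address for illustration
    netmask 255.255.255.0
    bond-mode 6                 # switched between 0, 5 and 6 for each run
    bond-miimon 100
    bond-updelay 200
    bond-downdelay 200
    bond-slaves none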

 

Bonding Mode 
 
Mode descriptions quoted from http://www.unixmen.com/linux-basics-create-network-bonding-on-ubuntu-14-10/

mode=0 (balance-rr)
:: Sends each packet on the next slave in turn; fault tolerance
Round-robin policy: It is the default mode. It transmits packets in sequential order from the first available slave through the last. This mode provides load balancing and fault tolerance.

mode=1 (active-backup)
:: Fails over to the backup device when the active interface has a problem; fault tolerance
Active-backup policy: In this mode, only one slave in the bond is active. The other one will become active, only when the active slave fails. The bond’s MAC address is externally visible on only one port (network adapter) to avoid confusing the switch. This mode provides fault tolerance.

mode=2 (balance-xor)
:: Distributes packets using an XOR of the source and destination MAC addresses; fault tolerance
XOR policy: Transmit based on [(source MAC address XOR’d with destination MAC address) modulo slave count]. This selects the same slave for each destination MAC address. This mode provides load balancing and fault tolerance.
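As a quick worked example of the hash above (simplified to the last MAC byte, with made-up values; with two slaves the result is the slave index):

# balance-xor slave selection, simplified to the last byte of each MAC
src=0x40    # e.g. last byte of the source MAC
dst=0x1b    # last byte of a destination MAC (made-up value)
echo $(( (src ^ dst) % 2 ))    # -> 1 here; the same destination always maps to the same slave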

mode=3 (broadcast)
:: Copies each packet and transmits the same packet on every slave device
Broadcast policy: transmits everything on all slave interfaces. This mode provides fault tolerance.

mode=4 (802.3ad)
:: Uses the IEEE 802.3ad protocol to dynamically build an aggregation group with the switch
IEEE 802.3ad Dynamic link aggregation. Creates aggregation groups that share the same speed and duplex settings. Utilizes all slaves in the active aggregator according to the 802.3ad specification.
Prerequisites:
– Ethtool support in the base drivers for retrieving the speed and duplex of each slave.
– A switch that supports IEEE 802.3ad Dynamic link aggregation. Most switches will require some type of configuration to enable 802.3ad mode.

mode=5 (balance-tlb)
:: Transmits packets on whichever slave is least loaded; reception is pinned to one designated slave
Adaptive transmit load balancing: channel bonding that does not require any special switch support. The outgoing traffic is distributed according to the current load (computed relative to the speed) on each slave. Incoming traffic is received by the current slave. If the receiving slave fails, another slave takes over the MAC address of the failed receiving slave.
Prerequisite:
– Ethtool support in the base drivers for retrieving the speed of each slave.

mode=6 (balance-alb)
:: Picks the least-loaded slave for both transmit and receive
Adaptive load balancing: includes balance-tlb plus receive load balancing (rlb) for IPV4 traffic, and does not require any special switch support. The receive load balancing is achieved by ARP negotiation. The bonding driver intercepts the ARP Replies sent by the local system on their way out and overwrites the source hardware address with the unique hardware address of one of the slaves in the bond such that different peers use different hardware addresses for the server.
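Below is one way the mode under test can be pinned and then verified; the module-option file name is an assumption, but the bonding parameters themselves (mode, miimon, updelay, downdelay) are standard module options.

# /etc/modprobe.d/bonding.conf (assumed file name) -- pin the mode for the next run
options bonding mode=6 miimon=100 updelay=200 downdelay=200

# verify which mode the running bond actually uses
cat /sys/class/net/bond0/bonding/mode    # e.g. "balance-alb 6"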

 

bonding mode 0
# ethtool bond0
Settings for bond0:
Supported ports: [ ]
Supported link modes: Not reported
Supported pause frame use: No
Supports auto-negotiation: No
Advertised link modes: Not reported
Advertised pause frame use: No
Advertised auto-negotiation: No
Speed: 2000Mb/s
Duplex: Full
Port: Other
PHYAD: 0
Transceiver: internal
Auto-negotiation: off
Link detected: yes

# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 200
Down Delay (ms): 200

Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 0c:c4:7a:72:4a:40
Slave queue ID: 0

Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 0c:c4:7a:72:4a:41
Slave queue ID: 0

 

bonding mode 5
# ethtool bond0
Settings for bond0:
Supported ports: [ ]
Supported link modes: Not reported
Supported pause frame use: No
Supports auto-negotiation: No
Advertised link modes: Not reported
Advertised pause frame use: No
Advertised auto-negotiation: No
Speed: 2000Mb/s
Duplex: Full
Port: Other
PHYAD: 0
Transceiver: internal
Auto-negotiation: off
Link detected: yes

# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
Bonding Mode: transmit load balancing
Primary Slave: None
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 200
Down Delay (ms): 200

Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 0c:c4:7a:72:4a:40
Slave queue ID: 0

Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 0c:c4:7a:72:4a:41
Slave queue ID: 0

 

bonding mode 6
# ethtool bond0
Settings for bond0:
Supported ports: [ ]
Supported link modes: Not reported
Supported pause frame use: No
Supports auto-negotiation: No
Advertised link modes: Not reported
Advertised pause frame use: No
Advertised auto-negotiation: No
Speed: 2000Mb/s
Duplex: Full
Port: Other
PHYAD: 0
Transceiver: internal
Auto-negotiation: off
Link detected: yes

# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
Bonding Mode: adaptive load balancing
Primary Slave: None
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 200
Down Delay (ms): 200

Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 0c:c4:7a:72:4a:40
Slave queue ID: 0

Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 0c:c4:7a:72:4a:41
Slave queue ID: 0
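While the traffic tests below are running, the per-slave distribution can be cross-checked with standard counters; a small sketch using the interface names from the outputs above:

# live per-interface throughput, refreshed every second (sysstat package)
sar -n DEV 1 | egrep 'bond0|eth0|eth1'

# or a one-shot look at the RX/TX byte counters of each slave
ip -s link show eth0
ip -s link show eth1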

 

Transmit/receive traffic by bonding mode
(Figure: traffic graphs for each bonding mode; panels ① through ⑧ are referenced below)

① bond mode = 5, multi client
receive traffic (upload): 1G
⑧ bond mode = 5, multi client
transmit traffic (download)
==>> receive 1G, transmit 2G

② bond mode = 6, multi client
receive traffic (upload): 2G
③ bond mode = 6, single client
receive traffic (upload): 1G
⑥ bond mode = 6
⑦ bond mode = 6
==>> Traffic is sent and received on whichever slave is less loaded, but because the balancing is per connection (session-oriented), the full 2G was only reached when testing from several clients at once.

④ bond mode = 0, multi client
receive traffic (upload): 1.4~1.5G
⑤ bond mode = 0, single client
receive traffic (upload): 1.4~1.5G
==>> Under the same conditions, both transmit and receive throughput were lower than with mode 6.

To squeeze the most performance out of a basic setup, go with mode 6… between bond 0 and bond 6, choose flexibly according to the service environment…
※ bond 6 > bond 5 >= bond 0
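For reference, one way to reproduce a test like this is to run iperf3 from several clients at the same time. This is just a sketch: the host name and stream counts are placeholders, and it is not necessarily the tool used for the numbers above.

# on the bonded server
iperf3 -s

# on each client: upload to the server (server-side receive traffic)
iperf3 -c bond-server -P 4 -t 60

# and download from the server (server-side transmit traffic)
iperf3 -c bond-server -P 4 -t 60 -R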
