
LPIC3 DIPLOMA Linux Clustering – LAB NOTES: Lesson Network Bonding

 

LAB on Network Bonding

 

These are my notes made during my lab practical as part of my LPIC3 Diploma course in Linux Clustering. They are in “rough format”, presented as they were written.

 

 

LPIC3 Syllabus for Network Bonding

 

364.4 Network High Availability
Weight: 5
Description: Candidates should be able to configure redundant networking connections and manage VLANs. Furthermore, candidates should have a basic understanding of BGP.
Key Knowledge Areas:
• Understand and configure bonding network interface
• Network bond modes and algorithms (active-backup, balance-tlb, balance-alb, 802.3ad, balance-rr, balance-xor, broadcast)
• Configure switch configuration for high availability, including RSTP
• Configure VLANs on regular and bonded network interfaces
• Persist bonding and VLAN configuration
• Understand the principle of autonomous systems and BGP to manage external redundant uplinks
• Awareness of traffic shaping and control capabilities of Linux
 

Partial list of the used files, terms and utilities:
• bonding.ko (including relevant module options)
• /etc/network/interfaces
• /etc/sysconfig/network-scripts/ifcfg-*
• /etc/systemd/network/*.network
• /etc/systemd/network/*.netdev
• nmcli
• /sys/class/net/bonding_masters
• /sys/class/net/bond*/bonding/miimon
• /sys/class/net/bond*/bonding/slaves
• ifenslave
• ip

 

Cluster Overview

 

The cluster comprises three nodes installed with CentOS 7, running as KVM virtual machines on an Ubuntu Linux host.

 

Network Card Bonding On CentOS

 

Ethernet network bonding, sometimes known as port trunking or link aggregation, is a connection method in which multiple network links are bound together to operate as a single link, effectively combining their bandwidth into one connection.

 

Linux uses a special kernel module named bonding to combine multiple network interfaces into a single logical interface. Most popular Linux distribution kernels ship with the bonding driver available as a module and the ifenslave user-level control program installed and ready for use. Bonding can provide redundant links, fault tolerance, load balancing, or increased effective bandwidth for a service connection.
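Before configuring anything, it is worth confirming that the bonding driver really is available for the running kernel. A minimal sketch using standard tools (the helper name check_bonding is made up for illustration):

```shell
#!/bin/sh
# Report whether the bonding driver is available for the running kernel.
# check_bonding is a hypothetical helper name, not a standard command.
check_bonding() {
  if modinfo -F version bonding >/dev/null 2>&1; then
    echo "bonding driver available, version $(modinfo -F version bonding)"
  else
    echo "bonding driver not found for kernel $(uname -r)"
  fi
}

check_bonding
```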

 

The two main reasons to use network interface bonding are:

 

1. To provide increased bandwidth
2. To provide redundancy in the event of hardware failure

 

 

Network bonding has different modes. You specify the mode for your bonding interface in its configuration file. On Debian-family systems this is done with a bond-mode line in /etc/network/interfaces, for example:

 

bond-mode active-backup

 

On Red Hat-family systems such as CentOS, it is done with the BONDING_OPTS directive in /etc/sysconfig/network-scripts/ifcfg-bond0:

 

BONDING_OPTS="mode=1 miimon=100"

 

 

After the channel bonding interface is created, the network interfaces to be bound together must be configured by adding the MASTER and SLAVE directives to their configuration files.

 

 

There are seven modes of bonding:

 

mode=0 (balance-rr)
mode=1 (active-backup)
mode=2 (balance-xor)
mode=3 (broadcast)
mode=4 (802.3ad)
mode=5 (balance-tlb)
mode=6 (balance-alb)

 

 

Types of Network Bonding

 

mode=0 (balance-rr)

 

This mode is based on the round-robin policy and is the default mode. It offers fault tolerance and load balancing. Packets are transmitted in round-robin fashion, from the first available slave through the last.

 

mode=1 (active-backup)

 

This mode is based on the active-backup policy. Only one slave in the bond is active; another slave becomes active only if the active slave fails. The bond's MAC address is externally visible on only one network adapter at a time, to avoid confusing the switch. This mode also provides fault tolerance.

 

mode=2 (balance-xor)

 

This mode sets an XOR (exclusive OR) policy: the source MAC address is XOR'd with the destination MAC address to select the transmitting slave, providing load balancing and fault tolerance. For each destination MAC address, the same slave is selected.

 

mode=3 (broadcast)

 

This mode is based on the broadcast policy: everything is transmitted on all slave interfaces. It provides fault tolerance and is used only for specific purposes.

 

mode=4 (802.3ad)

 

This mode is known as Dynamic Link Aggregation. It creates aggregation groups that share the same speed and duplex settings, and it requires a switch that supports IEEE 802.3ad dynamic link aggregation. The slave for outgoing traffic is selected by a transmit hashing method; this may be changed from the default XOR method via the xmit_hash_policy option.

 

mode=5 (balance-tlb)

 

This mode is called adaptive transmit load balancing. Outgoing traffic is distributed according to the current load on each slave, while incoming traffic is received by the current slave. If the receiving slave fails, another slave takes over the MAC address of the failed slave. This mode does not require any special switch support.

 

mode=6 (balance-alb)

 

This mode is called adaptive load balancing. It includes balance-tlb plus receive load balancing, which is achieved through ARP negotiation. It does not require any special switch support.
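As a quick mnemonic for the mode numbers above, the mapping can be sketched as a tiny shell helper (the function name bond_mode_name is made up for illustration):

```shell
#!/bin/sh
# Map a numeric bonding mode to its policy name, mirroring the list above.
bond_mode_name() {
  case "$1" in
    0) echo balance-rr ;;
    1) echo active-backup ;;
    2) echo balance-xor ;;
    3) echo broadcast ;;
    4) echo 802.3ad ;;
    5) echo balance-tlb ;;
    6) echo balance-alb ;;
    *) echo unknown ;;
  esac
}

bond_mode_name 1   # prints: active-backup
```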

 

 

Overview of the bonding mode codes deployed

 

Mode 0 (Round Robin): Packets are sequentially transmitted/received via each interface in turn. Fault tolerance: yes. Load balancing: yes.

 

Mode 1 (Active Backup): One interface is active while the others are asleep. If the active interface fails, another interface takes over. NOTE: Active Backup is only supported in x86 systems. Fault tolerance: yes. Load balancing: no.

 

Mode 2 (XOR, exclusive OR): The MAC address of the slave interface is matched to the incoming request's MAC address. Once the connection is made, the same interface is used to transmit/receive for that destination MAC. Fault tolerance: yes. Load balancing: yes.

 

Mode 3 (Broadcast): All transmissions are sent via all slaves. Fault tolerance: yes. Load balancing: no.

 

Mode 4 (Dynamic Link Aggregation): All interfaces are actively aggregated together as one bonded interface to deliver greater bandwidth, plus failover if an interface fails. Requires a switch that supports IEEE standard 802.3ad. Fault tolerance: yes. Load balancing: yes.

 

Mode 5 (Transmit Load Balancing, TLB): Outgoing network traffic is distributed according to the current load on each slave interface. Incoming network traffic is received by the current slave. If the receiving slave interface goes down, another slave takes over the MAC address of the failed slave. Fault tolerance: yes. Load balancing: yes.

 

Mode 6 (Adaptive Load Balancing, ALB): In contrast to Dynamic Link Aggregation, Adaptive Load Balancing does not require specific switch configuration. Incoming network traffic is load balanced through ARP negotiation. NOTE: Adaptive Load Balancing is only supported in x86 systems. Fault tolerance: yes. Load balancing: yes.
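The syllabus also lists nmcli. On a NetworkManager-managed host, an active-backup bond like the one built below could be created roughly as follows. This is a sketch only: the DRY_RUN variable and run wrapper are illustrative, older NetworkManager versions use "type bond-slave" instead of "type ethernet ... master", and the lab machines here disable NetworkManager (NM_CONTROLLED=no), so adapt before use.

```shell
#!/bin/sh
# Sketch: nmcli commands to build an active-backup bond from eth1 and eth2.
# With DRY_RUN=1 (the default) the commands are only printed, not executed.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }

run nmcli connection add type bond ifname bond0 con-name bond0 \
    bond.options "mode=active-backup,miimon=100" \
    ipv4.method manual ipv4.addresses 10.0.9.45/24 ipv4.gateway 10.0.9.1
run nmcli connection add type ethernet ifname eth1 master bond0
run nmcli connection add type ethernet ifname eth2 master bond0
run nmcli connection up bond0
```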

 

 

Network Bonding Configuration

 

 

Load the bonding module:

 

modprobe bonding

 

check with:

 

modinfo bonding

 

 

[root@ceph-mon network-scripts]# modinfo bonding
filename: /lib/modules/3.10.0-1160.24.1.el7.x86_64/kernel/drivers/net/bonding/bonding.ko.xz
author: Thomas Davis, tadavis@lbl.gov and many others
description: Ethernet Channel Bonding Driver, v3.7.1
version: 3.7.1
license: GPL
alias: rtnl-link-bond
retpoline: Y
rhelversion: 7.9
srcversion: 3B2F8F8533AEAE2EB01F706
depends:
intree:
.. … .. (long list of output)… ..

 

1) Create the bonding interface file (ifcfg-bond0) and set the IP address, netmask and gateway.

 

nano /etc/sysconfig/network-scripts/ifcfg-bond0

 

DEVICE=bond0
IPADDR=10.0.9.45
NETMASK=255.255.255.0
GATEWAY=10.0.9.1
TYPE=Bond
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=static

 

2) Edit the files of eth1 and eth2 and define the master and slave entries: 

 

nano /etc/sysconfig/network-scripts/ifcfg-eth1

 

 

DEVICE=eth1
HWADDR=52:54:00:d7:a5:b0
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no
MASTER=bond0
SLAVE=yes

 

 

nano /etc/sysconfig/network-scripts/ifcfg-eth2

 

DEVICE=eth2
HWADDR=52:54:00:87:8f:0b
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no
MASTER=bond0
SLAVE=yes

 

 

3) Create the bonding file bonding.conf

 

nano /etc/modprobe.d/bonding.conf

 

alias bond0 bonding
options bond0 mode=1 miimon=100

 

 

4) restart the networking service

 

systemctl restart network

 

ifup ifcfg-bond0

 

 

5) To check the bond interface:

 

ifconfig bond0

 

 

6) To verify the status of the bond interface:

 

cat /proc/net/bonding/bond0

 

 

[root@ceph-mon ~]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

 

Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 0
Up Delay (ms): 0
Down Delay (ms): 0

 

Slave Interface: eth1
MII Status: up
Speed: Unknown
Duplex: Unknown
Link Failure Count: 0
Permanent HW addr: 52:54:00:d7:a5:b0
Slave queue ID: 0

 

Slave Interface: eth2
MII Status: up
Speed: Unknown
Duplex: Unknown
Link Failure Count: 0
Permanent HW addr: 52:54:00:87:8f:0b
Slave queue ID: 0
[root@ceph-mon ~]#

 

We can see that our bonding mode is set to load balancing (round-robin or rr)

 

and that we are using eth1 and eth2 bonded together as bond0
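For scripting, the slave names and their per-slave MII status can be pulled out of this file with awk. A sketch (bond_slaves is a made-up helper name), demonstrated here against an inline copy of the output; on a live system you would instead pass /proc/net/bonding/bond0 as the argument:

```shell
#!/bin/sh
# Print "<slave> <mii status>" for each slave in /proc/net/bonding/bond0-style text.
# The bond-level "MII Status" line is skipped because no slave has been seen yet.
bond_slaves() {
  awk -F': ' '/^Slave Interface:/ { iface = $2 }
              /^MII Status:/ && iface != "" { print iface, $2; iface = "" }' "$@"
}

# On a live host: bond_slaves /proc/net/bonding/bond0
bond_slaves <<'EOF'
Bonding Mode: load balancing (round-robin)
MII Status: up

Slave Interface: eth1
MII Status: up

Slave Interface: eth2
MII Status: up
EOF
```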

 

You can also verify with lsmod:

 

[root@ceph-mon ~]# lsmod |grep bond
bonding 152979 0
[root@ceph-mon ~]#

 

 

and with:

 

 

[root@ceph-mon network-scripts]# ip -br address
lo UNKNOWN 127.0.0.1/8 ::1/128
eth0 UP 192.168.122.40/24 fe80::6e18:9a8a:652c:1700/64 fe80::127d:ea0d:65b7:30e5/64 fe80::4ad9:fabb:aad4:9468/64
eth1 UP
eth2 UP
eth3 UP
bond0 UP 10.0.9.45/24 fe80::5054:ff:fed7:a5b0/64
[root@ceph-mon network-scripts]#
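The same information is also exposed through sysfs, under the paths the LPIC3 syllabus lists. A read-only sketch (the function name show_bond_sysfs is illustrative) that is safe to run even on a host without a bond device, since missing files are simply skipped:

```shell
#!/bin/sh
# Print the syllabus-listed sysfs bonding files, skipping any that don't exist.
show_bond_sysfs() {
  for f in /sys/class/net/bonding_masters \
           /sys/class/net/bond0/bonding/slaves \
           /sys/class/net/bond0/bonding/miimon \
           /sys/class/net/bond0/bonding/mode; do
    [ -r "$f" ] && printf '%s: %s\n' "$f" "$(cat "$f")"
  done
  return 0
}

show_bond_sysfs
```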

 

 

To test fault tolerance:

 

To test fault tolerance, shut down one interface and check whether you are still able to access the server.

 

ifdown eth1

 

or

 

ifdown eth2

 

You should still be able to access the machine via the bond0 interface IP address.

 

 

[root@ceph-mon ~]# ifdown eth1
[root@ceph-mon ~]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

 

Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 0
Up Delay (ms): 0
Down Delay (ms): 0

 

Slave Interface: eth2
MII Status: up
Speed: Unknown
Duplex: Unknown
Link Failure Count: 0
Permanent HW addr: 52:54:00:87:8f:0b
Slave queue ID: 0
[root@ceph-mon ~]#

 

We can see that eth1 is no longer part of the bond.

 

bring the interface back up again with

 

ifup eth1

 

 

To Change Bonding Mode

 

 

To change the bonding mode, set the BONDING_OPTS value accordingly in the interface file, e.g. to switch to 802.3ad link aggregation (mode 4):

 

BONDING_OPTS="mode=4 miimon=100"

 

and then

 

ifdown bond0

 

and

 

ifup bond0

 

verify the change with

 

cat /proc/net/bonding/bond0
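The mode can also be changed at runtime through sysfs rather than the ifcfg file. Note that the kernel only accepts a mode write while the bond is down (and on many kernels, only while it has no slaves). A dry-run sketch (DRY_RUN and run are illustrative, as before):

```shell
#!/bin/sh
# Sketch: switch bond0 to 802.3ad via sysfs. DRY_RUN=1 (default) only prints.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }

run ip link set bond0 down
run sh -c 'echo 802.3ad > /sys/class/net/bond0/bonding/mode'
run ip link set bond0 up
```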

 

 

 

Using ifenslave for bonding

 

The ifenslave tool can also be used to configure bonding interfaces: it can attach or detach slave interfaces, or change the currently active slave of a bond.

 

to display interface info:

 

[root@ceph-mon network-scripts]# ifenslave -a
The result of SIOCGIFFLAGS on lo is 49.
The result of SIOCGIFADDR is 7f.00.00.01.
The result of SIOCGIFHWADDR is type 772 00:00:00:00:00:00.
The result of SIOCGIFMETRIC is 0
The result of SIOCGIFMTU is 65536
The result of SIOCGIFFLAGS on eth0 is 1043.
The result of SIOCGIFADDR is ffffffc0.ffffffa8.7a.28.
The result of SIOCGIFHWADDR is type 1 52:54:00:93:ca:03.
The result of SIOCGIFMETRIC is 0
The result of SIOCGIFMTU is 1500
The result of SIOCGIFFLAGS on bond0 is 1443.
The result of SIOCGIFADDR is 0a.00.09.2d.
The result of SIOCGIFHWADDR is type 1 52:54:00:d7:a5:b0.
The result of SIOCGIFMETRIC is 0
The result of SIOCGIFMTU is 1500
[root@ceph-mon network-scripts]#

 

 

To create a bond device, follow these three steps :

 

– ensure that the required drivers are properly loaded :
# modprobe bonding ; modprobe <3c59x|eepro100|pcnet32|tulip|…>

 

– assign an IP address to the bond device :

# ifconfig bond0 <addr> netmask <mask> broadcast <bcast>

 

– attach all the interfaces you need to the bond device :
# ifenslave [{-f|--force}] bond0 eth0 [eth1 [eth2]…]

 

If bond0 didn’t have a MAC address, it will take eth0’s. All interfaces attached after this assignment will then get the same MAC address (except in ALB/TLB modes).

 

 

-c, --change-active
Change the active slave.

 

-d, --detach
Removes slave interfaces from the bonding device.

 

 

To detach an interface from a bond:

 

ifenslave -d <master iface> <slave iface>

 

ifenslave -d bond0 eth2

 

To add an interface to a bond:

 

 

ifenslave <master iface> <slave iface>

 

ifenslave bond0 eth2

 

 

 

To bond 2 interfaces together:

 

ifenslave bond0 eth1 eth2

 

For example, to change the active slave of bond0 to eth2:

 

 

ifenslave -c bond0 eth2

 

 

 

Important: To make changes permanent you must define them in the respective network interfaces config files.
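On distributions using systemd-networkd, the syllabus-listed /etc/systemd/network/*.netdev and *.network files serve the same persistence role. A rough sketch for the bond above (file names are illustrative; the 25- prefix is just a conventional ordering number):

```ini
# /etc/systemd/network/25-bond0.netdev
[NetDev]
Name=bond0
Kind=bond

[Bond]
Mode=active-backup
MIIMonitorSec=100ms

# /etc/systemd/network/25-bond0.network
[Match]
Name=bond0

[Network]
Address=10.0.9.45/24
Gateway=10.0.9.1

# /etc/systemd/network/25-eth1.network (repeat analogously for eth2)
[Match]
Name=eth1

[Network]
Bond=bond0
```

After writing the files, restarting systemd-networkd applies the configuration.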
