Red Hat Bugzilla – Attachment 290444 Details for Bug 426842 – Ethernet Channel Bonding Not working in Cluster Suite
Description: The following text file contains Ethernet Channel Bonding Configuration Details and Other Details
Filename: ClusterEthernetBonding.txt
MIME Type: text/plain
Creator: Balaji.S
Created: 2007-12-27 08:16:34 UTC
Size: 2.58 KB
>I have configured RHEL Cluster Suite with 2 servers and a cluster to monitor the services.
>Server 1 : 192.168.13.110 IP Address and hostname is primary
>Server 2 : 192.168.13.179 IP Address and hostname is secondary
>Floating : 192.168.13.83 IP Address (Assumed by currently active server)
>
>I followed the RHEL Cluster Suite Configuration document "rh-cs-en-4.pdf" and
>configured Ethernet Channel Bonding on each cluster node to avoid a network single point of failure.
>
>Channel Bonding Configuration Details:
>1) Created the bonding device in the "/etc/modprobe.conf" file:
>     alias bond0 bonding
>     options bonding miimon=100 mode=1
>2) Edited the "/etc/sysconfig/network-scripts/ifcfg-eth0" and "ifcfg-eth1" configuration files:
>     DEVICE=eth0
>     USERCTL=no
>     ONBOOT=yes
>     MASTER=bond0
>     SLAVE=yes
>     BOOTPROTO=none
>
>     DEVICE=eth1
>     USERCTL=no
>     ONBOOT=yes
>     MASTER=bond0
>     SLAVE=yes
>     BOOTPROTO=none
>3) Created a network script for the bonding device, "/etc/sysconfig/network-scripts/ifcfg-bond0":
>     DEVICE=bond0
>     USERCTL=no
>     ONBOOT=yes
>     NETMASK=255.255.255.0
>     GATEWAY=192.168.13.1
>     IPADDR=192.168.13.110
>4) Rebooted the system for the changes to take effect.
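After a reboot, the state of a bond configured as above can be checked by reading /proc/net/bonding/bond0, which the Linux bonding driver exposes. A minimal sketch of parsing that file's format follows; the sample text is illustrative (typical active-backup output), not captured from the reporter's machines:

```python
# Minimal sketch: parse the status text the bonding driver exposes at
# /proc/net/bonding/bond0. SAMPLE is illustrative, not from the bug report.
SAMPLE = """\
Bonding Mode: fault-tolerance (active-backup)
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 100

Slave Interface: eth0
MII Status: up

Slave Interface: eth1
MII Status: up
"""

def parse_bond_status(text):
    """Return (mode, active_slave, per-slave MII status) from bonding status text."""
    mode = active = None
    slaves = {}
    current = None  # slave interface whose section we are inside
    for line in text.splitlines():
        if line.startswith("Bonding Mode:"):
            mode = line.split(":", 1)[1].strip()
        elif line.startswith("Currently Active Slave:"):
            active = line.split(":", 1)[1].strip()
        elif line.startswith("Slave Interface:"):
            current = line.split(":", 1)[1].strip()
        elif line.startswith("MII Status:") and current:
            slaves[current] = line.split(":", 1)[1].strip()
    return mode, active, slaves

mode, active, slaves = parse_bond_status(SAMPLE)
print(mode, active, slaves)
```

With miimon=100 and mode=1 as in the configuration above, a healthy bond should show both slaves with MII status "up" and exactly one currently active slave.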
>
>After the reboot, both servers are active, each cluster node becomes simplex, and services are started on both nodes.
>
>The cluster output on the primary node:
>Member Status: Quorate
>
>Member Name    Status
>-----------    ------
>primary        Online, Local, rgmanager
>secondary      Offline
>
>Service Name   Owner (Last)   State
>------------   ------------   -----
>Service        primary        started
>
>The cluster output on the secondary node:
>Member Status: Quorate
>
>Member Name    Status
>-----------    ------
>primary        Offline
>secondary      Online, Local, rgmanager
>
>Service Name   Owner (Last)   State
>------------   ------------   -----
>Service        secondary      started
>
>Before the Ethernet Channel Bonding configuration, cluster services were active on the primary node,
>the other node acted as a passive node, and member status was Online for both nodes of the cluster.
>
>But after the Ethernet Channel Bonding configuration, cluster services are active on both nodes, and
>the member status of the current node is Online while the other node is Offline.
>
>We are not sure why this is happening. Can someone throw light on this?