+++ This bug was initially created as a clone of Bug #613835 +++
Description of problem:
corosync behaves erratically when an invalid multicast address is specified in the corosync.conf file.
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. set multicast address to 126.96.36.199 (not a multicast address)
2. corosync won't form a configuration

Actual results:
corosync fails in odd ways

Expected results:
an error should be logged and corosync should fail to start
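For reference, the misconfiguration can be reproduced with a totem stanza like the following (a minimal illustrative fragment, not the reporter's actual corosync.conf; node and interface values are placeholders):

```
totem {
    version: 2
    interface {
        ringnumber: 0
        bindnetaddr: 192.168.1.0
        # Invalid: 126.96.36.199 is a unicast address, not in 224.0.0.0/4
        mcastaddr: 126.96.36.199
        mcastport: 5405
    }
}
```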
Created attachment 450056
Only allow the multicast address range
Only allow the multicast address range (224.0.0.0 to 239.255.255.255).
A correct mcast address works well (autoconfigured or manually specified); a manually specified non-mcast address results in:
$ service cman restart
Leaving fence domain... [ OK ]
Stopping gfs_controld... [ OK ]
Stopping dlm_controld... [ OK ]
Stopping fenced... [ OK ]
Stopping cman... [ OK ]
Waiting for corosync to shutdown: [ OK ]
Unloading kernel modules... [ OK ]
Unmounting configfs... [ OK ]
Checking if cluster has been disabled at boot... [ OK ]
Checking Network Manager... [ OK ]
Global setup... [ OK ]
Loading kernel modules... [ OK ]
Mounting configfs... [ OK ]
Starting cman... corosync died: Could not read cluster configuration Check cluster logs for details
Mar 9 10:01:20 z4 corosync: [MAIN ] Corosync Cluster Engine ('1.2.3'): started and ready to provide service.
Mar 9 10:01:20 z4 corosync: [MAIN ] Corosync built-in features: nss dbus rdma snmp
Mar 9 10:01:20 z4 corosync: [MAIN ] Successfully read config from /etc/cluster/cluster.conf
Mar 9 10:01:20 z4 corosync: [MAIN ] Successfully parsed cman config
Mar 9 10:01:20 z4 corosync: [MAIN ] parse error in config: mcastaddr is not a correct multicast address.
Mar 9 10:01:20 z4 corosync: [MAIN ] Corosync Cluster Engine exiting with status 8 at main.c:1679.
Works as expected.
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.