Red Hat Bugzilla – Bug 1266155
Master fails to start if networkConfig.serviceNetworkCIDR is unset
Last modified: 2015-12-27 21:57:43 EST
Description of problem:
If a user upgrading from 3.0.2 has not followed the upgrade documentation and added networkConfig.serviceNetworkCIDR to their master config, the master will fail to start. We should be able to work around this by defaulting the value.
The error in the master logs is:
SDN initialization failed: ClusterNetwork "default" is invalid: serviceNetwork: invalid value '', Details: invalid CIDR address:
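The missing piece is the networkConfig stanza of master-config.yaml. A minimal sketch of what the upgraded config should contain, using the 172.30.0.0/16 value from the reproduction steps in this report (the sibling keys and their values are assumed 3.0-era defaults, not taken from this bug):

```yaml
# master-config.yaml (fragment) -- serviceNetworkCIDR must be set after
# upgrading from 3.0.2; the other keys shown here are assumed defaults.
networkConfig:
  clusterNetworkCIDR: 10.1.0.0/16
  hostSubnetLength: 8
  networkPluginName: redhat/openshift-ovs-multitenant
  serviceNetworkCIDR: 172.30.0.0/16
```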
This is just a backport of the upstream PR, so I am assigning this to myself.
I reproduced this issue on puddle 3.0/2015-09-16.2. After upgrading to 2015-09-24.1 the bug disappears, so I am moving the bug to VERIFIED.
The reproduction steps are as follows:
1. Install a prior version of OpenShift via Ansible, for example 3.0/2015-09-16.2.
2. Run systemctl stop openshift-master, then comment out the line "serviceNetworkCIDR: 172.30.0.0/16" in master-config.yaml.
3. Delete /var/lib/openshift/openshift.local.etcd/member.
4. Run systemctl start openshift-master. The following messages appear:
Sep 25 16:45:44 openshift-117.lab.eng.nay.redhat.com openshift-master: F0925 16:45:44.577209 21539 multitenant.go:36] SDN initialization failed: ClusterNetwork "default" is invalid: servic...DR address:
Sep 25 16:45:44 openshift-117.lab.eng.nay.redhat.com systemd: openshift-master.service: main process exited, code=exited, status=255/n/a
Sep 25 16:45:44 openshift-117.lab.eng.nay.redhat.com systemd: Unit openshift-master.service entered failed state.
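Step 2 above can be sketched as a sed edit. The fragment below operates on a temporary copy of the config so it is safe to run anywhere; on a real master the file path and the keys other than serviceNetworkCIDR are assumptions, not taken from this report:

```shell
# Simulate step 2 of the reproduction: comment out serviceNetworkCIDR.
# Works on a temp copy of master-config.yaml; the clusterNetworkCIDR
# value is an assumed 3.0-era default.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
networkConfig:
  clusterNetworkCIDR: 10.1.0.0/16
  serviceNetworkCIDR: 172.30.0.0/16
EOF

# Comment out the line, as in the original report
sed -i 's|^\([[:space:]]*serviceNetworkCIDR:.*\)|#\1|' "$cfg"

grep 'serviceNetworkCIDR' "$cfg"   # line is now prefixed with '#'
rm -f "$cfg"
```

On a live master, steps 2-4 would then be: stop openshift-master, apply the edit to the real master-config.yaml, remove the etcd member directory, and start the service again to observe the SDN initialization failure.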
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.
For information on the advisory, and where to find the updated files, follow the link below.
If the solution does not work for you, open a new bug report.