Description of problem:
Unable to deploy the latest 4.2 nightly build. I get these errors every time I try to install 4.2.0-0.nightly-2019-07-01-102521:

level=debug msg="Still waiting for the cluster to initialize: Working towards 4.2.0-0.nightly-2019-07-01-102521: 91% complete"
level=debug msg="Still waiting for the cluster to initialize: Working towards 4.2.0-0.nightly-2019-07-01-102521: 92% complete"
level=debug msg="Still waiting for the cluster to initialize: Working towards 4.2.0-0.nightly-2019-07-01-102521: 94% complete"
level=debug msg="Still waiting for the cluster to initialize: Working towards 4.2.0-0.nightly-2019-07-01-102521: 95% complete"
level=debug msg="Still waiting for the cluster to initialize: Working towards 4.2.0-0.nightly-2019-07-01-102521: 98% complete"
level=debug msg="Still waiting for the cluster to initialize: Working towards 4.2.0-0.nightly-2019-07-01-102521: 98% complete, waiting on authentication, machine-config, monitoring"
level=debug msg="Still waiting for the cluster to initialize: Working towards 4.2.0-0.nightly-2019-07-01-102521: 99% complete"
level=debug msg="Still waiting for the cluster to initialize: Cluster operator machine-config is reporting a failure: Failed to resync 4.2.0-0.nightly-2019-07-01-102521 because: timed out waiting for the condition during syncRequiredMachineConfigPools: pool master has not progressed to latest configuration: configuration status for pool master is empty, retrying"
level=debug msg="Still waiting for the cluster to initialize: Working towards 4.2.0-0.nightly-2019-07-01-102521: 99% complete"
level=debug msg="Still waiting for the cluster to initialize: Cluster operator machine-config is reporting a failure: Failed to resync 4.2.0-0.nightly-2019-07-01-102521 because: timed out waiting for the condition during syncRequiredMachineConfigPools: pool master has not progressed to latest configuration: configuration status for pool master is empty, retrying"
level=fatal msg="failed to initialize the cluster: Cluster operator machine-config is reporting a failure: Failed to resync 4.2.0-0.nightly-2019-07-01-102521 because: timed out waiting for the condition during syncRequiredMachineConfigPools: pool master has not progressed to latest configuration: configuration status for pool master is empty, retrying"

Version-Release number of selected component (if applicable):
4.2.0-0.nightly-2019-07-01-102521

How reproducible:
Always for me

Steps to Reproduce:
1. Install 4.2.0-0.nightly-2019-07-01-102521

Actual results:
Installation aborts due to the error above.

Expected results:
Installation should succeed.

Additional info:
It seems routes were not set up before the install froze, so there appears to be no way to log in to the cluster:

$ oc login xx.xx.xx.xx:6443 -u kubeadmin -p WJbvg-SRuah-UnsD8-tCJoV --insecure-skip-tls-verify=true
error: dial tcp 10.0.142.132:6443: i/o timeout - verify you have provided the correct host and port and that the server is currently running.

The API endpoint is also unreachable by ping:

$ ping api.qe-anusaxen-ocp42.qe.devcluster.openshift.com
Can you provide a must-gather (oc adm must-gather)? That's going to shed some light on this.
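For reference, the requested must-gather run would typically look like the following; this is a minimal sketch, assuming `oc` is on the PATH and a kubeconfig with cluster-admin access is configured (the `--dest-dir` path is illustrative, not mandated):

```shell
# Sketch: collect cluster diagnostics with must-gather for attachment to the bug.
# Assumes a reachable API server and cluster-admin credentials; the destination
# directory name below is an arbitrary example.
cmd="oc adm must-gather --dest-dir=./must-gather.local"
if command -v oc >/dev/null 2>&1; then
  $cmd
else
  echo "oc CLI not found; would run: $cmd"
fi
```

Note that this requires the API server to be reachable, which per the report above it was not, so the data could not be collected at the time.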
(In reply to Antonio Murdaca from comment #1) > can you provide must-gather (oc adm must-gather) - that's gonna shed some > light on this. I can try again for that, but as I mentioned in the additional info above, I am not able to get inside the cluster for now.
Likely https://bugzilla.redhat.com/show_bug.cgi?id=1725478 - can you retest with a payload newer than 07/01?
(In reply to Antonio Murdaca from comment #7) > Likely https://bugzilla.redhat.com/show_bug.cgi?id=1725478 - can you retest > with a payload newer than 07/01? Hi Antonio, the newer nightly payloads all seem to be red (Rejected).
(In reply to Anurag saxena from comment #8) > (In reply to Antonio Murdaca from comment #7) > > Likely https://bugzilla.redhat.com/show_bug.cgi?id=1725478 - can you retest > > with a payload newer than 07/01? > > Hi Antonio, the newer nightly payloads all seem to be red (Rejected) There are some CI issues.
Moving to MODIFIED; will retest once a new payload is green.
Update: the cluster configured correctly and seemed healthy on the red (Rejected) build 4.2.0-0.nightly-2019-07-03-082520, which came after that one. So I will wait for a green build to verify this issue. Thanks.
Latest update: no green build on 4.2 yet, but the cluster seems healthy on the recent 4.2.0-0.nightly-2019-07-08-142835, which is a Rejected one. I have to wait for a green build to confirm whether the issue still persists.
I failed to install the latest 4.2 payload due to bug 1728639. I then tried to install an earlier 4.2 payload and hit this bug as well. Bug 1728639 blocks this bug's verification; agreeing with the above comment.
Seems okay on green builds; no issues seen on 4.2.0-0.nightly-2019-07-11-023129.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:2922