Bug 1707502

Summary: ClusterOperatorDegraded: Cluster operator machine-config is reporting a failure... syncRequiredMachineConfigPools: error pool master is not ready, retrying
Product: OpenShift Container Platform
Reporter: bpeterse
Component: Machine Config Operator
Assignee: Antonio Murdaca <amurdaca>
Status: CLOSED DUPLICATE
QA Contact: Micah Abbott <miabbott>
Severity: unspecified
Priority: unspecified
Version: unspecified
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Last Closed: 2019-05-07 16:14:53 UTC
Type: Bug
Bug Depends On: 1524987, 1525003, 1541206, 1542659, 1558876, 1568482, 1592041, 1592863, 1599994, 1628406, 1637242, 1637246, 1637731, 1638330, 1641036, 1641148, 1641225, 1641342, 1641479, 1641500, 1641559, 1641897, 1642340, 1642343, 1642628, 1642697, 1642712, 1642911, 1642914, 1642917, 1642928, 1642969, 1642970, 1642971, 1642972, 1642973, 1642975, 1642976, 1642978, 1642981, 1642986, 1642987, 1642990, 1642991, 1642992, 1643021, 1643208, 1643225, 1643228, 1643268, 1643532, 1646206, 1649174, 1650108, 1652012, 1652014, 1656146, 1656151, 1656153, 1657045, 1657900, 1658237, 1662121, 1664387, 1664562, 1666659, 1667539, 1667893, 1689242, 1831043    

Description bpeterse 2019-05-07 16:12:28 UTC
Failing:
https://openshift-gce-devel.appspot.com/build/origin-ci-test/pr-logs/pull/openshift_console-operator/226/pull-ci-openshift-console-operator-master-e2e-aws-upgrade/76

Last log lines:
````````````````
May 07 01:41:00.228 E ns/openshift-monitoring pod/grafana-579c8fd887-qwkhk node/ip-10-0-140-68.ec2.internal container=grafana-proxy container exited with code 255 (Error): 
May 07 01:41:01.022 E ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-175-199.ec2.internal container=prometheus container exited with code 1 (Error): 
May 07 01:42:38.269 E clusterversion/version changed Failing to True: UpdatePayloadFailed: Could not update deployment "openshift-machine-config-operator/etcd-quorum-guard" (315 of 350)
May 07 01:48:15.472 E clusteroperator/machine-config changed Degraded to True: timed out waiting for the condition during syncRequiredMachineConfigPools: error pool master is not ready, retrying. Status: (pool degraded: true total: 3, ready 1, updated: 1, unavailable: 1): Failed to resync 0.0.1-2019-05-07-003443 because: timed out waiting for the condition during syncRequiredMachineConfigPools: error pool master is not ready, retrying. Status: (pool degraded: true total: 3, ready 1, updated: 1, unavailable: 1)
May 07 01:49:53.267 E clusterversion/version changed Failing to True: UpdatePayloadFailed: Could not update deployment "openshift-machine-config-operator/etcd-quorum-guard" (315 of 350)
May 07 01:57:23.268 E clusterversion/version changed Failing to True: ClusterOperatorDegraded: Cluster operator machine-config is reporting a failure: Failed to resync 0.0.1-2019-05-07-003443 because: timed out waiting for the condition during syncRequiredMachineConfigPools: error pool master is not ready, retrying. Status: (pool degraded: true total: 3, ready 1, updated: 1, unavailable: 1)
May 07 02:05:53.267 E clusterversion/version changed Failing to True: ClusterOperatorDegraded: Cluster operator machine-config is reporting a failure: Failed to resync 0.0.1-2019-05-07-003443 because: timed out waiting for the condition during syncRequiredMachineConfigPools: error pool master is not ready, retrying. Status: (pool degraded: true total: 3, ready 1, updated: 1, unavailable: 1)
May 07 02:16:08.268 E clusterversion/version changed Failing to True: ClusterOperatorDegraded: Cluster operator machine-config is reporting a failure: Failed to resync 0.0.1-2019-05-07-003443 because: timed out 
````````````````
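For anyone triaging a similar `pool master is not ready` degradation, a minimal inspection sketch (assuming cluster-admin access via `oc`; the resource names below are the standard machine-config objects, not ones taken from this CI run's cluster, and the daemon label selector is an assumption based on the usual MCO deployment):

```shell
# Show the conditions the machine-config ClusterOperator is reporting
# (the Degraded=True condition from the log above would appear here)
oc get clusteroperator machine-config -o yaml

# Pool-level view: DEGRADED flag plus machine counts; the log shows
# master with total: 3, ready: 1, updated: 1, unavailable: 1
oc get machineconfigpools

# Describe the master pool to see which condition is blocking the sync
oc describe machineconfigpool master

# Identify the master node(s) the update is stuck on, then check the
# machine-config-daemon pods for the node-side error (label selector
# k8s-app=machine-config-daemon is assumed from the usual MCO layout)
oc get nodes -l node-role.kubernetes.io/master
oc -n openshift-machine-config-operator logs -l k8s-app=machine-config-daemon --tail=50
```

These commands only read state and are safe to run on a degraded cluster; they require a live cluster and appropriate credentials.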

Comment 1 Antonio Murdaca 2019-05-07 16:14:53 UTC

*** This bug has been marked as a duplicate of bug 1706606 ***