Bug 1702390

Summary: [upgrade] node didn’t reboot during upgrade
Product: OpenShift Container Platform
Reporter: Antonio Murdaca <amurdaca>
Component: Machine Config Operator
Assignee: Antonio Murdaca <amurdaca>
Status: CLOSED DUPLICATE
QA Contact: Micah Abbott <miabbott>
Severity: urgent
Priority: urgent
Version: 4.1.0
CC: ccoleman, wking
Target Milestone: ---
Keywords: BetaBlocker, TestBlocker, Upgrades
Target Release: 4.1.0
Hardware: Unspecified
OS: Unspecified
Last Closed: 2019-05-02 17:57:42 UTC
Type: Bug
Bug Blocks: 1703879    

Description Antonio Murdaca 2019-04-23 16:17:59 UTC
Description of problem:

This job, https://gcsweb-ci.svc.ci.openshift.org/gcs/origin-ci-test/logs/release-openshift-origin-installer-e2e-aws-upgrade/581/artifacts/e2e-aws-upgrade/, shows that a node failed to upgrade. We need to investigate why, since it broke the upgrade completely.


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:
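On a live cluster hitting this (the archived CI artifacts only contain logs), a minimal sketch of checks to confirm whether the node picked up the new config and actually rebooted could look like the following; <node-name> is a placeholder:

    # Pool-level view of the rollout (UPDATED/UPDATING/DEGRADED columns):
    oc get machineconfigpools

    # Per-node current vs. desired rendered config, as annotated by the MCO:
    oc describe node <node-name> | grep machineconfiguration.openshift.io

    # Confirm on the node itself whether a reboot actually happened:
    oc debug node/<node-name> -- chroot /host uptime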

Comment 1 W. Trevor King 2019-04-23 16:20:22 UTC
Possibly related to bug 1701291, where job 581's 7:5x errors may stem from one of the nodes not rebooting.

Comment 2 Antonio Murdaca 2019-04-29 13:10:52 UTC
This is likely related to https://bugzilla.redhat.com/show_bug.cgi?id=1703699, but I can't tell from the logs whether it's exactly that, since we added the relevant logging later. Are there other jobs failing with this?

Comment 3 Antonio Murdaca 2019-05-02 17:57:42 UTC

*** This bug has been marked as a duplicate of bug 1703699 ***