Bug 1712034
| Summary: | Unable to mount volumes for pod | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Andrew Butcher <abutcher> |
| Component: | Node | Assignee: | Seth Jennings <sjenning> |
| Status: | CLOSED ERRATA | QA Contact: | Sunil Choudhary <schoudha> |
| Severity: | urgent | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 4.2.0 | CC: | aos-bugs, ccoleman, jligon, jokerman, mmccomas, wking |
| Target Milestone: | --- | | |
| Target Release: | 4.2.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | buildcop | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2019-10-16 06:29:13 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Disabled in https://github.com/openshift/support-operator/pull/7
Not clear why this happens.

Support operator re-enabled in https://github.com/openshift/support-operator/commit/fce6e9c6f9e198b4cd186aa789c8d40aaaec3bcc

No longer see the error in recent upgrade CI tests.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2922
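As a build-cop aid, here is a minimal sketch of how one might scan a CI build log (for example the build-log.txt linked in the description below) for the mount-timeout messages this bug is about, to confirm whether they still appear in recent upgrade runs. The message substrings are taken from this report; the helper itself is hypothetical and not part of any OpenShift tooling.

```go
// scan-build-log.go: hypothetical helper for checking whether the volume
// mount timeouts reported in this bug still appear in an upgrade build log.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: scan-build-log <build-log.txt>")
		os.Exit(1)
	}

	f, err := os.Open(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	// Message substrings quoted in the description below.
	needles := []string{
		"Unable to mount volumes for pod",
		"timeout expired waiting for volumes to attach or mount",
	}

	matches := 0
	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // build logs contain very long lines
	for sc.Scan() {
		line := sc.Text()
		for _, n := range needles {
			if strings.Contains(line, n) {
				matches++
				fmt.Println(line)
				break
			}
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("%d matching line(s)\n", matches)
}
```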
    fail [github.com/openshift/origin/test/e2e/upgrade/upgrade.go:138]: during upgrade
    Unexpected error:
        <*errors.errorString | 0xc0045c0c40>: {
            s: "Cluster did not complete upgrade: timed out waiting for the condition",
        }
        Cluster did not complete upgrade: timed out waiting for the condition
    occurred

Several pods (support-operator, image-registry) are failing to mount volumes with "timeout expired waiting for volumes to attach or mount for pod". Additionally, both pods exit frequently in the build log:

    May 20 13:10:56.489 W ns/openshift-support pod/support-operator-9cd87985f-dkk8h Unable to mount volumes for pod "support-operator-9cd87985f-dkk8h_openshift-support(6ad8d075-7b00-11e9-add6-122eab0cd460)": timeout expired waiting for volumes to attach or mount for pod "openshift-support"/"support-operator-9cd87985f-dkk8h". list of unmounted volumes=[snapshots operator-token-85k6q]. list of unattached volumes=[snapshots operator-token-85k6q]
    May 20 14:14:43.608 E ns/openshift-support pod/support-operator-6b6bdb7cb9-9hqvd node/ip-10-0-175-110.ec2.internal container=operator container exited with code 137 (ContainerStatusUnknown): The container could not be located when the pod was terminated

https://storage.googleapis.com/origin-ci-test/pr-logs/pull/openshift_installer/1161/pull-ci-openshift-installer-master-e2e-aws-upgrade/430/build-log.txt

Cluster Version:

    {
      "metadata": {
        "name": "version",
        "selfLink": "/apis/config.openshift.io/v1/clusterversions/version",
        "uid": "08b30497-7afd-11e9-ac40-127d4303b792",
        "resourceVersion": "60072",
        "generation": 2,
        "creationTimestamp": "2019-05-20T12:44:40Z"
      },
      "spec": {
        "clusterID": "25354c61-c91f-4663-abfb-134c829d9aed",
        "desiredUpdate": {
          "version": "",
          "image": "registry.svc.ci.openshift.org/ci-op-ryvgwxg8/release@sha256:651930c24c6621dfbe9a10db3b7cf8912898b2a962b962184897c8bbacb94029",
          "force": true
        },
        "upstream": "https://api.openshift.com/api/upgrades_info/v1/graph",
        "channel": "stable-4.1"
      },
      "status": {
        "desired": {
          "version": "0.0.1-2019-05-20-122935",
          "image": "registry.svc.ci.openshift.org/ci-op-ryvgwxg8/release@sha256:651930c24c6621dfbe9a10db3b7cf8912898b2a962b962184897c8bbacb94029",
          "force": true
        },
        "history": [
          {
            "state": "Partial",
            "startedTime": "2019-05-20T13:02:40Z",
            "completionTime": null,
            "version": "0.0.1-2019-05-20-122935",
            "image": "registry.svc.ci.openshift.org/ci-op-ryvgwxg8/release@sha256:651930c24c6621dfbe9a10db3b7cf8912898b2a962b962184897c8bbacb94029",
            "verified": false
          },
          {
            "state": "Completed",
            "startedTime": "2019-05-20T12:45:24Z",
            "completionTime": "2019-05-20T13:02:40Z",
            "version": "0.0.1-2019-05-20-122309",
            "image": "registry.svc.ci.openshift.org/ci-op-ryvgwxg8/release@sha256:ef62323387550d9c9501fa96b59daac227ea0be3de67ad0732734e6fc648d826",
            "verified": false
          }
        ],
        "observedGeneration": 2,
        "versionHash": "9qWcdwYEgAg=",
        "conditions": [
          {
            "type": "Available",
            "status": "True",
            "lastTransitionTime": "2019-05-20T12:59:55Z",
            "message": "Done applying 0.0.1-2019-05-20-122309"
          },
          {
            "type": "Failing",
            "status": "True",
            "lastTransitionTime": "2019-05-20T14:14:21Z",
            "reason": "ClusterOperatorNotAvailable",
            "message": "Cluster operator support is still updating"
          },
          {
            "type": "Progressing",
            "status": "True",
            "lastTransitionTime": "2019-05-20T13:02:40Z",
            "reason": "ClusterOperatorNotAvailable",
            "message": "Unable to apply 0.0.1-2019-05-20-122935: the cluster operator support has not yet successfully rolled out"
          },
          {
            "type": "RetrievedUpdates",
            "status": "False",
            "lastTransitionTime": "2019-05-20T12:45:24Z",
            "reason": "RemoteFailed",
            "message": "Unable to retrieve available updates: currently installed version 0.0.1-2019-05-20-122935 not found in the \"stable-4.1\" channel"
          }
        ],
        "availableUpdates": null
      }
    }

How reproducible: Appears to be a flake or bug with master.