Description of problem:

oc --context build01 get clusterversion
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.5.4     True        True          4h47m   Unable to apply 4.5.5: the cluster operator monitoring has not yet successfully rolled out

oc --context build01 get pod -n openshift-monitoring alertmanager-main-2 -o wide
NAME                  READY   STATUS              RESTARTS   AGE     IP       NODE                           NOMINATED NODE   READINESS GATES
alertmanager-main-2   0/5     ContainerCreating   0          4h59m   <none>   ip-10-0-144-106.ec2.internal   <none>           <none>

Warning  FailedAttachVolume  49s (x146 over 4h41m)  attachdetach-controller  AttachVolume.Attach failed for volume "pvc-be5418b6-0ed4-4422-b2d6-4c639c2bd290" : volume is still being detached from the node

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Master Log:

Node Log (of failed PODs):

PV Dump:

PVC Dump:

StorageClass Dump (if StorageClass used by PV/PVC):

Additional info:
The cluster is on AWS.
After manually deleting the pod, the upgrade was resumed automatically and completed successfully.
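For reference, a minimal sketch of the manual workaround noted under "Additional info", assuming the stuck pod is alertmanager-main-2 in openshift-monitoring as in the output above; deleting the pod lets its controller (the alertmanager StatefulSet) recreate it, which retriggers the volume attach:

# Confirm the pod is stuck in ContainerCreating with a FailedAttachVolume event
oc --context build01 -n openshift-monitoring describe pod alertmanager-main-2

# Delete the stuck pod; the controller recreates it and the attach is retried
oc --context build01 -n openshift-monitoring delete pod alertmanager-main-2

# Watch the replacement pod come up
oc --context build01 -n openshift-monitoring get pod alertmanager-main-2 -w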
I am fixing this bug here - https://github.com/kubernetes/kubernetes/pull/93567
*** Bug 1872842 has been marked as a duplicate of this bug. ***
The rebase landed quite some time ago.
A Pod consuming an attached volume still cannot run correctly. I checked the code, and it looks like the PR is not in origin. Hemant, could you help double-check? Thanks!
@Hemant, thank you very much for your help.

Verified the bug with the following steps (a sketch of the resources used in steps 1, 2, and 4 follows the list):
1. Clone a StorageClass (gp2-test) from gp2 with "Immediate" volumeBindingMode.
2. Create a PVC with the gp2-test StorageClass.
3. When the PV is created, attach the volume from the AWS console to the worker instance as /dev/xvdbd.
4. Create a Pod using the PVC.
5. Check the Pod status.
6. The Pod runs successfully.

Cluster version: 4.6.0-0.nightly-2020-09-23-022756
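For reference, a minimal sketch of the resources used in steps 1, 2, and 4 above, assuming the in-tree AWS EBS provisioner and default gp2 parameters; the gp2-test name comes from step 1, while the PVC and Pod names, the 1Gi size, and the container image are illustrative placeholders, not taken from the original verification:

cat <<EOF | oc apply -f -
# Clone of the gp2 StorageClass with Immediate binding (step 1)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2-test
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
volumeBindingMode: Immediate
---
# PVC bound to the cloned StorageClass (step 2); name and size are placeholders
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-gp2-test
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp2-test
  resources:
    requests:
      storage: 1Gi
---
# Illustrative Pod consuming the PVC (step 4); the image is only a placeholder
apiVersion: v1
kind: Pod
metadata:
  name: pod-gp2-test
spec:
  containers:
    - name: app
      image: registry.access.redhat.com/ubi8/ubi-minimal
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: pvc-gp2-test
EOF

After applying, "oc get pvc pvc-gp2-test" should show the claim Bound and "oc get pod pod-gp2-test -o wide" should eventually show the Pod Running on the target worker, matching the result in steps 5 and 6.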