Bug 1866843

Summary: upgrade got stuck because of FailedAttachVolume
Product: OpenShift Container Platform
Reporter: Hongkai Liu <hongkliu>
Component: Storage
Assignee: OpenShift Storage Bugzilla Bot <ocp-storage-bot>
Storage sub component: Storage
QA Contact: Wei Duan <wduan>
Status: CLOSED CURRENTRELEASE
Docs Contact:
Severity: high
Priority: high
CC: aos-bugs, hekumar, jsafrane, khnguyen, nmalik, vlaad, wking
Version: 4.5
Keywords: ServiceDeliveryImpact, Upgrades
Target Milestone: ---
Target Release: 4.6.0
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
: 1867800 (view as bug list)
Environment:
Last Closed: 2022-08-25 21:19:40 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1867800

Description Hongkai Liu 2020-08-06 14:27:00 UTC
Description of problem:

oc --context build01 get clusterversion
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.5.4     True        True          4h47m   Unable to apply 4.5.5: the cluster operator monitoring has not yet successfully rolled out

oc --context build01 get pod -n openshift-monitoring alertmanager-main-2 -o wide
NAME                  READY   STATUS              RESTARTS   AGE     IP       NODE                           NOMINATED NODE   READINESS GATES
alertmanager-main-2   0/5     ContainerCreating   0          4h59m   <none>   ip-10-0-144-106.ec2.internal   <none>           <none>


Warning  FailedAttachVolume  49s (x146 over 4h41m)   attachdetach-controller                AttachVolume.Attach failed for volume "pvc-be5418b6-0ed4-4422-b2d6-4c639c2bd290" : volume is still being detached from the node
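The stuck attach above can be spotted programmatically by scanning pod events for FailedAttachVolume warnings. A minimal sketch, assuming event JSON shaped like the output of `oc get events -n openshift-monitoring -o json` (the helper name and sample data are illustrative, not from the bug):

```python
import json

def stuck_attach_events(events_json: str):
    """Return (object name, message) pairs for FailedAttachVolume events."""
    items = json.loads(events_json)["items"]
    return [
        (e["involvedObject"]["name"], e["message"])
        for e in items
        if e.get("reason") == "FailedAttachVolume"
    ]

# Hypothetical sample modeled on the warning event shown above.
sample = json.dumps({"items": [{
    "reason": "FailedAttachVolume",
    "involvedObject": {"name": "alertmanager-main-2"},
    "message": 'AttachVolume.Attach failed for volume '
               '"pvc-be5418b6-0ed4-4422-b2d6-4c639c2bd290" : '
               'volume is still being detached from the node',
}]})
print(stuck_attach_events(sample))
```

Any non-empty result points at pods whose volumes are caught in the attach/detach race described in this bug.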


Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:

Master Log:

Node Log (of failed PODs):

PV Dump:

PVC Dump:

StorageClass Dump (if StorageClass used by PV/PVC):

Additional info:
The cluster is on AWS.
After manually deleting the pod, the upgrade was resumed automatically and completed successfully.

Comment 2 Hemant Kumar 2020-08-06 14:43:20 UTC
I am fixing this bug here - https://github.com/kubernetes/kubernetes/pull/93567

Comment 6 Tomas Smetana 2020-09-08 10:57:17 UTC
*** Bug 1872842 has been marked as a duplicate of this bug. ***

Comment 7 Jan Safranek 2020-09-08 15:54:29 UTC
The rebase landed quite some time ago.

Comment 11 Qin Ping 2020-09-16 08:12:26 UTC
A pod consuming an attached volume still cannot run correctly.

Checked the code; it looks like the PR is not in origin.

Hemant, could you help double-check? Thanks!

Comment 16 Qin Ping 2020-09-24 14:46:09 UTC
@Hemant, thank you very much for your help.

Verified the bug with the following steps:
1. Clone a StorageClass (gp2-test) from gp2 with "Immediate" volumeBindingMode.
2. Create a PVC with the gp2-test SC.
3. When the PV is created, attach the volume from the AWS console to the worker instance as /dev/xvdbd.
4. Create a Pod using the PVC.
5. Check the Pod status.
6. The Pod runs successfully.
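The cloned StorageClass in step 1 might look like the following sketch; the provisioner and parameters are assumed from a standard AWS gp2 class and are not taken from the bug:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2-test          # clone of gp2, per step 1
provisioner: kubernetes.io/aws-ebs   # assumed in-tree AWS EBS provisioner
parameters:
  type: gp2
volumeBindingMode: Immediate         # bind/provision as soon as the PVC exists
```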

Cluster version: 4.6.0-0.nightly-2020-09-23-022756