Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1883583

Summary: [RHOSP 13][RHOCP 3.11] Unable to detach cinder volume after deleting openshift pod
Product: Red Hat OpenStack
Reporter: camorris@redhat.co <camorris>
Component: openstack-nova
Assignee: OSP DFG:Compute <osp-dfg-compute>
Status: CLOSED INSUFFICIENT_DATA
QA Contact: OSP DFG:Compute <osp-dfg-compute>
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: 13.0 (Queens)
CC: dasmith, eglynn, jhakimra, kchamart, mwitt, sbauza, sgordon, smooney, vromanso
Target Milestone: ---   
Target Release: ---   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2020-11-12 20:27:13 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description camorris@redhat.co 2020-09-29 15:54:01 UTC
Description of problem:
Using RHOCP v3.11 on RHOSP 13, when a pod is deleted, the attached cinder volume cannot be detached without first resetting its state in cinder.

Version-Release number of selected component (if applicable):
openstack-cinder-12.0.10-2.el7ost.noarch                    Tue May 19 23:04:54 2020
openstack-nova-api-17.0.13-2.el7ost.noarch                  Tue May 19 23:04:45 2020
openstack-nova-common-17.0.13-2.el7ost.noarch               Tue May 19 23:03:49 2020
openstack-nova-compute-17.0.13-2.el7ost.noarch              Tue May 19 23:04:33 2020
openstack-nova-conductor-17.0.13-2.el7ost.noarch            Tue May 19 23:04:45 2020
openstack-nova-console-17.0.13-2.el7ost.noarch              Tue May 19 23:04:45 2020
openstack-nova-novncproxy-17.0.13-2.el7ost.noarch           Tue May 19 23:04:45 2020
openstack-nova-scheduler-17.0.13-2.el7ost.noarch            Tue May 19 23:04:45 2020
puppet-cinder-12.4.1-5.el7ost.noarch                        Tue May 19 23:03:27 2020
puppet-nova-12.5.0-5.el7ost.noarch                          Tue May 19 23:03:27 2020
python2-cinderclient-3.5.0-1.el7ost.noarch                  Sat Feb  8 19:37:04 2020
python2-novaclient-10.1.1-1.el7ost.noarch                   Tue May 19 22:35:52 2020
python-cinder-12.0.10-2.el7ost.noarch                       Tue May 19 23:03:34 2020
python-nova-17.0.13-2.el7ost.noarch                         Tue May 19 23:03:49 2020


How reproducible:
Most of the time, but not always

Steps to Reproduce:
1. Run nova volume-detach <server-UUID> <volume-UUID>
2. Get error: ERROR (BadRequest): Invalid volume: Invalid input received: Invalid volume: Unable to detach volume. Volume status must be 'in-use' and attach_status must be 'attached' to detach. (HTTP 400)
3. Run cinder reset-state <volume-UUID> --state in-use --attach-status attached
4. Run nova volume-detach again; it now succeeds
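The steps above can be sketched as a shell session; the server and volume UUIDs are placeholders, and the reset-state options follow the python-cinderclient 3.x CLI shipped with this release:

```shell
# Attempt to detach the volume from the instance (placeholder UUIDs)
nova volume-detach <server-uuid> <volume-uuid>
# Fails with: ERROR (BadRequest): Invalid volume: ... (HTTP 400)

# Workaround: force the volume back to a detachable state in cinder
cinder reset-state --state in-use --attach-status attached <volume-uuid>

# Retry the detach; it now succeeds
nova volume-detach <server-uuid> <volume-uuid>
```

Note that reset-state only rewrites the volume record in the cinder database; it does not touch the actual attachment on the compute host, which is why it merely unblocks the subsequent detach call.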


Actual results:
nova volume-detach fails unless cinder reset-state is run first

Expected results:
nova volume-detach should succeed without needing cinder reset-state

Additional info:
sosreports are available on supportshell

Comment 8 smooney 2020-11-12 20:27:13 UTC
It has been 4 weeks since we requested the logs and 2 months since the first request, so I am closing this for insufficient data.

In the event that the customer finally provides the logs for the required node, feel free to reopen.