Bug 1883583 - [RHOSP 13][RHOCP 3.11] Unable to detach cinder volume after deleting openshift pod
Summary: [RHOSP 13][RHOCP 3.11] Unable to detach cinder volume after deleting openshift pod
Keywords:
Status: CLOSED INSUFFICIENT_DATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-nova
Version: 13.0 (Queens)
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: OSP DFG:Compute
QA Contact: OSP DFG:Compute
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2020-09-29 15:54 UTC by camorris@redhat.co
Modified: 2023-03-21 19:36 UTC
CC List: 9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-11-12 20:27:13 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker OSP-5838 0 None None None 2022-08-11 12:12:20 UTC

Description camorris@redhat.co 2020-09-29 15:54:01 UTC
Description of problem:
Using RHOCP v3.11 on RHOSP 13, when a pod is deleted, we are unable to detach the cinder volume without first resetting its state in cinder.

Version-Release number of selected component (if applicable):
openstack-cinder-12.0.10-2.el7ost.noarch                    Tue May 19 23:04:54 2020
openstack-nova-api-17.0.13-2.el7ost.noarch                  Tue May 19 23:04:45 2020
openstack-nova-common-17.0.13-2.el7ost.noarch               Tue May 19 23:03:49 2020
openstack-nova-compute-17.0.13-2.el7ost.noarch              Tue May 19 23:04:33 2020
openstack-nova-conductor-17.0.13-2.el7ost.noarch            Tue May 19 23:04:45 2020
openstack-nova-console-17.0.13-2.el7ost.noarch              Tue May 19 23:04:45 2020
openstack-nova-novncproxy-17.0.13-2.el7ost.noarch           Tue May 19 23:04:45 2020
openstack-nova-scheduler-17.0.13-2.el7ost.noarch            Tue May 19 23:04:45 2020
puppet-cinder-12.4.1-5.el7ost.noarch                        Tue May 19 23:03:27 2020
puppet-nova-12.5.0-5.el7ost.noarch                          Tue May 19 23:03:27 2020
python2-cinderclient-3.5.0-1.el7ost.noarch                  Sat Feb  8 19:37:04 2020
python2-novaclient-10.1.1-1.el7ost.noarch                   Tue May 19 22:35:52 2020
python-cinder-12.0.10-2.el7ost.noarch                       Tue May 19 23:03:34 2020
python-nova-17.0.13-2.el7ost.noarch                         Tue May 19 23:03:49 2020


How reproducible:
Most of the time, but not always

Steps to Reproduce:
1. Run: nova volume-detach <server UUID> <volume UUID>
2. The call fails with: ERROR (BadRequest): Invalid volume: Invalid input received: Invalid volume: Unable to detach volume. Volume status must be 'in-use' and attach_status must be 'attached' to detach. (HTTP 400)
3. Run: cinder reset-state <volume UUID> --state in-use --attach-status attached
4. Re-run nova volume-detach; it now succeeds.
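The reproduce/workaround sequence above can be sketched as a shell snippet. The UUIDs are hypothetical placeholders, and the snippet only echoes the commands (via a run() wrapper) rather than executing them, since the real calls need a live RHOSP 13 environment:

```shell
# Sketch of the detach workaround from the steps above.
# SERVER_UUID and VOLUME_UUID are hypothetical placeholders; on a live
# cloud, replace the echo in run() with direct execution.
SERVER_UUID="<server-uuid>"
VOLUME_UUID="<volume-uuid>"
run() { echo "+ $*"; }   # dry-run wrapper: prints the command line

# 1. Plain detach; on an affected system this fails with HTTP 400.
run nova volume-detach "$SERVER_UUID" "$VOLUME_UUID"
# 2. Workaround: reset cinder's record to a detachable state.
run cinder reset-state "$VOLUME_UUID" --state in-use --attach-status attached
# 3. Retry the detach, which now succeeds.
run nova volume-detach "$SERVER_UUID" "$VOLUME_UUID"
```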


Actual results:
nova volume-detach does not work unless we first run cinder reset-state.

Expected results:
We should not need to run cinder reset-state before detaching the volume.

Additional info:
sosreports are available on supportshell

Comment 8 smooney 2020-11-12 20:27:13 UTC
It has been 4 weeks since we requested the logs and 2 months since the first request, so I am closing this for insufficient data.

In the event that the customer eventually provides the logs for the required node, feel free to reopen.

