Bug 1071184

Summary: cinder: if cinder-volume is restarted during the execution of a command, the volume is left in a non-consistent status

Product: Red Hat OpenStack
Component: openstack-cinder
Version: 4.0
Hardware: x86_64
OS: Linux
Status: CLOSED ERRATA
Severity: medium
Priority: high
Target Milestone: ga
Target Release: 7.0 (Kilo)
Reporter: Flavio Percoco <fpercoco>
Assignee: Sergey Gotliv <sgotliv>
QA Contact: Yogev Rabl <yrabl>
CC: dron, eharney, fpercoco, mlopes, nlevinki, sclewis, scohen, sgotliv, yeylon
Keywords: Reopened, TestOnly
Doc Type: Bug Fix
Type: Bug
Clone Of: 1035891
Last Closed: 2015-08-05 13:11:46 UTC

Description Flavio Percoco 2014-02-28 08:38:08 UTC
The issue here is that volume operations interrupted when cinder-volume goes down are not rolled back, which leaves the volume in an inconsistent state. However, there are a few other implications here:

1. There's no easy way to know when something has gone wrong on one of the cinder-volume nodes after the rpc.cast is sent, unless we add a sentinel that watches over volume status changes (see the sketch after this list).

2. If cinder-volume goes down *while* extending a volume, we can't simply restore the volume status because we don't actually know what the real state is. At this point the volume could be completely broken, and there's no way to know that without manual inspection.
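
As a rough sketch of what such a sentinel could look like (everything here is hypothetical: the get_status callable, the set of transitional states, and the timeout are assumptions for illustration, not existing Cinder code):

    import time

    # Hypothetical sentinel: poll a volume's status and flag it if it
    # stays in a transitional state longer than a timeout, which would
    # suggest the operation was interrupted (e.g. cinder-volume restarted
    # mid-command).
    TRANSITIONAL = {'creating', 'extending', 'deleting'}

    def watch_volume(get_status, volume_id, timeout=600, interval=10):
        # get_status is an injected callable returning the status string.
        deadline = time.time() + timeout
        status = get_status(volume_id)
        while time.time() < deadline:
            status = get_status(volume_id)
            if status not in TRANSITIONAL:
                return status  # operation finished (or errored) normally
            time.sleep(interval)
        raise RuntimeError('volume %s stuck in %r' % (volume_id, status))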

We could think about possible solutions for this issue, but that would go well beyond this bug and beyond Icehouse. It requires discussion upstream, if it is even considered a real issue there.

The thing is, if cinder-scheduler sends a cast to cinder-volume and cinder-volume then goes down *while* executing that command, the wrong volume status is the least important issue a cloud operator would have.
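
For context, the scheduler-to-volume call is a one-way RPC cast, so the sender gets no reply and no failure signal. A minimal sketch of such a cast with oslo.messaging (the topic, server name, and extend_volume arguments here are illustrative, not Cinder's exact wiring):

    from oslo_config import cfg
    import oslo_messaging as messaging

    # Illustrative one-way cast; names and arguments are assumptions.
    transport = messaging.get_transport(cfg.CONF)  # transport_url from config
    target = messaging.Target(topic='cinder-volume', server='volume-host-1')
    client = messaging.RPCClient(transport, target)

    ctxt = {}  # request context; Cinder passes a full RequestContext here
    # cast() is fire-and-forget: if the service dies mid-operation,
    # the caller never learns about it.
    client.cast(ctxt, 'extend_volume', volume_id='vol-uuid', new_size=2)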

This issue probably requires a blueprint. I'm cloning it to keep track of where it came from.

Comment 5 Yogev Rabl 2015-02-03 08:25:23 UTC
This bug is too general and doesn't describe a specific scenario.

Comment 9 Yogev Rabl 2015-07-22 11:52:37 UTC
Verified in 
python-cinder-2015.1.0-3.el7ost.noarch
openstack-cinder-2015.1.0-3.el7ost.noarch

action:
create a new volume & restart the cinder services
result:
error in the Cinder scheduler: cinder-volume is not available

action:
extend a volume & restart the cinder-volume service
result:
the volume is stuck in 'extending' status
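
A rough sketch of the same reproduction steps with python-cinderclient (the credentials are placeholders, and the service restarts have to happen out-of-band on the controller, e.g. via systemctl, while the request is in flight):

    from cinderclient import client

    # Placeholder credentials; substitute real Keystone values.
    cinder = client.Client('2', 'USER', 'PASSWORD', 'TENANT',
                           'http://keystone.example.com:5000/v2.0')

    # Step 1: create a volume, then restart the cinder services.
    vol = cinder.volumes.create(size=1, name='bz1071184-test')

    # Step 2: extend a volume, then restart openstack-cinder-volume.
    cinder.volumes.extend(vol, 2)

    # Check the resulting state after the restarts.
    print(cinder.volumes.get(vol.id).status)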

Comment 11 errata-xmlrpc 2015-08-05 13:11:46 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2015:1548