Bug 977814 - [RFE] don't continue with create of volumes on nfs storage if mount fails
Status: CLOSED WONTFIX
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-cinder
Version: unspecified
Hardware: x86_64 Linux
Priority: medium  Severity: high
Target Milestone: Upstream M3
Target Release: 5.0 (RHEL 7)
Assigned To: Eric Harney
QA Contact: Dafna Ron
Keywords: FutureFeature, Reopened
Reported: 2013-06-25 07:45 EDT by Dafna Ron
Modified: 2016-04-26 12:12 EDT
CC: 4 users

Doc Type: Enhancement
Last Closed: 2014-03-12 09:07:32 EDT
Type: Bug


Attachments
logs (2.79 KB, application/x-gzip)
2013-06-25 07:45 EDT, Dafna Ron

Description Dafna Ron 2013-06-25 07:45:26 EDT
Created attachment 765066 [details]
logs

Description of problem:

If I run a script that creates 100 volumes on NFS-based Cinder storage and the mount fails, the volumes are still created but set to "error" status.

Thinking of user experience, even with OpenStack's general no-rollback approach, I think that if the mount fails we should not continue with creating the volume, since technically volume creation has not actually started yet.

Version-Release number of selected component (if applicable):

[root@opens-vdsb ~]# rpm -qa |grep cinder
openstack-cinder-2013.1.2-3.el6ost.noarch
python-cinderclient-1.0.4-1.el6ost.noarch
python-cinder-2013.1.2-3.el6ost.noarch


How reproducible:

100%

Steps to Reproduce:
1. add nfs storage to cinder
2. make sure that rpcbind is stopped 
3. create a volume
4. list volume

Actual results:

The volume is created with status "error" because the mount fails.
The user must then delete the created volume, since there is no roll-forward or rollback.

Expected results:

We should not continue with volume creation if the mount fails.

Additional info: logs
Comment 1 Eric Harney 2013-08-01 11:57:49 EDT
If I understand correctly, the suggestion here is that if we cannot successfully create the volume, there should not be a Cinder volume record created at all.

But, many decisions and operations that determine whether the volume can be created successfully or not (especially those at the volume driver level like mounting the NFS share) happen after the volume record has been created and returned to the client in the "creating" state.  This would be a fundamental change to the current model, which does not treat volume creation as a synchronous all-or-nothing operation.
Comment 2 Dafna Ron 2013-08-19 07:17:48 EDT
This reproduces 100% of the time.
Please close as WONTFIX rather than WORKSFORME if you do not wish to fix this behaviour.
Comment 3 Eric Harney 2013-11-18 12:08:43 EST
This has been improved in Havana in that volume drivers that fail to initialize will no longer attempt to handle requests.

It will not be fixed in the way described here, as the Cinder model is still
1) Create volume record
2) Hand that to scheduler
3) Volume service gets this, now fails (can't mount storage)
4) Volume status is set to "error"
5) User deletes the volume record

It is not practical to have the API service decide to not create the volume record based on cinder-volume service results.
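The five-step flow above can be illustrated with a toy model (hypothetical Python, not actual Cinder code; the driver and helper names are placeholders). It shows why the record ends up in "error" rather than never existing: the record is created and returned before the driver-level mount is attempted.

```python
# Toy model of the asynchronous create flow described above (not Cinder code).

class FakeNFSDriver:
    """Stand-in volume driver whose mount always fails, as when rpcbind is down."""
    def create_volume(self, volume):
        raise RuntimeError("mount failed")

def create_volume(db, driver, name):
    # 1) The API service creates the volume record and returns it to the
    #    client immediately, in the "creating" state.
    volume = {"name": name, "status": "creating"}
    db.append(volume)
    # 2)-4) The scheduler hands the request to the volume service, which
    #    only now discovers the storage cannot be mounted.
    try:
        driver.create_volume(volume)
        volume["status"] = "available"
    except RuntimeError:
        volume["status"] = "error"
    return volume

db = []
vol = create_volume(db, FakeNFSDriver(), "test-vol")
# 5) The record persists in the "error" state; the user must delete it.
print(vol["status"])  # error
print(len(db))        # 1
```

In this model the API layer would have to wait for the volume service's mount result to avoid creating the record, which is exactly the synchronous all-or-nothing behaviour the comments above say Cinder does not implement.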
