Bug 1272957 - gluster driver: same volumes are re-used with vol mapped layout after restarting manila services
Product: RDO
Classification: Community
Component: openstack-manila
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Version: Kilo
Assigned To: Pete Zaitcev
Depends On:
Reported: 2015-10-19 05:59 EDT by krishnaram Karthick
Modified: 2016-05-19 11:41 EDT

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Last Closed: 2016-05-19 11:41:01 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments: None
Description krishnaram Karthick 2015-10-19 05:59:49 EDT
Description of problem:

Unlike the directory-based layout, the volume-mapped layout picks an entire gluster volume to back each share. A volume, once picked, should therefore back exactly one share.

However, after the manila services are restarted, share creation picks an already-used volume again, even though a share has already been created on it. This is not expected.
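A plausible cause, sketched below in a minimal, hypothetical form (this is not the actual manila glusterfs driver code; class and method names are illustrative): if the set of used volumes is tracked only in memory, it is empty after every restart unless it is rebuilt from the shares that already exist.

```python
# Hypothetical sketch of how an in-memory "used volumes" set loses its
# state across a service restart. Not the actual manila driver code.

class VolMappedLayout:
    """Maps each share to a dedicated backend gluster volume."""

    def __init__(self, all_volumes):
        self.all_volumes = set(all_volumes)
        self.used_volumes = set()  # in-memory only: empty after every restart

    def create_share(self, share_id):
        # Pick any volume not yet backing a share.
        free = self.all_volumes - self.used_volumes
        if not free:
            raise RuntimeError("no free gluster volumes to map")
        vol = sorted(free)[0]
        self.used_volumes.add(vol)
        return vol

    def restore_state(self, existing_shares):
        # Fix direction: on startup, repopulate used_volumes from the
        # shares that already exist, so a restart does not forget mappings.
        self.used_volumes.update(existing_shares.values())


# Reproducing the reported behaviour with one backend volume:
layout = VolMappedLayout(["gv0"])
vol = layout.create_share("share-1")           # maps gv0

restarted = VolMappedLayout(["gv0"])           # restart: used_volumes is empty
vol_again = restarted.create_share("share-2")  # wrongly hands out gv0 again
assert vol == vol_again == "gv0"
```

With `restore_state()` called during startup, the restarted instance would again refuse to create a second share, matching the expected behaviour.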

Version-Release number of selected component (if applicable):

# rpm -qa | grep 'manila'

How reproducible:

Steps to Reproduce:

1) Create a backend gluster volume
2) Create a share from the volume created above
3) Attempt to create another share - share creation fails because there are no free volumes to pick - this is expected
4) Restart all manila services
5) Attempt to create a share once again - the share gets created on the volume already used in step 2

Actual results:
The already-used volume is re-used for the new share.

Expected results:
A volume that already backs a share should not be re-used.

Additional info:
Comment 1 Chandan Kumar 2016-05-19 11:41:01 EDT
This bug is filed against a version which has reached End of Life.
If it is still present in a supported release (http://releases.openstack.org), please update the Version field and reopen.
