Bug 983579 - [RHS-RHOS] Detaching the cinder volume from VM instance, doesn't umount the glusterfs volume mounted at /var/lib/nova/mnt/<vol-uuid>
Summary: [RHS-RHOS] Detaching the cinder volume from VM instance, doesn't umount the glusterfs volume mounted at /var/lib/nova/mnt/<vol-uuid>
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-nova
Version: unspecified
Hardware: x86_64
OS: Linux
Priority: high
Severity: medium
Target Milestone: rc
Target Release: 5.0 (RHEL 7)
Assignee: RHOS Maint
QA Contact: Toure Dunnon
URL:
Whiteboard:
Depends On:
Blocks: 961016
 
Reported: 2013-07-11 13:56 UTC by SATHEESARAN
Modified: 2019-09-09 14:21 UTC
CC List: 10 users

Fixed In Version: openstack-nova-2014.1-3.el7ost
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
virt rhos cinder integration
Last Closed: 2014-07-08 15:26:31 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
shares.conf and cinder.conf in tar file (40.00 KB, application/x-tar)
2013-07-11 13:56 UTC, SATHEESARAN
no flags


Links
System ID Private Priority Status Summary Last Updated
Launchpad 1285209 0 None None None Never
OpenStack gerrit 76558 0 None None None Never
Red Hat Product Errata RHEA-2014:0853 0 normal SHIPPED_LIVE Red Hat Enterprise Linux OpenStack Platform Enhancement - Compute 2014-07-08 19:22:38 UTC

Description SATHEESARAN 2013-07-11 13:56:51 UTC
Created attachment 772244 [details]
shares.conf and cinder.conf in tar file

Description of problem:
Attaching a cinder volume to a VM instance causes nova to mount the backing glusterfs volume at /var/lib/nova/mnt/<vol-uuid>, but after detaching the volume, the glusterfs mount still persists.

Version-Release number of selected component (if applicable):
RHS: glusterfs-3.3.0.11rhs-1.x86_64
RHOS : http://download.lab.bos.redhat.com/rel-eng/OpenStack/Grizzly/2013-07-08.1/
Cinder : 1.0.4

How reproducible:
Always

Steps to Reproduce:
1. Create a 6x2 distributed-replicate volume, using 4 RHS servers and 3 bricks per RHS server
   (i.e) gluster volume create cinder-vol replica 2 <brick1> .... <brick12>
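   For reference, using the brick layout shown in the "gluster volume info" output under Additional info (replica pairs listed adjacently), the full command looks like:
   (i.e) gluster volume create cinder-vol replica 2 \
             10.70.37.73:/rhs/brick1/cinder1 10.70.37.166:/rhs/brick1/cinder1 \
             10.70.37.73:/rhs/brick2/cinder2 10.70.37.166:/rhs/brick2/cinder2 \
             10.70.37.73:/rhs/brick3/cinder3 10.70.37.166:/rhs/brick3/cinder3 \
             10.70.37.124:/rhs/brick1/cinder1 10.70.37.217:/rhs/brick1/cinder1 \
             10.70.37.124:/rhs/brick2/cinder2 10.70.37.217:/rhs/brick2/cinder2 \
             10.70.37.124:/rhs/brick3/cinder3 10.70.37.217:/rhs/brick3/cinder3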

2. Tag this volume with group virt
   (i.e) gluster volume set cinder-vol group virt

3. Set storage.owner-uid & storage.owner-gid to 165:165
   (i.e) gluster volume set cinder-vol storage.owner-uid 165
         gluster volume set cinder-vol storage.owner-gid 165

4. Configure cinder to use the gluster volume created above.
   This can be done by editing the /etc/cinder/cinder.conf file:
   (i.e) openstack-config --set /etc/cinder/cinder.conf DEFAULT volume_driver cinder.volume.drivers.glusterfs.GlusterfsDriver
         openstack-config --set /etc/cinder/cinder.conf DEFAULT glusterfs_shares_config /etc/cinder/shares.conf
         openstack-config --set /etc/cinder/cinder.conf DEFAULT glusterfs_mount_point_base /var/lib/cinder/images

5. Add a volume entry for the volume created in step 1 to /etc/cinder/shares.conf. The content of this file should be:
    10.70.36.70:cinder-vol
   
6. Restart the cinder services
   (i.e) for i in api scheduler volume; do service openstack-cinder-$i restart; done

7. Verify that the gluster volume has been mounted automatically, using the mount command, as shown below.
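   A minimal check (the exact mountpoint under glusterfs_mount_point_base, /var/lib/cinder/images, is derived from the share entry, so the path shown is a placeholder):
   (i.e) mount | grep glusterfs
         # expect a fuse.glusterfs entry for 10.70.36.70:cinder-vol under /var/lib/cinder/images/<hash>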

8. Use the default nova configuration, and create a VM instance using Horizon [dashboard]
    NOTE: a. Create a local network for the VM to use; the VM does not come up without a network.
          b. Use any readily available OS cloud image and upload it to glance.
             In this test case, glance is also using the gluster volume.
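    As an alternative to Horizon, roughly the same instance can be created from the CLI; the image, flavor and network names below are placeholders, not values from this setup:
    (i.e) nova boot --flavor m1.small --image <glance-image> --nic net-id=<local-net-uuid> vm1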

9. Create a 10 GB cinder volume.
   (i.e) cinder create 10 --display-name volume43
 
10. Check the cinder volume
    (i.e) cinder list
    The status of the cinder volume should be "available"

11. Attach the volume to the previously created VM instance. This can be done through the dashboard, using the Volumes tab, or from the CLI:
    (i.e) nova volume-attach <server-id> <volume-id> /dev/vdc
    <server-id> can be obtained through the command "nova list"
    <volume-id> can be obtained through the command "cinder list"

12. use "cinder list" command to check for the status of the cinder volume
    and status should be "in-use"

13. Use the mount command to check that nova has mounted the gluster volume under /var/lib/nova/mnt/<vol-UUID>, for example:
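    A minimal check on the compute node (the <vol-uuid> directory name is the hash nova derives from the share, shown here as a placeholder):
    (i.e) mount | grep /var/lib/nova/mnt
          # expect a fuse.glusterfs entry for 10.70.36.70:cinder-vol at /var/lib/nova/mnt/<vol-uuid>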

14. Detach the volume from the VM Instance
    (i.e) nova volume-detach <server-id> <volume-id>

15. Again check for the glusterfs mount, using the same mount command as in step 13

Actual results:
The glusterfs volume is still mounted at /var/lib/nova/mnt/<vol-uuid>, even though no volume is attached to the VM instance any more.

Expected results:
There should be a mechanism that unmounts the gluster volume after the cinder volume is detached, since the share is no longer used by the nova instance.

Additional info:

1. gluster volume info
=======================
Volume Name: cinder-vol
Type: Distributed-Replicate
Volume ID: c7d79599-c54e-47e2-babe-a7bcc5d2fed2
Status: Started
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.37.73:/rhs/brick1/cinder1
Brick2: 10.70.37.166:/rhs/brick1/cinder1
Brick3: 10.70.37.73:/rhs/brick2/cinder2
Brick4: 10.70.37.166:/rhs/brick2/cinder2
Brick5: 10.70.37.73:/rhs/brick3/cinder3
Brick6: 10.70.37.166:/rhs/brick3/cinder3
Brick7: 10.70.37.124:/rhs/brick1/cinder1
Brick8: 10.70.37.217:/rhs/brick1/cinder1
Brick9: 10.70.37.124:/rhs/brick2/cinder2
Brick10: 10.70.37.217:/rhs/brick2/cinder2
Brick11: 10.70.37.124:/rhs/brick3/cinder3
Brick12: 10.70.37.217:/rhs/brick3/cinder3
Options Reconfigured:
storage.owner-uid: 165
storage.owner-gid: 165
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: on

2.gluster volume status
=======================
[Thu Jul 11 13:50:23 UTC 2013 root@10.70.37.73:~ ] # gluster volume status cinder-vol
Status of volume: cinder-vol
Gluster process                                         Port    Online  Pid
------------------------------------------------------------------------------
Brick 10.70.37.73:/rhs/brick1/cinder1                   24015   Y       1705
Brick 10.70.37.166:/rhs/brick1/cinder1                  24014   Y       1653
Brick 10.70.37.73:/rhs/brick2/cinder2                   24016   Y       1710
Brick 10.70.37.166:/rhs/brick2/cinder2                  24015   Y       1659
Brick 10.70.37.73:/rhs/brick3/cinder3                   24017   Y       1716
Brick 10.70.37.166:/rhs/brick3/cinder3                  24016   Y       1665
Brick 10.70.37.124:/rhs/brick1/cinder1                  24015   Y       2990
Brick 10.70.37.217:/rhs/brick1/cinder1                  24014   Y       19186
Brick 10.70.37.124:/rhs/brick2/cinder2                  24016   Y       1697
Brick 10.70.37.217:/rhs/brick2/cinder2                  24015   Y       19191
Brick 10.70.37.124:/rhs/brick3/cinder3                  24017   Y       1703
Brick 10.70.37.217:/rhs/brick3/cinder3                  24016   Y       19197
NFS Server on localhost                                 38467   Y       18018
Self-heal Daemon on localhost                           N/A     Y       18024
NFS Server on 10.70.37.166                              38467   Y       1677
Self-heal Daemon on 10.70.37.166                        N/A     Y       1683
NFS Server on 10.70.37.124                              38467   Y       8767
Self-heal Daemon on 10.70.37.124                        N/A     Y       8773
NFS Server on 10.70.37.217                              38467   Y       17464
Self-heal Daemon on 10.70.37.217                        N/A     Y       17470

3. /etc/cinder/cinder.conf and /etc/cinder/shares.conf are attached

Comment 2 Russell Bryant 2013-11-14 15:49:28 UTC
From a look at the code, this appears to still be an issue.  It looks like it affects NFS as well.
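Until this is fixed, a stale share can be cleaned up by hand on the compute node; a minimal sketch, assuming no other instance on that host still uses the share (the <vol-uuid> path is the placeholder from the report):

    grep /var/lib/nova/mnt /proc/mounts   # list the nova-managed glusterfs/NFS mounts
    umount /var/lib/nova/mnt/<vol-uuid>   # manually unmount the stale share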

Comment 9 errata-xmlrpc 2014-07-08 15:26:31 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-0853.html

