Bug 991490 - [RHS-RHOS] Mount options specified in glusterfs_shares_config not applied when cinder-volume is restarted
Summary: [RHS-RHOS] Mount options specified in glusterfs_shares_config not applied when cinder-volume is restarted
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-cinder
Version: 4.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: unspecified
Target Milestone: z5
Target Release: 5.0 (RHEL 7)
Assignee: Deepak C Shetty
QA Contact: lkuchlan
URL: https://review.openstack.org/#/c/86888/
Whiteboard:
Depends On:
Blocks: 1045196 1254739
 
Reported: 2013-08-02 14:29 UTC by Gowrishankar Rajaiyan
Modified: 2016-04-26 14:08 UTC
CC: 13 users

Fixed In Version: openstack-cinder-2014.1.4-4.el7ost
Doc Type: Bug Fix
Doc Text:
Previously, Red Hat Gluster Storage volumes were not re-mounted when the Block Storage volume service using them as back ends was restarted. This prevented updated mount options in the glusterfs shares configuration file from being honored, even after restarting the Block Storage volume service. With this update, the Block Storage glusterfs driver explicitly unmounts and remounts the Red Hat Gluster Storage volumes on startup. This, in turn, ensures that updated mount options are honored as expected upon restarting the Block Storage volume service.
Clone Of:
Clones: 1254739 (view as bug list)
Environment:
virt rhos cinder rhs integration
Last Closed: 2015-09-10 11:48:05 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
OpenStack gerrit 86888 0 None MERGED glusterfs: Honor mount options when restarting cinder service 2019-11-11 00:13:40 UTC
Red Hat Product Errata RHBA-2015:1758 0 normal SHIPPED_LIVE openstack-cinder bug fix advisory 2015-09-10 15:47:56 UTC

Description Gowrishankar Rajaiyan 2013-08-02 14:29:43 UTC
Description of problem: I provide a mount option ("-o selinux" in this case) in the shares file named by $glusterfs_shares_config (/etc/cinder/cinder-volume1.conf in this case) for a backend defined in the enabled_backends group, then restart the openstack-cinder-volume service. Even though the mount succeeds, I fail to see the mount option in the output of the "ps" command. This works, however, when I configure it in the DEFAULT section of cinder.conf.


Version-Release number of selected component (if applicable):
openstack-cinder-2013.1.2-3.el6ost.noarch
glusterfs-fuse-3.4.0.14rhs-1.el6.x86_64

How reproducible: Always


Steps to Reproduce:
1. Create a distributed-replicate volume and apply all the necessary volume options. [1]
2. Install openstack using "packstack --allinone"
3. Enable a backend that uses the glusterfs driver. [2]
4. Create /etc/cinder/cinder-volume1.conf and make sure it includes a mount option (-o selinux in this case). [3]
5. Restart the openstack-cinder-volume service
6. Verify that the specified mount options are applied, using the "ps" command.

Actual results:
# ps aux | grep glusterfs
root      1678  0.0  0.2 301788 42528 ?        Ssl  Jul31   0:03 /usr/sbin/glusterfs --volfile-id=/glance-vol --volfile-server=10.65.201.183 /var/lib/glance/images
root     28205  0.0  0.2 301656 41144 ?        Ssl  17:26   0:00 /usr/sbin/glusterfs --volfile-id=cinder-vol --volfile-server=10.65.201.183 /var/lib/cinder/cinder-volume1/586c24173ac3ab5d1d43aed1f113d9f6


However, I do see the mount option in cinder/volume.log:
2013-08-02 17:57:44    DEBUG [cinder.volume.drivers.nfs] shares loaded: {'10.65.201.183:cinder-vol': '-o selinux'}
2013-08-02 17:57:44    DEBUG [cinder.utils] Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf mount -t glusterfs 10.65.201.183:cinder-vol /var/lib/cinder/cinder-volume1/586c24173ac3ab5d1d43aed1f113d9f6 -o selinux


but not in the glusterfs log:
[2013-08-02 12:55:46.096230] I [glusterfsd.c:1970:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.4.0.14rhs (/usr/sbin/glusterfs --volfile-id=cinder-vol --volfile-server=10.65.201.183 /var/lib/cinder/cinder-volume1/586c24173ac3ab5d1d43aed1f113d9f6)



Also, this works as expected when executed manually or provided in the DEFAULT section of cinder.conf:
[root@dhcp201-146 ~(keystone_admin)]# sudo cinder-rootwrap /etc/cinder/rootwrap.conf mount -t glusterfs 10.65.201.183:cinder-vol /var/lib/cinder/cinder-volume1/586c24173ac3ab5d1d43aed1f113d9f6 -o selinux

[root@dhcp201-146 ~(keystone_admin)]# ps aux | grep glusterfs | grep -v glance | grep -v grep
root      1624  0.3  0.2 301656 39140 ?        Ssl  17:59   0:00 /usr/sbin/glusterfs --selinux --volfile-id=cinder-vol --volfile-server=10.65.201.183 /var/lib/cinder/cinder-volume1/586c24173ac3ab5d1d43aed1f113d9f6
[root@dhcp201-146 ~(keystone_admin)]# 

glusterfs log:
[2013-08-02 12:55:05.470278] I [glusterfsd.c:1970:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.4.0.14rhs (/usr/sbin/glusterfs --selinux --volfile-id=cinder-vol --volfile-server=10.65.201.183 /var/lib/cinder/cinder-volume1/586c24173ac3ab5d1d43aed1f113d9f6)



Expected results: Mount options should be applied when they are specified for backends defined in the enabled_backends group.


Additional info:
[1]
Volume Name: cinder-vol
Type: Distributed-Replicate
Volume ID: 2f4edaef-678b-492a-b972-bd95c1c490a3
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.65.201.183:/rhs/brick1/cinder-vol
Brick2: 10.65.201.223:/rhs/brick1/cinder-vol
Brick3: 10.65.201.183:/rhs/brick2/cinder-vol
Brick4: 10.65.201.223:/rhs/brick2/cinder-vol
Options Reconfigured:
storage.owner-gid: 165
storage.owner-uid: 165
network.remote-dio: enable
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off


[2]
relevant section from cinder.conf:
[DEFAULT]
debug=true
verbose=True
enabled_backends = GLUSTERFS_DRIVER1
[GLUSTERFS_DRIVER1]
volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
glusterfs_shares_config = /etc/cinder/cinder-volume1.conf
glusterfs_mount_point_base = /var/lib/cinder/cinder-volume1
volume_backend_name = glusterfs_cinder_volume1

[3]
# cat /etc/cinder/cinder-volume1.conf 
10.65.201.183:cinder-vol -o selinux

Comment 2 Gowrishankar Rajaiyan 2013-08-08 11:27:03 UTC
Unable to reproduce this on a fresh setup. Feel free to mark this as CLOSED WORKSFORME. I shall re-open if I hit it in the future.


[root@rhs-hpc-srv1 cinder]# ps aux | grep gluster
root      7827  0.0  0.1 347652 85272 ?        Ssl  09:41   0:00 /usr/sbin/glusterfs --acl --selinux --volfile-id=glance-vol --volfile-max-fetch-attempts=3 --volfile-server=10.70.43.44 /var/lib/glance/images/
root     12324  1.3  0.1 345544 77128 ?        Ssl  10:55   0:00 /usr/sbin/glusterfs --selinux --volfile-id=cinder-vol --volfile-server=10.70.43.47 /var/lib/cinder/volumes/c607a8395f811f8de0e5af0c4a1c1846
root     12326  1.2  0.1 347656 75356 ?        Ssl  10:55   0:00 /usr/sbin/glusterfs --acl --selinux --volfile-id=cinder-vol --volfile-max-fetch-attempts=3 --volfile-server=10.70.43.44 /var/lib/cinder/volumes/1d12e17a168a458a2db39ca37ee302fd

Comment 4 Deepak C Shetty 2014-03-28 11:08:11 UTC
I will start to look at this

Comment 6 Deepak C Shetty 2014-04-02 13:41:58 UTC
It looks like the root cause is that when we restart the cinder-vol service, it doesn't mount the gluster volume because the volume is already mounted: when we kill/stop cinder-vol, the existing gluster mounts persist, so during restart nothing is done as far as the mount point is concerned.

The fix for that would be to ensure that when the c-vol service is stopped/killed, the existing gluster cinder mounts are also unmounted.

When I did that manually, the specified mount options were seen in the glusterfs process, since on restart the cinder service re-mounted the volume and the -o option took effect.

Working on a patch for the same.

thanx,
deepak

Comment 7 Deepak C Shetty 2014-04-04 11:53:36 UTC
It seems cinder doesn't have a framework for doing cleanup when the cinder-volume service receives Ctrl-C or goes down/restarts.

I tried to use __del__ and atexit, but they don't seem to work as expected. More details about the issue I am seeing can be found at
http://lists.openstack.org/pipermail/openstack-dev/2014-April/031779.html
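
For illustration, a minimal sketch of the kind of atexit-based cleanup that was attempted (the helper name and mount point path below are hypothetical, not from the actual driver). One known limitation is that atexit handlers run only on normal interpreter exit, not when the process is killed by an unhandled signal, which makes this approach unreliable for service shutdown:

import atexit
import subprocess

def _umount_gluster_shares(mount_points):
    # Best-effort unmount of each gluster mount point on interpreter exit.
    for mnt in mount_points:
        # umount may fail if the mount point is busy; ignore errors here.
        subprocess.call(['umount', mnt])

# Hypothetical registration; real mount points would come from the
# driver's list of mounted shares.
atexit.register(_umount_gluster_shares,
                ['/var/lib/cinder/cinder-volume1/<share-hash>'])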

Comment 8 Deepak C Shetty 2014-04-11 13:06:05 UTC
It looks like, given the current cinder framework, the only thing that can be done to fix the problem described here is to force a umount and then re-mount during gluster driver startup.

I posted a patch to that effect @ https://review.openstack.org/#/c/86888/
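
In outline, the approach looks roughly like the following sketch (the helper name and arguments are illustrative; the merged patch linked above is the authoritative implementation):

import subprocess

def ensure_share_mounted(share, mount_point, mount_options=None):
    # Force an unmount first so that options changed in the shares config
    # file take effect on restart; ignore failure if nothing was mounted.
    subprocess.call(['umount', mount_point])

    cmd = ['mount', '-t', 'glusterfs', share, mount_point]
    if mount_options:
        # e.g. mount_options='selinux' adds '-o selinux' to the command
        cmd.extend(['-o', mount_options])
    subprocess.check_call(cmd)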

FWIW: cinder improvements for handling the shares config file, as documented in this blueprint:
https://blueprints.launchpad.net/cinder/+spec/remotefs-share-cfg-improvements
should provide a cleaner approach to handling such issues in the near future.

thanx,
deepak

Comment 10 Deepak C Shetty 2014-06-24 10:57:21 UTC
https://review.openstack.org/#/c/86888/ has been merged upstream.

Comment 18 lkuchlan 2015-09-02 07:14:12 UTC
Tested using:
openstack-cinder-2014.1.5-1.el7ost.noarch
python-cinderclient-1.0.9-1.el7ost.noarch
python-cinder-2014.1.5-1.el7ost.noarch

Verification instructions:
1. Create a distributed-replicate volume and apply all the necessary volume options.
2. Install openstack using "packstack --allinone"
3. Enable a backend that uses the glusterfs driver.
4. Create /etc/cinder/cinder-volume1.conf and make sure it includes a mount option (-o selinux in this case).
5. Restart the openstack-cinder-volume service
6. Verify that the specified mount options are applied, using the "ps" command.

Results:
# ps aux | grep gluster
root     21853  0.0  0.0 112640   928 pts/0    S+   10:05   0:00 grep --color=auto gluster

/var/log/cinder/volume.log:
2015-09-02 10:11:30.036 18766 DEBUG cinder.volume.drivers.glusterfs [-] Available shares: [u'10.35.160.6:/gluster_volumes/lkuchlan01'] _ensure_shares_mounted /usr/lib/python2.7/site-packages/cinder/volume/drivers/glusterfs.py

Comment 20 errata-xmlrpc 2015-09-10 11:48:05 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-1758.html

