Bug 990073 - [RHS-RHOS] openstack-cinder-volume fails to mount RHS client with updated RHS server and volume
Status: CLOSED UPSTREAM
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-cinder
3.0
Unspecified Unspecified
medium Severity medium
: Upstream M3
: 7.0 (Kilo)
Assigned To: Eric Harney
Dafna Ron
https://blueprints.launchpad.net/cind...
upstream_milestone_kilo-3 upstream_de...
:
Depends On:
Blocks:
 
Reported: 2013-07-30 06:33 EDT by Anush Shetty
Modified: 2016-04-26 23:51 EDT (History)
15 users

See Also:
Fixed In Version:
Doc Type: Known Issue
Doc Text:
The Block Storage GlusterFS volume driver configuration does not provide a way to move a GlusterFS server from one address to another. As a result, attempting to do so causes Block Storage to treat the new address as a new GlusterFS server while still trying to access volume data at the old location. Workaround: update the provider_location field of the cinder database's volume table to point to the desired location for the volumes you are moving.
Story Points: ---
Clone Of:
Environment:
virt rhos cinder integration
Last Closed: 2014-08-26 15:38:24 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments (Terms of Use)
Sosreport for openstack cinder (12.85 MB, application/x-xz)
2013-07-30 06:33 EDT, Anush Shetty

Description Anush Shetty 2013-07-30 06:33:28 EDT
Created attachment 780536 [details]
Sosreport for openstack cinder

Description of problem: After updating /etc/cinder/shares.conf with the new RHS volume credentials and restarting the openstack-cinder services, the old RHS volume credentials are still being used when mounting the client.


Version-Release number of selected component (if applicable):

Cinder: openstack-cinder-2013.1.2-3.el6ost.noarch
RHS: glusterfs-3.4.0.12rhs.beta6-1.el6rhs.x86_64


How reproducible: Consistently

Steps to Reproduce:
1. Create a new 2x2 Distributed-Replicate RHS volume

2. Unmount the old RHS cinder volume mount.

3. Update /etc/cinder/shares.conf with the new RHS credentials:
   # cat /etc/cinder/shares.conf
   10.70.37.49:cinder-vol

4. Restart the cinder services:
   for i in api scheduler volume; do sudo service openstack-cinder-${i} start; done

Actual results:

openstack-cinder-volume tries mounting with the old RHS volume credentials.

The updated RHS server is 10.70.37.49, which is what is listed in /etc/cinder/shares.conf, but the mount command seen in /var/log/cinder/volume.log still uses the old server 10.70.37.66 to mount the RHS volume.

Expected results:

Should mount with the updated RHS volume server credentials

Additional info:
# gluster volume info cinder-vol
 
Volume Name: cinder-vol
Type: Distributed-Replicate
Volume ID: 50f8fa6d-50e6-4212-af3a-ef5b49cb3e94
Status: Created
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.37.49:/cinder4/s1
Brick2: 10.70.37.120:/cinder4/s1
Brick3: 10.70.37.132:/cinder4/s1
Brick4: 10.70.37.208:/cinder4/s1
Options Reconfigured:
network.remote-dio: enable
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
storage.owner-gid: 165
storage.owner-uid: 165

# gluster peer status
Number of Peers: 3

Hostname: 10.70.37.120
Uuid: 2e35a7ec-6c4b-4942-8470-40649851dfdf
State: Peer in Cluster (Connected)

Hostname: 10.70.37.132
Uuid: d92d9ce8-4d1b-4d35-982b-5b25e55eafa1
State: Peer in Cluster (Connected)

Hostname: 10.70.37.208
Uuid: d7f6783a-d5db-453c-bd59-c5a4e8769935
State: Peer in Cluster (Connected)



# date
Tue Jul 30 12:53:12 IST 2013

# for i in api scheduler volume; do sudo service openstack-cinder-${i} start; done

# tail /var/log/cinder/volume.log

Command: sudo cinder-rootwrap /etc/cinder/rootwrap.conf mount -t glusterfs 10.70.37.66:cinder-vol /var/lib/cinder/volumes/cf55327cba40506e44b37f45f55af5e7
Exit code: 1
Stdout: 'Mount failed. Please check the log file for more details.\n'
Stderr: ''
2013-07-30 12:07:13     INFO [cinder.service] Child 30642 exited with status 2
2013-07-30 12:07:13     INFO [cinder.service] _wait_child 1
2013-07-30 12:07:13     INFO [cinder.service] wait wrap.failed True
2013-07-30 12:53:20     INFO [cinder.service] Starting 1 workers
2013-07-30 12:53:20     INFO [cinder.service] Started child 24501
2013-07-30 12:53:20    AUDIT [cinder.service] Starting cinder-volume node (version 2013.1.2)
2013-07-30 12:53:20    DEBUG [cinder.utils] Running cmd (subprocess): mount.glusterfs
2013-07-30 12:53:20    DEBUG [cinder.utils] Result was 1
2013-07-30 12:53:21    DEBUG [cinder.utils] backend <module 'cinder.db.sqlalchemy.api' from '/usr/lib/python2.6/site-packages/cinder/db/sqlalchemy/api.pyc'>
2013-07-30 12:53:21    DEBUG [cinder.volume.manager] Re-exporting 5 volumes
2013-07-30 12:53:21    DEBUG [cinder.utils] Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf stat /var/lib/cinder/volumes/cf55327cba40506e44b37f45f55af5e7
2013-07-30 12:53:21    DEBUG [cinder.utils] Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf mount -t glusterfs 10.70.37.66:cinder-vol /var/lib/cinder/volumes/cf55327cba40506e44b37f45f55af5e7
2013-07-30 12:53:24    DEBUG [cinder.utils] Result was 1
2013-07-30 12:53:24    ERROR [cinder.service] Unhandled exception
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/cinder/service.py", line 227, in _start_child
    self._child_process(wrap.server)
  File "/usr/lib/python2.6/site-packages/cinder/service.py", line 204, in _child_process
    launcher.run_server(server)
  File "/usr/lib/python2.6/site-packages/cinder/service.py", line 95, in run_server
    server.start()
  File "/usr/lib/python2.6/site-packages/cinder/service.py", line 355, in start
    self.manager.init_host()
  File "/usr/lib/python2.6/site-packages/cinder/volume/manager.py", line 149, in init_host
    self.driver.ensure_export(ctxt, volume)
  File "/usr/lib/python2.6/site-packages/cinder/volume/drivers/glusterfs.py", line 128, in ensure_export
    self._ensure_share_mounted(volume['provider_location'])
  File "/usr/lib/python2.6/site-packages/cinder/volume/drivers/glusterfs.py", line 189, in _ensure_share_mounted
    self._mount_glusterfs(glusterfs_share, mount_path, ensure=True)
  File "/usr/lib/python2.6/site-packages/cinder/volume/drivers/glusterfs.py", line 256, in _mount_glusterfs
    self._execute(*command, run_as_root=True)
  File "/usr/lib/python2.6/site-packages/cinder/utils.py", line 190, in execute
    cmd=' '.join(cmd))
ProcessExecutionError: Unexpected error while running command.
Command: sudo cinder-rootwrap /etc/cinder/rootwrap.conf mount -t glusterfs 10.70.37.66:cinder-vol /var/lib/cinder/volumes/cf55327cba40506e44b37f45f55af5e7
Exit code: 1
Stdout: 'Mount failed. Please check the log file for more details.\n'
Stderr: ''
2013-07-30 12:53:24     INFO [cinder.service] Child 24501 exited with status 2
2013-07-30 12:53:24     INFO [cinder.service] _wait_child 1
2013-07-30 12:53:24     INFO [cinder.service] wait wrap.failed True
Comment 2 Anush Shetty 2013-07-30 06:52:49 EDT
Correction-

Description of problem: After updating /etc/cinder/shares.conf with the new RHS volume details and restarting the openstack-cinder services, the old RHS volume server is still being used when mounting the client.

Additional info in "Steps to Reproduce":

2. Umount old RHS cinder volume mount.

  # cat /etc/cinder/shares.conf
   10.70.37.66:cinder-vol

   # /var/log/cinder/volume.log-20130728

    2013-07-25 15:57:30    DEBUG [cinder.utils] Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf mount -t glusterfs 10.70.37.66:cinder-vol /var/lib/cinder/volumes/cf55327cba40506e44b37f45f55af5e7

   # umount /var/lib/nova/mnt/cf55327cba40506e44b37f45f55af5e7
   # umount /var/lib/cinder/volumes/cf55327cba40506e44b37f45f55af5e7
Comment 3 Eric Harney 2013-08-16 09:59:04 EDT
You need to change the line in shares.conf from
10.70.37.66:cinder-vol
to
10.70.37.66:/cinder-vol

which is the same format as mount -t glusterfs on the command line would take.  Probably not a bug.
Comment 4 Gowrishankar Rajaiyan 2013-08-19 01:44:40 EDT
GlusterFS can be mounted using both "10.70.37.66:cinder-vol" and "10.70.37.66:/cinder-vol"
Comment 6 Ayal Baron 2013-08-20 05:52:20 EDT
(In reply to Gowrishankar Rajaiyan from comment #4)
> GlusterFS can be mounted using both "10.70.37.66:cinder-vol" and
> "10.70.37.66:/cinder-vol"

but does it work properly if you add the '/'? i.e. is it just that cinder does not support a path without the '/' or is it something else?
Comment 7 Gowrishankar Rajaiyan 2013-08-20 07:17:56 EDT
[root@rhs-hpc-srv1 ~]# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sdb1            1149638772   1540024 1089700492   1% /
tmpfs                 24697036         0  24697036   0% /dev/shm
/dev/sdf1             30774208    215992  28994980   1% /boot
/dev/sdc1            1149638772    444828 1090795688   1% /var
10.70.43.44:/glance-vol
                     314382336    343168 314039168   1% /var/lib/glance/images
10.70.43.44:cinder-vol
                     314382336    343168 314039168   1% /var/lib/cinder/volumes/1d12e17a168a458a2db39ca37ee302fd


It works fine. Problem is when you update /etc/cinder/shares.conf with another IP.
Comment 8 Divya 2013-09-10 08:00:38 EDT
Eric,

This bug has been identified as a known issue for Big Bend release. Please provide CCFR information in the Doc Text field.
Comment 9 Eric Harney 2013-09-10 13:39:32 EDT
(In reply to Divya from comment #8)
> This bug has been identified as a known issue for Big Bend release. Please
> provide CCFR information in the Doc Text field.

This bug is still under investigation.
Comment 10 Eric Harney 2013-09-10 13:41:41 EDT
(In reply to Anush Shetty from comment #2)
> Correction-
> 
> Description of problem: After updating the /etc/cinder/shares.conf with the
> new RHS volume details and restarting the openstack-cinder services, we see
> that still the old RHS volume server is being used while mounting the
> client. 
> 
...
>     2013-07-25 15:57:30    DEBUG [cinder.utils] Running cmd (subprocess):
> sudo cinder-rootwrap /etc/cinder/rootwrap.conf mount -t glusterfs
> 10.70.37.66:cinder-vol
> /var/lib/cinder/volumes/cf55327cba40506e44b37f45f55af5e7
> 

I suspect that what you may be seeing here is Cinder's attempt to remount the share based on the volume's provider_location stored in the database.  This db field is probably not updated after you have changed the shares config file.
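
The Doc Text workaround (repointing provider_location in the cinder database) amounts to a single SQL UPDATE. The sketch below illustrates it against an in-memory SQLite stand-in for the volumes table; in a real deployment the statement would be run against the MySQL cinder database, and the table and column names here follow the Doc Text rather than being verified against a specific Cinder release. The share strings are the ones from this report.

```python
import sqlite3

def repoint_volumes(conn, old_share, new_share):
    """Apply the Doc Text workaround: update provider_location for
    volumes whose data has moved from old_share to new_share."""
    cur = conn.execute(
        "UPDATE volumes SET provider_location = ? WHERE provider_location = ?",
        (new_share, old_share),
    )
    conn.commit()
    return cur.rowcount  # number of volumes repointed

# In-memory stand-in for the cinder database (real deployments use MySQL).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE volumes (id TEXT, provider_location TEXT)")
conn.execute("INSERT INTO volumes VALUES ('vol-1', '10.70.37.66:cinder-vol')")

# Repoint the volume to the server now listed in /etc/cinder/shares.conf.
moved = repoint_volumes(conn, "10.70.37.66:cinder-vol", "10.70.37.49:cinder-vol")
```

After the update, a restart of openstack-cinder-volume would remount using the new provider_location instead of the stale one seen in the traceback above.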
Comment 11 Deepak C Shetty 2014-03-26 05:05:26 EDT
Eric,
    I was reading through this bug and was wondering whether changing shares.conf (replacing the old gluster volume with a new one) should be allowed only when none of the cinder volumes served by the old gluster volume are attached to any instance; if any are attached, cinder should reject the change and revert to the old shares.conf.

In general, this applies to any use case where the backend is changed after it has been used once.
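
The policy proposed here could look roughly like the sketch below. All names (`validate_share_change`, `attached_counts`) are hypothetical, not Cinder APIs; it only illustrates the check "reject removing a share while volumes on it are attached."

```python
def validate_share_change(old_shares, new_shares, attached_counts):
    """Reject removing a gluster share while volumes on it are attached.

    attached_counts maps share string -> number of volumes currently
    attached to instances (hypothetical bookkeeping; Cinder tracks
    attachment state in its own database tables).
    """
    removed = set(old_shares) - set(new_shares)
    blocked = [s for s in removed if attached_counts.get(s, 0) > 0]
    if blocked:
        # Keep the old configuration rather than orphaning attached volumes.
        raise ValueError("shares still in use: %s" % ", ".join(sorted(blocked)))
    return list(new_shares)

# Example: moving cinder-vol from .66 to .49 while one volume is attached.
try:
    validate_share_change(
        ["10.70.37.66:cinder-vol"],
        ["10.70.37.49:cinder-vol"],
        {"10.70.37.66:cinder-vol": 1},
    )
    rejected = False
except ValueError:
    rejected = True
```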
