Bug 1724366 - pybind: luminous volume client breaks against nautilus cluster
Summary: pybind: luminous volume client breaks against nautilus cluster
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: CephFS
Version: 3.2
Hardware: All
OS: All
Priority: high
Severity: low
Target Milestone: z1
Target Release: 3.3
Assignee: Ram Raja
QA Contact: subhash
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-06-26 21:49 UTC by Ram Raja
Modified: 2019-10-22 13:29 UTC
CC: 6 users

Fixed In Version: RHEL: ceph-12.2.12-55.el7cp Ubuntu: ceph_12.2.12-50redhat1
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-10-22 13:29:00 UTC
Embargoed:




Links
Ceph Project Bug Tracker 40182 (last updated 2019-06-26 21:49:37 UTC)
Red Hat Product Errata RHBA-2019:3173 (last updated 2019-10-22 13:29:14 UTC)

Description Ram Raja 2019-06-26 21:49:38 UTC
Description of problem:
Due to the removal of the 'ceph mds dump' command in nautilus (14.x), a luminous (12.x, i.e. RHCS 3.x) ceph_volume_client does not work against a nautilus cluster. This breaks some version combinations of OpenStack and Ceph.
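The incompatibility can be seen directly at the librados level by issuing the same monitor command the volume client sends internally. A minimal sketch, assuming the `rados` Python binding is installed and ceph.conf plus the admin keyring are at their default locations (against a nautilus monitor the command is rejected as obsolete):

```
import json
import rados

# Connect as client.admin using the default config.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

# This is the command a luminous ceph_volume_client issues; nautilus
# monitors no longer accept it.
ret, outbuf, outs = cluster.mon_command(
    json.dumps({'prefix': 'mds dump', 'format': 'json'}), b'')
print(ret, outs)  # on nautilus: nonzero ret, "command is obsolete ..."

cluster.shutdown()
```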

Here's a log fragment from manila:

```
2019-05-23 09:56:50.763 INFO manila.share.drivers.cephfs.driver [req-34c1c009-cf00-48e8-ab3a-e19ea4bc8df8 None None] [CEPHFSNATIVE1}] Ceph client found, connecting...
2019-05-23 09:56:50.802 INFO ceph_volume_client [req-34c1c009-cf00-48e8-ab3a-e19ea4bc8df8 None None] evict clients with auth_name=manila
2019-05-23 09:56:50.872 ERROR manila.share.manager [req-34c1c009-cf00-48e8-ab3a-e19ea4bc8df8 None None] Error encountered during initialization of driver CephFSDriver.24.218@cephfsnative1: Error: command is obsolete; please check usage and/or man page
2019-05-23 09:56:50.872 TRACE manila.share.manager Traceback (most recent call last):
2019-05-23 09:56:50.872 TRACE manila.share.manager   File "/usr/lib/python2.7/site-packages/manila/share/manager.py", line 305, in _driver_setup
2019-05-23 09:56:50.872 TRACE manila.share.manager     self.driver.do_setup(ctxt)
2019-05-23 09:56:50.872 TRACE manila.share.manager   File "/usr/lib/python2.7/site-packages/manila/share/drivers/cephfs/driver.py", line 144, in do_setup
2019-05-23 09:56:50.872 TRACE manila.share.manager     ceph_vol_client=self.volume_client)
2019-05-23 09:56:50.872 TRACE manila.share.manager   File "/usr/lib/python2.7/site-packages/manila/share/drivers/cephfs/driver.py", line 216, in volume_client
2019-05-23 09:56:50.872 TRACE manila.share.manager     self._volume_client.connect(premount_evict=premount_evict)
2019-05-23 09:56:50.872 TRACE manila.share.manager   File "/usr/lib/python2.7/site-packages/ceph_volume_client.py", line 474, in connect
2019-05-23 09:56:50.872 TRACE manila.share.manager     self.evict(premount_evict)
2019-05-23 09:56:50.872 TRACE manila.share.manager   File "/usr/lib/python2.7/site-packages/ceph_volume_client.py", line 399, in evict
2019-05-23 09:56:50.872 TRACE manila.share.manager     mds_map = self._rados_command("mds dump", {})
2019-05-23 09:56:50.872 TRACE manila.share.manager   File "/usr/lib/python2.7/site-packages/ceph_volume_client.py", line 1340, in _rados_command
2019-05-23 09:56:50.872 TRACE manila.share.manager     raise rados.Error(outs)
2019-05-23 09:56:50.872 TRACE manila.share.manager Error: command is obsolete; please check usage and/or man page
```
We want luminous's ceph_volume_client to also work with the later upstream stable release, nautilus.
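One way to get that compatibility, sketched below purely for illustration (the helper name `get_mds_map` and the FSMap field handling are assumptions, not the shipped patch), is to fall back to 'fs dump' when the monitor rejects 'mds dump'; in nautilus the MDS map is carried per filesystem inside the FSMap:

```
import json

def get_mds_map(cluster, fs_name=None):
    """Fetch an MDS map from either a pre-nautilus or a nautilus cluster.

    'cluster' is a connected rados.Rados handle. Sketch only; the real
    fix in ceph_volume_client may differ in detail.
    """
    # Pre-nautilus path: 'mds dump' still exists.
    ret, outbuf, outs = cluster.mon_command(
        json.dumps({'prefix': 'mds dump', 'format': 'json'}), b'')
    if ret == 0:
        return json.loads(outbuf.decode('utf-8'))

    # Nautilus removed 'mds dump'; read the FSMap via 'fs dump' instead.
    ret, outbuf, outs = cluster.mon_command(
        json.dumps({'prefix': 'fs dump', 'format': 'json'}), b'')
    if ret != 0:
        raise RuntimeError(outs)
    fsmap = json.loads(outbuf.decode('utf-8'))
    for fs in fsmap['filesystems']:
        if fs_name is None or fs['mdsmap']['fs_name'] == fs_name:
            return fs['mdsmap']
    raise RuntimeError('no matching filesystem in FSMap')
```

Whatever the final patch looks like, the point is the same: the client must stop hard-coding the obsolete command.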


Version-Release number of selected component (if applicable):


How reproducible: always


Steps to Reproduce:
1. Set up a luminous cluster.

2. Connect luminous's (RHCS 3.x) ceph_volume_client to the luminous cluster. This should work.
Test this by running the following in a Python interpreter on the client node, with the Ceph 'admin' keyring and ceph.conf at their default locations. Note: the client node must have the `python-cephfs` package installed.
```
>>> import ceph_volume_client
>>> # args: auth ID, path to ceph.conf, cluster name
>>> vc = ceph_volume_client.CephFSVolumeClient('admin', '/etc/ceph/ceph.conf', 'ceph')
>>> vc.connect(premount_evict='admin')  # evict stale 'admin' sessions, then mount
>>> vc.disconnect()
```


3. Connect luminous's ceph_volume_client to an upstream nautilus cluster. This currently breaks (see the expected failure below), but it should also work.
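Based on the manila traceback above, the failure in step 3 is expected to surface in connect() on the eviction path, roughly like this (traceback trimmed):

```
>>> import ceph_volume_client
>>> vc = ceph_volume_client.CephFSVolumeClient('admin', '/etc/ceph/ceph.conf', 'ceph')
>>> vc.connect(premount_evict='admin')
Traceback (most recent call last):
  ...
rados.Error: command is obsolete; please check usage and/or man page
```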

Actual results: luminous's (RHCS 3.x) ceph_volume_client doesn't work with a nautilus (latest upstream stable) cluster.


Expected results: luminous's (RHCS 3.x) ceph_volume_client works with a nautilus (latest upstream stable) cluster.

Comment 1 Giridhar Ramaraju 2019-08-05 13:10:06 UTC
Updating the QA Contact to Hemant. Hemant will reroute this to the appropriate QE Associate.

Regards,
Giri

Comment 3 Giridhar Ramaraju 2019-08-20 06:58:00 UTC
Setting the severity of this defect to "High" with a bulk update. Please refine it to a more accurate value, as defined by the severity definitions in https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity

Comment 11 errata-xmlrpc 2019-10-22 13:29:00 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:3173

