Bug 1724366
| Summary: | pybind: luminous volume client breaks against nautilus cluster | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Ram Raja <rraja> |
| Component: | CephFS | Assignee: | Ram Raja <rraja> |
| Status: | CLOSED ERRATA | QA Contact: | subhash <vpoliset> |
| Severity: | low | Docs Contact: | |
| Priority: | high | ||
| Version: | 3.2 | CC: | ceph-eng-bugs, ceph-qe-bugs, pdonnell, sweil, tchandra, tserlin |
| Target Milestone: | z1 | ||
| Target Release: | 3.3 | ||
| Hardware: | All | ||
| OS: | All | ||
| Whiteboard: | |||
| Fixed In Version: | RHEL: ceph-12.2.12-55.el7cp Ubuntu: ceph_12.2.12-50redhat1 | Doc Type: | If docs needed, set a value |
| Doc Text: | Story Points: | --- | |
| Clone Of: | Environment: | ||
| Last Closed: | 2019-10-22 13:29:00 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | Category: | --- | |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | |||
Updating the QA Contact to Hemant. Hemant will be rerouting them to the appropriate QE Associate. Regards, Giri

Level setting the severity of this defect to "High" with a bulk update. Please refine it to a closer value, as defined by the severity definition in https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:3173
Description of problem:
Due to the removal of the 'ceph mds dump' command in nautilus (14.x), a luminous (12.x or RHCS 3.x) ceph_volume_client does not work against a nautilus cluster. This breaks some version combinations of OpenStack cloud and Ceph. Here's a log fragment from manila:

```
2019-05-23 09:56:50.763 INFO manila.share.drivers.cephfs.driver [req-34c1c009-cf00-48e8-ab3a-e19ea4bc8df8 None None] [CEPHFSNATIVE1}] Ceph client found, connecting...
2019-05-23 09:56:50.802 INFO ceph_volume_client [req-34c1c009-cf00-48e8-ab3a-e19ea4bc8df8 None None] evict clients with auth_name=manila
2019-05-23 09:56:50.872 ERROR manila.share.manager [req-34c1c009-cf00-48e8-ab3a-e19ea4bc8df8 None None] Error encountered during initialization of driver CephFSDriver.24.218@cephfsnative1: Error: command is obsolete; please check usage and/or man page
2019-05-23 09:56:50.872 TRACE manila.share.manager Traceback (most recent call last):
2019-05-23 09:56:50.872 TRACE manila.share.manager   File "/usr/lib/python2.7/site-packages/manila/share/manager.py", line 305, in _driver_setup
2019-05-23 09:56:50.872 TRACE manila.share.manager     self.driver.do_setup(ctxt)
2019-05-23 09:56:50.872 TRACE manila.share.manager   File "/usr/lib/python2.7/site-packages/manila/share/drivers/cephfs/driver.py", line 144, in do_setup
2019-05-23 09:56:50.872 TRACE manila.share.manager     ceph_vol_client=self.volume_client)
2019-05-23 09:56:50.872 TRACE manila.share.manager   File "/usr/lib/python2.7/site-packages/manila/share/drivers/cephfs/driver.py", line 216, in volume_client
2019-05-23 09:56:50.872 TRACE manila.share.manager     self._volume_client.connect(premount_evict=premount_evict)
2019-05-23 09:56:50.872 TRACE manila.share.manager   File "/usr/lib/python2.7/site-packages/ceph_volume_client.py", line 474, in connect
2019-05-23 09:56:50.872 TRACE manila.share.manager     self.evict(premount_evict)
2019-05-23 09:56:50.872 TRACE manila.share.manager   File "/usr/lib/python2.7/site-packages/ceph_volume_client.py", line 399, in evict
2019-05-23 09:56:50.872 TRACE manila.share.manager     mds_map = self._rados_command("mds dump", {})
2019-05-23 09:56:50.872 TRACE manila.share.manager   File "/usr/lib/python2.7/site-packages/ceph_volume_client.py", line 1340, in _rados_command
2019-05-23 09:56:50.872 TRACE manila.share.manager     raise rados.Error(outs)
2019-05-23 09:56:50.872 TRACE manila.share.manager Error: command is obsolete; please check usage and/or man page
```

We'd want luminous' ceph_volume_client to also work with the later upstream stable release, nautilus.

Version-Release number of selected component (if applicable):

How reproducible: always

Steps to Reproduce:
1. Set up a luminous cluster.
2. Connect luminous's (RHCS 3.x) ceph_volume_client to the luminous cluster. This should work. Test this by running the following in the Python interpreter on a client node with the Ceph 'admin' keyring and ceph.conf at their default locations. Note: the client node must have the `python-cephfs` package installed.

```
>>> import ceph_volume_client
>>> vc = ceph_volume_client.CephFSVolumeClient('admin', '/etc/ceph/ceph.conf', 'ceph')
>>> vc.connect(premount_evict='admin')
>>> vc.disconnect()
```

3. Connect luminous's ceph_volume_client to an upstream nautilus cluster. This will break, but it should also work.

Actual results: luminous's (RHCS 3.x) ceph_volume_client doesn't work against a nautilus (latest upstream stable) cluster.

Expected results: luminous's (RHCS 3.x) ceph_volume_client works against a nautilus (latest upstream stable) cluster.
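The compatible fix would make the client tolerate the removed command rather than require a newer cluster. The following is a minimal sketch of that idea, not the actual patch: it assumes the client can catch the "command is obsolete" error from 'mds dump' and fall back to 'fs dump', whose output in nautilus carries a per-filesystem mdsmap. The helper names (`get_mds_map`, `nautilus_cluster`) and the fake cluster responses are hypothetical stand-ins for `CephFSVolumeClient._rados_command` and real mon output.

```python
class ObsoleteCommandError(Exception):
    """Stand-in for rados.Error raised when a mon command is obsolete."""


def get_mds_map(rados_command):
    """Fetch an MDS map from a cluster of either vintage.

    `rados_command(prefix)` is a hypothetical stand-in for
    CephFSVolumeClient._rados_command; it returns parsed JSON.
    """
    try:
        # Works on luminous (12.x); the command was removed in nautilus (14.x).
        return rados_command("mds dump")
    except ObsoleteCommandError:
        # Nautilus: 'fs dump' returns the FSMap; each filesystem entry
        # carries its own mdsmap. (Assumes a single-filesystem cluster.)
        fs_map = rados_command("fs dump")
        return fs_map["filesystems"][0]["mdsmap"]


def nautilus_cluster(prefix):
    """Fake cluster that behaves like nautilus: rejects 'mds dump'."""
    if prefix == "mds dump":
        raise ObsoleteCommandError(
            "command is obsolete; please check usage and/or man page")
    if prefix == "fs dump":
        return {"filesystems": [{"mdsmap": {"fs_name": "cephfs"}}]}
    raise ValueError(prefix)
```

With this shape, the same `evict` path works against both clusters: the luminous branch returns the 'mds dump' output directly, and the nautilus branch extracts the equivalent map from 'fs dump'.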