Bug 1693944 - [RBD] - nautilus: cannot open images against luminous cluster
Summary: [RBD] - nautilus: cannot open images against luminous cluster
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RBD
Version: 4.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: 4.0
Assignee: Jason Dillaman
QA Contact: Gopi
URL:
Whiteboard:
Depends On: 1661283
Blocks:
 
Reported: 2019-03-29 06:23 UTC by Vasishta
Modified: 2020-01-31 12:46 UTC
CC: 9 users

Fixed In Version: ceph-14.2.1-385.g4ae8136.el8cp
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-01-31 12:45:57 UTC
Embargoed:




Links:
Ceph Project Bug Tracker 38834 (2019-03-29 12:38:30 UTC)
Red Hat Product Errata RHBA-2020:0312 (2020-01-31 12:46:10 UTC)

Description Vasishta 2019-03-29 06:23:49 UTC
Description of problem:
Unable to open images on a Luminous cluster from a Nautilus client.

Version-Release number of selected component (if applicable):
14.2.0

How reproducible:
Always

Steps to Reproduce:
1. From a Nautilus client, try to open an image on a Luminous cluster.


Actual results:
$ sudo rbd bench --io-type write --io-total 10M from_rhcs4_client/image_1
2019-03-29 06:22:04.050 7f99ce7fc700 -1 librbd::image::RefreshRequest: failed to retrieve group: (95) Operation not supported
2019-03-29 06:22:04.051 7f99cdffb700 -1 librbd::image::OpenRequest: failed to refresh image: (95) Operation not supported
2019-03-29 06:22:04.051 7f99cdffb700 -1 librbd::ImageState: 0x55e72c570570 failed to open image: (95) Operation not supported
rbd: error opening image image_1: (95) Operation not supported


Expected results:
The client should be able to open the image.
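For triage, the error code in the log above, (95), is EOPNOTSUPP on Linux, which is consistent with the Luminous OSDs not supporting the newer RBD group metadata call issued by the Nautilus client's RefreshRequest. A small sketch to decode the code (the errno_name helper is hypothetical, not part of any Ceph tooling):

```shell
# Hypothetical helper: map the errno from the librbd log lines to its
# symbolic name. 95 is EOPNOTSUPP on Linux, returned here because the
# Luminous OSDs do not implement the group call the Nautilus client makes.
errno_name() {
  case "$1" in
    95) echo "EOPNOTSUPP (Operation not supported)" ;;
    2)  echo "ENOENT (No such file or directory)" ;;
    *)  echo "errno $1 (unknown to this helper)" ;;
  esac
}
errno_name 95   # prints: EOPNOTSUPP (Operation not supported)
```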

Comment 1 Jeffrey C. Ollie 2019-05-05 23:00:26 UTC
I'm having similar problems, but with Kubernetes PVCs after upgrading one of my Kubernetes nodes to Fedora 30.

Comment 2 Giridhar Ramaraju 2019-08-05 13:06:24 UTC
Updating the QA Contact to Hemant. Hemant will reroute this to the appropriate QE Associate.

Regards,
Giri

Comment 3 Giridhar Ramaraju 2019-08-05 13:09:04 UTC
Updating the QA Contact to Hemant. Hemant will reroute this to the appropriate QE Associate.

Regards,
Giri

Comment 8 Gopi 2019-12-24 06:00:33 UTC
Verified the bug on a 3.x cluster with a 4.0 client; it is working as expected.
[root@f23-h33-000-6018r ceph]# rbd bench --io-type write g_data/g_image --cluster site-a -n client.site-a
bench  type write io_size 4096 io_threads 16 bytes 1073741824 pattern sequential
  SEC       OPS   OPS/SEC   BYTES/SEC
    1     15104  15014.82  61500696.55
    2     28080  14033.89  57482829.19
    3     40368  13398.74  54881225.32
    4     51680  12811.83  52477255.70
    5     65168  12905.10  52859293.32
    6     75312  12060.84  49401181.77
    7     85744  11434.40  46835322.02
    8     94784  10771.12  44118525.22
    9    105584  10830.57  44361994.82
   10    117088  10493.07  42979633.39
   11    126320   9964.39  40814161.59

3.x cluster details:
ceph version 12.2.12-83.el7cp (4b34b893d114fdf8fcbf17368a3702cbfcf668d6) luminous (stable)
ceph-mon-12.2.12-83.el7cp.x86_64
ceph-ansible-3.2.36-1.el7cp.noarch

4.0 client details:
ceph version 14.2.4-91.el8cp (23607558df3b077b6190cdf96cd8d9043aa2a1c5) nautilus (stable)
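To reduce the per-second table above to a single throughput figure, the OPS/SEC column can be averaged; a small awk sketch (fed a verbatim copy of the bench output, since the benchmark is not re-run here):

```shell
# Average the OPS/SEC column (3rd field) of the bench output above.
awk '{ sum += $3; n++ } END { printf "%.0f\n", sum / n }' <<'EOF'
    1     15104  15014.82  61500696.55
    2     28080  14033.89  57482829.19
    3     40368  13398.74  54881225.32
    4     51680  12811.83  52477255.70
    5     65168  12905.10  52859293.32
    6     75312  12060.84  49401181.77
    7     85744  11434.40  46835322.02
    8     94784  10771.12  44118525.22
    9    105584  10830.57  44361994.82
   10    117088  10493.07  42979633.39
   11    126320   9964.39  40814161.59
EOF
# prints: 12156
```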

Comment 11 errata-xmlrpc 2020-01-31 12:45:57 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0312

