The BZ is going to 4.1z1; why is this one still on ASSIGNED?
Hi Humble,

Please suggest a test scenario to verify this bug. I couldn't find it in the RHCS bug #1838931.

I just tried these commands.

# ceph fs subvolume snapshot ls ocs-storagecluster-cephfilesystem csi-vol-52e33199-fe84-11ea-8199-0a580a81020e --group_name csi
[
    {
        "name": "csi-snap-b9a577ab-fe84-11ea-8199-0a580a81020e"
    }
]

# ceph fs subvolume snapshot info ocs-storagecluster-cephfilesystem csi-vol-52e33199-fe84-11ea-8199-0a580a81020e csi-snap-b9a577ab-fe84-11ea-8199-0a580a81020e --group_name csi
{
    "created_at": "2020-09-24 16:40:59.587143",
    "data_pool": "ocs-storagecluster-cephfilesystem-data0",
    "has_pending_clones": "no",
    "protected": "yes",
    "size": 1073741824
}
(In reply to Jilju Joy from comment #6)
> Hi Humble,
>
> Please suggest a test scenario to verify this bug. I couldn't find it in the
> RHCS bug #1838931.
>
> I just tried these commands.
>
> # ceph fs subvolume snapshot ls ocs-storagecluster-cephfilesystem csi-vol-52e33199-fe84-11ea-8199-0a580a81020e --group_name csi
> [
>     {
>         "name": "csi-snap-b9a577ab-fe84-11ea-8199-0a580a81020e"
>     }
> ]
>
> # ceph fs subvolume snapshot info ocs-storagecluster-cephfilesystem csi-vol-52e33199-fe84-11ea-8199-0a580a81020e csi-snap-b9a577ab-fe84-11ea-8199-0a580a81020e --group_name csi
> {
>     "created_at": "2020-09-24 16:40:59.587143",
>     "data_pool": "ocs-storagecluster-cephfilesystem-data0",
>     "has_pending_clones": "no",
>     "protected": "yes",
>     "size": 1073741824
> }

This is good enough. More or less, the verification could have been done on the CephFS bug.

Also, is the above output coming from a RHCS 4.1.z2 cluster? If not, can you capture that as well?
(In reply to Humble Chirammal from comment #8)
> This is good enough. More or less, the verification could have been done on
> the CephFS bug.
>
> Also, is the above output coming from a RHCS 4.1.z2 cluster? If not, can you
> capture that as well?

The above output was taken from:

# ceph version
ceph version 14.2.8-91.el8cp (75b4845da7d469665bd48d1a49badcc3677bf5cd) nautilus (stable)

ocs-operator.v4.6.0-98.ci
Cluster version is 4.6.0-0.nightly-2020-09-24-074159
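For anyone automating this verification later, the snapshot-state check above can be scripted. This is only a sketch, not part of the verification that was actually run: the JSON is the sample output quoted in this bug (on a live cluster it would come from `ceph fs subvolume snapshot info`), and the `grep`-based field check is a crude stand-in for real JSON parsing.

```shell
#!/bin/sh
# Sample `ceph fs subvolume snapshot info` output, copied from this bug.
# On a live cluster this would instead be captured with something like:
#   INFO=$(ceph fs subvolume snapshot info "$FS" "$SUBVOL" "$SNAP" --group_name csi)
INFO='{
    "created_at": "2020-09-24 16:40:59.587143",
    "data_pool": "ocs-storagecluster-cephfilesystem-data0",
    "has_pending_clones": "no",
    "protected": "yes",
    "size": 1073741824
}'

# Crude check that the snapshot has no pending clones; real tooling
# should parse the JSON rather than grep it.
if printf '%s\n' "$INFO" | grep -q '"has_pending_clones": "no"'; then
    echo "snapshot ready"
else
    echo "snapshot has pending clones" >&2
    exit 1
fi
```

The same pattern extends to the other fields (`protected`, `data_pool`) if a regression test needs to pin them down.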
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: Red Hat OpenShift Container Storage 4.6.0 security, bug fix, enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:5605