In Ceph CSI, there are cases where an existing subvolume needs to be inspected for a match against an incoming request. There are also possible future cases where we may need to list all CSI subvolumes and return their metadata to form the CSI context. A couple of examples:

- An interrupted CreateVolume call in CSI can result in needing to inspect the already created subvolume to see if its size matches the request
- An interrupted CreateVolume call in CSI can result in needing to determine the already created subvolume's data pool extended attribute, to address the topology response on a subsequent request for the same

Filing this issue to request a "subvolume info" extension to the CLI, possibly along the lines below:

ceph fs subvolume info <vol_name> <subvol_name> [--group_name <subvol_group_name>] [--format <json|???>]

The returned metadata needs to contain:

- Size (or quota set)
- Data pool (if set, or the default, etc.)
- CreatedAt timestamp
- Any other attributes, to be future safe

Also, a similar info request is required for snapshots, to fetch information along the same lines as above. Hence, consider this request to add the info extension to the "ceph fs subvolume snapshot" series of commands as well.

Further, we would also like to understand how far back, in terms of Ceph versions, this can be backported once addressed, to determine both upstream and downstream version support for the same.
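To illustrate the first use case, here is a minimal, hypothetical Python sketch of how a CSI driver could compare an existing subvolume's quota against an incoming CreateVolume request. It assumes the proposed info command emits JSON with a "bytes_quota" field; the field name and helper function are assumptions for illustration, not part of any existing API.

```python
import json

def size_matches_request(info_json: str, requested_bytes: int) -> bool:
    """Compare an existing subvolume's quota to a CreateVolume request.

    Assumes the proposed "ceph fs subvolume info" command returns JSON
    containing a "bytes_quota" field (the field name is hypothetical).
    """
    info = json.loads(info_json)
    return info.get("bytes_quota") == requested_bytes

# Hypothetical output of the proposed command for an existing subvolume
sample = json.dumps({
    "bytes_quota": 1073741824,
    "data_pool": "cephfs_data",
    "created_at": "2020-03-01 10:00:00",
})
print(size_matches_request(sample, 1073741824))  # True
print(size_matches_request(sample, 2147483648))  # False
```

On a retried CreateVolume, a mismatch here would let the driver return an "already exists with incompatible size" error instead of silently reusing the subvolume.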
Why are we creating an RFE BZ (this is certainly an RFE)? This planning/tracking should happen in Jira, afaict.

@Shyam?

(Apart from that, I don't think this is 4.4 material, since the Ceph version to be used for 4.4 is probably locked, at least for RFEs, at this point?)
(In reply to Michael Adam from comment #4)
> Why are we creating an RFE BZ (this is certainly an RFE)?
> This planning/tracking should happen in Jira, afaict.

Here is where this is tracked in Jira: https://issues.redhat.com/browse/RHSTOR-833

Further, there is an open CephFS tracker upstream for the same: https://tracker.ceph.com/issues/44277

The Ceph RFE bug was initially created to track the request with the CephFS team, but the upstream tracker is deemed good enough for the same.

> @Shyam?

Do we need to close this and use Jira for the enhancement? Are there other Ceph enhancements being tracked in Jira? (I do not want to be the only one filing away a Jira ticket for Ceph)

> (Apart from that, i don't think this is 4.4 material, since the ceph version
> to be used for 4.4 is probably locked at least for RFEs at this point?)
This doesn't seem to be properly triaged.
I think this is the Ceph bug that this is tracking: BZ #1835216. This is ON_QA for RHCS 4.1 z1.

(In reply to Shyamsundar from comment #5)
> Here is where this is tracked in Jira:
> https://issues.redhat.com/browse/RHSTOR-833
>
> Further, there is an open cephfs tracker upstream for the same:
> https://tracker.ceph.com/issues/44277
>
> The Ceph RFE bug was initially created to track the request with the CephFS
> team, but the upstream tracker is deemed good enough for the same.

The sole purpose of the ceph component of OCS is to track BZs in RHCS.

> Do we need to close this and use Jira for the enhancement? Are there other
> Ceph enhancements being tracked in Jira? (I do not want to be the only one
> filing away a jira ticket for Ceph)

Well, the Ceph change is part of the items needed for the snapshot Jira epic. If people want a BZ for it, for tracking the change in RHCS, we can do it.

Approving for 4.6 since this is where we'll be landing snapshot and consuming this Ceph release.
The Ceph bug is already ON_QA. Moving to POST just to indicate it's making good progress; we should move to ON_QA as soon as we take that Ceph build.
@shyam these are the attributes we see with OCS ocs-operator.v4.6.0-562.ci

sh-4.4# ceph version
ceph version 14.2.8-91.el8cp (75b4845da7d469665bd48d1a49badcc3677bf5cd) nautilus (stable)

sh-4.4# ceph fs subvolume info ocs-storagecluster-cephfilesystem csi-vol-4e8e34ab-f80d-11ea-a26d-0a580a800211 --group_name csi
{
    "atime": "2020-09-16 11:11:03",
    "bytes_pcent": "0.13",
    "bytes_quota": 107374182400,
    "bytes_used": 135263372,
    "created_at": "2020-09-16 11:11:03",
    "ctime": "2020-09-16 11:44:47",
    "data_pool": "ocs-storagecluster-cephfilesystem-data0",
    "gid": 0,
    "mode": 16895,
    "mon_addrs": [
        "172.30.30.71:6789",
        "172.30.17.112:6789",
        "172.30.54.109:6789"
    ],
    "mtime": "2020-09-16 11:44:47",
    "path": "/volumes/csi/csi-vol-4e8e34ab-f80d-11ea-a26d-0a580a800211/a072d544-0f37-4502-bbd3-ec0a18895375",
    "pool_namespace": "",
    "type": "subvolume",
    "uid": 0
}

I still do not see the requested "- Size (or quota set)"... can you please have a look?

Also, can you explicitly specify the commands we need to test before moving the BZ to verified? The Ceph bug doesn't have details of all the commands which got this change.

@Jilju, can you also try a snapshot for a CephFS volume and check the command output as well? "ceph fs subvolume snapshot"
(In reply to Neha Berry from comment #11)
> "bytes_quota": 107374182400,
>
> I still do not see the requested "- Size (or quota set)" ...can you please
> have a look? Also, can you explicitly specify the commands we need to test
> before moving the BZ to verified? The ceph bug doesnt have details of all
> the commands which got this change.

The above field reflects the quota.
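To make the correspondence concrete, a short sketch parsing a subset of the info output shown in comment #11: the bytes_quota value of 107374182400 is exactly 100 GiB, i.e. the quota set as the subvolume size.

```python
import json

# Subset of the "ceph fs subvolume info" output from comment #11
info = json.loads("""
{
    "bytes_quota": 107374182400,
    "bytes_used": 135263372,
    "data_pool": "ocs-storagecluster-cephfilesystem-data0"
}
""")

# bytes_quota is in bytes; convert to GiB for comparison with the PVC size
quota_gib = info["bytes_quota"] / 2**30
print(quota_gib)  # 100.0
```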
(In reply to Shyamsundar from comment #12)
> (In reply to Neha Berry from comment #11)
> > "bytes_quota": 107374182400,
> >
> > I still do not see the requested "- Size (or quota set)" ...can you please
> > have a look? Also, can you explicitly specify the commands we need to test
> > before moving the BZ to verified? The ceph bug doesnt have details of all
> > the commands which got this change.
>
> The above field reflects the quota.

Ack. In that case, are we good with all the requirements?

Also, is there any other command one needs to verify apart from the "ceph fs subvolume snapshot" info? Let us know.

If not, then once Jilju verifies the snapshot info, we would mark this BZ as verified.
(In reply to Neha Berry from comment #13)
> (In reply to Shyamsundar from comment #12)
> > (In reply to Neha Berry from comment #11)
> > > "bytes_quota": 107374182400,
> > >
> > > I still do not see the requested "- Size (or quota set)" ...can you please
> > > have a look? Also, can you explicitly specify the commands we need to test
> > > before moving the BZ to verified? The ceph bug doesnt have details of all
> > > the commands which got this change.
> >
> > The above field reflects the quota.
>
> ack. In that case, are we good with all the requirements ?

As it stands for OCS/Ceph-CSI, the needed fields are present.

> Also, is there any other command one needs to verify apart from the "ceph fs
> subvolume snapshot" info ? Let us know.

For snapshots, we have the size, created_at, data_pool, and has_pending_clones fields in the info return, which are again sufficient for Ceph-CSI to function.

> if not, then once Jilju verifies the snapshot info, we would mark this BZ as
> verified.
(In reply to Shyamsundar from comment #14)
> (In reply to Neha Berry from comment #13)
> > ack. In that case, are we good with all the requirements ?
>
> As it stands for OCS/Ceph-CSI the needed fields are present.
>
> > Also, is there any other command one needs to verify apart from the "ceph fs
> > subvolume snapshot" info ? Let us know.
>
> For snapshots, we have the size, created_at, data_pool, and
> has_pending_clones in the info return, which are again sufficient for
> Ceph-CSI to function.

These parameters mentioned above are present in the snapshot info. Moving this bug to verified.

# ceph fs subvolume snapshot ls ocs-storagecluster-cephfilesystem csi-vol-52e33199-fe84-11ea-8199-0a580a81020e --group_name csi
[
    {
        "name": "csi-snap-b9a577ab-fe84-11ea-8199-0a580a81020e"
    }
]

# ceph fs subvolume snapshot info ocs-storagecluster-cephfilesystem csi-vol-52e33199-fe84-11ea-8199-0a580a81020e csi-snap-b9a577ab-fe84-11ea-8199-0a580a81020e --group_name csi
{
    "created_at": "2020-09-24 16:40:59.587143",
    "data_pool": "ocs-storagecluster-cephfilesystem-data0",
    "has_pending_clones": "no",
    "protected": "yes",
    "size": 1073741824
}

Tested in version:

# ceph version
ceph version 14.2.8-91.el8cp (75b4845da7d469665bd48d1a49badcc3677bf5cd) nautilus (stable)

ocs-operator.v4.6.0-98.ci
Cluster version is 4.6.0-0.nightly-2020-09-24-074159

> > if not, then once Jilju verifies the snapshot info, we would mark this BZ as
> > verified.
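For reference, a small sketch of how a consumer such as Ceph-CSI might read the snapshot info output captured above. Note that "has_pending_clones" is a "yes"/"no" string rather than a JSON boolean; the "safe to delete" interpretation is an assumption for illustration, not a claim about Ceph-CSI's actual logic.

```python
import json

# Snapshot info output as captured in the verification above
snap_info = json.loads("""
{
    "created_at": "2020-09-24 16:40:59.587143",
    "data_pool": "ocs-storagecluster-cephfilesystem-data0",
    "has_pending_clones": "no",
    "protected": "yes",
    "size": 1073741824
}
""")

# "has_pending_clones" is a "yes"/"no" string, not a boolean
has_pending = snap_info["has_pending_clones"] == "yes"
size_gib = snap_info["size"] / 2**30
print(has_pending, size_gib)  # False 1.0
```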
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: Red Hat OpenShift Container Storage 4.6.0 security, bug fix, enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2020:5605