
Bug 1950644

Summary: [Ceph Dashboard] Creating a snapshot of a subvolume's folder fails with a 500 Internal Server Error
Product: [Red Hat Storage] Red Hat Ceph Storage Reporter: Amarnath <amk>
Component: Ceph-Dashboard Assignee: Pere Diaz Bou <pdiazbou>
Status: CLOSED ERRATA QA Contact: Amarnath <amk>
Severity: high Docs Contact: Ranjini M N <rmandyam>
Priority: medium    
Version: 5.0 CC: agunn, ceph-eng-bugs, epuertat, pdiazbou, rmandyam, sangadi, tserlin, vashastr, vereddy
Target Milestone: ---   
Target Release: 5.1   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: ceph-16.2.6-2.el8cp Doc Type: Known Issue
Doc Text:
.Users cannot create snapshots of subvolumes on the {storage-product} Dashboard
With this release, users cannot create snapshots of subvolumes on the {storage-product} Dashboard. Attempting to create a snapshot of a subvolume on the dashboard returns a 500 error instead of a more descriptive error message.
Story Points: ---
Clone Of: Environment:
Last Closed: 2022-04-04 10:20:36 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 1959686    
Attachments:
  Create_Snapshot
  After Enabling Debug

Description Amarnath 2021-04-17 11:06:42 UTC
Created attachment 1772730 [details]
Create_Snapshot

Description of problem:
Creating a snapshot of a subvolume's folder fails with a 500 Internal Server Error.


Version-Release number of selected component (if applicable):
ceph version 16.2.0-8.el8cp (f869f8bf2b6e9c3886e94267d378de5d9d57bb61) pacific (stable)


How reproducible:


Steps to Reproduce:
1. Create a subvolume.
2. On the Dashboard, attempt to create a snapshot of the subvolume's folder.

Actual results:
Snapshot creation fails on the subvolume's folder
"/volumes/_nogroup/subvol_1/ca90f125-0cdf-4868-b12a-c5c54a1701bf"

Expected results:
If this operation is expected to fail, it should fail with a proper error message.

If these volumes do not support snapshot creation, it would be better to disable the Create Snapshot button for them.
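The expected behavior can be sketched as a tiny error-mapping shim. This is purely illustrative, not the dashboard's actual code: the `create_snapshot` function is hypothetical. Instead of letting the `EPERM` from `mkdir` in `.snap` bubble up as a generic 500, it translates the failure into a readable message.

```shell
# Illustrative sketch only: translate a failed snapshot mkdir into a
# descriptive error instead of a generic 500. create_snapshot is a
# hypothetical helper, not part of Ceph or the dashboard.
create_snapshot() {
  local path="$1" snap="$2"
  if ! mkdir "$path/.snap/$snap" 2>/dev/null; then
    echo "Snapshots are not supported at $path" >&2
    return 1
  fi
}
```

On a regular (non-CephFS) file system the `mkdir` fails because no `.snap` directory exists, which is enough to demonstrate the error path.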

Additional info:

Comment 1 RHEL Program Management 2021-04-17 11:06:46 UTC
Please specify the severity of this bug. Severity is defined here:
https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.

Comment 3 Amarnath 2021-04-23 11:00:59 UTC
Created attachment 1774762 [details]
After Enabling Debug

Hi Ernesto,

I tried enabling debug for the dashboard, but I could not find much information in the error.

Please find the attachments.

Comment 4 Amarnath 2021-06-10 12:14:55 UTC
Tested on:
[cephuser@ceph-amk-snap-1623323935158-node8-client ~]$ ceph version
ceph version 16.2.0-47.el8cp (1b33de73b715c8970931dfeb7b6c543d1af7267e) pacific (stable)

Still seeing the same issue.

Comment 6 Amarnath 2021-06-24 08:59:55 UTC
Please find the CLI reproduction below:


[root@ceph-tier0-amar-1624507936499-node7-client ~]# ceph fs subvolume create cephfs subvol_1 --size 5368706371
[root@ceph-tier0-amar-1624507936499-node7-client ~]# 
[root@ceph-tier0-amar-1624507936499-node7-client ~]# 
[root@ceph-tier0-amar-1624507936499-node7-client ~]# ceph fs subvolume create ^Cphfs subvol_1 --size 5368706371

[root@ceph-tier0-amar-1624507936499-node7-client mnt]# mkdir mount_fs
[root@ceph-tier0-amar-1624507936499-node7-client mnt]# ceph-fuse /mnt/mount_fs/
2021-06-24T04:53:23.382-0400 7fce4b004200 -1 init, newargv = 0x556ee7584ac0 newargc=15
ceph-fuse[22666]: starting ceph client
ceph-fuse[22666]: starting fuse
[root@ceph-tier0-amar-1624507936499-node7-client mnt]# cd /mnt/mount_fs/
[root@ceph-tier0-amar-1624507936499-node7-client mount_fs]# ls -lrt
total 1
drwxr-xr-x. 3 root root 127 Jun 24 04:52 volumes
[root@ceph-tier0-amar-1624507936499-node7-client mount_fs]# cd volumes/_nogroup/subvol_1/
[root@ceph-tier0-amar-1624507936499-node7-client subvol_1]# ls -lrt
total 1
drwxr-xr-x. 2 root root 0 Jun 24 04:52 14e107ba-1dca-4bd4-9119-b47dc568ee07
[root@ceph-tier0-amar-1624507936499-node7-client subvol_1]# mkdir test
[root@ceph-tier0-amar-1624507936499-node7-client subvol_1]# vi test/test_1
[root@ceph-tier0-amar-1624507936499-node7-client subvol_1]# 
[root@ceph-tier0-amar-1624507936499-node7-client subvol_1]# cd 14e107ba-1dca-4bd4-9119-b47dc568ee07/
[root@ceph-tier0-amar-1624507936499-node7-client 14e107ba-1dca-4bd4-9119-b47dc568ee07]# ls -lrt
total 0
[root@ceph-tier0-amar-1624507936499-node7-client 14e107ba-1dca-4bd4-9119-b47dc568ee07]# cp ../test/test_1 
cp: missing destination file operand after '../test/test_1'
Try 'cp --help' for more information.
[root@ceph-tier0-amar-1624507936499-node7-client 14e107ba-1dca-4bd4-9119-b47dc568ee07]# cp ../test/test_1 .
[root@ceph-tier0-amar-1624507936499-node7-client 14e107ba-1dca-4bd4-9119-b47dc568ee07]# ls -lrt
total 1
-rw-r--r--. 1 root root 11 Jun 24 04:54 test_1
[root@ceph-tier0-amar-1624507936499-node7-client 14e107ba-1dca-4bd4-9119-b47dc568ee07]# mkdir test
[root@ceph-tier0-amar-1624507936499-node7-client 14e107ba-1dca-4bd4-9119-b47dc568ee07]# ls -lrt
total 1
-rw-r--r--. 1 root root 11 Jun 24 04:54 test_1
drwxr-xr-x. 2 root root  0 Jun 24 04:54 test
[root@ceph-tier0-amar-1624507936499-node7-client 14e107ba-1dca-4bd4-9119-b47dc568ee07]# mkdir .snap/snap_1   # inside the uuid folder I am not permitted to create it
mkdir: cannot create directory ‘.snap/snap_1’: Operation not permitted
[root@ceph-tier0-amar-1624507936499-node7-client 14e107ba-1dca-4bd4-9119-b47dc568ee07]# cd ..
[root@ceph-tier0-amar-1624507936499-node7-client subvol_1]# mkdir .snap/snap_1   # at the subvolume root I am able to create it
[root@ceph-tier0-amar-1624507936499-node7-client subvol_1]# 

The Dashboard should either throw an error saying there are no permissions, or disable the button.
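The permission behavior observed in the transcript above can be expressed as a simple path check. The following is purely an illustration (not the dashboard's implementation, and `is_subvolume_root` is a hypothetical helper): a path qualifies for snapshot creation only if it is the subvolume root `/volumes/<group>/<subvol>`, not a directory below it such as the UUID data directory.

```shell
# Illustrative only: snapshots may be created at the subvolume root
# (/volumes/<group>/<subvol>) but not inside the UUID data directory
# below it. Shell case globs match across "/", so depth is checked by
# counting path components.
is_subvolume_root() {
  case "$1" in
    /volumes/*/*/*) return 1 ;;  # deeper than the subvolume root
    /volumes/*/*)   return 0 ;;  # exactly /volumes/<group>/<subvol>
    *)              return 1 ;;  # not under /volumes at all
  esac
}

is_subvolume_root /volumes/_nogroup/subvol_1 && echo "snapshot allowed"
is_subvolume_root /volumes/_nogroup/subvol_1/14e107ba-1dca-4bd4-9119-b47dc568ee07 \
  || echo "snapshot not allowed"
```

A check like this would let the UI disable the Create Snapshot button up front instead of surfacing the EPERM as a 500.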

Comment 12 Amarnath 2022-01-11 07:31:31 UTC
The Create Snapshot button is now disabled for invalid folders.

The same has been captured in the attachment above.

Verified on the builds below:
[root@ceph-amk5-1-1t24l3-node7 ~]# ceph fs subvolume create cephfs subvol_1 --size 5368706371
[root@ceph-amk5-1-1t24l3-node7 ~]# ceph version
ceph version 16.2.7-18.el8cp (86cfa49b08da370afb3f98be618f2c3c1eae71fb) pacific (stable)
[root@ceph-amk5-1-1t24l3-node7 ~]# ceph versions
{
    "mon": {
        "ceph version 16.2.7-18.el8cp (86cfa49b08da370afb3f98be618f2c3c1eae71fb) pacific (stable)": 3
    },
    "mgr": {
        "ceph version 16.2.7-18.el8cp (86cfa49b08da370afb3f98be618f2c3c1eae71fb) pacific (stable)": 2
    },
    "osd": {
        "ceph version 16.2.7-18.el8cp (86cfa49b08da370afb3f98be618f2c3c1eae71fb) pacific (stable)": 12
    },
    "mds": {
        "ceph version 16.2.7-18.el8cp (86cfa49b08da370afb3f98be618f2c3c1eae71fb) pacific (stable)": 3
    },
    "overall": {
        "ceph version 16.2.7-18.el8cp (86cfa49b08da370afb3f98be618f2c3c1eae71fb) pacific (stable)": 20
    }
}
[root@ceph-amk5-1-1t24l3-node7 ~]#

Comment 14 Pere Diaz Bou 2022-02-16 11:41:11 UTC
Hi Amarnath,

Which invalid folders are you referring to? A screenshot would help me a lot in identifying this problem, if possible.

Thanks for the report.

Comment 16 errata-xmlrpc 2022-04-04 10:20:36 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage 5.1 Security, Enhancement, and Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:1174