Bug 2064850 - [CephFS-NFS] - Dashboard allows creating an NFS cluster with the same name twice
Summary: [CephFS-NFS] - Dashboard allows creating an NFS cluster with the same name twice
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Dashboard
Version: 5.1
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: high
Target Milestone: ---
Target Release: 6.0
Assignee: Aashish sharma
QA Contact: Amarnath
Docs Contact: Masauso Lungu
URL:
Whiteboard:
Depends On:
Blocks: 2126050
 
Reported: 2022-03-16 18:44 UTC by Amarnath
Modified: 2023-03-20 18:56 UTC
CC List: 9 users

Fixed In Version: ceph-17.2.3-2.el9cp
Doc Type: Bug Fix
Doc Text:
.Validation is required when creating a new service name
Previously, there was no validation when creating a new service on the Ceph Dashboard. As a result, users were allowed to create a new service with an existing name, which would overwrite the existing service and cause the user to lose a currently running service on the hosts. With this fix, validation is performed before a new service is created on the dashboard, and using an existing service name is not possible.
Clone Of:
Environment:
Last Closed: 2023-03-20 18:56:12 UTC
Embargoed:


Links
- GitHub: ceph/ceph pull 47261 (open) - mgr/dashboard: Show error on creating service with duplicate service id (last updated 2022-07-25 11:13:53 UTC)
- Red Hat Issue Tracker: RHCEPH-3801 (last updated 2022-03-16 19:09:55 UTC)
- Red Hat Issue Tracker: RHCSDASH-705 (last updated 2022-03-25 11:25:17 UTC)
- Red Hat Product Errata: RHBA-2023:1360 (last updated 2023-03-20 18:56:36 UTC)
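
The pull request linked above has the dashboard show an error when a service is created with a duplicate service id. The following is a minimal sketch of that validation concept only; all names in it (list_existing_service_names, create_service) are hypothetical and not the actual mgr/dashboard code:

# Illustrative duplicate-service-id check; hypothetical names, not the
# actual mgr/dashboard API.

class DuplicateServiceError(ValueError):
    """Raised when the requested service name is already in use."""

def list_existing_service_names():
    # Stand-in for querying the orchestrator for deployed services,
    # roughly what `ceph orch ls` reports.
    return {"nfs.cephfs-nfs", "mon", "mgr"}

def create_service(service_type, service_id):
    """Refuse to (re)create a service whose name already exists."""
    name = f"{service_type}.{service_id}" if service_id else service_type
    if name in list_existing_service_names():
        # Surface an error instead of silently overwriting the placement,
        # which is what the dashboard did before the fix.
        raise DuplicateServiceError(f"Service '{name}' already exists")
    return name  # ...the real orchestrator apply call would follow here

if __name__ == "__main__":
    print(create_service("nfs", "cephfs-nfs2"))  # new name: accepted
    try:
        create_service("nfs", "cephfs-nfs")      # duplicate: rejected
    except DuplicateServiceError as err:
        print(err)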

Description Amarnath 2022-03-16 18:44:49 UTC
Description of problem:
The dashboard allows creating an NFS cluster with the same name twice.

Problem Statement:

I have an NFS cluster named cephfs-nfs. When I tried to create a cluster with the same name again, on different hosts, the dashboard allowed it and updated the existing service.

In the CLI, the same operation fails with an error saying the cluster already exists:
[ceph: root@ceph-snap-fail-amk-01m9k1-node1-installer /]# ceph nfs cluster create cephfs-nfs ceph-snap-fail-amk-01m9k1-node4
cephfs-nfs cluster already exists
[ceph: root@ceph-snap-fail-amk-01m9k1-node1-installer /]# ceph nfs cluster create cephfs-nfs ceph-snap-fail-amk-01m9k1-node5
cephfs-nfs cluster already exists
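
The same guard can be scripted when automating cluster creation. A rough Python sketch, assuming that `ceph nfs cluster ls` prints one cluster name per line (the output format may differ between releases) and using a placeholder host name:

# Pre-creation guard for scripted deployments; "node4" is a placeholder
# host name. Assumes `ceph nfs cluster ls` prints one name per line.
import subprocess
import sys

def nfs_cluster_exists(name):
    out = subprocess.run(["ceph", "nfs", "cluster", "ls"],
                         check=True, capture_output=True, text=True).stdout
    return name in out.split()

cluster = "cephfs-nfs"
if nfs_cluster_exists(cluster):
    sys.exit(f"{cluster} cluster already exists")
subprocess.run(["ceph", "nfs", "cluster", "create", cluster, "node4"],
               check=True)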

Impact:
Users might end up losing the currently running service on the hosts, because the duplicate creation silently overwrites the existing placement.


Version-Release number of selected component (if applicable):
Red Hat Ceph Storage 5.1

How reproducible:


Steps to Reproduce:
1. Create an NFS cluster (for example, cephfs-nfs) on a set of hosts.
2. From the dashboard, create another NFS cluster with the same name but different hosts.
3. Check the existing nfs.cephfs-nfs service.

Actual results:
The dashboard accepts the duplicate name and silently updates the existing service with the new placement.

Expected results:
The dashboard rejects the duplicate name with an error, matching the CLI behavior ("cephfs-nfs cluster already exists").

Additional info:

Comment 16 Aashish sharma 2022-10-07 05:45:38 UTC
Hi Masauso,

Doc text looks good to me.

Thanks

Comment 30 errata-xmlrpc 2023-03-20 18:56:12 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 6.0 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:1360

