Description of problem:
The Dashboard allows creating an NFS cluster with a name that is already in use.

Problem Statement:
I have an NFS cluster named cephfs-nfs. When I tried to create a cluster with the same name but different hosts, the Dashboard allowed it and updated the existing service. In the CLI, by contrast, we see an error saying the cluster already exists:

[ceph: root@ceph-snap-fail-amk-01m9k1-node1-installer /]# ceph nfs cluster create cephfs-nfs ceph-snap-fail-amk-01m9k1-node4
cephfs-nfs cluster already exists
[ceph: root@ceph-snap-fail-amk-01m9k1-node1-installer /]# ceph nfs cluster create cephfs-nfs ceph-snap-fail-amk-01m9k1-node5
cephfs-nfs cluster already exists

Impact:
Users might end up losing the currently running service on the hosts.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
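The duplicate-name guard the CLI applies (and the Dashboard is missing) can be sketched as a simple pre-creation check. This is a minimal illustration only, not the actual Ceph Dashboard code; the `create_nfs_cluster` helper and the in-memory list of existing cluster names are hypothetical:

```python
def create_nfs_cluster(name, hosts, existing_clusters):
    """Hypothetical sketch: refuse to create a cluster whose name is
    already taken, mirroring the CLI's 'cluster already exists' error,
    instead of silently updating the existing service."""
    if name in existing_clusters:
        raise ValueError(f"{name} cluster already exists")
    existing_clusters.append(name)
    return {"name": name, "hosts": hosts}
```

With this guard, a second create call for `cephfs-nfs` with different hosts fails loudly rather than overwriting the running service's placement.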
Hi Masauso, Doc text looks good to me. Thanks
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat Ceph Storage 6.0 Bug Fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2023:1360