Description of problem:
I tried to create a cluster, but the task failed. No host was marked as assigned to any cluster, so I tried to create a cluster with a different name. That attempt failed with:

ERROR cluster.go:86 CreateCluster] admin:3a53bed1-afa2-489c-9d87-09e524038efc-Nodes [_host1_] already participating in a cluster. New Cluster cannot be created using the node

But _host1_ is not assigned according to the UI.

Version-Release number of selected component (if applicable):
server:
ceph-ansible-1.0.5-19.el7scon.noarch
ceph-installer-1.0.11-1.el7scon.noarch
rhscon-ceph-0.0.20-1.el7scon.x86_64
rhscon-core-0.0.21-1.el7scon.x86_64
rhscon-ui-0.0.34-1.el7scon.noarch

monitor with calamari:
calamari-server-1.4.0-0.12.rc15.el7cp.x86_64
ceph-base-10.2.1-13.el7cp.x86_64
ceph-common-10.2.1-13.el7cp.x86_64
ceph-mon-10.2.1-13.el7cp.x86_64
ceph-selinux-10.2.1-13.el7cp.x86_64
libcephfs1-10.2.1-13.el7cp.x86_64
python-cephfs-10.2.1-13.el7cp.x86_64
rhscon-agent-0.0.9-1.el7scon.noarch

Steps to Reproduce:
1. Try to create a cluster; the task fails.
2. Try to create another cluster with the same nodes, since they are marked as unassigned.

Actual results:
It is possible to start a cluster-creation task with nodes that appear unassigned. The task then fails because the nodes are already assigned.

Expected results:
It should be possible to create a cluster from nodes that are marked as unassigned.
This actually depends on the stage at which cluster creation failed. If it failed during installation of the storage bits, or just before OSD configuration on the node, we would need to revert the clusterid field populated on the node, and the node could then be used in another cluster. If cluster creation failed while OSD creation was in progress, some of the disks might already be partitioned, and ideally that node should not be used in another cluster before the disks are cleaned properly.
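The decision described above can be sketched in Python. This is only an illustration of the logic, not actual rhscon/skyring code: the stage names, the Node record, and the function are all hypothetical.

```python
# Hypothetical sketch of the node-reuse decision described above.
# Stage names, the Node record, and this function are assumptions,
# not actual rhscon/skyring code.
from dataclasses import dataclass
from typing import Optional

# Assumed ordering of cluster-creation stages for illustration.
STAGES = ["package_install", "mon_setup", "osd_creation"]

@dataclass
class Node:
    name: str
    cluster_id: Optional[str] = None
    disks_partitioned: bool = False

def handle_failed_cluster_create(node: Node, failed_stage: str) -> str:
    """Return what must happen before the node can join another cluster."""
    if STAGES.index(failed_stage) < STAGES.index("osd_creation"):
        # Failure before OSD creation: reverting the clusterid field
        # is enough to make the node reusable.
        node.cluster_id = None
        return "reusable"
    # OSD creation was in progress: disks may already be partitioned,
    # so the node must not be reused before they are cleaned.
    if node.disks_partitioned:
        return "needs_disk_cleanup"
    node.cluster_id = None
    return "reusable"
```

The point of the sketch is that the clusterid revert alone is only safe for failures before OSD creation; once disks may have been touched, a cleanup step has to come first.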
Moving this to 3.0; created doc BZ#1349458 to get a troubleshooting section added to the documentation.
This product is EOL now.