Description of problem:
I would like to get some information about this tracker [1]. One of my customers hit this issue while purging their RHCS 6 cluster. Overall the cluster was purged successfully, except for one OSD node that has 48 disks plus 4 NVMe drives, with each NVMe split into 12 namespaces. As a result, ceph-volume sees 96 devices on that node (48 disks + 4 x 12 namespaces). The error is the same as the one described in the upstream tracker and appeared while running the cephadm-purge-cluster.yml playbook. A file with a snippet of the error is attached.

[ASK]: According to the upstream tracker this issue is already resolved and backported to Pacific, so I would like to know why the error is showing up again in RHCS 6. For now I have shared the steps to manually clean that node with the customer, but I would like to know whether this issue can occur in future releases and how it can be fixed.

[1] https://tracker.ceph.com/issues/52745

Version-Release number of selected component (if applicable):
RHCS 6.0
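
Additional info:
For context, manual cleanup of an OSD node of this kind generally looks like the rough sketch below. This is only an illustration, not the exact steps shared with the customer; the device names and FSID are placeholders, and the zap/rm-cluster commands are destructive.

  # Zap the OSD devices so ceph-volume no longer reports them (destroys data)
  ceph-volume lvm zap --destroy /dev/sdb
  ceph-volume lvm zap --destroy /dev/nvme0n2

  # Remove the cephadm-managed cluster data from the node (FSID is a placeholder)
  cephadm rm-cluster --fsid <cluster-fsid> --force

  # Wipe any leftover partition/LVM signatures if a device is still listed
  wipefs -a /dev/sdb
  sgdisk --zap-all /dev/sdb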