Bug 2232087

Summary: [cee/sd][cephadm] cephadm: Call fails if output too long
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Milind <milverma>
Component: Cephadm
Assignee: Adam King <adking>
Status: ASSIGNED
QA Contact: Mohit Bisht <mobisht>
Severity: medium
Priority: unspecified
Version: 6.0
CC: adking, cephqe-warriors
Target Release: 6.1z2
Hardware: Unspecified
OS: Unspecified
Type: Bug

Description Milind 2023-08-15 09:05:59 UTC
Description of problem:

I would like to get some information about the upstream tracker referenced below [1]. One of my customers hit this issue while purging their RHCS 6 cluster. The cluster was otherwise already purged, but one OSD node has 48 disks plus 4 NVMe drives, each of which is split into 12 namespaces, so ceph-volume sees 96 devices on that node. The error, which appeared while running the cephadm-purge-cluster.yml playbook, is the same as the one described in the upstream tracker. I have attached a file with a snippet of the error.
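
For context on the failure mode named in the tracker title ("Call fails if output too long"): a node with 96 devices makes the ceph-volume inventory output very large. The exact cephadm-internal cause is not quoted in this report, so the following is only a minimal, hypothetical Python sketch of the general pitfall: a subprocess wrapper must drain the child's stdout and stderr concurrently, otherwise a child that writes more than the pipe buffer can block and the call never completes. The call() helper and the seq command below are illustrative and are not the cephadm implementation.

    # Illustrative sketch only, not cephadm code. Shows how to run a command
    # that produces very long output without deadlocking on a full pipe buffer.
    import asyncio

    async def call(cmd: list[str]) -> tuple[str, str, int]:
        """Run cmd and return (stdout, stderr, returncode), draining both
        pipes while the child runs so large output cannot stall it."""
        proc = await asyncio.create_subprocess_exec(
            *cmd,
            stdout=asyncio.subprocess.PIPE,
            stderr=asyncio.subprocess.PIPE,
        )
        # communicate() reads stdout and stderr concurrently until EOF,
        # so the child never blocks waiting for a full pipe to be emptied.
        out, err = await proc.communicate()
        return out.decode(), err.decode(), proc.returncode

    if __name__ == '__main__':
        # 'seq' is a stand-in for a command with lots of output (for example
        # an inventory of a node with ~96 devices).
        stdout, stderr, rc = asyncio.run(call(['seq', '1', '200000']))
        print(f'rc={rc}, stdout bytes={len(stdout)}')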

[ASK]: According to the upstream tracker this issue was already resolved and backported to Pacific, so I would like to know why the error is showing up again in RHCS 6. For now I have shared with the customer the steps to manually clean that node, but I would like to know whether this issue can recur in future releases and how we can fix it.


[1] https://tracker.ceph.com/issues/52745

Version-Release number of selected component (if applicable):
RHCS 6.0