Bug 2232087 - [cee/sd][cephadm] cephadm: Call fails if output too long
Summary: [cee/sd][cephadm] cephadm: Call fails if output too long
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Cephadm
Version: 6.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 6.1z2
Assignee: Adam King
QA Contact: Mohit Bisht
Docs Contact: Akash Raj
URL:
Whiteboard:
Depends On:
Blocks: 2235257
Reported: 2023-08-15 09:05 UTC by Milind
Modified: 2024-07-23 05:36 UTC
CC List: 5 users

Fixed In Version: ceph-17.2.6-142
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-10-12 16:34:36 UTC
Embargoed:


Attachments


Links
Red Hat Issue Tracker RHCEPH-7215 (Last Updated: 2023-08-15 09:13:20 UTC)
Red Hat Product Errata RHSA-2023:5693 (Last Updated: 2023-10-12 16:35:38 UTC)

Description Milind 2023-08-15 09:05:59 UTC
Description of problem:

I would like to get some information about this tracker [1]. One of my customers hit this issue while purging their RHCS 6 cluster. The cluster was mostly purged already, but one OSD node has 48 disks plus 4 NVMe drives, with each NVMe split into 12 namespaces, so ceph-volume sees 96 devices on that node. The error is the same as the one described in the upstream tracker and appeared while running the cephadm-purge-cluster.yml playbook. A file with a snippet of the error is attached.

[ASK]: According to the upstream tracker, this issue is already resolved and has been backported to Pacific as well, so I would like to know why this error is appearing again in RHCS 6. For now I have shared the steps to manually clean that node with the customer, but I would like to understand whether this issue can recur in future releases and how we can fix it.


[1] https://tracker.ceph.com/issues/52745

Version-Release number of selected component (if applicable):
RHCS 6.0
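
For background on the failure mode in the summary ("Call fails if output too long"): this class of bug is typically a pipe-buffer deadlock, where a parent process drains a child's stdout and stderr one after the other instead of concurrently. Once the child writes more than one OS pipe buffer (commonly 64 KiB on Linux) to the pipe the parent is not currently reading, both processes block forever. A dense node where ceph-volume reports 96 devices produces exactly this kind of oversized output. Below is a minimal Python sketch of the pattern and its fix; it assumes nothing about the actual cephadm code, and the function names are illustrative only:

    import subprocess

    def run_sequential(cmd):
        # Deadlock-prone: read stdout to EOF, then stderr. If the child
        # fills the stderr pipe buffer (~64 KiB on Linux) while we are
        # still blocked on stdout, neither side can make progress.
        p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        out = p.stdout.read()
        err = p.stderr.read()
        p.wait()
        return out, err

    def run_concurrent(cmd):
        # Safe: communicate() drains both pipes concurrently, so the
        # size of the command's output no longer matters.
        p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        out, err = p.communicate()
        return out, err, p.returncode

The actual fix shipped in ceph-17.2.6-142 may differ in detail; this sketch only illustrates why very long command output can turn into a hang or call failure.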

Comment 7 errata-xmlrpc 2023-10-12 16:34:36 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage 6.1 security, enhancement, and bug fix update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:5693

