Bug 2090396 - ceph health detail failing during cephadm adoption
Summary: ceph health detail failing during cephadm adoption
Keywords:
Status: CLOSED DUPLICATE of bug 2080242
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Ansible
Version: 5.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 5.3z1
Assignee: Guillaume Abrioux
QA Contact: Vivek Das
URL:
Whiteboard:
Depends On:
Blocks: 1820257
 
Reported: 2022-05-25 16:41 UTC by Francesco Pantano
Modified: 2022-09-20 12:25 UTC
CC: 9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-09-20 12:25:06 UTC
Embargoed:




Links
Red Hat Issue Tracker RHCEPH-4383 (last updated 2022-05-25 17:00:08 UTC)

Description Francesco Pantano 2022-05-25 16:41:09 UTC
Description of problem:

After the Ceph cluster is upgraded from RHCS 4 to RHCS 5 (16.2.7-100.el8cp) with the ceph-ansible rolling_update playbook [1], the cephadm-adopt playbook [2] is executed and fails when it runs the "ceph health detail" command [3] against the adopted cluster:

[root@controller-0 ~]# cephadm shell ceph health detail                                                                                                                                                           
Inferring fsid f98c3e4c-e0c9-418e-99b9-10dcfb8bf4a0                                                                                                                                                               
Using recent ceph image undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhceph@sha256:90e4316d65f4a76fea307705d9b0e4706f05e10a63bf041dbee379c8711db115                                                            
HEALTH_WARN failed to probe daemons or devices                                                                                                                                                                    
[WRN] CEPHADM_REFRESH_FAILED: failed to probe daemons or devices                                                                                                                                                  
    host ceph-0.redhat.local `cephadm ceph-volume` failed: cephadm exited with an error code: 1, stderr:Inferring config /var/lib/ceph/f98c3e4c-e0c9-418e-99b9-10dcfb8bf4a0/mon.ceph-0/config                     
ERROR: [Errno 2] No such file or directory: '/var/lib/ceph/f98c3e4c-e0c9-418e-99b9-10dcfb8bf4a0/mon.ceph-0/config'                                                                                                
    host ceph-2.redhat.local `cephadm ceph-volume` failed: cephadm exited with an error code: 1, stderr:Inferring config /var/lib/ceph/f98c3e4c-e0c9-418e-99b9-10dcfb8bf4a0/mon.ceph-2/config                     
ERROR: [Errno 2] No such file or directory: '/var/lib/ceph/f98c3e4c-e0c9-418e-99b9-10dcfb8bf4a0/mon.ceph-2/config'                                                                                                
    host ceph-1.redhat.local `cephadm ceph-volume` failed: cephadm exited with an error code: 1, stderr:Inferring config /var/lib/ceph/f98c3e4c-e0c9-418e-99b9-10dcfb8bf4a0/mon.ceph-1/config                     
ERROR: [Errno 2] No such file or directory: '/var/lib/ceph/f98c3e4c-e0c9-418e-99b9-10dcfb8bf4a0/mon.ceph-1/config'                                                                                                
[root@controller-0 ~]#



[1] https://github.com/ceph/ceph-ansible/blob/v6.0.26/infrastructure-playbooks/rolling_update.yml
[2] https://github.com/ceph/ceph-ansible/blob/v6.0.26/infrastructure-playbooks/cephadm-adopt.yml
[3] https://github.com/ceph/ceph-ansible/blob/master/infrastructure-playbooks/cephadm-adopt.yml#L94
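As a quick cross-check of the missing files reported above, something like the following can be run from the controller node (a hedged sketch: the fsid and hostnames are taken from the output above and would differ on other deployments; passwordless SSH access to the Ceph hosts is assumed):

fsid=f98c3e4c-e0c9-418e-99b9-10dcfb8bf4a0
for host in ceph-0 ceph-1 ceph-2; do
    # Path cephadm tries to infer the adopted mon's config from on each host
    ssh "${host}.redhat.local" test -f "/var/lib/ceph/${fsid}/mon.${host}/config" \
        && echo "${host}: config present" \
        || echo "${host}: config missing -- matches the CEPHADM_REFRESH_FAILED probe error"
done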

Comment 6 Guillaume Abrioux 2022-09-20 12:25:06 UTC

*** This bug has been marked as a duplicate of bug 2080242 ***

