Bug 2006415

Summary: [cee/sd][ceph-ansible] cephadm-adopt.yml playbook fails at: TASK [manage nodes with cephadm]
Product: [Red Hat Storage] Red Hat Ceph Storage Reporter: Lijo Stephen Thomas <lithomas>
Component: Ceph-Ansible Assignee: Guillaume Abrioux <gabrioux>
Status: CLOSED ERRATA QA Contact: Manasa <mgowri>
Severity: medium Docs Contact: Ranjini M N <rmandyam>
Priority: medium    
Version: 5.0 CC: agunn, aschoen, ceph-eng-bugs, gabrioux, gmeno, gsitlani, mhackett, mmuench, nthomas, rlepaksh, rmandyam, sunnagar, tserlin, vereddy, ykaul
Target Milestone: ---   
Target Release: 5.1   
Hardware: x86_64   
OS: Linux   
Whiteboard:
Fixed In Version: ceph-ansible-6.0.19-1.el8cp Doc Type: Bug Fix
Doc Text:
.The `cephadm-adopt` playbook uses the IP address in the public network
Previously, the `cephadm-adopt` playbook would fail to manage nodes whose IP address was not in the subnet of the default route. With this release, the playbook uses the IP address from the public network instead of the default-route subnet and manages the nodes as expected.
Story Points: ---
Clone Of: Environment:
Last Closed: 2022-04-04 10:21:43 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 2031073    
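
A minimal sketch of the approach described in the Doc Text above (illustrative only, not the actual ceph-ansible patch; the task names and the fact `_adopt_address` are made up for the example, while `public_network`, `ansible_facts`, and the `ansible.utils.ipaddr` filter are standard Ansible/ceph-ansible names; a single CIDR in `public_network` is assumed):

  - name: pick the address that falls inside the public network (sketch)
    set_fact:
      _adopt_address: "{{ ansible_facts['all_ipv4_addresses'] | ansible.utils.ipaddr(public_network) | first }}"

  - name: register the node with the orchestrator using that address (sketch)
    command: "cephadm shell -- ceph orch host add {{ ansible_facts['hostname'] }} {{ _adopt_address }}"

With the default-route address, cephadm tried to reach the node over the cluster network; filtering the node's addresses against `public_network` hands `ceph orch host add` an address the orchestrator can actually use.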

Description Lijo Stephen Thomas 2021-09-21 17:15:29 UTC
Description of problem:
=======================
cephadm-adopt.yml playbook fails at TASK [manage nodes with cephadm] with the errors below:

Error on mon node:
  stderr: 'Error EINVAL: Host mon1 (10.x.x.x) failed check(s): [''hostname "mon1.example.com" does not match expected hostname "mon1"'']'

Error on OSD node (the task tries to connect using the address from the default route rather than the public network):
  stderr: |-
    Error EINVAL: Failed to connect to bashful (192.x.x.x).
    Please make sure that the host is reachable and accepts connections using the cephadm SSH key
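
For context, the failing task registers each node with the cephadm orchestrator, presumably via `ceph orch host add`; the messages above are the checks that command performs on the supplied hostname and address. A rough manual equivalent, run from a `cephadm shell` on the node that holds the admin keyring (hostname and address are placeholders taken from the error above):

  # add the host under the name it reports itself, using its public-network address
  ceph orch host add mon1.example.com 10.x.x.x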

Version-Release number of selected component (if applicable):
=============================================================
Upgrade from RHCS 4.2z2 async to RHCS 5.0


How reproducible:


Steps to Reproduce:
===================
1. Configure RHCS 4.2z2 async with a public network and a cluster network
2. Set the default route on the cluster network
3. Execute rolling_update to upgrade to RHCS 5.0
4. Run cephadm-adopt.yml for the takeover (example invocation below)
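
Example commands for the steps above, assuming the standard RHCS layout under /usr/share/ceph-ansible and an inventory file named hosts (both are assumptions):

  cd /usr/share/ceph-ansible
  # confirm the default route points at the cluster network (step 2)
  ip route show default
  # upgrade to RHCS 5.0 (step 3)
  ansible-playbook -i hosts infrastructure-playbooks/rolling_update.yml
  # adopt the cluster with cephadm (step 4)
  ansible-playbook -i hosts infrastructure-playbooks/cephadm-adopt.yml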


Actual results:
===============
The playbook fails to take over the cluster.


Expected results:
=================
Playbook should complete without any errors.

Comment 2 Sebastian Wagner 2021-09-22 06:12:11 UTC
The hostname error relates to https://bugzilla.redhat.com/show_bug.cgi?id=1997083
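
The hostname part of the failure can be seen directly on the node: cephadm expects the name it is given to match what the host reports, so a short-name/FQDN mismatch trips the check. A quick way to compare the two on the monitor node (the output values shown are illustrative, taken from the error in comment 0):

  hostname       # mon1.example.com  <- what the host reports
  hostname -s    # mon1              <- the short name the playbook passed to cephadm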

Comment 17 errata-xmlrpc 2022-04-04 10:21:43 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage 5.1 Security, Enhancement, and Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:1174