Description of problem:

# ansible-playbook infrastructure-playbooks/cephadm-adopt.yml fails on TASK for all nodes:

TASK [manage nodes with cephadm] Tuesday 24 August 2021 07:48:38 -0400 (0:00:01.838) 0:00:59.454 ********
fatal: [mons-0.siterdub.lab.rdu2.cee.redhat.com -> mons-0.siterdub.lab.rdu2.cee.redhat.com]: FAILED! => changed=false
  cmd:
  - podman
  - run
  - --rm
  - --net=host
  - -v
  - /etc/ceph:/etc/ceph:z
  - -v
  - /var/lib/ceph:/var/lib/ceph:z
  - -v
  - /var/run/ceph:/var/run/ceph:z
  - --entrypoint=ceph
  - docker-registry.upshift.redhat.com/ceph/ceph-5.0-rhel-8:latest
  - --cluster
  - ceph
  - orch
  - host
  - add
  - mons-0
  - 10.10.95.151
  - mgrs
  - mons
  delta: '0:00:06.332408'
  end: '2021-08-24 07:48:44.971645'
  msg: non-zero return code
  rc: 22
  start: '2021-08-24 07:48:38.639237'
  stderr: 'Error EINVAL: Host mons-0 (10.10.95.151) failed check(s): [''hostname "mons-0.siterdub.lab.rdu2.cee.redhat.com" does not match expected hostname "mons-0"'']'
  stderr_lines: <omitted>
  stdout: ''
  stdout_lines: <omitted>

------

On node mons-0:

# hostname
mons-0.siterdub.lab.rdu2.cee.redhat.com

From ansible:

# ansible mons-0 -m setup | grep -e hostname -e mons-0
[WARNING]: While constructing a mapping from /usr/share/ceph-ansible/group_vars/all.yml, line 1, column 1, found a duplicate dict key (containerized_deployment). Using last defined value only.
mons-0 | SUCCESS => {
    "ansible_fqdn": "mons-0.siterdub.lab.rdu2.cee.redhat.com",
    "ansible_hostname": "mons-0",
    "ansible_nodename": "mons-0.siterdub.lab.rdu2.cee.redhat.com",

Version-Release number of selected component (if applicable):

# rpm -qa | grep ansible
ansible-2.9.25-1.el8ae.noarch
ceph-ansible-6.0.11.1-1.el8cp.noarch
ceph version 16.2.0-117.el8cp

How reproducible:
always

Steps to Reproduce:
1. Follow the RHCS 4 -> 5 upgrade path.
2. In the section CONVERTING THE STORAGE CLUSTER TO USING CEPHADM:
3. ansible-playbook infrastructure-playbooks/cephadm-adopt.yml

Actual results:
The "manage nodes with cephadm" task fails (rc 22) with: Error EINVAL: Host mons-0 (10.10.95.151) failed check(s): hostname "mons-0.siterdub.lab.rdu2.cee.redhat.com" does not match expected hostname "mons-0".

Expected results:
The playbook adds every host to the cephadm orchestrator and completes successfully.

Additional info:
The playbook passes the short hostname (ansible_hostname, "mons-0") to `ceph orch host add`, while `hostname` on the node returns the FQDN, so cephadm's host check rejects the add.
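For illustration, the failing validation boils down to a string comparison between the name given to `ceph orch host add` and the node's runtime hostname. A minimal sketch (hypothetical helper, not cephadm's actual implementation), assuming the check simply compares the two names:

```python
def check_hostname(actual: str, expected: str) -> list:
    """Sketch of cephadm's host-add validation: the name passed to
    `ceph orch host add` must equal the hostname reported by the node.
    Returns a list of check failures, empty if the host passes."""
    if actual != expected:
        # This mirrors the message seen in the task's stderr above.
        return ['hostname "%s" does not match expected hostname "%s"'
                % (actual, expected)]
    return []

# The node reports its FQDN, but the playbook passed the short name:
errors = check_hostname("mons-0.siterdub.lab.rdu2.cee.redhat.com", "mons-0")
```

This is why the task succeeds only when the name ceph-ansible supplies matches whatever `hostname` returns on the node.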
This needs to be thoroughly tested.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat Ceph Storage 5.0 Bug Fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:4105