Description of problem:
The upgrade from 4.3 to 5.2 is failing.

Version-Release number of selected component (if applicable):
ceph version 14.2.22-110.el8cp to ceph version 16.2.8-50.el8cp

How reproducible:
1/1

Steps to Reproduce:
1. Created a Ceph cluster with 14.2.22-110.el8cp

ceph -s
  cluster:
    id:     adb77a34-8fbe-476f-b74a-a4cfd3b4a80d
    health: HEALTH_WARN
            1 pools have too few placement groups
            mons are allowing insecure global_id reclaim

  services:
    mon: 3 daemons, quorum ceph-amk-upgrade-02m3hq-node3,ceph-amk-upgrade-02m3hq-node1-installer,ceph-amk-upgrade-02m3hq-node2 (age 2h)
    mgr: ceph-amk-upgrade-02m3hq-node1-installer(active, since 2h), standbys: ceph-amk-upgrade-02m3hq-node2
    mds: cephfs:1 {0=ceph-amk-upgrade-02m3hq-node6=up:active} 2 up:standby
    osd: 16 osds: 16 up (since 2h), 16 in (since 2h)

  data:
    pools:   3 pools, 80 pgs
    objects: 22 objects, 2.2 KiB
    usage:   16 GiB used, 224 GiB / 240 GiB avail
    pgs:     80 active+clean

2. CephFS status

ceph fs status
cephfs - 0 clients
======
+------+--------+-------------------------------+---------------+-------+-------+
| Rank | State  |              MDS              |    Activity   |  dns  |  inos |
+------+--------+-------------------------------+---------------+-------+-------+
|  0   | active | ceph-amk-upgrade-02m3hq-node6 | Reqs:    0 /s |   10  |   13  |
+------+--------+-------------------------------+---------------+-------+-------+
+-----------------+----------+-------+-------+
|       Pool      |   type   |  used | avail |
+-----------------+----------+-------+-------+
| cephfs_metadata | metadata | 1536k | 70.6G |
|   cephfs_data   |   data   |   0   | 70.6G |
+-----------------+----------+-------+-------+
+-------------------------------+
|          Standby MDS          |
+-------------------------------+
| ceph-amk-upgrade-02m3hq-node5 |
| ceph-amk-upgrade-02m3hq-node4 |
+-------------------------------+
MDS version: ceph version 14.2.22-110.el8cp (2e0d97dbe192cca7419bbf3f8ee6b7abb42965c4) nautilus (stable)

3. Triggered the rolling update; the playbook passed without any issues.
4. Triggered cephadm-adopt.yaml.
This has made the cluster go to HEALTH_ERR state.

Failure observed for cephadm-adopt.yaml:

fatal: [ceph-amk-upgrade-02m3hq-node6 -> ceph-amk-upgrade-02m3hq-node1-installer]: FAILED! => changed=false
  attempts: 20
  cmd:
  - cephadm
  - shell
  - -k
  - /etc/ceph/ceph.client.admin.keyring
  - --fsid
  - adb77a34-8fbe-476f-b74a-a4cfd3b4a80d
  - --
  - ceph
  - pg
  - stat
  - --format
  - json
  delta: '0:00:01.638930'
  end: '2022-06-21 13:46:19.697391'
  invocation:
    module_args:
      _raw_params: cephadm shell -k /etc/ceph/ceph.client.admin.keyring --fsid adb77a34-8fbe-476f-b74a-a4cfd3b4a80d -- ceph pg stat --format json
      _uses_shell: false
      argv: null
      chdir: null
      creates: null
      executable: null
      removes: null
      stdin: null
      stdin_add_newline: true
      strip_empty_ends: true
      warn: true
  rc: 0
  start: '2022-06-21 13:46:18.058461'
  stderr: Inferring config /var/lib/ceph/adb77a34-8fbe-476f-b74a-a4cfd3b4a80d/mon.ceph-amk-upgrade-02m3hq-node1-installer/config
  stderr_lines: <omitted>
  stdout: |2-
    {"pg_ready":true,"pg_summary":{"num_pg_by_state":[{"name":"active+undersized","num":13},{"name":"active+undersized+degraded","num":3},{"name":"active+clean","num":65}],"num_pgs":81,"num_bytes":207736883,"total_bytes":257630928896,"total_avail_bytes":255725993984,"total_used_bytes":1904934912,"total_used_raw_bytes":1904934912,"degraded_objects":9,"degraded_total":237,"degraded_ratio":0.037974683544303799} }
  stdout_lines: <omitted>

NO MORE HOSTS LEFT *************************************************************
PLAY RECAP *********************************************************************
ceph-amk-upgrade-02m3hq-node1-installer : ok=44  changed=9   unreachable=0  failed=0  skipped=30  rescued=0  ignored=0
ceph-amk-upgrade-02m3hq-node2           : ok=43  changed=13  unreachable=0  failed=0  skipped=22  rescued=0  ignored=0
ceph-amk-upgrade-02m3hq-node3           : ok=33  changed=10  unreachable=0  failed=0  skipped=23  rescued=0  ignored=0
ceph-amk-upgrade-02m3hq-node4           : ok=21  changed=2   unreachable=0  failed=0  skipped=22  rescued=0  ignored=0
ceph-amk-upgrade-02m3hq-node5           : ok=30  changed=7   unreachable=0  failed=0  skipped=22  rescued=0  ignored=0
ceph-amk-upgrade-02m3hq-node6           : ok=29  changed=7   unreachable=0  failed=1  skipped=22  rescued=0  ignored=0
localhost                               : ok=1   changed=1   unreachable=0  failed=0  skipped=1   rescued=0  ignored=0

Node Installer IP Address: 10.67.24.81 (cephuser/cephuser)

Actual results:

Expected results:

Additional info:
Please find the detailed steps followed in the below document:
https://docs.google.com/document/d/1PE-mQ4cjKsQno3e5v9M7JmdYnVPnEI35Eoq6KKVqJDs/edit#heading=h.bjzcx0uh0ayv

Cluster details:
ceph-upgrade_bz-SM6YU5-node7            10.0.211.236
ceph-upgrade_bz-SM6YU5-node6            10.0.209.155
ceph-upgrade_bz-SM6YU5-node5            10.0.208.98
ceph-upgrade_bz-SM6YU5-node4            10.0.210.190
ceph-upgrade_bz-SM6YU5-node3            10.0.210.22
ceph-upgrade_bz-SM6YU5-node2            10.0.208.91
ceph-upgrade_bz-SM6YU5-node1-installer  10.0.209.72
Username/password: cephuser/cephuser
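The failed task above is the playbook's PG-health gate: it reruns `ceph pg stat --format json` (20 attempts) and only proceeds once every PG is active+clean, which never happened here. A minimal sketch of that check follows; the PG stat JSON is stubbed from the failing output in this report so the parsing logic can run outside a cluster (on a live node it would come from the `cephadm shell ... ceph pg stat --format json` command shown in the log).

```shell
# Stubbed `ceph pg stat --format json` output, copied from the failure above.
# On a live cluster, populate pg_stat with:
#   cephadm shell -k /etc/ceph/ceph.client.admin.keyring --fsid <fsid> -- \
#       ceph pg stat --format json
pg_stat='{"pg_ready":true,"pg_summary":{"num_pg_by_state":[{"name":"active+undersized","num":13},{"name":"active+undersized+degraded","num":3},{"name":"active+clean","num":65}],"num_pgs":81}}'

# Total PG count and the active+clean subset.
total=$(echo "$pg_stat" | python3 -c 'import sys, json; print(json.load(sys.stdin)["pg_summary"]["num_pgs"])')
clean=$(echo "$pg_stat" | python3 -c 'import sys, json; s = json.load(sys.stdin)["pg_summary"]["num_pg_by_state"]; print(sum(x["num"] for x in s if x["name"] == "active+clean"))')

# The playbook retries until these two numbers match.
if [ "$total" -eq "$clean" ]; then
    echo "all $total PGs active+clean"
else
    echo "waiting: $clean/$total PGs active+clean"
fi
```

With the stubbed data this reports 65 of 81 PGs clean, matching why the task exhausted its 20 attempts.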
Hi All,

After running cephadm-adopt.yaml, the OSD on the node goes down and the adoption stops.

[root@ceph-upgrade-bz-sm6yu5-node7 ~]# ceph orch ps
NAME                                        HOST                                    PORTS  STATUS        REFRESHED  AGE  MEM USE  MEM LIM  VERSION          IMAGE ID      CONTAINER ID
mgr.ceph-upgrade-bz-sm6yu5-node1-installer  ceph-upgrade-bz-sm6yu5-node1-installer         running (4h)  2m ago     4h   396M     -        16.2.8-50.el8cp  7efaa9b8d90d  254c94706e5b
mgr.ceph-upgrade-bz-sm6yu5-node2            ceph-upgrade-bz-sm6yu5-node2                   running (4h)  2m ago     4h   449M     -        16.2.8-50.el8cp  7efaa9b8d90d  e70d2cc56777
mon.ceph-upgrade-bz-sm6yu5-node1-installer  ceph-upgrade-bz-sm6yu5-node1-installer         running (4h)  2m ago     4h   741M     2048M    16.2.8-50.el8cp  7efaa9b8d90d  147264706cce
mon.ceph-upgrade-bz-sm6yu5-node2            ceph-upgrade-bz-sm6yu5-node2                   running (4h)  2m ago     4h   956M     2048M    16.2.8-50.el8cp  7efaa9b8d90d  6e9d3c00e6cd
mon.ceph-upgrade-bz-sm6yu5-node3            ceph-upgrade-bz-sm6yu5-node3                   running (4h)  2m ago     4h   749M     2048M    16.2.8-50.el8cp  7efaa9b8d90d  76cf32e36c10
osd.1                                       ceph-upgrade-bz-sm6yu5-node2                   running (4h)  2m ago     4h   417M     4096M    16.2.8-50.el8cp  7efaa9b8d90d  0c2e110a64ce
osd.13                                      ceph-upgrade-bz-sm6yu5-node2                   running (4h)  2m ago     4h   370M     4096M    16.2.8-50.el8cp  7efaa9b8d90d  fe92eb639707
osd.4                                       ceph-upgrade-bz-sm6yu5-node2                   error         2m ago     4h   -        4096M    <unknown>        <unknown>     <unknown>
osd.8                                       ceph-upgrade-bz-sm6yu5-node2                   running (4h)  2m ago     4h   414M     4096M    16.2.8-50.el8cp  7efaa9b8d90d  38671d373d58

The MDS service is still managed by ceph-ansible only:

[root@ceph-upgrade-bz-sm6yu5-node7 ~]# ceph fs status
cephfs - 1 clients
======
RANK  STATE   MDS                           ACTIVITY     DNS  INOS  DIRS  CAPS
 0    active  ceph-upgrade-bz-sm6yu5-node6  Reqs: 39 /s  394  171   151   156
POOL             TYPE      USED  AVAIL
cephfs_metadata  metadata  751M  78.3G
cephfs_data      data      261M  78.5G
STANDBY MDS
ceph-upgrade-bz-sm6yu5-node5
ceph-upgrade-bz-sm6yu5-node4
MDS version: ceph version 16.2.8-50.el8cp (1b09a4ab3b8c809da02c3afdc4abdc92b1a97934) pacific (stable)
[root@ceph-upgrade-bz-sm6yu5-node7 ~]#

Regards,
Amarnath
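For reference, the daemon in `error` state can be picked out of `ceph orch ps` mechanically. A minimal sketch, with the relevant orch ps row stubbed from the output above so it runs anywhere; on the cluster you would feed it `ceph orch ps --format plain` instead, then inspect the failed unit with `journalctl` on the affected host.

```shell
# Stubbed `ceph orch ps` row for the failed daemon, copied from the output
# above; on a live node, pipe in the output of: ceph orch ps --format plain
orch_ps='osd.4  ceph-upgrade-bz-sm6yu5-node2  error  2m ago  4h  -  4096M  <unknown>  <unknown>  <unknown>'

# Column 3 is STATUS; print any daemon whose status is "error".
echo "$orch_ps" | awk '$3 == "error" { print "failed daemon: " $1 " on host " $2 }'
```

The next step on the reporting host would be something like `journalctl -u 'ceph-<fsid>@osd.4'` (fsid substituted for the cluster) to see why the adopted container did not start.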
Sure Guillaume, I will delete the setup.

OK @Teoman, I will try again.

Regards,
Amarnath
Apologies, there was a typo in the above comment. We will NOT delete the setup, Guillaume.

Regards,
Amarnath
Hi Adam,

We haven't changed anything as part of the testing. Below are the workloads we ran on the Ceph File System:
1. Mounted the root path of the Ceph file system.
2. Triggered the smallfile IO workload, which creates 100 MB of data.
3. Deleted the data written in the above step and triggered dd for 100 MB.
Steps 2-3 were repeated continuously while the upgrade was in progress.

Regards,
Amarnath
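The loop in steps 2-3 can be sketched roughly as below. This is an illustration, not the exact cephci job: `MOUNT_POINT` is an assumption and defaults to a temp directory here so the sketch is safe to run anywhere; in the actual test it would be the mounted CephFS root, and step 2 used the smallfile tool rather than dd.

```shell
# Illustrative write/delete churn: create ~100 MB of data, delete it, repeat.
# MOUNT_POINT is an assumption; in the real test this is the CephFS root
# mount (kernel or ceph-fuse), not a local temp directory.
MOUNT_POINT="${MOUNT_POINT:-$(mktemp -d)}"

for round in 1 2 3; do
    # Steps 2-3: write ~100 MB (the real run used smallfile, then dd) ...
    dd if=/dev/zero of="$MOUNT_POINT/io_round_$round" bs=1M count=100 status=none
    # ... then delete what was just written before the next round.
    rm -f "$MOUNT_POINT/io_round_$round"
done
echo "IO loop done under $MOUNT_POINT"
```

In the reported runs, this churn was kept going continuously while rolling_update.yml and cephadm-adopt.yml executed.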
Hi Neha,

I have tried it 3 times, and 3 out of 3 I have hit the issue in either the rolling_update.yaml or cephadm-adopt.yaml playbooks. I used cephci to create the setup and ran continuous IOs while the upgrade was going on.

Regards,
Amarnath
Hi Neha,

Sure, we can plan to test the upgrade with the patch fixes. Could you please share the details when the patch is available?

Regards,
Amarnath
(In reply to Amarnath from comment #16)
> Hi Neha,
>
> Sure Neha, we can plan to test the upgrade with the patch fixes.
> Could you please help with the details when the patch is available

We can make a pacific PR available to you; would you be able to test it before we merge it? Given that you have a reproducer, it would be perfect to use it for validation.

> Regards,
> Amarnath
Hi Thomas,

The Rolling Update playbook completed successfully and Ceph was upgraded:

[root@ceph-amk-upgrade-bxnpyt-node7 test]# ceph versions
{
    "mon": {
        "ceph version 16.2.8-65.0.TEST.bz2099828.el8cp (d2e66ff1cd54b8bd9071b67ab458f87b5d88ff15) pacific (stable)": 3
    },
    "mgr": {
        "ceph version 16.2.8-65.0.TEST.bz2099828.el8cp (d2e66ff1cd54b8bd9071b67ab458f87b5d88ff15) pacific (stable)": 2
    },
    "osd": {
        "ceph version 16.2.8-65.0.TEST.bz2099828.el8cp (d2e66ff1cd54b8bd9071b67ab458f87b5d88ff15) pacific (stable)": 16
    },
    "mds": {
        "ceph version 16.2.8-65.0.TEST.bz2099828.el8cp (d2e66ff1cd54b8bd9071b67ab458f87b5d88ff15) pacific (stable)": 3
    },
    "overall": {
        "ceph version 16.2.8-65.0.TEST.bz2099828.el8cp (d2e66ff1cd54b8bd9071b67ab458f87b5d88ff15) pacific (stable)": 24
    }
}

The cephadm-adopt.yaml playbook failed with the below error:

TASK [ceph-common : enable red hat storage tools repository] *******************
task path: /usr/share/ceph-ansible/roles/ceph-common/tasks/installs/prerequisite_rhcs_cdn_install.yml:2
Friday 01 July 2022  07:26:02 -0400 (0:00:00.145)  0:00:10.897 ***********
fatal: [ceph-amk-upgrade-bxnpyt-node1-installer]: FAILED! =>
  msg: |-
    The conditional check 'mon_group_name in group_names of osd_group_name in group_names or mgr_group_name in group_names or rgw_group_name in group_names or mds_group_name in group_names or nfs_group_name in group_names or iscsi_gw_group_name in group_names or client_group_name in group_names or rbdmirror_group_name in group_names or monitoring_group_name in group_names' failed. The error was: template error while templating string: expected token 'end of statement block', got 'of'.
    String: {% if mon_group_name in group_names of osd_group_name in group_names or mgr_group_name in group_names or rgw_group_name in group_names or mds_group_name in group_names or nfs_group_name in group_names or iscsi_gw_group_name in group_names or client_group_name in group_names or rbdmirror_group_name in group_names or monitoring_group_name in group_names %} True {% else %} False {% endif %}

    The error appears to be in '/usr/share/ceph-ansible/roles/ceph-common/tasks/installs/prerequisite_rhcs_cdn_install.yml': line 2, column 3, but may be elsewhere in the file depending on the exact syntax problem.

    The offending line appears to be:
    ---
    - name: enable red hat storage tools repository
      ^ here

The identical conditional-check failure ("expected token 'end of statement block', got 'of'") was reported for ceph-amk-upgrade-bxnpyt-node2, node3, node4, node5 and node6.

Setup details:
Installer Node : 10.0.211.249
Username : cephuser
Password : cephuser

Regards,
Amarnath
OK Thomas, I will deploy these builds and test them.

Regards,
Amarnath
Hi Guillaume,

Can I get the builds with the fix, so I can complete one round of testing?

Regards,
Amarnath
Hi Thomas,

The rolling_update playbook went fine, but cephadm-adopt is failing. I am hitting the issue mentioned in https://bugzilla.redhat.com/show_bug.cgi?id=2099828#c25.
Do I need to update ceph-ansible as mentioned in c27? I don't see that step in the documentation: https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/5/html-single/upgrade_guide/index#converting-the-storage-cluster-to-using-cephadm_upgrade

[cephuser@ceph-amk-upgrade-g44qe9-node1-installer ceph-ansible]$ rpm -qa | grep ceph-ansible
ceph-ansible-6.0.25.7-1.el8cp.noarch

TASK [ceph-common : enable red hat storage tools repository] *******************
task path: /usr/share/ceph-ansible/roles/ceph-common/tasks/installs/prerequisite_rhcs_cdn_install.yml:2
Wednesday 03 August 2022  13:09:51 -0400 (0:00:00.189)  0:00:22.285 ******
fatal: [ceph-amk-upgrade-g44qe9-node1-installer]: FAILED! =>
  msg: |-
    The conditional check 'mon_group_name in group_names of osd_group_name in group_names or mgr_group_name in group_names or rgw_group_name in group_names or mds_group_name in group_names or nfs_group_name in group_names or iscsi_gw_group_name in group_names or client_group_name in group_names or rbdmirror_group_name in group_names or monitoring_group_name in group_names' failed. The error was: template error while templating string: expected token 'end of statement block', got 'of'.
    String: {% if mon_group_name in group_names of osd_group_name in group_names or mgr_group_name in group_names or rgw_group_name in group_names or mds_group_name in group_names or nfs_group_name in group_names or iscsi_gw_group_name in group_names or client_group_name in group_names or rbdmirror_group_name in group_names or monitoring_group_name in group_names %} True {% else %} False {% endif %}

    The error appears to be in '/usr/share/ceph-ansible/roles/ceph-common/tasks/installs/prerequisite_rhcs_cdn_install.yml': line 2, column 3, but may be elsewhere in the file depending on the exact syntax problem.

    The offending line appears to be:
    ---
    - name: enable red hat storage tools repository
      ^ here

The identical conditional-check failure was reported for ceph-amk-upgrade-g44qe9-node2 through ceph-amk-upgrade-g44qe9-node7.

NO MORE HOSTS LEFT *************************************************************
PLAY RECAP *********************************************************************
ceph-amk-upgrade-g44qe9-node1-installer : ok=13  changed=1  unreachable=0  failed=1  skipped=13  rescued=0  ignored=0
ceph-amk-upgrade-g44qe9-node2           : ok=10  changed=0  unreachable=0  failed=1  skipped=12  rescued=0  ignored=0
ceph-amk-upgrade-g44qe9-node3           : ok=10  changed=0  unreachable=0  failed=1  skipped=12  rescued=0  ignored=0
ceph-amk-upgrade-g44qe9-node4           : ok=10  changed=0  unreachable=0  failed=1  skipped=12  rescued=0  ignored=0
ceph-amk-upgrade-g44qe9-node5           : ok=10  changed=0  unreachable=0  failed=1  skipped=12  rescued=0  ignored=0
ceph-amk-upgrade-g44qe9-node6           : ok=10  changed=0  unreachable=0  failed=1  skipped=12  rescued=0  ignored=0
ceph-amk-upgrade-g44qe9-node7           : ok=10  changed=0  unreachable=0  failed=1  skipped=12  rescued=0  ignored=0
localhost                               : ok=1   changed=1  unreachable=0  failed=0  skipped=1   rescued=0  ignored=0

Regards,
Amarnath
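The template error above points at a literal `of` where an `or` was evidently intended in the role's `when:` conditional ("expected token 'end of statement block', got 'of'"). Assuming the role path quoted in the traceback, a quick grep can confirm which task files still carry the bad token; this is a hypothetical check, written so it degrades gracefully on a machine without ceph-ansible installed.

```shell
# Search the role flagged by the traceback for the bad 'of' token.
# Path is taken verbatim from the error message above; on hosts without
# ceph-ansible the search simply finds nothing.
result=$(grep -rn "group_names of" \
    /usr/share/ceph-ansible/roles/ceph-common/tasks/installs/ 2>/dev/null || true)

if [ -n "$result" ]; then
    printf '%s\n' "$result"
else
    echo "pattern not found (is ceph-ansible installed on this host?)"
fi
```

A fixed ceph-ansible build (with the conditional reading `or` throughout) should produce no matches.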
Hi Guillaume,

Thanks for the update. After updating ceph-ansible, the adoption worked fine.

For QE testing, I should have added the latest repos under /etc/yum.repos.d and then updated ceph-ansible. Instead, I had only subscribed and triggered a dnf update:

subscription-manager repos --enable=rhceph-5-tools-for-rhel-8-x86_64-rpms --enable=ansible-2.9-for-rhel-8-x86_64-rpms
dnf update ansible ceph-ansible

[root@ceph-amk-upgrade-g44qe9-node1-installer yum.repos.d]# rpm -qa | grep ceph-ansible
ceph-ansible-6.0.25.7-1.el8cp.noarch
[root@ceph-amk-upgrade-g44qe9-node1-installer yum.repos.d]# yum install ceph-ansible --nogpgcheck
Updating Subscription Management repositories.
created by dnf config-manager from http://download.eng.bos.redhat.com/rhel-8/composes/auto/ceph-5.2-rhel-8/RHCEPH-5.2-RHEL-8-20220802.ci.0/compose/Tools/x86_64/os/   294 kB/s |  40 kB  00:00
Package ceph-ansible-6.0.25.7-1.el8cp.noarch is already installed.
Dependencies resolved.
================================================================================
 Package       Architecture  Version           Repository                 Size
================================================================================
Upgrading:
 ceph-ansible  noarch        6.0.27.9-1.el8cp  download.eng.bos.redhat.com_rhel-8_composes_auto_ceph-5.2-rhel-8_RHCEPH-5.2-RHEL-8-20220802.ci.0_compose_Tools_x86_64_os_  235 k

Transaction Summary
================================================================================
Upgrade  1 Package

Total download size: 235 k
Is this ok [y/N]: y
Downloading Packages:
ceph-ansible-6.0.27.9-1.el8cp.noarch.rpm   1.1 MB/s | 235 kB  00:00
-------------------------------------------------------------------------------
Total    1.1 MB/s | 235 kB  00:00
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing  :                                         1/1
  Upgrading  : ceph-ansible-6.0.27.9-1.el8cp.noarch    1/2
  Cleanup    : ceph-ansible-6.0.25.7-1.el8cp.noarch    2/2
  Verifying  : ceph-ansible-6.0.27.9-1.el8cp.noarch    1/2
  Verifying  : ceph-ansible-6.0.25.7-1.el8cp.noarch    2/2
Installed products updated.

Upgraded:
  ceph-ansible-6.0.27.9-1.el8cp.noarch

Complete!
[root@ceph-amk-upgrade-g44qe9-node1-installer yum.repos.d]# exit
logout
[cephuser@ceph-amk-upgrade-g44qe9-node1-installer ~]$ cd /usr/share/ceph-ansible/
[cephuser@ceph-amk-upgrade-g44qe9-node1-installer ceph-ansible]$ ANSIBLE_STDOUT_CALLBACK=debug;ansible-playbook -e ireallymeanit=yes -vvvv -i hosts infrastructure-playbooks/cephadm-adopt.yml

ok: [ceph-amk-upgrade-g44qe9-node1-installer] => changed=false
  cmd:
  - cephadm
  - shell
  - -k
  - /etc/ceph/ceph.client.admin.keyring
  - --fsid
  - 4fa12ba9-f5d6-4152-b93b-62c5989ee7bb
  - --
  - ceph
  - orch
  - ps
  - --refresh
  delta: '0:00:02.195815'
  end: '2022-08-03 14:17:32.562114'
  invocation:
    module_args:
      _raw_params: cephadm shell -k /etc/ceph/ceph.client.admin.keyring --fsid 4fa12ba9-f5d6-4152-b93b-62c5989ee7bb -- ceph orch ps --refresh
      _uses_shell: false
      argv: null
      chdir: null
      creates: null
      executable: null
      removes: null
      stdin: null
      stdin_add_newline: true
      strip_empty_ends: true
      warn: true
  rc: 0
  start: '2022-08-03 14:17:30.366299'
  stderr: Inferring config /var/lib/ceph/4fa12ba9-f5d6-4152-b93b-62c5989ee7bb/mon.ceph-amk-upgrade-g44qe9-node1-installer/config
  stderr_lines: <omitted>
  stdout: |-
    NAME  HOST  PORTS  STATUS  REFRESHED  AGE  MEM USE  MEM LIM  VERSION  IMAGE ID  CONTAINER ID
    crash.ceph-amk-upgrade-g44qe9-node1-installer  ceph-amk-upgrade-g44qe9-node1-installer  running (42s)  23s ago  42s  6975k  -  16.2.8-84.el8cp  2852aa7f3de0  10d0e019ec29
    crash.ceph-amk-upgrade-g44qe9-node2  ceph-amk-upgrade-g44qe9-node2  running (36s)  23s ago  36s  6991k  -  16.2.8-84.el8cp  2852aa7f3de0  fc4564acaa38
    crash.ceph-amk-upgrade-g44qe9-node3  ceph-amk-upgrade-g44qe9-node3  running (45s)  23s ago  45s  6995k  -  16.2.8-84.el8cp  2852aa7f3de0  69f33d21de04
    crash.ceph-amk-upgrade-g44qe9-node4  ceph-amk-upgrade-g44qe9-node4  running (32s)  22s ago  32s  7000k  -  16.2.8-84.el8cp  2852aa7f3de0  5a795757dda5
    crash.ceph-amk-upgrade-g44qe9-node5  ceph-amk-upgrade-g44qe9-node5  running (27s)  22s ago  26s  6983k  -  16.2.8-84.el8cp  2852aa7f3de0  28a1fae67d80
    crash.ceph-amk-upgrade-g44qe9-node6  ceph-amk-upgrade-g44qe9-node6  running (39s)  22s ago  39s  6987k  -  16.2.8-84.el8cp  2852aa7f3de0  41e9c94ba86f
    crash.ceph-amk-upgrade-g44qe9-node7  ceph-amk-upgrade-g44qe9-node7  running (29s)  14s ago  29s  6995k  -  16.2.8-84.el8cp  2852aa7f3de0  529fe6739be4
    mds.cephfs.ceph-amk-upgrade-g44qe9-node3.iwehzj  ceph-amk-upgrade-g44qe9-node3  running (2m)  23s ago  2m  47.1M  -  16.2.8-84.el8cp  2852aa7f3de0  65c835440e3b
    mds.cephfs.ceph-amk-upgrade-g44qe9-node4.bfzcsg  ceph-amk-upgrade-g44qe9-node4  running (2m)  22s ago  2m  14.1M  -  16.2.8-84.el8cp  2852aa7f3de0  ae80356edaf7
    mds.cephfs.ceph-amk-upgrade-g44qe9-node5.ketcnx  ceph-amk-upgrade-g44qe9-node5  running (2m)  22s ago  2m  14.0M  -  16.2.8-84.el8cp  2852aa7f3de0  53252621a850
    mds.cephfs.ceph-amk-upgrade-g44qe9-node6.tvihot  ceph-amk-upgrade-g44qe9-node6  running (2m)  22s ago  2m  19.5M  -  16.2.8-84.el8cp  2852aa7f3de0  dcba81ceee5f
    mds.cephfs.ceph-amk-upgrade-g44qe9-node7.rvylkh  ceph-amk-upgrade-g44qe9-node7  running (2m)  14s ago  2m  14.5M  -  16.2.8-84.el8cp  2852aa7f3de0  8afd76032558
    mgr.ceph-amk-upgrade-g44qe9-node1-installer  ceph-amk-upgrade-g44qe9-node1-installer  running (6m)  23s ago  6m  446M  -  16.2.8-84.el8cp  2852aa7f3de0  2616fc94fa39
    mgr.ceph-amk-upgrade-g44qe9-node2  ceph-amk-upgrade-g44qe9-node2  running (6m)  23s ago  6m  388M  -  16.2.8-84.el8cp  2852aa7f3de0  b570a8c1043d
    mon.ceph-amk-upgrade-g44qe9-node1-installer  ceph-amk-upgrade-g44qe9-node1-installer  running (7m)  23s ago  6m  118M  2048M  16.2.8-84.el8cp  2852aa7f3de0  ca1a0769ca9f
    mon.ceph-amk-upgrade-g44qe9-node2  ceph-amk-upgrade-g44qe9-node2  running (6m)  23s ago  6m  109M  2048M  16.2.8-84.el8cp  2852aa7f3de0  37efe24edc15
    mon.ceph-amk-upgrade-g44qe9-node3  ceph-amk-upgrade-g44qe9-node3  running (6m)  23s ago  6m  121M  2048M  16.2.8-84.el8cp  2852aa7f3de0  5bd99dd8eb88
    nfs.ceph-amk-upgrade-g44qe9-node6.0.0.ceph-amk-upgrade-g44qe9-node6.edffwi  ceph-amk-upgrade-g44qe9-node6  *:2049  running (74s)  22s ago  74s  68.6M  -  3.5  2852aa7f3de0  8f390b375668
    nfs.ceph-amk-upgrade-g44qe9-node7.0.0.ceph-amk-upgrade-g44qe9-node7.isxihs  ceph-amk-upgrade-g44qe9-node7  *:2049  running (54s)  14s ago  54s  64.3M  -  3.5  2852aa7f3de0  f114d538d11f
    node-exporter.ceph-amk-upgrade-g44qe9-node1-installer  ceph-amk-upgrade-g44qe9-node1-installer  *:9100  starting  -  -  -  -  <unknown>  <unknown>  <unknown>
    node-exporter.ceph-amk-upgrade-g44qe9-node3  ceph-amk-upgrade-g44qe9-node3  *:9100  starting  -  -  -  -  <unknown>  <unknown>  <unknown>
    node-exporter.ceph-amk-upgrade-g44qe9-node6  ceph-amk-upgrade-g44qe9-node6  *:9100  starting  -  -  -  -  <unknown>  <unknown>  <unknown>
    osd.0  ceph-amk-upgrade-g44qe9-node6  running (3m)  22s ago  117s  331M  4096M  16.2.8-84.el8cp  2852aa7f3de0  b94afd735d9c
    osd.1  ceph-amk-upgrade-g44qe9-node4  running (5m)  22s ago  2m  256M  4096M  16.2.8-84.el8cp  2852aa7f3de0  1b44f806e94b
    osd.10  ceph-amk-upgrade-g44qe9-node4  running (5m)  22s ago  2m  297M  4096M  16.2.8-84.el8cp  2852aa7f3de0  83833decad30
    osd.11  ceph-amk-upgrade-g44qe9-node5  running (4m)  22s ago  2m  124M  4096M  16.2.8-84.el8cp  2852aa7f3de0  16168debcf47
    osd.2  ceph-amk-upgrade-g44qe9-node5  running (4m)  22s ago  2m  191M  4096M  16.2.8-84.el8cp  2852aa7f3de0  4a3f67d507df
    osd.3  ceph-amk-upgrade-g44qe9-node6  running (3m)  22s ago  115s  314M  4096M  16.2.8-84.el8cp  2852aa7f3de0  aadbb1ff8bd0
    osd.4  ceph-amk-upgrade-g44qe9-node4  running (5m)  22s ago  2m  199M  4096M  16.2.8-84.el8cp  2852aa7f3de0  6d0ec009c652
    osd.5  ceph-amk-upgrade-g44qe9-node5  running (4m)  22s ago  2m  44.0M  4096M  16.2.8-84.el8cp  2852aa7f3de0  ef87c2b0c163
    osd.6  ceph-amk-upgrade-g44qe9-node6  running (3m)  22s ago  113s  225M  4096M  16.2.8-84.el8cp  2852aa7f3de0  358fd4d2ee04
    osd.7  ceph-amk-upgrade-g44qe9-node4  running (5m)  22s ago  2m  259M  4096M  16.2.8-84.el8cp  2852aa7f3de0  142d437e1795
    osd.8  ceph-amk-upgrade-g44qe9-node5  running (4m)  22s ago  2m  212M  4096M  16.2.8-84.el8cp  2852aa7f3de0  1a5de6f98508
    osd.9  ceph-amk-upgrade-g44qe9-node6  running (3m)  22s ago  111s  281M  4096M  16.2.8-84.el8cp  2852aa7f3de0  3168f1f9ca94
  stdout_lines: <omitted>

TASK [inform users about cephadm] **********************************************
task path: /usr/share/ceph-ansible/infrastructure-playbooks/cephadm-adopt.yml:1544
Wednesday 03 August 2022  14:17:32 -0400 (0:00:02.488)       0:09:55.901 ******
ok: [ceph-amk-upgrade-g44qe9-node1-installer] =>
  msg: |-
    This Ceph cluster is now managed by cephadm. Any new changes to the cluster
    need to be achieved by using the cephadm CLI and you don't need to use
    ceph-ansible playbooks anymore.
META: ran handlers
META: ran handlers

PLAY RECAP **********************************************************************
ceph-amk-upgrade-g44qe9-node1-installer : ok=66  changed=14  unreachable=0  failed=0  skipped=26  rescued=0  ignored=0
ceph-amk-upgrade-g44qe9-node2           : ok=32  changed=11  unreachable=0  failed=0  skipped=22  rescued=0  ignored=0
ceph-amk-upgrade-g44qe9-node3           : ok=34  changed=13  unreachable=0  failed=0  skipped=22  rescued=0  ignored=0
ceph-amk-upgrade-g44qe9-node4           : ok=50  changed=21  unreachable=0  failed=0  skipped=22  rescued=0  ignored=0
ceph-amk-upgrade-g44qe9-node5           : ok=40  changed=17  unreachable=0  failed=0  skipped=22  rescued=0  ignored=0
ceph-amk-upgrade-g44qe9-node6           : ok=49  changed=21  unreachable=0  failed=0  skipped=23  rescued=0  ignored=0
ceph-amk-upgrade-g44qe9-node7           : ok=60  changed=32  unreachable=0  failed=0  skipped=24  rescued=0  ignored=0
localhost                               : ok=1   changed=1   unreachable=0  failed=0  skipped=1   rescued=0  ignored=0

Wednesday 03 August 2022  14:17:32 -0400 (0:00:00.023)       0:09:55.924 ******
===============================================================================
ceph-common : enable red hat storage tools repository ----------------- 71.24s
/usr/share/ceph-ansible/roles/ceph-common/tasks/installs/prerequisite_rhcs_cdn_install.yml:2
adopt osd daemon ------------------------------------------------------ 32.57s
/usr/share/ceph-ansible/infrastructure-playbooks/cephadm-adopt.yml:814
waiting for clean pgs... ---------------------------------------------- 28.13s
/usr/share/ceph-ansible/infrastructure-playbooks/cephadm-adopt.yml:845
manage nodes with cephadm - ipv4 -------------------------------------- 11.97s
/usr/share/ceph-ansible/infrastructure-playbooks/cephadm-adopt.yml:377
stop and disable ceph-mds systemd service ----------------------------- 10.99s
/usr/share/ceph-ansible/infrastructure-playbooks/cephadm-adopt.yml:931
set cephadm as orchestrator backend ----------------------------------- 10.69s
/usr/share/ceph-ansible/infrastructure-playbooks/cephadm-adopt.yml:252
install cephadm ------------------------------------------------------- 10.67s
/usr/share/ceph-ansible/infrastructure-playbooks/cephadm-adopt.yml:205
install cephadm requirements ------------------------------------------- 8.40s
/usr/share/ceph-ansible/infrastructure-playbooks/cephadm-adopt.yml:198
copy the client.admin keyring ------------------------------------------ 7.49s
/usr/share/ceph-ansible/infrastructure-playbooks/cephadm-adopt.yml:413
adopt mon daemon ------------------------------------------------------- 6.70s
/usr/share/ceph-ansible/infrastructure-playbooks/cephadm-adopt.yml:573
disable pg autoscale on pools ------------------------------------------ 6.63s
/usr/share/ceph-ansible/infrastructure-playbooks/cephadm-adopt.yml:741
add ceph label for core component -------------------------------------- 6.52s
/usr/share/ceph-ansible/infrastructure-playbooks/cephadm-adopt.yml:389
adopt grafana daemon --------------------------------------------------- 6.37s
/usr/share/ceph-ansible/infrastructure-playbooks/cephadm-adopt.yml:1412
gather and delegate facts ---------------------------------------------- 6.16s
/usr/share/ceph-ansible/infrastructure-playbooks/cephadm-adopt.yml:64
stop and disable ceph-nfs systemd service ------------------------------ 5.96s
/usr/share/ceph-ansible/infrastructure-playbooks/cephadm-adopt.yml:1122
adopt mgr daemon ------------------------------------------------------- 5.86s
/usr/share/ceph-ansible/infrastructure-playbooks/cephadm-adopt.yml:618
ceph-nfs : create rgw nfs user "cephnfs" ------------------------------- 5.59s
/usr/share/ceph-ansible/roles/ceph-nfs/tasks/create_rgw_nfs_user.yml:2
re-enable pg autoscale on pools ---------------------------------------- 5.17s
/usr/share/ceph-ansible/infrastructure-playbooks/cephadm-adopt.yml:868
adopt prometheus daemon ------------------------------------------------ 4.65s
/usr/share/ceph-ansible/infrastructure-playbooks/cephadm-adopt.yml:1385
unset osd flags -------------------------------------------------------- 4.48s
/usr/share/ceph-ansible/infrastructure-playbooks/cephadm-adopt.yml:883
[cephuser@ceph-amk-upgrade-g44qe9-node1-installer ceph-ansible]$

Regards,
Amarnath
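Since the adoption failure went away after moving from ceph-ansible 6.0.25.7 to 6.0.27.9, a quick pre-flight check before running cephadm-adopt.yml is to compare the installed ceph-ansible version against the build that worked in this report. The sketch below is only illustrative: version_ge is a hypothetical helper based on sort -V, and the 6.0.27.9 floor is taken from this thread rather than from any official compatibility matrix.

```shell
# Hypothetical pre-flight check: is the installed ceph-ansible at least
# the version that worked here (6.0.27.9)? Uses GNU sort -V for the
# version comparison.
version_ge() {
    # True when $1 >= $2: after a version sort, the required version ($2)
    # must sort first (or be equal to the installed one).
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n 1)" = "$2" ]
}

required="6.0.27.9"
# On a real node this value would come from:
#   rpm -q --qf '%{VERSION}' ceph-ansible
installed="6.0.25.7"

if version_ge "$installed" "$required"; then
    echo "ceph-ansible $installed is new enough; proceed with cephadm-adopt.yml"
else
    echo "ceph-ansible $installed is older than $required; run 'dnf update ceph-ansible' first"
fi
```

With the 6.0.25.7 value shown, the check reports that an update is needed, matching the sequence in this comment.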
Hi Akash,

While running cephadm-adopt.yml after rolling-update.yml, we are seeing an issue with the repos: the cephadm package cannot be found on any node other than the installer node. We add the subscriptions only to the installer node, per section 1.9 (the upgrade process) of https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/5/html-single/upgrade_guide/index#the-upgrade-process_upgrade. After adding the repo files to all the nodes, cephadm-adopt.yml succeeded. I feel we need to add a step for adding repos to all the nodes in the cluster.

fatal: [ceph-amk-upgrade-rc-8wmtul-node4]: FAILED! => changed=false
  attempts: 3
  failures:
  - No package cephadm available.
  invocation:
    module_args:
      allow_downgrade: false
      autoremove: false
      bugfix: false
      conf_file: null
      disable_excludes: null
      disable_gpg_check: false
      disable_plugin: []
      disablerepo: []
      download_dir: null
      download_only: false
      enable_plugin: []
      enablerepo: []
      exclude: []
      install_repoquery: true
      install_weak_deps: true
      installroot: /
      list: null
      lock_timeout: 30
      name:
      - cephadm
      releasever: null
      security: false
      skip_broken: false
      state: null
      update_cache: false
      update_only: false
      validate_certs: true
  msg: Failed to install some of the specified packages
  rc: 1
  results: []

NO MORE HOSTS LEFT **************************************************************

PLAY RECAP **********************************************************************
ceph-amk-upgrade-rc-8wmtul-node1-installer : ok=17  changed=2  unreachable=0  failed=0  skipped=20  rescued=0  ignored=0
ceph-amk-upgrade-rc-8wmtul-node2           : ok=12  changed=0  unreachable=0  failed=1  skipped=19  rescued=0  ignored=0
ceph-amk-upgrade-rc-8wmtul-node3           : ok=12  changed=0  unreachable=0  failed=1  skipped=19  rescued=0  ignored=0
ceph-amk-upgrade-rc-8wmtul-node4           : ok=12  changed=0  unreachable=0  failed=1  skipped=19  rescued=0  ignored=0
ceph-amk-upgrade-rc-8wmtul-node5           : ok=12  changed=0  unreachable=0  failed=1  skipped=19  rescued=0  ignored=0
ceph-amk-upgrade-rc-8wmtul-node6           : ok=12  changed=0  unreachable=0  failed=1  skipped=19  rescued=0  ignored=0
ceph-amk-upgrade-rc-8wmtul-node7           : ok=12  changed=0  unreachable=0  failed=1  skipped=19  rescued=0  ignored=0
localhost                                  : ok=1   changed=1  unreachable=0  failed=0  skipped=1   rescued=0  ignored=0

Friday 05 August 2022  13:40:48 -0400 (0:00:34.553)       0:01:45.902 *********
===============================================================================
ceph-common : enable red hat storage tools repository ----------------- 44.12s
/usr/share/ceph-ansible/roles/ceph-common/tasks/installs/prerequisite_rhcs_cdn_install.yml:2
install cephadm ------------------------------------------------------- 34.55s
/usr/share/ceph-ansible/infrastructure-playbooks/cephadm-adopt.yml:205
gather and delegate facts --------------------------------------------- 14.12s
/usr/share/ceph-ansible/infrastructure-playbooks/cephadm-adopt.yml:64
install cephadm requirements ------------------------------------------- 5.14s
/usr/share/ceph-ansible/infrastructure-playbooks/cephadm-adopt.yml:198
get the ceph version --------------------------------------------------- 1.85s
/usr/share/ceph-ansible/infrastructure-playbooks/cephadm-adopt.yml:114
check pools have an application enabled -------------------------------- 1.17s
/usr/share/ceph-ansible/infrastructure-playbooks/cephadm-adopt.yml:93
set mgr/cephadm/no_five_one_rgw ---------------------------------------- 0.96s
/usr/share/ceph-ansible/infrastructure-playbooks/cephadm-adopt.yml:168
ceph-common : purge yum cache ------------------------------------------ 0.55s
/usr/share/ceph-ansible/roles/ceph-common/tasks/installs/configure_redhat_repository_installation.yml:19
ceph-facts : check if podman binary is present ------------------------- 0.44s
/usr/share/ceph-ansible/roles/ceph-facts/tasks/container_binary.yml:2
check if it is atomic host --------------------------------------------- 0.36s
/usr/share/ceph-ansible/infrastructure-playbooks/cephadm-adopt.yml:130
ceph-common : include redhat_rhcs_repository.yml ----------------------- 0.15s
/usr/share/ceph-ansible/roles/ceph-common/tasks/installs/configure_redhat_repository_installation.yml:6
gather facts ----------------------------------------------------------- 0.14s
/usr/share/ceph-ansible/infrastructure-playbooks/cephadm-adopt.yml:56
ceph-common : include prerequisite_rhcs_cdn_install.yml ---------------- 0.13s
/usr/share/ceph-ansible/roles/ceph-common/tasks/installs/redhat_rhcs_repository.yml:2
ceph-common : include installs/configure_redhat_repository_installation.yml -- 0.12s
/usr/share/ceph-ansible/roles/ceph-common/tasks/configure_repository.yml:5
ceph-facts : convert grafana-server group name if exist ---------------- 0.11s
/usr/share/ceph-ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml:2
ceph-common : include installs/configure_suse_repository_installation.yml -- 0.11s
/usr/share/ceph-ansible/roles/ceph-common/tasks/configure_repository.yml:27
pulling registry-proxy.engineering.redhat.com/rh-osbs/rhceph:5-268 image -- 0.11s
/usr/share/ceph-ansible/infrastructure-playbooks/cephadm-adopt.yml:150
ceph-common : include installs/configure_redhat_local_installation.yml -- 0.10s
/usr/share/ceph-ansible/roles/ceph-common/tasks/configure_repository.yml:9
ceph-common : update apt cache if cache_valid_time has expired --------- 0.10s
/usr/share/ceph-ansible/roles/ceph-common/tasks/configure_repository.yml:20
ceph-common : include redhat_dev_repository.yml ------------------------ 0.10s
/usr/share/ceph-ansible/roles/ceph-common/tasks/installs/configure_redhat_repository_installation.yml:10

[cephuser@ceph-amk-upgrade-rc-8wmtul-node1-installer ceph-ansible]$ sudo -i
[root@ceph-amk-upgrade-rc-8wmtul-node1-installer ~]# scp -r /etc/yum.repos.d/download.eng.bos.redhat.com_rhel-8_composes_auto_ceph-5.2-rhel-8_RHCEPH-5.2-RHEL-8-20220803.ci.0_compose_Tools_x86_64_os.repo ceph-amk-upgrade-rc-8wmtul-node2:/etc/yum.repos.d/
The authenticity of host 'ceph-amk-upgrade-rc-8wmtul-node2 (10.0.208.213)' can't be established.
ECDSA key fingerprint is SHA256:0b10q2b83QDGtVcJo3FOaJSRWYuZ64Q6mDWq3FGwFz8.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added 'ceph-amk-upgrade-rc-8wmtul-node2,10.0.208.213' (ECDSA) to the list of known hosts.
root@ceph-amk-upgrade-rc-8wmtul-node2's password:
download.eng.bos.redhat.com_rhel-8_composes_auto_ceph-5.2-rhel-8_RHCEPH-5.2-RHEL-8-20220803.ci.0_compose_Tools_x86_64_os.repo  100%  432  657.9KB/s  00:00
[root@ceph-amk-upgrade-rc-8wmtul-node1-installer ~]# scp -r /etc/yum.repos.d/download.eng.bos.redhat.com_rhel-8_composes_auto_ceph-5.2-rhel-8_RHCEPH-5.2-RHEL-8-20220803.ci.0_compose_Tools_x86_64_os.repo ceph-amk-upgrade-rc-8wmtul-node3:/etc/yum.repos.d/
The authenticity of host 'ceph-amk-upgrade-rc-8wmtul-node3 (10.0.211.176)' can't be established.
ECDSA key fingerprint is SHA256:2tpuUXvB/WOKU3Z4pcRbFNGgZ7Db/sVATzAXGyAe3AI.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added 'ceph-amk-upgrade-rc-8wmtul-node3,10.0.211.176' (ECDSA) to the list of known hosts.
root@ceph-amk-upgrade-rc-8wmtul-node3's password:
download.eng.bos.redhat.com_rhel-8_composes_auto_ceph-5.2-rhel-8_RHCEPH-5.2-RHEL-8-20220803.ci.0_compose_Tools_x86_64_os.repo  100%  432  858.6KB/s  00:00
[root@ceph-amk-upgrade-rc-8wmtul-node1-installer ~]# scp -r /etc/yum.repos.d/download.eng.bos.redhat.com_rhel-8_composes_auto_ceph-5.2-rhel-8_RHCEPH-5.2-RHEL-8-20220803.ci.0_compose_Tools_x86_64_os.repo ceph-amk-upgrade-rc-8wmtul-node4:/etc/yum.repos.d/
The authenticity of host 'ceph-amk-upgrade-rc-8wmtul-node4 (10.0.208.74)' can't be established.
ECDSA key fingerprint is SHA256:wM6ACcgTKDnfT5daTTGPqJ+alzWoSgyB8wPIHSNOkKk.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added 'ceph-amk-upgrade-rc-8wmtul-node4,10.0.208.74' (ECDSA) to the list of known hosts.
root@ceph-amk-upgrade-rc-8wmtul-node4's password:
download.eng.bos.redhat.com_rhel-8_composes_auto_ceph-5.2-rhel-8_RHCEPH-5.2-RHEL-8-20220803.ci.0_compose_Tools_x86_64_os.repo  100%  432  791.7KB/s  00:00
[root@ceph-amk-upgrade-rc-8wmtul-node1-installer ~]# scp -r /etc/yum.repos.d/download.eng.bos.redhat.com_rhel-8_composes_auto_ceph-5.2-rhel-8_RHCEPH-5.2-RHEL-8-20220803.ci.0_compose_Tools_x86_64_os.repo ceph-amk-upgrade-rc-8wmtul-node5:/etc/yum.repos.d/
The authenticity of host 'ceph-amk-upgrade-rc-8wmtul-node5 (10.0.208.46)' can't be established.
ECDSA key fingerprint is SHA256:IhtbMt8bKskvj3fvT7tq9a+C2vEw7mlJcd8U+RIy/C8.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added 'ceph-amk-upgrade-rc-8wmtul-node5,10.0.208.46' (ECDSA) to the list of known hosts.
root@ceph-amk-upgrade-rc-8wmtul-node5's password:
download.eng.bos.redhat.com_rhel-8_composes_auto_ceph-5.2-rhel-8_RHCEPH-5.2-RHEL-8-20220803.ci.0_compose_Tools_x86_64_os.repo  100%  432  822.8KB/s  00:00
[root@ceph-amk-upgrade-rc-8wmtul-node1-installer ~]# scp -r /etc/yum.repos.d/download.eng.bos.redhat.com_rhel-8_composes_auto_ceph-5.2-rhel-8_RHCEPH-5.2-RHEL-8-20220803.ci.0_compose_Tools_x86_64_os.repo ceph-amk-upgrade-rc-8wmtul-node5:/etc/yum.repos.d/
root@ceph-amk-upgrade-rc-8wmtul-node5's password:
download.eng.bos.redhat.com_rhel-8_composes_auto_ceph-5.2-rhel-8_RHCEPH-5.2-RHEL-8-20220803.ci.0_compose_Tools_x86_64_os.repo  100%  432  759.5KB/s  00:00
[root@ceph-amk-upgrade-rc-8wmtul-node1-installer ~]# scp -r /etc/yum.repos.d/download.eng.bos.redhat.com_rhel-8_composes_auto_ceph-5.2-rhel-8_RHCEPH-5.2-RHEL-8-20220803.ci.0_compose_Tools_x86_64_os.repo ceph-amk-upgrade-rc-8wmtul-node6:/etc/yum.repos.d/
The authenticity of host 'ceph-amk-upgrade-rc-8wmtul-node6 (10.0.211.25)' can't be established.
ECDSA key fingerprint is SHA256:WZ7Z4k13Cb9OVrmIB8OeEuX7JGOgfBqWmMgWI371eig.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added 'ceph-amk-upgrade-rc-8wmtul-node6,10.0.211.25' (ECDSA) to the list of known hosts.
root@ceph-amk-upgrade-rc-8wmtul-node6's password:
download.eng.bos.redhat.com_rhel-8_composes_auto_ceph-5.2-rhel-8_RHCEPH-5.2-RHEL-8-20220803.ci.0_compose_Tools_x86_64_os.repo  100%  432  471.1KB/s  00:00
[root@ceph-amk-upgrade-rc-8wmtul-node1-installer ~]# scp -r /etc/yum.repos.d/download.eng.bos.redhat.com_rhel-8_composes_auto_ceph-5.2-rhel-8_RHCEPH-5.2-RHEL-8-20220803.ci.0_compose_Tools_x86_64_os.repo ceph-amk-upgrade-rc-8wmtul-node7:/etc/yum.repos.d/
The authenticity of host 'ceph-amk-upgrade-rc-8wmtul-node7 (10.0.208.197)' can't be established.
ECDSA key fingerprint is SHA256:mv+KhsOPKGUQprkKM59tb356tTSSl1kDYG05xj0/Tys.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added 'ceph-amk-upgrade-rc-8wmtul-node7,10.0.208.197' (ECDSA) to the list of known hosts.
root@ceph-amk-upgrade-rc-8wmtul-node7's password:
download.eng.bos.redhat.com_rhel-8_composes_auto_ceph-5.2-rhel-8_RHCEPH-5.2-RHEL-8-20220803.ci.0_compose_Tools_x86_64_os.repo  100%  432  642.1KB/s  00:00
[root@ceph-amk-upgrade-rc-8wmtul-node1-installer ~]#

Regards,
Amarnath
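The per-node scp workaround above can be wrapped in a small loop instead of repeating the command for each host. This is only a sketch: the shortened repo filename is hypothetical, and the function just prints the commands it would run (drop the leading echo to actually copy, which assumes root SSH access to every node).

```shell
# Hypothetical helper mirroring the manual workaround: copy a .repo file
# from the installer node's /etc/yum.repos.d/ to each remaining cluster
# node so 'install cephadm' can find the package everywhere.
distribute_repo() {
    repo_file="$1"
    shift
    for node in "$@"; do
        # Remove 'echo' to perform the copy for real.
        echo scp "/etc/yum.repos.d/${repo_file}" "root@${node}:/etc/yum.repos.d/"
    done
}

# Example invocation with the hosts from this reproducer and a shortened
# (made-up) repo filename:
distribute_repo "RHCEPH-5.2-Tools.repo" \
    ceph-amk-upgrade-rc-8wmtul-node2 \
    ceph-amk-upgrade-rc-8wmtul-node3 \
    ceph-amk-upgrade-rc-8wmtul-node4 \
    ceph-amk-upgrade-rc-8wmtul-node5 \
    ceph-amk-upgrade-rc-8wmtul-node6 \
    ceph-amk-upgrade-rc-8wmtul-node7
```

Enabling the rhceph-5-tools repo via subscription-manager on every node (as done on the installer) would be the supported alternative to copying the compose .repo file around.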
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: Red Hat Ceph Storage Security, Bug Fix, and Enhancement Update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2022:5997
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 365 days