Description of problem: [RHCS 5] Tracking omap format change https://github.com/ceph/ceph/pull/33401/files
For 5.0 we'll turn off the format change by default, and document how to perform it with ceph-ansible.
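As an illustrative sketch only (this bug does not name the exact option; in upstream Pacific the on-mount omap conversion is gated by bluestore_fsck_quick_fix_on_mount, and referencing it here is an assumption), keeping the format change off by default would look like:

```ini
# ceph.conf fragment (illustrative; assumes the conversion is gated by
# bluestore_fsck_quick_fix_on_mount -- an assumption of this sketch)
[osd]
# false = OSDs keep the legacy omap format at startup; the operator
# opts in to the conversion explicitly (e.g. via the documented
# ceph-ansible procedure) rather than it running automatically on mount
bluestore_fsck_quick_fix_on_mount = false
```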
Moving the code change and tracking bug to verified state based on the automation regression run.

[ceph: root@magna045 /]# ceph -s
  cluster:
    id:     4185a64c-71b0-11eb-bae4-002590fbc342
    health: HEALTH_OK

  services:
    mon: 5 daemons, quorum magna045,magna049,magna046,magna048,magna047 (age 13d)
    mgr: magna045.pwohab(active, since 13d), standbys: magna046.ajvpup
    osd: 35 osds: 35 up (since 7h), 35 in (since 13d)

  data:
    pools:   1 pools, 1 pgs
    objects: 40 objects, 0 B
    usage:   596 MiB used, 100 TiB / 100 TiB avail
    pgs:     1 active+clean

[ceph: root@magna045 /]# ceph version
ceph version 16.1.0-100.el8cp (fd37c928e824870f3b214b12828a3d8f9d1ebbc1) pacific (rc)
[ceph: root@magna045 /]#
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3294