Description of problem (please be as detailed as possible and provide log snippets):

On node reboot during an OCP upgrade, the OSD is in CrashLoopBackOff (CLBO):

  bluestore(/var/lib/ceph/osd/ceph-4/block) _read_bdev_label bad crc on label, expected 2712758112 != actual 2707884733

Version of all relevant components (if applicable):

ODF v4.12.1
"ceph version 16.2.10-138.el8cp (a63ae467c8e1f7503ea3855893f1e5ca189a71b9) pacific (stable)"

Does this issue impact your ability to continue to work with the product (please explain in detail what is the user impact)?

A dedicated ceph-bluestore-tool recover-superblock/repair-superblock command is needed.

Is there any workaround available to the best of your knowledge?

No

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?

Is this issue reproducible? Unknown

Can this issue be reproduced from the UI?

If this is a regression, please provide more details to justify this:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
Log from the OSD activate container in the must-gather:

/cases/03592854/0010-must-gather-osd4err.tar.gz/must-gather/registry-redhat-io-odf4-ocs-must-gather-rhel8-sha256-004a8d2b06150a8e0781b6734672388372938123cc3273fc84e8385fe300ea10/namespaces/openshift-storage/pods/rook-ceph-osd-4-7b57db5f95-c5497/activate/activate/logs/current.log

2023-08-21T14:29:08.041108150Z failed to read label for /var/lib/ceph/osd/ceph-4/block: (5) Input/output error
2023-08-21T14:29:08.041213859Z 2023-08-21T14:29:08.040+0000 7fb004eb9540 -1 bluestore(/var/lib/ceph/osd/ceph-4/block) _read_bdev_label bad crc on label, expected 2712758112 != actual 2707884733
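For context on the failure mode: _read_bdev_label reads the label block at the start of the device, decodes it, and compares the checksum stored in the label against one computed over the label bytes; a mismatch produces the "bad crc on label" error above and the OSD refuses to start. A minimal sketch of that verify pattern in Python, using zlib.crc32 as a stand-in for BlueStore's actual checksum and a simplified payload+CRC layout (both are illustrative assumptions, not the on-disk bluestore_bdev_label_t format):

```python
import struct
import zlib

# Illustrative layout: payload bytes followed by a 4-byte little-endian
# CRC of the payload. The real bdev label is a fixed-size block with an
# encoded bluestore_bdev_label_t; zlib.crc32 is only a stand-in here.
def make_label(payload: bytes) -> bytes:
    crc = zlib.crc32(payload) & 0xFFFFFFFF
    return payload + struct.pack("<I", crc)

def read_label(blob: bytes) -> bytes:
    payload = blob[:-4]
    stored = struct.unpack("<I", blob[-4:])[0]
    actual = zlib.crc32(payload) & 0xFFFFFFFF
    if stored != actual:
        # This is the failure in the log above: the label bytes changed
        # on disk, so the stored CRC no longer matches the recomputed one.
        raise IOError(f"bad crc on label, expected {stored} != actual {actual}")
    return payload

label = make_label(b"osd uuid / size / birth time")
assert read_label(label) == b"osd uuid / size / birth time"

# Flipping a single byte reproduces the mismatch the OSD hit:
corrupted = bytearray(label)
corrupted[0] ^= 0xFF
try:
    read_label(bytes(corrupted))
except IOError as e:
    print(e)
```

When the label is still readable, its contents can be dumped with `ceph-bluestore-tool show-label --dev <device>`; the request in this report is for a companion subcommand that can rewrite a label whose CRC no longer matches, since today there is no supported repair path.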