Description of problem (please be as detailed as possible and provide log snippets):
Must-gather: the rados snapshot collection returns an error.

Version of all relevant components (if applicable):
OCP Version: 4.9.17
ODF Version: 4.9.2-11
Platform: VMware

Does this issue impact your ability to continue to work with the product (please explain in detail what is the user impact)?

Is there any workaround available to the best of your knowledge?

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?

Is this issue reproducible?

Can this issue be reproduced from the UI?

If this is a regression, please provide more details to justify this:

Steps to Reproduce:
1. Run the must-gather command:
   oc adm must-gather --image=quay.io/rhceph-dev/ocs-must-gather:latest-4.9
2. Check the content of the rados snapshot files. Instead of the omap data, they contain:
   error getting omap keys ocs-storagecluster-cephfilesystem-metadata/csi.snap.50c21d7f-8316-11ec-8075-0a580a800222: (2) No such file or directory
   command terminated with exit code 1

Affected log files:
http://magna002.ceph.redhat.com/ocsci-jenkins/openshift-clusters/j-010vu1cs33s-t1/j-010vu1cs33s-t1_20220131T214940/logs/failed_testcase_ocs_logs_1643668843/test_must_gather%5bCEPH%5d_ocs_logs/ocs_must_gather/quay-io-rhceph-dev-ocs-must-gather-sha256-d0257233e55ce2d9f0fd9bbf8ac08d42817989ed8a8dc75d5f8238a960fbcf95/ceph/logs/gather-rados-csi.snap.snapshot-6f8a55ce-2c25-4774-9540-8b635af9ffb3-debug.log
http://magna002.ceph.redhat.com/ocsci-jenkins/openshift-clusters/j-010vu1cs33s-t1/j-010vu1cs33s-t1_20220131T214940/logs/failed_testcase_ocs_logs_1643668843/test_must_gather%5bCEPH%5d_ocs_logs/ocs_must_gather/quay-io-rhceph-dev-ocs-must-gather-sha256-d0257233e55ce2d9f0fd9bbf8ac08d42817989ed8a8dc75d5f8238a960fbcf95/ceph/logs/gather-rados-csi.snap.snapshot-43a5d3b9-53b3-4735-b62d-a82ebc9eae78-debug.log

OCS + OCP MG:
http://magna002.ceph.redhat.com/ocsci-jenkins/openshift-clusters/j-010vu1cs33s-t1/j-010vu1cs33s-t1_20220131T214940/logs/failed_testcase_ocs_logs_1643668843/test_must_gather%5bCEPH%5d_ocs_logs/

Actual results:
The gather-rados-csi.snap.* debug logs contain "error getting omap keys ...: (2) No such file or directory" and the command terminates with exit code 1.

Expected results:
The omap keys of the csi.snap objects are collected without errors.

Additional info:
The development team needs a live cluster to debug this issue.
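For reference, a rough way to check the same omap objects manually from the Ceph side. This is only a sketch: it assumes the rook-ceph toolbox is deployed as rook-ceph-tools in the openshift-storage namespace and that ceph-csi keeps its snapshot metadata in the "csi" rados namespace of the filesystem metadata pool; both should be verified on the affected cluster.

oc rsh -n openshift-storage deploy/rook-ceph-tools
rados -p ocs-storagecluster-cephfilesystem-metadata --namespace csi ls | grep csi.snap
rados -p ocs-storagecluster-cephfilesystem-metadata --namespace csi listomapkeys csi.snap.50c21d7f-8316-11ec-8075-0a580a800222

If the csi.snap object from the error message is absent from the listing, the "(2) No such file or directory" seen in must-gather would point to a stale snapshot reference in the pool rather than to the gather script itself; if the object exists and listomapkeys succeeds, the problem is more likely in how must-gather builds or runs the command.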
This can't be fixed before dev freeze and is not a blocker.
Not reproducible; please reopen if this is seen again. It would be great if we could also get access to the live cluster the next time this issue is seen.