Description of problem:

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:

Actual results:

Expected results:
Hey Vidushi, I tried to reproduce this on the latest upstream and the downstream 7.1 branches, and I see the expected behaviour; the issue does not reproduce on my end. Thanks, Shreyansh.
Created attachment 2037079 [details]
Coredump

Attached the coredump generated in the pluto test env cluster.

(gdb) bt
#0  0x00007eff5e85a94c in _IO_new_fclose (fp=0xef) at iofclose.c:67
#1  0x3dff35fb9a06f400 in ?? ()
#2  0x00005558f6f64e90 in ?? ()
#3  0x0000000000000000 in ?? ()
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: Red Hat Ceph Storage 7.1 security, bug fix, and enhancement updates), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2025:4664