Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.
This project is now read-only. Starting Monday, February 2, please use https://ibm-ceph.atlassian.net/ for all bug tracking.

Bug 2421370

Summary: RBD mirror operation dumping core
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Chaitanya <cdommeti>
Component: RBD-Mirror
Assignee: Ilya Dryomov <idryomov>
Status: CLOSED ERRATA
QA Contact: Chaitanya <cdommeti>
Severity: high
Docs Contact:
Priority: unspecified
Version: 9.0
CC: ceph-eng-bugs, cephqe-warriors, jcaratza, sangadi, tserlin
Target Milestone: ---
Target Release: 9.0
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: ceph-20.1.0-133
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2026-01-29 07:04:36 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Chaitanya 2025-12-11 10:59:30 UTC
Description of problem:

The rbd-mirror daemon dumps core on the host where it runs on the secondary site while some of the mirror operations are in progress, particularly on the journal-based mirror pool. This is seen while the automated test is running; it has not been possible to identify manually the exact command that triggers the core dump.
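
In outline, the test's setup and the failing step can be condensed to the following command sequence (a sketch reconstructed from the automation log below, not the exact test code; pool and image names are the randomized ones from this run, and two peered clusters ceph-rbd1/primary and ceph-rbd2/secondary are assumed):

```shell
# Primary (ceph-rbd1): create the pool/image and enable journal-based mirroring
ceph osd pool create rep_pool_sEFsKwPzEz 64 64
rbd pool init rep_pool_sEFsKwPzEz
rbd create rep_pool_sEFsKwPzEz/rep_image_sILslFSGjM --size 1G
rbd feature enable rep_pool_sEFsKwPzEz/rep_image_sILslFSGjM journaling
rbd mirror pool enable rep_pool_sEFsKwPzEz pool
rbd mirror pool peer bootstrap create --site-name ceph-rbd1 \
    rep_pool_sEFsKwPzEz > /root/bootstrap_token

# Secondary (ceph-rbd2): enable mirroring and import the peer token
rbd mirror pool enable rep_pool_sEFsKwPzEz pool
rbd mirror pool peer bootstrap import --site-name ceph-rbd2 \
    rep_pool_sEFsKwPzEz /root/bootstrap_token
rbd mirror pool status rep_pool_sEFsKwPzEz --format=json

# While 'rbd bench --io-type write ...' is writing to the primary image,
# the test attempts to shrink the image from the secondary:
rbd resize rep_pool_sEFsKwPzEz/rep_image_sILslFSGjM -s 10M --allow-shrink
# The resize on the secondary is expected to fail (the image is non-primary):
#   rbd: resize error: (30) Read-only file system
# The core dump from rbd-mirror appears around this point in the run.
```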

The steps from the automation log when it dumped core are:

2025-12-11 05:03:14,526 - cephci - run:802 - INFO - Running test test_expand_or_shrink_img_at_secondary.py
2025-12-11 05:03:14,526 - cephci - test_expand_or_shrink_img_at_secondary:99 - INFO - Starting RBD mirroring test case - 9500
2025-12-11 05:03:14,528 - cephci - ceph:1630 - INFO - Execute ceph osd pool create rep_pool_sEFsKwPzEz 64 64 --cluster ceph on 10.0.64.11
2025-12-11 05:03:14,529 - cephci - ceph:1630 - INFO - Execute ceph osd pool create rep_pool_sEFsKwPzEz 64 64 --cluster ceph on 10.0.65.194
2025-12-11 05:03:15,534 - cephci - ceph:1660 - INFO - Execution of ceph osd pool create rep_pool_sEFsKwPzEz 64 64 --cluster ceph on 10.0.65.194 took 1.003845 seconds
2025-12-11 05:03:15,535 - cephci - rbd_mirror_utils:76 - INFO - Output of command ceph osd pool create rep_pool_sEFsKwPzEz 64 64 --cluster ceph:
2025-12-11 05:03:15,536 - cephci - ceph:1630 - INFO - Execute rbd pool init rep_pool_sEFsKwPzEz --cluster ceph on 10.0.65.194
2025-12-11 05:03:16,535 - cephci - ceph:1660 - INFO - Execution of ceph osd pool create rep_pool_sEFsKwPzEz 64 64 --cluster ceph on 10.0.64.11 took 2.005763 seconds
2025-12-11 05:03:16,536 - cephci - rbd_mirror_utils:76 - INFO - Output of command ceph osd pool create rep_pool_sEFsKwPzEz 64 64 --cluster ceph:
2025-12-11 05:03:16,538 - cephci - ceph:1630 - INFO - Execute rbd pool init rep_pool_sEFsKwPzEz --cluster ceph on 10.0.64.11
2025-12-11 05:03:18,542 - cephci - ceph:1660 - INFO - Execution of rbd pool init rep_pool_sEFsKwPzEz --cluster ceph on 10.0.65.194 took 3.005431 seconds
2025-12-11 05:03:18,543 - cephci - rbd_mirror_utils:76 - INFO - Output of command rbd pool init rep_pool_sEFsKwPzEz --cluster ceph:
2025-12-11 05:03:19,544 - cephci - ceph:1660 - INFO - Execution of rbd pool init rep_pool_sEFsKwPzEz --cluster ceph on 10.0.64.11 took 3.005589 seconds
2025-12-11 05:03:19,545 - cephci - rbd_mirror_utils:76 - INFO - Output of command rbd pool init rep_pool_sEFsKwPzEz --cluster ceph:
2025-12-11 05:03:22,536 - cephci - ceph:1630 - INFO - Execute rbd create rep_pool_sEFsKwPzEz/rep_image_sILslFSGjM --size 1G --cluster ceph on 10.0.64.11
2025-12-11 05:03:23,541 - cephci - ceph:1660 - INFO - Execution of rbd create rep_pool_sEFsKwPzEz/rep_image_sILslFSGjM --size 1G --cluster ceph on 10.0.64.11 took 1.004029 seconds
2025-12-11 05:03:23,541 - cephci - rbd_mirror_utils:76 - INFO - Output of command rbd create rep_pool_sEFsKwPzEz/rep_image_sILslFSGjM --size 1G --cluster ceph:
2025-12-11 05:03:23,543 - cephci - ceph:1630 - INFO - Execute rbd feature enable rep_pool_sEFsKwPzEz/rep_image_sILslFSGjM journaling --cluster ceph on 10.0.64.11
2025-12-11 05:03:24,548 - cephci - ceph:1660 - INFO - Execution of rbd feature enable rep_pool_sEFsKwPzEz/rep_image_sILslFSGjM journaling --cluster ceph on 10.0.64.11 took 1.003871 seconds
2025-12-11 05:03:24,548 - cephci - rbd_mirror_utils:76 - INFO - Output of command rbd feature enable rep_pool_sEFsKwPzEz/rep_image_sILslFSGjM journaling --cluster ceph:
2025-12-11 05:03:24,550 - cephci - ceph:1630 - INFO - Execute rbd mirror pool enable rep_pool_sEFsKwPzEz pool --cluster ceph on 10.0.64.11
2025-12-11 05:03:25,555 - cephci - ceph:1660 - INFO - Execution of rbd mirror pool enable rep_pool_sEFsKwPzEz pool --cluster ceph on 10.0.64.11 took 1.004452 seconds
2025-12-11 05:03:25,556 - cephci - rbd_mirror_utils:76 - INFO - Output of command rbd mirror pool enable rep_pool_sEFsKwPzEz pool --cluster ceph:
2025-12-11 05:03:25,558 - cephci - ceph:1630 - INFO - Execute rbd mirror pool enable rep_pool_sEFsKwPzEz pool --cluster ceph on 10.0.65.194
2025-12-11 05:03:26,562 - cephci - ceph:1660 - INFO - Execution of rbd mirror pool enable rep_pool_sEFsKwPzEz pool --cluster ceph on 10.0.65.194 took 1.003329 seconds
2025-12-11 05:03:26,562 - cephci - rbd_mirror_utils:76 - INFO - Output of command rbd mirror pool enable rep_pool_sEFsKwPzEz pool --cluster ceph:
2025-12-11 05:03:26,564 - cephci - ceph:1630 - INFO - Execute rbd mirror pool peer bootstrap create --site-name ceph-rbd1 rep_pool_sEFsKwPzEz > /root/bootstrap_token <masked> --cluster ceph on 10.0.64.11
2025-12-11 05:03:27,569 - cephci - ceph:1660 - INFO - Execution of rbd mirror pool peer bootstrap create --site-name ceph-rbd1 rep_pool_sEFsKwPzEz > /root/bootstrap_token <masked> --cluster ceph on 10.0.64.11 took 1.003954 seconds
2025-12-11 05:03:27,570 - cephci - rbd_mirror_utils:76 - INFO - Output of command rbd mirror pool peer bootstrap create --site-name ceph-rbd1 rep_pool_sEFsKwPzEz > /root/bootstrap_token <masked> --cluster ceph:
2025-12-11 05:03:27,571 - cephci - ceph:1630 - INFO - Execute cat /root/bootstrap_token <masked> on 10.0.64.11
2025-12-11 05:03:28,576 - cephci - ceph:1660 - INFO - Execution of cat /root/bootstrap_token <masked> on 10.0.64.11 took 1.004168 seconds
2025-12-11 05:03:28,598 - paramiko.transport.sftp - sftp:169 - INFO - [chan 3] Opened sftp connection (server version 3)
2025-12-11 05:03:28,600 - cephci - ceph:1630 - INFO - Execute rbd mirror pool peer bootstrap import --site-name ceph-rbd2 rep_pool_sEFsKwPzEz /root/bootstrap_token <masked> --cluster ceph on 10.0.65.194
2025-12-11 05:03:29,604 - cephci - ceph:1660 - INFO - Execution of rbd mirror pool peer bootstrap import --site-name ceph-rbd2 rep_pool_sEFsKwPzEz /root/bootstrap_token <masked> --cluster ceph on 10.0.65.194 took 1.002985 seconds
2025-12-11 05:03:29,605 - cephci - rbd_mirror_utils:76 - INFO - Output of command rbd mirror pool peer bootstrap import --site-name ceph-rbd2 rep_pool_sEFsKwPzEz /root/bootstrap_token <masked> --cluster ceph:
2025-12-11 05:03:29,607 - cephci - ceph:1630 - INFO - Execute rbd mirror pool info rep_pool_sEFsKwPzEz --format=json --cluster ceph on 10.0.64.11
2025-12-11 05:03:30,612 - cephci - ceph:1660 - INFO - Execution of rbd mirror pool info rep_pool_sEFsKwPzEz --format=json --cluster ceph on 10.0.64.11 took 1.004708 seconds
2025-12-11 05:03:30,612 - cephci - rbd_mirror_utils:76 - INFO - Output of command rbd mirror pool info rep_pool_sEFsKwPzEz --format=json --cluster ceph: {"mode":"pool","site_name":"ceph-rbd1","mirror_uuid":"23d36a32-8be4-465c-ae63-f88a38f0f8ca","remote_namespace":"","peers":[{"uuid":"4828b146-5476-4504-80a5-4e4b788c3f3f","direction":"rx-tx","site_name":"ceph-rbd2","mirror_uuid":"","client_name":"client.rbd-mirror-peer"}]}

2025-12-11 05:03:30,614 - cephci - ceph:1630 - INFO - Execute rbd mirror pool info rep_pool_sEFsKwPzEz --format=json --cluster ceph on 10.0.65.194
2025-12-11 05:03:31,617 - cephci - ceph:1660 - INFO - Execution of rbd mirror pool info rep_pool_sEFsKwPzEz --format=json --cluster ceph on 10.0.65.194 took 1.00249 seconds
2025-12-11 05:03:31,617 - cephci - rbd_mirror_utils:76 - INFO - Output of command rbd mirror pool info rep_pool_sEFsKwPzEz --format=json --cluster ceph: {"mode":"pool","site_name":"ceph-rbd2","mirror_uuid":"94a872b7-d77c-4426-af74-cfd118e85ce7","remote_namespace":"","peers":[{"uuid":"a20aadfe-3545-4773-bdf4-59cb0bc937d4","direction":"rx-tx","site_name":"ceph-rbd1","mirror_uuid":"","client_name":"client.rbd-mirror-peer"}]}

2025-12-11 05:03:31,618 - cephci - rbd_mirror_utils:294 - INFO - Peers were successfully added
2025-12-11 05:03:51,638 - cephci - ceph:1630 - INFO - Execute rbd mirror pool status rep_pool_sEFsKwPzEz --format=json --cluster ceph on 10.0.64.11
2025-12-11 05:03:52,643 - cephci - ceph:1660 - INFO - Execution of rbd mirror pool status rep_pool_sEFsKwPzEz --format=json --cluster ceph on 10.0.64.11 took 1.003996 seconds
2025-12-11 05:03:52,643 - cephci - rbd_mirror_utils:76 - INFO - Output of command rbd mirror pool status rep_pool_sEFsKwPzEz --format=json --cluster ceph: {"summary":{"health":"WARNING","daemon_health":"WARNING","image_health":"OK","group_health":"OK","image_states":{"replaying":1},"group_states":{}}}

2025-12-11 05:03:52,644 - cephci - rbd_mirror_utils:399 - INFO - Health of rep_pool_sEFsKwPzEz pool in ceph cluster: WARNING
2025-12-11 05:04:12,662 - cephci - ceph:1630 - INFO - Execute rbd mirror pool status rep_pool_sEFsKwPzEz --format=json --cluster ceph on 10.0.64.11
2025-12-11 05:04:13,667 - cephci - ceph:1660 - INFO - Execution of rbd mirror pool status rep_pool_sEFsKwPzEz --format=json --cluster ceph on 10.0.64.11 took 1.004074 seconds
2025-12-11 05:04:13,667 - cephci - rbd_mirror_utils:76 - INFO - Output of command rbd mirror pool status rep_pool_sEFsKwPzEz --format=json --cluster ceph: {"summary":{"health":"OK","daemon_health":"OK","image_health":"OK","group_health":"OK","image_states":{"replaying":1},"group_states":{}}}

2025-12-11 05:04:13,668 - cephci - rbd_mirror_utils:399 - INFO - Health of rep_pool_sEFsKwPzEz pool in ceph cluster: OK
2025-12-11 05:04:33,689 - cephci - ceph:1630 - INFO - Execute rbd mirror pool status rep_pool_sEFsKwPzEz --format=json --cluster ceph on 10.0.65.194
2025-12-11 05:04:34,694 - cephci - ceph:1660 - INFO - Execution of rbd mirror pool status rep_pool_sEFsKwPzEz --format=json --cluster ceph on 10.0.65.194 took 1.004178 seconds
2025-12-11 05:04:34,695 - cephci - rbd_mirror_utils:76 - INFO - Output of command rbd mirror pool status rep_pool_sEFsKwPzEz --format=json --cluster ceph: {"summary":{"health":"OK","daemon_health":"OK","image_health":"OK","group_health":"OK","image_states":{"replaying":1},"group_states":{}}}

2025-12-11 05:04:34,695 - cephci - rbd_mirror_utils:399 - INFO - Health of rep_pool_sEFsKwPzEz pool in ceph cluster: OK
2025-12-11 05:04:34,697 - cephci - ceph:1630 - INFO - Execute rbd bench --io-type write --io-threads 16 --io-total 1G rep_pool_sEFsKwPzEz/rep_image_sILslFSGjM --cluster ceph on 10.0.64.11
2025-12-11 05:04:45,042 - cephci - ceph:1660 - INFO - Execution of rbd bench --io-type write --io-threads 16 --io-total 1G rep_pool_sEFsKwPzEz/rep_image_sILslFSGjM --cluster ceph on 10.0.64.11 took 10.344406 seconds
2025-12-11 05:04:45,043 - cephci - rbd_mirror_utils:76 - INFO - Output of command rbd bench --io-type write --io-threads 16 --io-total 1G rep_pool_sEFsKwPzEz/rep_image_sILslFSGjM --cluster ceph: bench  type write io_size 4096 io_threads 16 bytes 1073741824 pattern sequential
  SEC       OPS   OPS/SEC   BYTES/SEC
    1     29936   29892.3   117 MiB/s
    2     57184     28600   112 MiB/s
    3     86752   28884.2   113 MiB/s
    4    116640   29156.8   114 MiB/s
    5    141920   28375.9   111 MiB/s
    6    172048   27941.8   109 MiB/s
    7    198336   28230.4   110 MiB/s
    8    229136   28499.6   111 MiB/s
    9    258800   28437.7   111 MiB/s
elapsed: 9   ops: 262144   ops/sec: 28556   bytes/sec: 112 MiB/s

2025-12-11 05:06:05,119 - cephci - ceph:1630 - INFO - Execute rbd mirror image status rep_pool_sEFsKwPzEz/rep_image_sILslFSGjM --format=json --cluster ceph on 10.0.65.194
2025-12-11 05:06:05,122 - cephci - ceph:1630 - INFO - Execute rbd mirror image status rep_pool_sEFsKwPzEz/rep_image_sILslFSGjM --format=json --cluster ceph on 10.0.64.11
2025-12-11 05:06:06,127 - cephci - ceph:1660 - INFO - Execution of rbd mirror image status rep_pool_sEFsKwPzEz/rep_image_sILslFSGjM --format=json --cluster ceph on 10.0.65.194 took 1.0031 seconds
2025-12-11 05:06:06,128 - cephci - rbd_mirror_utils:76 - INFO - Output of command rbd mirror image status rep_pool_sEFsKwPzEz/rep_image_sILslFSGjM --format=json --cluster ceph: {"name":"rep_image_sILslFSGjM","global_id":"6cf75771-087e-40dd-9420-0600bc3a2c93","state":"up+replaying","description":"replaying, {\"bytes_per_second\":950820.0,\"entries_behind_primary\":243628,\"entries_per_second\":229.67,\"non_primary_position\":{\"entry_tid\":18517,\"object_number\":5,\"tag_tid\":2},\"primary_position\":{\"entry_tid\":262145,\"object_number\":65,\"tag_tid\":2},\"seconds_until_synced\":1060}","daemon_service":{"service_id":"26098","instance_id":"26404","daemon_id":"ceph-rbd2-cd-reg1-nm8agb-node5.gpbdmq","hostname":"ceph-rbd2-cd-reg1-nm8agb-node5"},"last_update":"2025-12-11 10:06:02","peer_sites":[{"site_name":"ceph-rbd1","mirror_uuid":"23d36a32-8be4-465c-ae63-f88a38f0f8ca","state":"up+stopped","description":"local image is primary","last_update":"2025-12-11 10:05:55"}]}

2025-12-11 05:06:06,129 - cephci - rbd_mirror_utils:430 - INFO - State of image rep_pool_sEFsKwPzEz/rep_image_sILslFSGjM : up+replaying,                             waiting for up+replaying
2025-12-11 05:06:06,130 - cephci - ceph:1660 - INFO - Execution of rbd mirror image status rep_pool_sEFsKwPzEz/rep_image_sILslFSGjM --format=json --cluster ceph on 10.0.64.11 took 1.004663 seconds
2025-12-11 05:06:06,130 - cephci - rbd_mirror_utils:76 - INFO - Output of command rbd mirror image status rep_pool_sEFsKwPzEz/rep_image_sILslFSGjM --format=json --cluster ceph: {"name":"rep_image_sILslFSGjM","global_id":"6cf75771-087e-40dd-9420-0600bc3a2c93","state":"up+stopped","description":"local image is primary","daemon_service":{"service_id":"24180","instance_id":"33463","daemon_id":"ceph-rbd1-cd-reg1-nm8agb-node5.wtwexi","hostname":"ceph-rbd1-cd-reg1-nm8agb-node5"},"last_update":"2025-12-11 10:05:55","peer_sites":[{"site_name":"ceph-rbd2","mirror_uuid":"94a872b7-d77c-4426-af74-cfd118e85ce7","state":"up+replaying","description":"replaying, {\"bytes_per_second\":950820.0,\"entries_behind_primary\":243628,\"entries_per_second\":229.67,\"non_primary_position\":{\"entry_tid\":18517,\"object_number\":5,\"tag_tid\":2},\"primary_position\":{\"entry_tid\":262145,\"object_number\":65,\"tag_tid\":2},\"seconds_until_synced\":1060}","last_update":"2025-12-11 10:06:01"}]}

2025-12-11 05:06:06,131 - cephci - rbd_mirror_utils:430 - INFO - State of image rep_pool_sEFsKwPzEz/rep_image_sILslFSGjM : up+stopped,                             waiting for up+stopped
2025-12-11 05:06:09,125 - cephci - ceph:1630 - INFO - Execute ceph osd pool create rbd_test_data_pool_XAIrHhWMUv 12 12 erasure  --cluster ceph on 10.0.64.11
2025-12-11 05:06:09,126 - cephci - ceph:1630 - INFO - Execute ceph osd pool create rbd_test_data_pool_XAIrHhWMUv 12 12 erasure  --cluster ceph on 10.0.65.194
2025-12-11 05:06:10,131 - cephci - ceph:1660 - INFO - Execution of ceph osd pool create rbd_test_data_pool_XAIrHhWMUv 12 12 erasure  --cluster ceph on 10.0.64.11 took 1.004728 seconds
2025-12-11 05:06:10,132 - cephci - ceph:1660 - INFO - Execution of ceph osd pool create rbd_test_data_pool_XAIrHhWMUv 12 12 erasure  --cluster ceph on 10.0.65.194 took 1.004906 seconds
2025-12-11 05:06:10,133 - cephci - rbd_mirror_utils:76 - INFO - Output of command ceph osd pool create rbd_test_data_pool_XAIrHhWMUv 12 12 erasure  --cluster ceph:
2025-12-11 05:06:10,133 - cephci - rbd_mirror_utils:76 - INFO - Output of command ceph osd pool create rbd_test_data_pool_XAIrHhWMUv 12 12 erasure  --cluster ceph:
2025-12-11 05:06:10,135 - cephci - ceph:1630 - INFO - Execute rbd pool init rbd_test_data_pool_XAIrHhWMUv --cluster ceph on 10.0.64.11
2025-12-11 05:06:10,136 - cephci - ceph:1630 - INFO - Execute rbd pool init rbd_test_data_pool_XAIrHhWMUv --cluster ceph on 10.0.65.194
2025-12-11 05:06:13,143 - cephci - ceph:1660 - INFO - Execution of rbd pool init rbd_test_data_pool_XAIrHhWMUv --cluster ceph on 10.0.65.194 took 3.006169 seconds
2025-12-11 05:06:13,144 - cephci - ceph:1660 - INFO - Execution of rbd pool init rbd_test_data_pool_XAIrHhWMUv --cluster ceph on 10.0.64.11 took 3.007127 seconds
2025-12-11 05:06:13,144 - cephci - rbd_mirror_utils:76 - INFO - Output of command rbd pool init rbd_test_data_pool_XAIrHhWMUv --cluster ceph:
2025-12-11 05:06:13,145 - cephci - rbd_mirror_utils:76 - INFO - Output of command rbd pool init rbd_test_data_pool_XAIrHhWMUv --cluster ceph:
2025-12-11 05:06:13,147 - cephci - ceph:1630 - INFO - Execute ceph osd pool set rbd_test_data_pool_XAIrHhWMUv allow_ec_overwrites true --cluster ceph on 10.0.65.194
2025-12-11 05:06:13,180 - cephci - ceph:1630 - INFO - Execute ceph osd pool set rbd_test_data_pool_XAIrHhWMUv allow_ec_overwrites true --cluster ceph on 10.0.64.11
2025-12-11 05:06:14,152 - cephci - ceph:1660 - INFO - Execution of ceph osd pool set rbd_test_data_pool_XAIrHhWMUv allow_ec_overwrites true --cluster ceph on 10.0.65.194 took 1.003664 seconds
2025-12-11 05:06:14,152 - cephci - rbd_mirror_utils:76 - INFO - Output of command ceph osd pool set rbd_test_data_pool_XAIrHhWMUv allow_ec_overwrites true --cluster ceph:
2025-12-11 05:06:14,154 - cephci - ceph:1630 - INFO - Execute ceph osd pool create ec_img_pool_BVFSMAMiFm 64 64 --cluster ceph on 10.0.65.194
2025-12-11 05:06:14,193 - cephci - ceph:1660 - INFO - Execution of ceph osd pool set rbd_test_data_pool_XAIrHhWMUv allow_ec_overwrites true --cluster ceph on 10.0.64.11 took 1.012614 seconds
2025-12-11 05:06:14,194 - cephci - rbd_mirror_utils:76 - INFO - Output of command ceph osd pool set rbd_test_data_pool_XAIrHhWMUv allow_ec_overwrites true --cluster ceph:
2025-12-11 05:06:14,235 - cephci - ceph:1630 - INFO - Execute ceph osd pool create ec_img_pool_BVFSMAMiFm 64 64 --cluster ceph on 10.0.64.11
2025-12-11 05:06:15,158 - cephci - ceph:1660 - INFO - Execution of ceph osd pool create ec_img_pool_BVFSMAMiFm 64 64 --cluster ceph on 10.0.65.194 took 1.003536 seconds
2025-12-11 05:06:15,159 - cephci - rbd_mirror_utils:76 - INFO - Output of command ceph osd pool create ec_img_pool_BVFSMAMiFm 64 64 --cluster ceph:
2025-12-11 05:06:15,160 - cephci - ceph:1630 - INFO - Execute rbd pool init ec_img_pool_BVFSMAMiFm --cluster ceph on 10.0.65.194
2025-12-11 05:06:15,239 - cephci - ceph:1660 - INFO - Execution of ceph osd pool create ec_img_pool_BVFSMAMiFm 64 64 --cluster ceph on 10.0.64.11 took 1.003756 seconds
2025-12-11 05:06:15,240 - cephci - rbd_mirror_utils:76 - INFO - Output of command ceph osd pool create ec_img_pool_BVFSMAMiFm 64 64 --cluster ceph:
2025-12-11 05:06:15,268 - cephci - ceph:1630 - INFO - Execute rbd pool init ec_img_pool_BVFSMAMiFm --cluster ceph on 10.0.64.11
2025-12-11 05:06:18,166 - cephci - ceph:1660 - INFO - Execution of rbd pool init ec_img_pool_BVFSMAMiFm --cluster ceph on 10.0.65.194 took 3.005509 seconds
2025-12-11 05:06:18,167 - cephci - rbd_mirror_utils:76 - INFO - Output of command rbd pool init ec_img_pool_BVFSMAMiFm --cluster ceph:
2025-12-11 05:06:18,274 - cephci - ceph:1660 - INFO - Execution of rbd pool init ec_img_pool_BVFSMAMiFm --cluster ceph on 10.0.64.11 took 3.005725 seconds
2025-12-11 05:06:18,275 - cephci - rbd_mirror_utils:76 - INFO - Output of command rbd pool init ec_img_pool_BVFSMAMiFm --cluster ceph:
2025-12-11 05:06:21,138 - cephci - ceph:1630 - INFO - Execute rbd create ec_img_pool_BVFSMAMiFm/ec_imageUNIRBXuNKO --size 1G --data-pool rbd_test_data_pool_XAIrHhWMUv --cluster ceph on 10.0.64.11
2025-12-11 05:06:22,143 - cephci - ceph:1660 - INFO - Execution of rbd create ec_img_pool_BVFSMAMiFm/ec_imageUNIRBXuNKO --size 1G --data-pool rbd_test_data_pool_XAIrHhWMUv --cluster ceph on 10.0.64.11 took 1.004547 seconds
2025-12-11 05:06:22,144 - cephci - rbd_mirror_utils:76 - INFO - Output of command rbd create ec_img_pool_BVFSMAMiFm/ec_imageUNIRBXuNKO --size 1G --data-pool rbd_test_data_pool_XAIrHhWMUv --cluster ceph:
2025-12-11 05:06:22,146 - cephci - ceph:1630 - INFO - Execute rbd feature enable ec_img_pool_BVFSMAMiFm/ec_imageUNIRBXuNKO journaling --cluster ceph on 10.0.64.11
2025-12-11 05:06:23,151 - cephci - ceph:1660 - INFO - Execution of rbd feature enable ec_img_pool_BVFSMAMiFm/ec_imageUNIRBXuNKO journaling --cluster ceph on 10.0.64.11 took 1.004152 seconds
2025-12-11 05:06:23,151 - cephci - rbd_mirror_utils:76 - INFO - Output of command rbd feature enable ec_img_pool_BVFSMAMiFm/ec_imageUNIRBXuNKO journaling --cluster ceph:
2025-12-11 05:06:23,153 - cephci - ceph:1630 - INFO - Execute rbd mirror pool enable ec_img_pool_BVFSMAMiFm pool --cluster ceph on 10.0.64.11
2025-12-11 05:06:24,158 - cephci - ceph:1660 - INFO - Execution of rbd mirror pool enable ec_img_pool_BVFSMAMiFm pool --cluster ceph on 10.0.64.11 took 1.00409 seconds
2025-12-11 05:06:24,159 - cephci - rbd_mirror_utils:76 - INFO - Output of command rbd mirror pool enable ec_img_pool_BVFSMAMiFm pool --cluster ceph:
2025-12-11 05:06:24,160 - cephci - ceph:1630 - INFO - Execute rbd mirror pool enable ec_img_pool_BVFSMAMiFm pool --cluster ceph on 10.0.65.194
2025-12-11 05:06:25,165 - cephci - ceph:1660 - INFO - Execution of rbd mirror pool enable ec_img_pool_BVFSMAMiFm pool --cluster ceph on 10.0.65.194 took 1.003564 seconds
2025-12-11 05:06:25,165 - cephci - rbd_mirror_utils:76 - INFO - Output of command rbd mirror pool enable ec_img_pool_BVFSMAMiFm pool --cluster ceph:
2025-12-11 05:06:25,167 - cephci - ceph:1630 - INFO - Execute rbd mirror pool peer bootstrap create --site-name ceph-rbd1 ec_img_pool_BVFSMAMiFm > /root/bootstrap_token <masked> --cluster ceph on 10.0.64.11
2025-12-11 05:06:26,172 - cephci - ceph:1660 - INFO - Execution of rbd mirror pool peer bootstrap create --site-name ceph-rbd1 ec_img_pool_BVFSMAMiFm > /root/bootstrap_token <masked> --cluster ceph on 10.0.64.11 took 1.00414 seconds
2025-12-11 05:06:26,173 - cephci - rbd_mirror_utils:76 - INFO - Output of command rbd mirror pool peer bootstrap create --site-name ceph-rbd1 ec_img_pool_BVFSMAMiFm > /root/bootstrap_token <masked> --cluster ceph:
2025-12-11 05:06:26,175 - cephci - ceph:1630 - INFO - Execute cat /root/bootstrap_token <masked> on 10.0.64.11
2025-12-11 05:06:27,179 - cephci - ceph:1660 - INFO - Execution of cat /root/bootstrap_token <masked> on 10.0.64.11 took 1.004156 seconds
2025-12-11 05:06:27,203 - paramiko.transport.sftp - sftp:169 - INFO - [chan 14] Opened sftp connection (server version 3)
2025-12-11 05:06:27,207 - cephci - ceph:1630 - INFO - Execute rbd mirror pool peer bootstrap import --site-name ceph-rbd2 ec_img_pool_BVFSMAMiFm /root/bootstrap_token <masked> --cluster ceph on 10.0.65.194
2025-12-11 05:06:28,211 - cephci - ceph:1660 - INFO - Execution of rbd mirror pool peer bootstrap import --site-name ceph-rbd2 ec_img_pool_BVFSMAMiFm /root/bootstrap_token <masked> --cluster ceph on 10.0.65.194 took 1.003614 seconds
2025-12-11 05:06:28,211 - cephci - rbd_mirror_utils:76 - INFO - Output of command rbd mirror pool peer bootstrap import --site-name ceph-rbd2 ec_img_pool_BVFSMAMiFm /root/bootstrap_token <masked> --cluster ceph:
2025-12-11 05:06:28,213 - cephci - ceph:1630 - INFO - Execute rbd mirror pool info ec_img_pool_BVFSMAMiFm --format=json --cluster ceph on 10.0.64.11
2025-12-11 05:06:29,218 - cephci - ceph:1660 - INFO - Execution of rbd mirror pool info ec_img_pool_BVFSMAMiFm --format=json --cluster ceph on 10.0.64.11 took 1.003977 seconds
2025-12-11 05:06:29,218 - cephci - rbd_mirror_utils:76 - INFO - Output of command rbd mirror pool info ec_img_pool_BVFSMAMiFm --format=json --cluster ceph: {"mode":"pool","site_name":"ceph-rbd1","mirror_uuid":"cf60d4fe-b236-4095-8ea2-4e528a6bd091","remote_namespace":"","peers":[{"uuid":"d38347d6-1aae-4989-b055-aa0a45940ab0","direction":"rx-tx","site_name":"ceph-rbd2","mirror_uuid":"","client_name":"client.rbd-mirror-peer"}]}

2025-12-11 05:06:29,220 - cephci - ceph:1630 - INFO - Execute rbd mirror pool info ec_img_pool_BVFSMAMiFm --format=json --cluster ceph on 10.0.65.194
2025-12-11 05:06:30,225 - cephci - ceph:1660 - INFO - Execution of rbd mirror pool info ec_img_pool_BVFSMAMiFm --format=json --cluster ceph on 10.0.65.194 took 1.003861 seconds
2025-12-11 05:06:30,225 - cephci - rbd_mirror_utils:76 - INFO - Output of command rbd mirror pool info ec_img_pool_BVFSMAMiFm --format=json --cluster ceph: {"mode":"pool","site_name":"ceph-rbd2","mirror_uuid":"60f3fb85-0c24-481c-acd8-c50084ae2bb0","remote_namespace":"","peers":[{"uuid":"a579891f-8f8c-43a2-ba09-6f4a909261cf","direction":"rx-tx","site_name":"ceph-rbd1","mirror_uuid":"","client_name":"client.rbd-mirror-peer"}]}

2025-12-11 05:06:30,226 - cephci - rbd_mirror_utils:294 - INFO - Peers were successfully added
2025-12-11 05:06:50,248 - cephci - ceph:1630 - INFO - Execute rbd mirror pool status ec_img_pool_BVFSMAMiFm --format=json --cluster ceph on 10.0.64.11
2025-12-11 05:06:51,253 - cephci - ceph:1660 - INFO - Execution of rbd mirror pool status ec_img_pool_BVFSMAMiFm --format=json --cluster ceph on 10.0.64.11 took 1.004239 seconds
2025-12-11 05:06:51,253 - cephci - rbd_mirror_utils:76 - INFO - Output of command rbd mirror pool status ec_img_pool_BVFSMAMiFm --format=json --cluster ceph: {"summary":{"health":"WARNING","daemon_health":"WARNING","image_health":"OK","group_health":"OK","image_states":{"replaying":1},"group_states":{}}}

2025-12-11 05:06:51,254 - cephci - rbd_mirror_utils:399 - INFO - Health of ec_img_pool_BVFSMAMiFm pool in ceph cluster: WARNING
2025-12-11 05:07:11,276 - cephci - ceph:1630 - INFO - Execute rbd mirror pool status ec_img_pool_BVFSMAMiFm --format=json --cluster ceph on 10.0.64.11
2025-12-11 05:07:12,281 - cephci - ceph:1660 - INFO - Execution of rbd mirror pool status ec_img_pool_BVFSMAMiFm --format=json --cluster ceph on 10.0.64.11 took 1.004189 seconds
2025-12-11 05:07:12,281 - cephci - rbd_mirror_utils:76 - INFO - Output of command rbd mirror pool status ec_img_pool_BVFSMAMiFm --format=json --cluster ceph: {"summary":{"health":"OK","daemon_health":"OK","image_health":"OK","group_health":"OK","image_states":{"replaying":1},"group_states":{}}}

2025-12-11 05:07:12,282 - cephci - rbd_mirror_utils:399 - INFO - Health of ec_img_pool_BVFSMAMiFm pool in ceph cluster: OK
2025-12-11 05:07:32,302 - cephci - ceph:1630 - INFO - Execute rbd mirror pool status ec_img_pool_BVFSMAMiFm --format=json --cluster ceph on 10.0.65.194
2025-12-11 05:07:33,306 - cephci - ceph:1660 - INFO - Execution of rbd mirror pool status ec_img_pool_BVFSMAMiFm --format=json --cluster ceph on 10.0.65.194 took 1.003773 seconds
2025-12-11 05:07:33,307 - cephci - rbd_mirror_utils:76 - INFO - Output of command rbd mirror pool status ec_img_pool_BVFSMAMiFm --format=json --cluster ceph: {"summary":{"health":"OK","daemon_health":"OK","image_health":"OK","group_health":"OK","image_states":{"replaying":1},"group_states":{}}}

2025-12-11 05:07:33,307 - cephci - rbd_mirror_utils:399 - INFO - Health of ec_img_pool_BVFSMAMiFm pool in ceph cluster: OK
2025-12-11 05:07:33,309 - cephci - ceph:1630 - INFO - Execute rbd bench --io-type write --io-threads 16 --io-total 1G ec_img_pool_BVFSMAMiFm/ec_imageUNIRBXuNKO --cluster ceph on 10.0.64.11
2025-12-11 05:07:43,178 - cephci - ceph:1660 - INFO - Execution of rbd bench --io-type write --io-threads 16 --io-total 1G ec_img_pool_BVFSMAMiFm/ec_imageUNIRBXuNKO --cluster ceph on 10.0.64.11 took 9.868418 seconds
2025-12-11 05:07:43,179 - cephci - rbd_mirror_utils:76 - INFO - Output of command rbd bench --io-type write --io-threads 16 --io-total 1G ec_img_pool_BVFSMAMiFm/ec_imageUNIRBXuNKO --cluster ceph: bench  type write io_size 4096 io_threads 16 bytes 1073741824 pattern sequential
  SEC       OPS   OPS/SEC   BYTES/SEC
    1     32912     32928   129 MiB/s
    2     65680   32684.6   128 MiB/s
    3     96000   32005.4   125 MiB/s
    4    120800   30181.4   118 MiB/s
    5    151216   30246.4   118 MiB/s
    6    183056   30022.8   117 MiB/s
    7    214608   29809.5   116 MiB/s
    8    247104   30220.8   118 MiB/s
elapsed: 8   ops: 262144   ops/sec: 30180.1   bytes/sec: 118 MiB/s

2025-12-11 05:09:03,252 - cephci - ceph:1630 - INFO - Execute rbd mirror image status ec_img_pool_BVFSMAMiFm/ec_imageUNIRBXuNKO --format=json --cluster ceph on 10.0.65.194
2025-12-11 05:09:03,253 - cephci - ceph:1630 - INFO - Execute rbd mirror image status ec_img_pool_BVFSMAMiFm/ec_imageUNIRBXuNKO --format=json --cluster ceph on 10.0.64.11
2025-12-11 05:09:04,261 - cephci - ceph:1660 - INFO - Execution of rbd mirror image status ec_img_pool_BVFSMAMiFm/ec_imageUNIRBXuNKO --format=json --cluster ceph on 10.0.65.194 took 1.004395 seconds
2025-12-11 05:09:04,262 - cephci - ceph:1660 - INFO - Execution of rbd mirror image status ec_img_pool_BVFSMAMiFm/ec_imageUNIRBXuNKO --format=json --cluster ceph on 10.0.64.11 took 1.00458 seconds
2025-12-11 05:09:04,262 - cephci - rbd_mirror_utils:76 - INFO - Output of command rbd mirror image status ec_img_pool_BVFSMAMiFm/ec_imageUNIRBXuNKO --format=json --cluster ceph: {"name":"ec_imageUNIRBXuNKO","global_id":"5c1a0a14-13b9-4c51-aace-df3ca5d6f2fe","state":"up+replaying","description":"replaying, {\"bytes_per_second\":809922.0,\"entries_behind_primary\":246527,\"entries_per_second\":195.63,\"non_primary_position\":{\"entry_tid\":15618,\"object_number\":2,\"tag_tid\":2},\"primary_position\":{\"entry_tid\":262145,\"object_number\":65,\"tag_tid\":2},\"seconds_until_synced\":1260}","daemon_service":{"service_id":"26098","instance_id":"26552","daemon_id":"ceph-rbd2-cd-reg1-nm8agb-node5.gpbdmq","hostname":"ceph-rbd2-cd-reg1-nm8agb-node5"},"last_update":"2025-12-11 10:09:02","peer_sites":[{"site_name":"ceph-rbd1","mirror_uuid":"cf60d4fe-b236-4095-8ea2-4e528a6bd091","state":"up+stopped","description":"local image is primary","last_update":"2025-12-11 10:08:55"}]}

2025-12-11 05:09:04,263 - cephci - rbd_mirror_utils:76 - INFO - Output of command rbd mirror image status ec_img_pool_BVFSMAMiFm/ec_imageUNIRBXuNKO --format=json --cluster ceph: {"name":"ec_imageUNIRBXuNKO","global_id":"5c1a0a14-13b9-4c51-aace-df3ca5d6f2fe","state":"up+stopped","description":"local image is primary","daemon_service":{"service_id":"24180","instance_id":"33562","daemon_id":"ceph-rbd1-cd-reg1-nm8agb-node5.wtwexi","hostname":"ceph-rbd1-cd-reg1-nm8agb-node5"},"last_update":"2025-12-11 10:08:55","peer_sites":[{"site_name":"ceph-rbd2","mirror_uuid":"60f3fb85-0c24-481c-acd8-c50084ae2bb0","state":"up+replaying","description":"replaying, {\"bytes_per_second\":809922.0,\"entries_behind_primary\":246527,\"entries_per_second\":195.63,\"non_primary_position\":{\"entry_tid\":15618,\"object_number\":2,\"tag_tid\":2},\"primary_position\":{\"entry_tid\":262145,\"object_number\":65,\"tag_tid\":2},\"seconds_until_synced\":1260}","last_update":"2025-12-11 10:09:01"}]}

2025-12-11 05:09:04,264 - cephci - rbd_mirror_utils:430 - INFO - State of image ec_img_pool_BVFSMAMiFm/ec_imageUNIRBXuNKO : up+replaying,                             waiting for up+replaying
2025-12-11 05:09:04,264 - cephci - rbd_mirror_utils:430 - INFO - State of image ec_img_pool_BVFSMAMiFm/ec_imageUNIRBXuNKO : up+stopped,                             waiting for up+stopped
2025-12-11 05:09:07,257 - cephci - test_expand_or_shrink_img_at_secondary:103 - INFO - Executing test on replicated pool
2025-12-11 05:09:07,257 - cephci - test_expand_or_shrink_img_at_secondary:27 - INFO - Trying to shrink secondary image
2025-12-11 05:09:07,259 - cephci - rbd_mirror_utils:1074 - INFO - Resizing image rep_pool_sEFsKwPzEz/rep_image_sILslFSGjM to size 10M
2025-12-11 05:09:07,261 - cephci - ceph:1630 - INFO - Execute rbd bench --io-type write --io-threads 16 --io-total 200M rep_pool_sEFsKwPzEz/rep_image_sILslFSGjM --cluster ceph on 10.0.64.11
2025-12-11 05:09:07,262 - cephci - ceph:1630 - INFO - Execute rbd resize rep_pool_sEFsKwPzEz/rep_image_sILslFSGjM -s 10M --allow-shrink --cluster ceph on 10.0.65.194
2025-12-11 05:09:08,266 - cephci - ceph:1660 - INFO - Execution of rbd resize rep_pool_sEFsKwPzEz/rep_image_sILslFSGjM -s 10M --allow-shrink --cluster ceph on 10.0.65.194 took 1.003756 seconds
2025-12-11 05:09:09,304 - cephci - ceph:1660 - INFO - Execution of rbd bench --io-type write --io-threads 16 --io-total 200M rep_pool_sEFsKwPzEz/rep_image_sILslFSGjM --cluster ceph on 10.0.64.11 took 2.041825 seconds
2025-12-11 05:09:09,305 - cephci - rbd_mirror_utils:76 - INFO - Output of command rbd bench --io-type write --io-threads 16 --io-total 200M rep_pool_sEFsKwPzEz/rep_image_sILslFSGjM --cluster ceph: bench  type write io_size 4096 io_threads 16 bytes 209715200 pattern sequential
  SEC       OPS   OPS/SEC   BYTES/SEC
    1     27136     27152   106 MiB/s
elapsed: 1   ops: 51200   ops/sec: 27780.8   bytes/sec: 109 MiB/s
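For reference, the bench summary's BYTES/SEC column follows directly from ops/sec at the fixed io_size of 4096 bytes shown in the bench header; a quick arithmetic check against the figures in this log:

```python
# rbd bench reports ops/sec at a fixed io_size (4096 bytes in this run)
ops_per_sec = 27780.8  # summary value from the log above
io_size = 4096

mib_per_sec = ops_per_sec * io_size / 2**20
print(round(mib_per_sec))  # 109, matching the reported "109 MiB/s"
```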

2025-12-11 05:09:11,264 - ceph.parallel - parallel:142 - ERROR - Encountered an exception during parallel execution.
Traceback (most recent call last):
  File "/home/cdommeti/cephci/tests/rbd_mirror/rbd_mirror_utils.py", line 1077, in resize_image
    self.exec_cmd(sudo=True, cmd=cmd)
  File "/home/cdommeti/cephci/tests/rbd_mirror/rbd_mirror_utils.py", line 71, in exec_cmd
    out, err = node.exec_command(
  File "/home/cdommeti/cephci/ceph/ceph.py", line 1754, in exec_command
    raise CommandFailed(
Resizing image: 0% complete...failed.ep_pool_sEFsKwPzEz/rep_image_sILslFSGjM -s 10M --allow-shrink --cluster ceph returned
rbd: resize error: (30) Read-only file system
 and code 30 on 10.0.65.194

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/cdommeti/cephci/ceph/parallel.py", line 140, in __exit__
    self._results.append(_f.result())
  File "/usr/lib/python3.10/concurrent/futures/_base.py", line 451, in result
    return self.__get_result()
  File "/usr/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
    raise self._exception
  File "/usr/lib/python3.10/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/home/cdommeti/cephci/tests/rbd_mirror/rbd_mirror_utils.py", line 1081, in resize_image
    raise IOonSecondaryError("Detected I/O Operation on secondary")
tests.rbd.exceptions.IOonSecondaryError: Detected I/O Operation on secondary
2025-12-11 05:09:11,267 - cephci - test_expand_or_shrink_img_at_secondary:42 - INFO - Shrinking secondary image has failed as expected
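The "(30) Read-only file system" in the traceback above is errno 30, i.e. EROFS: the secondary image is read-only while it is replaying, so the shrink is rejected as expected. A small sketch of that interpretation (the exit-code value is taken from the log; the check itself is illustrative, not the cephci framework's actual code):

```python
import errno
import os

rc = 30  # exit code reported by the failed rbd resize on the secondary

# errno 30 is EROFS: writes to a non-primary (replaying) image are refused
assert rc == errno.EROFS
print(os.strerror(rc))  # "Read-only file system", as in the rbd error text
```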
2025-12-11 05:09:11,268 - cephci - test_expand_or_shrink_img_at_secondary:50 - INFO - Trying to expand secondary image
2025-12-11 05:09:11,269 - cephci - rbd_mirror_utils:1074 - INFO - Resizing image rep_pool_sEFsKwPzEz/rep_image_sILslFSGjM to size 3G
2025-12-11 05:09:11,270 - cephci - ceph:1630 - INFO - Execute rbd bench --io-type write --io-threads 16 --io-total 200M rep_pool_sEFsKwPzEz/rep_image_sILslFSGjM --cluster ceph on 10.0.64.11
2025-12-11 05:09:11,272 - cephci - ceph:1630 - INFO - Execute rbd resize rep_pool_sEFsKwPzEz/rep_image_sILslFSGjM -s 3G --allow-shrink --cluster ceph on 10.0.65.194
2025-12-11 05:09:12,276 - cephci - ceph:1660 - INFO - Execution of rbd resize rep_pool_sEFsKwPzEz/rep_image_sILslFSGjM -s 3G --allow-shrink --cluster ceph on 10.0.65.194 took 1.003803 seconds
2025-12-11 05:09:13,398 - cephci - ceph:1660 - INFO - Execution of rbd bench --io-type write --io-threads 16 --io-total 200M rep_pool_sEFsKwPzEz/rep_image_sILslFSGjM --cluster ceph on 10.0.64.11 took 2.126373 seconds
2025-12-11 05:09:13,399 - cephci - rbd_mirror_utils:76 - INFO - Output of command rbd bench --io-type write --io-threads 16 --io-total 200M rep_pool_sEFsKwPzEz/rep_image_sILslFSGjM --cluster ceph: bench  type write io_size 4096 io_threads 16 bytes 209715200 pattern sequential
  SEC       OPS   OPS/SEC   BYTES/SEC
    1     29184     29200   114 MiB/s
elapsed: 1   ops: 51200   ops/sec: 27571.4   bytes/sec: 108 MiB/s

The error messages seen in the rbd-mirror daemon log are:

 11 10:09:46 ceph-rbd2-cd-reg1-nm8agb-node5 systemd-coredump[741612]: [🡕] Process 738283 (rbd-mirror) of user 167 dumped core.

                                                                         Module libpcre2-8.so.0 from rpm pcre2-10.44-1.el10.3.x86_64
                                                                         Module libzstd.so.1 from rpm zstd-1.5.5-9.el10.x86_64
                                                                         Module libp11-kit.so.0 from rpm p11-kit-0.25.5-7.el10.x86_64
                                                                         Module libk5crypto.so.3 from rpm krb5-1.21.3-8.el10_0.x86_64
                                                                         Module libkrb5.so.3 from rpm krb5-1.21.3-8.el10_0.x86_64


[root@ceph-rbd2-cd-reg1-nm8agb-node5 ~]# coredumpctl info
           PID: 741756 (rbd-mirror)
           UID: 167 (167)
           GID: 167 (167)
        Signal: 6 (ABRT)
     Timestamp: Thu 2025-12-11 10:19:22 UTC (39min ago)
  Command Line: /usr/bin/rbd-mirror -n client.rbd-mirror.ceph-rbd2-cd-reg1-nm8agb-node5.gpbdmq -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false
    Executable: /usr/bin/rbd-mirror
 Control Group: /system.slice/system-ceph\x2d985dd6ee\x2dd4dc\x2d11f0\x2dab19\x2dfa163e67f4d1.slice/ceph-985dd6ee-d4dc-11f0-ab19-fa163e67f4d1.gpbdmq.service/libpod-payload-43e156b4af631e271826e2f4bb5c51>
          Unit: ceph-985dd6ee-d4dc-11f0-ab19-fa163e67f4d1.gpbdmq.service
         Slice: system-ceph\x2d985dd6ee\x2dd4dc\x2d11f0\x2dab19\x2dfa163e67f4d1.slice
       Boot ID: 42467087616740e1a986b30ffb107b4b
    Machine ID: 98587d78e5cb4117a738658c77442e6d
      Hostname: ceph-rbd2-cd-reg1-nm8agb-node5
       Storage: /var/lib/systemd/coredump/core.rbd-mirror.167.42467087616740e1a986b30ffb107b4b.741756.1765448362000000.zst (present)
  Size on Disk: 2.2M
       Message: Process 741756 (rbd-mirror) of user 167 dumped core.

                Module libpcre2-8.so.0 from rpm pcre2-10.44-1.el10.3.x86_64
                Module libzstd.so.1 from rpm zstd-1.5.5-9.el10.x86_64
                Module liblzma.so.5 from rpm xz-5.6.2-4.el10_0.x86_64
                Module libp11-kit.so.0 from rpm p11-kit-0.25.5-7.el10.x86_64
                Module libk5crypto.so.3 from rpm krb5-1.21.3-8.el10_0.x86_64
                Module libkrb5.so.3 from rpm krb5-1.21.3-8.el10_0.x86_64
                Module libnl-3.so.200 from rpm libnl3-3.11.0-1.el10.x86_64
                Module libnl-route-3.so.200 from rpm libnl3-3.11.0-1.el10.x86_64
                Module libxml2.so.2 from rpm libxml2-2.12.5-9.el10_0.x86_64
                Module libgnutls.so.30 from rpm gnutls-3.8.10-2.el10.x86_64
                Module libcurl.so.4 from rpm curl-8.12.1-2.el10.x86_64
                Module libz.so.1 from rpm zlib-ng-2.2.3-2.el10.x86_64
                Module libibverbs.so.1 from rpm rdma-core-57.0-2.el10.x86_64
                Module libudev.so.1 from rpm systemd-257-13.el10.x86_64
                Module libcap.so.2 from rpm libcap-2.69-7.el10.x86_64
                Module libcrypto.so.3 from rpm openssl-3.5.1-3.el10.x86_64
                Module libssl.so.3 from rpm openssl-3.5.1-3.el10.x86_64
                Stack trace of thread 57:
                #0  0x00007f84ccd7702c __handle_registered_modifier_wc (libc.so.6 + 0x60dec)

                Stack trace of thread 13:
                #0  0x00007f84ccded0bf fpathconf (libc.so.6 + 0xd6e7f)

                Stack trace of thread 3:
                #0  0x00007f84ccd7237a __printf_fp_buffer_1.isra.0 (libc.so.6 + 0x5c13a)

                Stack trace of thread 12:
                #0  0x00007f84ccded0bf fpathconf (libc.so.6 + 0xd6e7f)
                #1  0x19503ab951445300 n/a (n/a + 0x0)
                #2  0x0000000100000000 n/a (n/a + 0x0)
                ELF object binary architecture: AMD x86-64

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. Configure two-site RBD mirroring with a journal-based mirror pool.
2. Run the automated test test_expand_or_shrink_img_at_secondary.py, which attempts resize operations on the secondary image while rbd bench writes to the primary.
3. Check the host running the rbd-mirror daemon on the secondary site for core dumps.

Actual results:
The rbd-mirror daemon on the secondary site dumps core (signal 6, SIGABRT).

Expected results:
Mirror operations complete without the rbd-mirror daemon crashing.

Additional info:

Comment 12 errata-xmlrpc 2026-01-29 07:04:36 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage 9.0 Security and Enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2026:1536