Bug 2032764 - [RBD]- Failed to start IOs when SSD mode persistent write back cache is enabled in ceph version 16.2.7-3.el8cp
Summary: [RBD]- Failed to start IOs when SSD mode persistent write back cache is enabl...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RBD
Version: 5.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 5.1
Assignee: Ilya Dryomov
QA Contact: Preethi
URL:
Whiteboard:
Depends On:
Blocks: 1878560
 
Reported: 2021-12-15 07:19 UTC by Preethi
Modified: 2022-04-04 10:23 UTC
CC List: 5 users

Fixed In Version: ceph-16.2.7-11.el8cp
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-04-04 10:23:34 UTC
Embargoed:


Attachments: none


Links
Red Hat Issue Tracker RHCEPH-2794 (last updated 2021-12-15 07:25:13 UTC)
Red Hat Product Errata RHSA-2022:1174 (last updated 2022-04-04 10:23:51 UTC)

Description Preethi 2021-12-15 07:19:33 UTC
Description of problem:
Failed to start IOs when SSD mode persistent write back cache is enabled in ceph version 16.2.7-3.el8cp

Version-Release number of selected component (if applicable):
16.2.7-3.el8cp

How reproducible:

1) After updating the conf file to SSD mode as below (tried from both the CLI and the conf file):

[root@plena007 log]# cat /etc/ceph/ceph.conf
# minimal ceph.conf for d6e5c458-0f10-11ec-9663-002590fc25a4
[global]
fsid = d6e5c458-0f10-11ec-9663-002590fc25a4
mon_host = [v2:10.8.128.31:3300/0,v1:10.8.128.31:6789/0]
[client]
rbd_cache = false
rbd_persistent_cache_mode = ssd
rbd_plugins = pwl_cache
rbd_persistent_cache_size = 1073741824
rbd_persistent_cache_path = /mnt/nvme/
Started IOs using rbd bench and fio, and hit the error shown below.

Steps performed to mount:
1. A working Ceph cluster
2. A client node with an NVMe SSD, configured from the CLI
3. # ceph config set client rbd_persistent_cache_mode SSD
4. # ceph config set client rbd_plugins pwl_cache
5. Run IOs after mounting and check the behaviour (a CLI-only sketch of the full set of options follows this list)
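
For reference, a CLI-only sketch of the full set of client options used in this report (the 1 GiB cache size and the /mnt/nvme path are taken from the description; this is a sketch, not a command transcript from the report):

# ceph config set client rbd_cache false
# ceph config set client rbd_plugins pwl_cache
# ceph config set client rbd_persistent_cache_mode ssd
# ceph config set client rbd_persistent_cache_size 1073741824
# ceph config set client rbd_persistent_cache_path /mnt/nvme

The persistent write-back cache only engages on images that have the exclusive-lock feature enabled (the default for newly created images).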

Steps to enable DAX:

mount -o dax=always /dev/pmem0 <mountpoint>
Then set rbd_persistent_cache_path to the mountpoint:
rbd config global set global rbd_persistent_cache_path <path>
After mounting, make sure that DAX is indeed enabled:
check dmesg for something like "EXT4-fs (pmem0): DAX enabled ...". A consolidated sketch of these steps follows.
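
A consolidated sketch of the DAX setup above, assuming the pmem device is /dev/pmem0, the mount point is /mnt/pmem, and the device still needs a filesystem (all placeholders, not values confirmed by this report):

# mkfs.ext4 /dev/pmem0
# mount -o dax=always /dev/pmem0 /mnt/pmem
# dmesg | grep -i dax        (expect something like "EXT4-fs (pmem0): DAX enabled ...")
# rbd config global set global rbd_persistent_cache_path /mnt/pmem

Note that DAX applies to the pmem/RWL cache mode; the SSD mode exercised in this bug only needs a regular filesystem on the cache device.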

Actual results: IOs fail to start after upgrading Ceph to the latest version.

Below error -
[root@plena007 ubuntu]# rbd bench-write image2 --pool=test1 --io-threads=1
rbd: bench-write is deprecated, use rbd bench --io-type write ...
2021-12-14T07:25:30.666+0000 7fc3327fc700 -1 librbd::exclusive_lock::PostAcquireRequest: 0x7fc32c037000 handle_process_plugin_acquire_lock: failed to process plugins: (2) No such file or directory
rbd: failed to flush: 2021-12-14T07:25:30.669+0000 7fc3327fc700 -1 librbd::exclusive_lock::ImageDispatch: 0x7fc314002b60 handle_acquire_lock: failed to acquire exclusive lock: (2) No such file or directory
2021-12-14T07:25:30.669+0000 7fc3327fc700 -1 librbd::io::AioCompletion: 0x559cca568320 fail: (2) No such file or directory
(2) No such file or directory
bench failed: (2) No such file or directory

FIO output -
[root@plena007 ubuntu]# fio --name=test-1 --ioengine=rbd --pool=test1 --rbdname=image2 --numjobs=1 --rw=write --bs=4k --iodepth=1 --fsync=32 --runtime=480 --time_based --group_reporting --ramp_time=120
test-1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=rbd, iodepth=1
fio-3.19
Starting 1 process
fio: io_u error on file test-1.0.0: No such file or directory: write offset=0, buflen=4096
fio: pid=1197333, err=2/file:io_u.c:1803, func=io_u error, error=No such file or directory

test-1: (groupid=0, jobs=1): err= 2 (file:io_u.c:1803, func=io_u error, error=No such file or directory): pid=1197333: Tue Dec 14 07:26:47 2021
cpu : usr=0.00%, sys=0.00%, ctx=2, majf=0, minf=5
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=0,1,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):

Disk stats (read/write):
sda: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
[root@plena007 ubuntu]#

Expected results: IOs should run successfully, and data should be written to the cache before it is written to the cluster. A rough way to verify that the cache is engaged is sketched below.
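
For a rough confirmation that the cache is engaged (a sketch, not part of the original report; exact output can vary by release): while a client holds the image open, a cache file should appear under rbd_persistent_cache_path, and rbd status reports the image cache state.

# ls -lh /mnt/nvme/
# rbd status test1/image2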


Additional info:
magna031 - cluster node
plena007 - client node where SSD mode is enabled

NOTE: IOs ran successfully in SSD mode with ceph version 16.2.6-38, but the cache status was not showing as SSD. When we upgraded to the latest version to re-check, IOs were not triggering at all.



Upstream tracker - https://tracker.ceph.com/issues/53613

Comment 1 Preethi 2021-12-22 10:48:25 UTC
After setting the debug parameter debug_rbd_pwl to 20, captured the logs at http://pastebin.test.redhat.com/1017506. The issue is still seen in the downstream ceph version 16.2.7-3.el8cp.

[root@plena007 nvme1]# ceph config get client
WHO MASK LEVEL OPTION VALUE RO
global basic container_image registry-proxy.engineering.redhat.com/rh-osbs/rhceph@sha256:3a98e9409fe47c4b96e15e49fed66872c7376ba3bac30cb79367c1355c9f2bf7 *
client advanced debug_rbd 30/30
client advanced debug_rbd_pwl 20/20
global basic log_to_file true
global basic rbd_compression_hint compressible
global dev rbd_config_pool_override_update_timestamp 1636975961
client advanced rbd_persistent_cache_mode ssd *
global advanced rbd_persistent_cache_path /mnt/pmem *
client advanced rbd_plugins pwl_cache *
[root@plena007 nvme1]# cat /etc/ceph/ceph.conf
# minimal ceph.conf for d6e5c458-0f10-11ec-9663-002590fc25a4
[global]
fsid = d6e5c458-0f10-11ec-9663-002590fc25a4
mon_host = [v2:10.8.128.31:3300/0,v1:10.8.128.31:6789/0]
[client]
rbd_cache = false
rbd_persistent_cache_mode = ssd
rbd_plugins = pwl_cache
rbd_persistent_cache_size = 1073741824
rbd_persistent_cache_path = /mnt/nvme1/
[root@plena007 nvme1]#

NOTE: Updating ceph.conf as above does not show the cache type as SSD; instead we see RWL. We need to set it via the CLI to get the cache mode/type reported as SSD. However, the issue is still seen when IO is started. A quick way to check the effective mode is sketched below.
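
A quick way to check which cache mode is actually in effect at each configuration level (sketched here for illustration; not from the original report):

# ceph config get client rbd_persistent_cache_mode
# rbd config global get global rbd_persistent_cache_mode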


Comment 2 Preethi 2021-12-22 10:59:52 UTC
(In reply to Preethi from comment #1)

Additional logs: http://pastebin.test.redhat.com/1017510

Comment 3 Preethi 2021-12-22 11:27:38 UTC
Tried with a different cluster, creating a different pool and image; the same issue is seen:
[root@plena007 nvme1]# rbd bench-write image2 --pool=test1 --io-threads=1
rbd: bench-write is deprecated, use rbd bench --io-type write ...
2021-12-22T11:25:48.210+0000 7f78a12fb700 -1 librbd::exclusive_lock::PostAcquireRequest: 0x7f789402e000 handle_process_plugin_acquire_lock: failed to process plugins: (2) No such file or directory
2021-12-22T11:25:48.218+0000 7f78a1afc700 -1 librbd::exclusive_lock::ImageDispatch: 0x7f7880002c20 handle_acquire_lock: failed to acquire exclusive lock: (2) No such file or directoryrbd: failed to flush: 
(2) No such file or directory
2021-12-22T11:25:48.218+0000 7f78a1afc700 -1 librbd::io::AioCompletion: 0x55aac58c3710 fail: (2) No such file or directory
bench failed: (2) No such file or directory
[root@plena007 nvme1]# 

NOTE: rbd create with thick provisioning also throws an error (an illustrative command follows the link below):
http://pastebin.test.redhat.com/1017515
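
The exact thick-provision command is not included in the report; for illustration only, a typical invocation would be (the image name here is hypothetical):

# rbd create --size 10G --thick-provision test1/image3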

Comment 11 Preethi 2021-12-23 07:27:37 UTC
We can still see the issue with the latest ceph version, 16.2.7-11.el8cp.

[root@plena007 nvme1]# ceph config get client
WHO     MASK  LEVEL     OPTION                                     VALUE                                                                                                                         RO
global        basic     container_image                            registry-proxy.engineering.redhat.com/rh-osbs/rhceph@sha256:898b39a8ce0e88868ba897c2e1617520840293c6ba281250ef2f90cdd09cf0bb  * 
client        advanced  debug_rbd                                  30/30                                                                                                                           
client        advanced  debug_rbd_pwl                              20/20                                                                                                                           
global        basic     log_to_file                                true                                                                                                                            
global        basic     rbd_compression_hint                       compressible                                                                                                                    
global        dev       rbd_config_pool_override_update_timestamp  1636975961                                                                                                                      
client        advanced  rbd_persistent_cache_mode                  ssd                                                                                                                           * 
global        advanced  rbd_persistent_cache_path                  /mnt/nvme1                                                                                                                    * 
client        advanced  rbd_plugins                                pwl_cache 

Output is copied here ---> http://pastebin.test.redhat.com/1017689

[root@plena007 nvme1]# ls
lost+found  tt.txt
[root@plena007 nvme1]#  fio --name=test-1 --ioengine=rbd --pool=test1 --rbdname=image1 --numjobs=1 --rw=write --bs=4k --iodepth=1 --fsync=32 --runtime=480 --time_based --group_reporting --ramp_time=120
test-1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=rbd, iodepth=1
fio-3.19
Starting 1 process
fio: io_u error on file test-1.0.0: No such file or directory: write offset=0, buflen=4096
fio: pid=2758896, err=2/file:io_u.c:1803, func=io_u error, error=No such file or directory

test-1: (groupid=0, jobs=1): err= 2 (file:io_u.c:1803, func=io_u error, error=No such file or directory): pid=2758896: Thu Dec 23 07:18:21 2021
  cpu          : usr=0.00%, sys=0.00%, ctx=2, majf=0, minf=4
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,1,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):

Disk stats (read/write):
  nvme1n1: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
[root@plena007 nvme1]# rbd bench-write image1 --pool=test1 --io-threads=1
rbd: bench-write is deprecated, use rbd bench --io-type write ...
2021-12-23T07:18:34.155+0000 7f80e6907200 20 librbd::asio::ContextWQ: 0x5612ef1aa920 ContextWQ: 
2021-12-23T07:18:34.155+0000 7f80e6907200 20 librbd::AsioEngine: 0x5612ef1c34b0 AsioEngine: 
2021-12-23T07:18:34.155+0000 7f80e6907200 10 librbd::ImageCtx: 0x5612ef17ea00 ImageCtx: image_name=image1, image_id=
2021-12-23T07:18:34.155+0000 7f80e6907200  5 librbd::io::Dispatcher: 0x5612ef1c69d0 register_dispatch: dispatch_layer=10
2021-12-23T07:18:34.155+0000 7f80e6907200  5 librbd::io::QueueImageDispatch: 0x5612ef1c5450 QueueImageDispatch: ictx=0x5612ef17ea00
2021-12-23T07:18:34.155+0000 7f80e6907200  5 librbd::io::Dispatcher: 0x5612ef1c69d0 register_dispatch: dispatch_layer=1
2021-12-23T07:18:34.155+0000 7f80e6907200  5 librbd::io::QosImageDispatch: 0x5612ef1c51d0 QosImageDispatch: ictx=0x5612ef17ea00
2021-12-23T07:18:34.155+0000 7f80e6907200  5 librbd::io::Dispatcher: 0x5612ef1c69d0 register_dispatch: dispatch_layer=2
2021-12-23T07:18:34.155+0000 7f80e6907200  5 librbd::io::RefreshImageDispatch: 0x5612ef1c5eb0 RefreshImageDispatch: ictx=0x5612ef17ea00
2021-12-23T07:18:34.155+0000 7f80e6907200  5 librbd::io::Dispatcher: 0x5612ef1c69d0 register_dispatch: dispatch_layer=4
2021-12-23T07:18:34.155+0000 7f80e6907200  5 librbd::io::WriteBlockImageDispatch: 0x5612eeff8c00 WriteBlockImageDispatch: ictx=0x5612ef17ea00
2021-12-23T07:18:34.155+0000 7f80e6907200  5 librbd::io::Dispatcher: 0x5612ef1c69d0 register_dispatch: dispatch_layer=7
2021-12-23T07:18:34.155+0000 7f80e6907200  5 librbd::io::Dispatcher: 0x5612ef1c3d80 register_dispatch: dispatch_layer=6
2021-12-23T07:18:34.155+0000 7f80e6907200 20 librbd::ImageState: 0x5612ef1c3d00 open
2021-12-23T07:18:34.155+0000 7f80e6907200 10 librbd::ImageState: 0x5612ef1c3d00 0x5612ef1c3d00 send_open_unlock
2021-12-23T07:18:34.155+0000 7f80e6907200 10 librbd::image::OpenRequest: 0x5612ef1c5c50 send_v2_detect_header
2021-12-23T07:18:34.156+0000 7f80c77fe700 10 librbd::image::OpenRequest: handle_v2_detect_header: r=0
2021-12-23T07:18:34.156+0000 7f80c77fe700 10 librbd::image::OpenRequest: 0x5612ef1c5c50 send_v2_get_id
2021-12-23T07:18:34.156+0000 7f80c7fff700 10 librbd::image::OpenRequest: handle_v2_get_id: r=0
2021-12-23T07:18:34.156+0000 7f80c7fff700 10 librbd::image::OpenRequest: 0x5612ef1c5c50 send_v2_get_initial_metadata
2021-12-23T07:18:34.158+0000 7f80c77fe700 10 librbd::image::OpenRequest: handle_v2_get_initial_metadata: r=0
2021-12-23T07:18:34.158+0000 7f80c77fe700 10 librbd::image::OpenRequest: 0x5612ef1c5c50 send_v2_get_create_timestamp
2021-12-23T07:18:34.158+0000 7f80c7fff700 10 librbd::image::OpenRequest: 0x5612ef1c5c50 handle_v2_get_create_timestamp: r=0
2021-12-23T07:18:34.158+0000 7f80c7fff700 10 librbd::image::OpenRequest: 0x5612ef1c5c50 send_v2_get_access_modify_timestamp
2021-12-23T07:18:34.158+0000 7f80c77fe700 10 librbd::image::OpenRequest: 0x5612ef1c5c50 handle_v2_get_access_modify_timestamp: r=0
2021-12-23T07:18:34.158+0000 7f80c77fe700 10 librbd::image::OpenRequest: 0x5612ef1c5c50 send_v2_get_data_pool
2021-12-23T07:18:34.159+0000 7f80c7fff700 10 librbd::image::OpenRequest: 0x5612ef1c5c50 handle_v2_get_data_pool: r=0
2021-12-23T07:18:34.159+0000 7f80c7fff700 10 librbd::ImageCtx: init_layout stripe_unit 4194304 stripe_count 1 object_size 4194304 prefix rbd_data.12049860d0dc03 format rbd_data.12049860d0dc03.%016llx
2021-12-23T07:18:34.159+0000 7f80c7fff700 10 librbd::image::OpenRequest: 0x5612ef1c5c50 send_refresh
2021-12-23T07:18:34.159+0000 7f80c7fff700 10 librbd::ConfigWatcher: init: 
2021-12-23T07:18:34.159+0000 7f80c7fff700 10 librbd::image::RefreshRequest: 0x7f80b8010bc0 send_v2_get_mutable_metadata
2021-12-23T07:18:34.159+0000 7f80c77fe700 10 librbd::image::RefreshRequest: 0x7f80b8010bc0 handle_v2_get_mutable_metadata: r=0
2021-12-23T07:18:34.159+0000 7f80c77fe700 10 librbd::image::RefreshRequest: 0x7f80b8010bc0 send_v2_get_parent: legacy=0
2021-12-23T07:18:34.159+0000 7f80c7fff700 10 librbd::image::RefreshRequest: 0x7f80b8010bc0 handle_v2_get_parent: r=0
2021-12-23T07:18:34.159+0000 7f80c7fff700 10 librbd::image::RefreshRequest: 0x7f80b8010bc0 send_v2_get_metadata
2021-12-23T07:18:34.159+0000 7f80c7fff700 15 librbd::image::GetMetadataRequest: 0x7f80b801aca0 metadata_list: start_key=conf_
2021-12-23T07:18:34.159+0000 7f80c77fe700 15 librbd::image::GetMetadataRequest: 0x7f80b801aca0 handle_metadata_list: r=0
2021-12-23T07:18:34.159+0000 7f80c77fe700 15 librbd::image::GetMetadataRequest: 0x7f80b801aca0 finish: r=0
2021-12-23T07:18:34.160+0000 7f80c77fe700 10 librbd::image::RefreshRequest: 0x7f80b8010bc0 handle_v2_get_metadata: r=0
2021-12-23T07:18:34.160+0000 7f80c77fe700 10 librbd::image::RefreshRequest: 0x7f80b8010bc0 send_v2_get_pool_metadata
2021-12-23T07:18:34.160+0000 7f80c77fe700 15 librbd::image::GetMetadataRequest: 0x7f80b4003380 metadata_list: start_key=conf_
2021-12-23T07:18:34.160+0000 7f80c7fff700 15 librbd::image::GetMetadataRequest: 0x7f80b4003380 handle_metadata_list: r=0
2021-12-23T07:18:34.160+0000 7f80c7fff700 15 librbd::image::GetMetadataRequest: 0x7f80b4003380 finish: r=0
2021-12-23T07:18:34.160+0000 7f80c7fff700 10 librbd::image::RefreshRequest: 0x7f80b8010bc0 handle_v2_get_pool_metadata: r=0
2021-12-23T07:18:34.160+0000 7f80c7fff700 20 librbd::ImageCtx: apply_metadata
2021-12-23T07:18:34.160+0000 7f80c7fff700  5 librbd::ImageCtx: 0x5612ef17ea00: disabling zero-copy writes
2021-12-23T07:18:34.160+0000 7f80c7fff700 10 librbd::image::RefreshRequest: 0x7f80b8010bc0 send_v2_get_group
2021-12-23T07:18:34.161+0000 7f80c77fe700 10 librbd::image::RefreshRequest: 0x7f80b8010bc0 handle_v2_get_group: r=0
2021-12-23T07:18:34.161+0000 7f80c77fe700 10 librbd::image::RefreshRequest: 0x7f80b8010bc0 send_v2_init_exclusive_lock
2021-12-23T07:18:34.161+0000 7f80c77fe700 10 librbd::ExclusiveLock: 0x7f80b4003ae0 init: features=61
2021-12-23T07:18:34.161+0000 7f80c77fe700  5 librbd::io::Dispatcher: 0x5612ef1c69d0 register_dispatch: dispatch_layer=3
2021-12-23T07:18:34.161+0000 7f80c77fe700 20 librbd::exclusive_lock::ImageDispatch: 0x7f80b4003d30 set_require_lock: direction=2, enabled=1
2021-12-23T07:18:34.161+0000 7f80c77fe700 20 librbd::io::AsyncOperation: 0x7f80b4003f20 start_op
2021-12-23T07:18:34.161+0000 7f80c77fe700 20 librbd::io::Dispatcher: 0x5612ef1c69d0 send: dispatch_spec=0x7f80b4001f80
2021-12-23T07:18:34.161+0000 7f80c77fe700 20 librbd::io::RefreshImageDispatch: 0x5612ef1c5eb0 flush: tid=1
2021-12-23T07:18:34.161+0000 7f80c77fe700 20 librbd::io::WriteBlockImageDispatch: 0x5612eeff8c00 flush: tid=1
2021-12-23T07:18:34.161+0000 7f80c77fe700 20 librbd::io::ImageDispatch: 0x5612ef1c5510 flush: 
2021-12-23T07:18:34.161+0000 7f80c77fe700 20 librbd::io::ImageRequest: 0x7f80c77f8b30 send: aio_flush: ictx=0x5612ef17ea00, completion=0x7f80b4003e10
2021-12-23T07:18:34.161+0000 7f80c77fe700 20 librbd::io::AioCompletion: 0x7f80b4003e10 set_request_count: pending=1
2021-12-23T07:18:34.161+0000 7f80c77fe700 20 librbd::io::Dispatcher: 0x5612ef1c3d80 send: dispatch_spec=0x7f80b4002070
2021-12-23T07:18:34.161+0000 7f80c77fe700 20 librbd::io::AioCompletion: 0x7f80b4003e10 complete_request: cb=1, pending=0
2021-12-23T07:18:34.161+0000 7f80c77fe700 20 librbd::io::AioCompletion: 0x7f80b4003e10 finalize: r=0
2021-12-23T07:18:34.161+0000 7f80c77fe700 10 librbd::image::RefreshRequest: 0x7f80b8010bc0 handle_v2_init_exclusive_lock: r=0
2021-12-23T07:18:34.161+0000 7f80c77fe700 10 librbd::image::RefreshRequest: 0x7f80b8010bc0 send_v2_apply
2021-12-23T07:18:34.161+0000 7f80c77fe700 20 librbd::io::AsyncOperation: 0x7f80b4003f20 finish_op
2021-12-23T07:18:34.161+0000 7f80c77fe700 10 librbd::image::RefreshRequest: 0x7f80b8010bc0 handle_v2_apply
2021-12-23T07:18:34.161+0000 7f80c77fe700 20 librbd::image::RefreshRequest: 0x7f80b8010bc0 apply
2021-12-23T07:18:34.161+0000 7f80c77fe700 10 librbd::image::OpenRequest: handle_refresh: r=0
2021-12-23T07:18:34.161+0000 7f80c77fe700 10 librbd::image::OpenRequest: send_init_plugin_registry: plugins=pwl_cache
2021-12-23T07:18:34.161+0000 7f80c77fe700  5 librbd::PluginRegistry: 0x5612ef1c35f0 init: attempting to load plugin: pwl_cache
2021-12-23T07:18:34.164+0000 7f80c77fe700  5 librbd::plugin::WriteLogImageCache: 0x7f80b4008ee0 init: 
2021-12-23T07:18:34.164+0000 7f80c77fe700 10 librbd::image::OpenRequest: handle_init_plugin_registry: r=0
2021-12-23T07:18:34.164+0000 7f80c77fe700 10 librbd::image::OpenRequest: 0x5612ef1c5c50 send_register_watch
2021-12-23T07:18:34.164+0000 7f80c77fe700 10 librbd::Watcher: 0x7f80b8008930 register_watch: 
2021-12-23T07:18:34.165+0000 7f80c7fff700 10 librbd::Watcher: 0x7f80b8008930 handle_register_watch: r=0
2021-12-23T07:18:34.165+0000 7f80c7fff700 10 librbd::image::OpenRequest: 0x5612ef1c5c50 handle_register_watch: r=0
2021-12-23T07:18:34.165+0000 7f80c7fff700  5 librbd::io::SimpleSchedulerObjectDispatch: 0x7f80b4003380 SimpleSchedulerObjectDispatch: ictx=0x5612ef17ea00
2021-12-23T07:18:34.165+0000 7f80c7fff700  5 librbd::io::SimpleSchedulerObjectDispatch: 0x7f80b4003380 init: 
2021-12-23T07:18:34.165+0000 7f80c7fff700  5 librbd::io::Dispatcher: 0x5612ef1c3d80 register_dispatch: dispatch_layer=5
2021-12-23T07:18:34.165+0000 7f80c7fff700 10 librbd::ImageState: 0x5612ef1c3d00 0x5612ef1c3d00 handle_open: r=0
2021-12-23T07:18:34.165+0000 7f80e6907200 20 librbd::api::Io flush: ictx=0x5612ef17ea00
2021-12-23T07:18:34.165+0000 7f80e6907200 20 librbd::api::Io aio_flush: ictx=0x5612ef17ea00, completion=0x5612ef018710
2021-12-23T07:18:34.165+0000 7f80e6907200 20 librbd::io::Dispatcher: 0x5612ef1c69d0 send: dispatch_spec=0x5612ef049600
2021-12-23T07:18:34.165+0000 7f80e6907200 20 librbd::io::QueueImageDispatch: 0x5612ef1c5450 flush: tid=2
2021-12-23T07:18:34.165+0000 7f80e6907200 20 librbd::io::Dispatcher: 0x5612ef1c69d0 send: dispatch_spec=0x5612ef049600
2021-12-23T07:18:34.165+0000 7f80e6907200 20 librbd::io::QosImageDispatch: 0x5612ef1c51d0 flush: tid=2
2021-12-23T07:18:34.165+0000 7f80e6907200 20 librbd::io::Dispatcher: 0x5612ef1c69d0 send: dispatch_spec=0x5612ef049600
2021-12-23T07:18:34.165+0000 7f80e6907200 20 librbd::exclusive_lock::ImageDispatch: 0x7f80b4003d30 flush: tid=2
2021-12-23T07:18:34.165+0000 7f80e6907200  5 librbd::exclusive_lock::ImageDispatch: 0x7f80b4003d30 needs_exclusive_lock: exclusive lock required: delaying IO
2021-12-23T07:18:34.165+0000 7f80e6907200 10 librbd::ManagedLock: 0x7f80b4003af8 acquire_lock: 
2021-12-23T07:18:34.165+0000 7f80e6907200 10 librbd::ManagedLock: 0x7f80b4003af8 send_acquire_lock: 
2021-12-23T07:18:34.165+0000 7f80c7fff700 10 librbd::ExclusiveLock: 0x7f80b4003ae0 pre_acquire_lock_handler
2021-12-23T07:18:34.165+0000 7f80c7fff700 10 librbd::exclusive_lock::PreAcquireRequest: 0x5612ef1c5d30 send_prepare_lock: 
2021-12-23T07:18:34.165+0000 7f80c7fff700 10 librbd::ImageState: 0x5612ef1c3d00 prepare_lock
2021-12-23T07:18:34.165+0000 7f80c7fff700 10 librbd::ImageState: 0x5612ef1c3d00 0x5612ef1c3d00 send_prepare_lock_unlock
2021-12-23T07:18:34.165+0000 7f80c7fff700 10 librbd::exclusive_lock::PreAcquireRequest: 0x5612ef1c5d30 handle_prepare_lock: r=0
2021-12-23T07:18:34.165+0000 7f80c7fff700 10 librbd::exclusive_lock::PreAcquireRequest: 0x5612ef1c5d30 send_flush_notifies: 
2021-12-23T07:18:34.166+0000 7f80c7fff700 10 librbd::exclusive_lock::PreAcquireRequest: 0x5612ef1c5d30 handle_flush_notifies: 
2021-12-23T07:18:34.166+0000 7f80c7fff700 10 librbd::ManagedLock: 0x7f80b4003af8 handle_pre_acquire_lock: r=0
2021-12-23T07:18:34.166+0000 7f80c7fff700 10 librbd::managed_lock::AcquireRequest: 0x7f80b801af50 send_get_locker: 
2021-12-23T07:18:34.166+0000 7f80c7fff700 10 librbd::managed_lock::GetLockerRequest: 0x7f80c8013c90 send_get_lockers: 
2021-12-23T07:18:34.166+0000 7f80c77fe700 10 librbd::managed_lock::GetLockerRequest: 0x7f80c8013c90 handle_get_lockers: r=0
2021-12-23T07:18:34.166+0000 7f80c77fe700 20 librbd::managed_lock::GetLockerRequest: 0x7f80c8013c90 handle_get_lockers: no lockers detected
2021-12-23T07:18:34.166+0000 7f80c77fe700 10 librbd::managed_lock::GetLockerRequest: 0x7f80c8013c90 finish: r=-2
2021-12-23T07:18:34.166+0000 7f80c77fe700 10 librbd::managed_lock::AcquireRequest: 0x7f80b801af50 handle_get_locker: r=-2
2021-12-23T07:18:34.166+0000 7f80c77fe700 20 librbd::managed_lock::AcquireRequest: 0x7f80b801af50 handle_get_locker: no lockers detected
2021-12-23T07:18:34.166+0000 7f80c77fe700 10 librbd::managed_lock::AcquireRequest: 0x7f80b801af50 send_lock: entity=client.1245602, cookie=auto 140190752549840
2021-12-23T07:18:34.167+0000 7f80c7fff700 10 librbd::managed_lock::AcquireRequest: 0x7f80b801af50 handle_lock: r=0
2021-12-23T07:18:34.167+0000 7f80c7fff700 10 librbd::ManagedLock: 0x7f80b4003af8 handle_acquire_lock: r=0
2021-12-23T07:18:34.167+0000 7f80c7fff700  5 librbd::ManagedLock: 0x7f80b4003af8 handle_acquire_lock: successfully acquired exclusive lock
2021-12-23T07:18:34.167+0000 7f80c7fff700 10 librbd::ExclusiveLock: 0x7f80b4003ae0 post_acquire_lock_handler: r=0
2021-12-23T07:18:34.167+0000 7f80c7fff700 10 librbd::exclusive_lock::PostAcquireRequest: 0x7f80c80a5000 send_open_object_map: 
2021-12-23T07:18:34.167+0000 7f80c7fff700 20 librbd::object_map::RefreshRequest: 0x7f80b801b190 send: object_count=256
2021-12-23T07:18:34.167+0000 7f80c7fff700 10 librbd::object_map::RefreshRequest: 0x7f80b801b190 send_lock: oid=rbd_object_map.12049860d0dc03
2021-12-23T07:18:34.167+0000 7f80c7fff700 10 librbd::object_map::LockRequest: 0x7f80b801bb10 send_lock: oid=rbd_object_map.12049860d0dc03
2021-12-23T07:18:34.169+0000 7f80c77fe700 10 librbd::object_map::LockRequest: 0x7f80b801bb10 handle_lock: r=0
2021-12-23T07:18:34.169+0000 7f80c77fe700 10 librbd::object_map::RefreshRequest: 0x7f80b801b190 handle_lock
2021-12-23T07:18:34.169+0000 7f80c77fe700 10 librbd::object_map::RefreshRequest: 0x7f80b801b190 send_load: oid=rbd_object_map.12049860d0dc03
2021-12-23T07:18:34.169+0000 7f80c7fff700 10 librbd::object_map::RefreshRequest: 0x7f80b801b190 handle_load: r=0
2021-12-23T07:18:34.169+0000 7f80c7fff700 20 librbd::object_map::RefreshRequest: refreshed object map: num_objs=256
2021-12-23T07:18:34.169+0000 7f80c7fff700 10 librbd::exclusive_lock::PostAcquireRequest: 0x7f80c80a5000 handle_open_object_map: r=0
2021-12-23T07:18:34.169+0000 7f80c7fff700 10 librbd::ExclusiveLock: 0x7f80b4003ae0 handle_post_acquiring_lock
2021-12-23T07:18:34.169+0000 7f80c7fff700 10 librbd::ImageState: 0x5612ef1c3d00 handle_prepare_lock_complete
2021-12-23T07:18:34.169+0000 7f80c7fff700 10 librbd::exclusive_lock::PostAcquireRequest: 0x7f80c80a5000 send_process_plugin_acquire_lock: 
2021-12-23T07:18:34.169+0000 7f80c7fff700 20 librbd::PluginRegistry: 0x5612ef1c35f0 acquired_exclusive_lock: 
2021-12-23T07:18:34.169+0000 7f80c7fff700 10 librbd::cache::pwl:InitRequest 0x7f80d001b000 get_image_cache_state: 
2021-12-23T07:18:34.169+0000 7f80c7fff700 20 librbd::cache::pwl::ImageCacheState: create_image_cache_state: image_cache_state: 
2021-12-23T07:18:34.169+0000 7f80c7fff700 20 librbd::cache::pwl::ImageCacheState: ImageCacheState: Initialize RWL cache state with config data. 
2021-12-23T07:18:34.169+0000 7f80c77fe700 10 librbd::exclusive_lock::PostAcquireRequest: 0x7f80c80a5000 handle_process_plugin_acquire_lock: r=-2
2021-12-23T07:18:34.169+0000 7f80c77fe700 -1 librbd::exclusive_lock::PostAcquireRequest: 0x7f80c80a5000 handle_process_plugin_acquire_lock: failed to process plugins: (2) No such file or directory
2021-12-23T07:18:34.169+0000 7f80c77fe700 10 librbd::exclusive_lock::PostAcquireRequest: 0x7f80c80a5000 send_process_plugin_release_lock: 
2021-12-23T07:18:34.169+0000 7f80c77fe700 20 librbd::PluginRegistry: 0x5612ef1c35f0 prerelease_exclusive_lock: 
2021-12-23T07:18:34.169+0000 7f80c77fe700  5 librbd::io::Dispatcher: 0x5612ef1c69d0 shut_down_dispatch: dispatch_layer=8
2021-12-23T07:18:34.169+0000 7f80c77fe700 10 librbd::exclusive_lock::PostAcquireRequest: 0x7f80c80a5000 handle_process_plugin_release_lock: r=0
2021-12-23T07:18:34.169+0000 7f80c77fe700 10 librbd::exclusive_lock::PostAcquireRequest: 0x7f80c80a5000 send_close_object_map: 
2021-12-23T07:18:34.169+0000 7f80c77fe700 10 librbd::object_map::UnlockRequest: 0x7f80b401b840 send_unlock: oid=rbd_object_map.12049860d0dc03
2021-12-23T07:18:34.170+0000 7f80c7fff700 10 librbd::object_map::UnlockRequest: 0x7f80b401b840 handle_unlock: r=0
2021-12-23T07:18:34.170+0000 7f80c7fff700 10 librbd::exclusive_lock::PostAcquireRequest: 0x7f80c80a5000 handle_close_object_map: r=0
2021-12-23T07:18:34.170+0000 7f80c7fff700 10 librbd::ExclusiveLock: 0x7f80b4003ae0 handle_post_acquired_lock: r=-2
2021-12-23T07:18:34.170+0000 7f80c7fff700 10 librbd::ManagedLock: 0x7f80b4003af8 handle_post_acquire_lock: r=-2
2021-12-23T07:18:34.170+0000 7f80c7fff700 10 librbd::ManagedLock: 0x7f80b4003af8 revert_to_unlock_state: r=-2
2021-12-23T07:18:34.170+0000 7f80c7fff700 10 librbd::managed_lock::ReleaseRequest: 0x7f80c8004f70 send_unlock: entity=client.1245602, cookie=auto 140190752549840
2021-12-23T07:18:34.171+0000 7f80c77fe700 10 librbd::managed_lock::ReleaseRequest: 0x7f80c8004f70 handle_unlock: r=0
2021-12-23T07:18:34.171+0000 7f80c77fe700  5 librbd::exclusive_lock::ImageDispatch: 0x7f80b4003d30 handle_acquire_lock: r=-2
2021-12-23T07:18:34.171+0000 7f80c77fe700 -1 librbd::exclusive_lock::ImageDispatch: 0x7f80b4003d30 handle_acquire_lock: failed to acquire exclusive lock: (2) No such file or directory
2021-12-23T07:18:34.171+0000 7f80c77fe700 -1 librbd::io::AioCompletion: 0x5612ef018710 fail: (2) No such file or directory
rbd: failed to flush: 2021-12-23T07:18:34.171+0000 7f80c77fe700 20 librbd::io::AioCompletion: 0x5612ef018710 complete_request: cb=1, pending=0
(2) No such file or directory
2021-12-23T07:18:34.171+0000 7f80c77fe700 20 librbd::io::AioCompletion: 0x5612ef018710 finalize: r=-2
bench failed: (2) No such file or directory
2021-12-23T07:18:34.172+0000 7f80e6907200 20 librbd::ImageState: 0x5612ef1c3d00 close
2021-12-23T07:18:34.172+0000 7f80e6907200 10 librbd::ImageState: 0x5612ef1c3d00 0x5612ef1c3d00 send_close_unlock
2021-12-23T07:18:34.172+0000 7f80e6907200 10 librbd::ConfigWatcher: shut_down: 
2021-12-23T07:18:34.172+0000 7f80e6907200 10 librbd::image::CloseRequest: 0x5612ef1c6310 send_block_image_watcher
2021-12-23T07:18:34.172+0000 7f80e6907200 10 librbd::ImageWatcher: 0x7f80b8008930 block_notifies
2021-12-23T07:18:34.172+0000 7f80e6907200  5 librbd::Watcher: 0x7f80b8008930 block_notifies: blocked_count=1
2021-12-23T07:18:34.172+0000 7f80e6907200 10 librbd::image::CloseRequest: 0x5612ef1c6310 handle_block_image_watcher: r=0
2021-12-23T07:18:34.172+0000 7f80e6907200 10 librbd::image::CloseRequest: 0x5612ef1c6310 send_shut_down_update_watchers
2021-12-23T07:18:34.172+0000 7f80e6907200 20 librbd::ImageState: 0x5612ef1c3d00 shut_down_update_watchers
2021-12-23T07:18:34.172+0000 7f80e6907200 20 librbd::ImageState: 0x5612ef04be10 ImageUpdateWatchers::shut_down
2021-12-23T07:18:34.172+0000 7f80e6907200 20 librbd::ImageState: 0x5612ef04be10 ImageUpdateWatchers::shut_down: completing shut down
2021-12-23T07:18:34.172+0000 7f80c77fe700 10 librbd::image::CloseRequest: 0x5612ef1c6310 handle_shut_down_update_watchers: r=0
2021-12-23T07:18:34.172+0000 7f80c77fe700 10 librbd::image::CloseRequest: 0x5612ef1c6310 send_flush
2021-12-23T07:18:34.172+0000 7f80c77fe700 20 librbd::io::AsyncOperation: 0x5612ef018820 start_op
2021-12-23T07:18:34.172+0000 7f80c77fe700 20 librbd::io::Dispatcher: 0x5612ef1c69d0 send: dispatch_spec=0x5612ef049600
2021-12-23T07:18:34.172+0000 7f80c77fe700 20 librbd::io::QueueImageDispatch: 0x5612ef1c5450 flush: tid=3
2021-12-23T07:18:34.172+0000 7f80c77fe700 20 librbd::io::Dispatcher: 0x5612ef1c69d0 send: dispatch_spec=0x5612ef049600
2021-12-23T07:18:34.172+0000 7f80c77fe700 20 librbd::io::QosImageDispatch: 0x5612ef1c51d0 flush: tid=3
2021-12-23T07:18:34.172+0000 7f80c77fe700 20 librbd::io::Dispatcher: 0x5612ef1c69d0 send: dispatch_spec=0x5612ef049600
2021-12-23T07:18:34.172+0000 7f80c77fe700 20 librbd::exclusive_lock::ImageDispatch: 0x7f80b4003d30 flush: tid=3
2021-12-23T07:18:34.172+0000 7f80c77fe700 20 librbd::io::RefreshImageDispatch: 0x5612ef1c5eb0 flush: tid=3
2021-12-23T07:18:34.172+0000 7f80c77fe700 20 librbd::io::WriteBlockImageDispatch: 0x5612eeff8c00 flush: tid=3
2021-12-23T07:18:34.172+0000 7f80c77fe700 20 librbd::io::ImageDispatch: 0x5612ef1c5510 flush: 
2021-12-23T07:18:34.172+0000 7f80c77fe700 20 librbd::io::ImageRequest: 0x7f80c77f88c0 send: aio_flush: ictx=0x5612ef17ea00, completion=0x5612ef018710
2021-12-23T07:18:34.172+0000 7f80c77fe700 20 librbd::io::AioCompletion: 0x5612ef018710 set_request_count: pending=1
2021-12-23T07:18:34.172+0000 7f80c77fe700 20 librbd::io::Dispatcher: 0x5612ef1c3d80 send: dispatch_spec=0x7f80b4001f80
2021-12-23T07:18:34.172+0000 7f80c77fe700 20 librbd::io::SimpleSchedulerObjectDispatch: 0x7f80b4003380 flush: 
2021-12-23T07:18:34.172+0000 7f80c77fe700 20 librbd::io::SimpleSchedulerObjectDispatch: 0x7f80b4003380 dispatch_all_delayed_requests: 
2021-12-23T07:18:34.172+0000 7f80c77fe700 20 librbd::io::Dispatcher: 0x5612ef1c3d80 send: dispatch_spec=0x7f80b4001f80
2021-12-23T07:18:34.172+0000 7f80c77fe700 20 librbd::io::AioCompletion: 0x5612ef018710 complete_request: cb=1, pending=0
2021-12-23T07:18:34.172+0000 7f80c77fe700 20 librbd::io::AioCompletion: 0x5612ef018710 finalize: r=0
2021-12-23T07:18:34.172+0000 7f80c77fe700 10 librbd::image::CloseRequest: 0x5612ef1c6310 handle_flush: r=0
2021-12-23T07:18:34.172+0000 7f80c77fe700 10 librbd::image::CloseRequest: 0x5612ef1c6310 send_shut_down_exclusive_lock
2021-12-23T07:18:34.172+0000 7f80c77fe700 10 librbd::ExclusiveLock: 0x7f80b4003ae0 shut_down
2021-12-23T07:18:34.172+0000 7f80c77fe700 10 librbd::ManagedLock: 0x7f80b4003af8 shut_down: 
2021-12-23T07:18:34.172+0000 7f80c77fe700 10 librbd::ManagedLock: 0x7f80b4003af8 send_shutdown: 
2021-12-23T07:18:34.172+0000 7f80c77fe700 20 librbd::io::AsyncOperation: 0x5612ef018820 finish_op
2021-12-23T07:18:34.172+0000 7f80c77fe700 10 librbd::ExclusiveLock: 0x7f80b4003ae0 shutdown_handler
2021-12-23T07:18:34.172+0000 7f80c77fe700  5 librbd::io::Dispatcher: 0x5612ef1c69d0 shut_down_dispatch: dispatch_layer=3
2021-12-23T07:18:34.172+0000 7f80c77fe700 10 librbd::ManagedLock: 0x7f80b4003af8 handle_shutdown: r=0
2021-12-23T07:18:34.172+0000 7f80c77fe700 10 librbd::ManagedLock: 0x7f80b4003af8 wait_for_tracked_ops: r=0
2021-12-23T07:18:34.172+0000 7f80c77fe700 10 librbd::ManagedLock: 0x7f80b4003af8 complete_shutdown: r=0
2021-12-23T07:18:34.172+0000 7f80c77fe700 10 librbd::image::CloseRequest: 0x5612ef1c6310 handle_shut_down_exclusive_lock: r=0
2021-12-23T07:18:34.172+0000 7f80c77fe700 10 librbd::image::CloseRequest: 0x5612ef1c6310 send_unregister_image_watcher
2021-12-23T07:18:34.172+0000 7f80c77fe700 10 librbd::ImageWatcher: 0x7f80b8008930 unregistering image watcher
2021-12-23T07:18:34.172+0000 7f80c77fe700 10 librbd::Watcher: 0x7f80b8008930 unregister_watch: 
2021-12-23T07:18:34.173+0000 7f80beffd700 10 librbd::image::CloseRequest: 0x5612ef1c6310 handle_unregister_image_watcher: r=0
2021-12-23T07:18:34.173+0000 7f80beffd700 10 librbd::image::CloseRequest: 0x5612ef1c6310 send_flush_readahead
2021-12-23T07:18:34.173+0000 7f80c77fe700 10 librbd::image::CloseRequest: 0x5612ef1c6310 handle_flush_readahead: r=0
2021-12-23T07:18:34.173+0000 7f80c77fe700 10 librbd::image::CloseRequest: 0x5612ef1c6310 send_shut_down_image_dispatcher
2021-12-23T07:18:34.173+0000 7f80c77fe700 20 librbd::io::AsyncOperation: 0x7f80ac001d70 start_op
2021-12-23T07:18:34.173+0000 7f80c77fe700  5 librbd::io::Dispatcher: 0x5612ef1c69d0 shut_down: 
2021-12-23T07:18:34.173+0000 7f80c77fe700 20 librbd::io::FlushTracker: 0x5612ef057f10 shut_down: 
2021-12-23T07:18:34.173+0000 7f80c77fe700 20 librbd::io::FlushTracker: 0x5612ef059000 shut_down: 
2021-12-23T07:18:34.173+0000 7f80c77fe700 20 librbd::io::AsyncOperation: 0x7f80ac001d70 finish_op
2021-12-23T07:18:34.173+0000 7f80c77fe700 10 librbd::image::CloseRequest: 0x5612ef1c6310 handle_shut_down_image_dispatcher: r=0
2021-12-23T07:18:34.173+0000 7f80c77fe700 10 librbd::image::CloseRequest: 0x5612ef1c6310 send_shut_down_object_dispatcher
2021-12-23T07:18:34.173+0000 7f80c77fe700  5 librbd::io::Dispatcher: 0x5612ef1c3d80 shut_down: 
2021-12-23T07:18:34.173+0000 7f80c77fe700  5 librbd::io::ObjectDispatch: 0x5612ef1c5dd0 shut_down: 
2021-12-23T07:18:34.173+0000 7f80c7fff700  5 librbd::io::SimpleSchedulerObjectDispatch: 0x7f80b4003380 shut_down: 
2021-12-23T07:18:34.173+0000 7f80c7fff700 20 librbd::io::FlushTracker: 0x7f80b801ae20 shut_down: 
2021-12-23T07:18:34.173+0000 7f80c7fff700 10 librbd::image::CloseRequest: 0x5612ef1c6310 handle_shut_down_object_dispatcher: r=0
2021-12-23T07:18:34.173+0000 7f80c7fff700 10 librbd::image::CloseRequest: 0x5612ef1c6310 send_flush_op_work_queue
2021-12-23T07:18:34.173+0000 7f80c7fff700 10 librbd::image::CloseRequest: 0x5612ef1c6310 handle_flush_op_work_queue: r=0
2021-12-23T07:18:34.173+0000 7f80c7fff700 10 librbd::image::CloseRequest: 0x5612ef1c6310 handle_flush_image_watcher: r=0
2021-12-23T07:18:34.173+0000 7f80c7fff700 10 librbd::ImageState: 0x5612ef1c3d00 0x5612ef1c3d00 handle_close: r=0
2021-12-23T07:18:34.173+0000 7f80beffd700 10 librbd::ImageCtx: 0x5612ef17ea00 ~ImageCtx
2021-12-23T07:18:34.173+0000 7f80beffd700 20 librbd::AsioEngine: 0x5612ef1c34b0 ~AsioEngine: 
2021-12-23T07:18:34.173+0000 7f80beffd700 20 librbd::asio::ContextWQ: 0x5612ef1aa920 ~ContextWQ: 
2021-12-23T07:18:34.173+0000 7f80beffd700 20 librbd::asio::ContextWQ: 0x5612ef1aa920 drain:

Comment 13 Preethi 2021-12-23 17:26:15 UTC
@Ilya, not sure why rbd --version reports the older version. We have filed a similar issue about the ceph version being reported as the older one after the cluster is upgraded to the latest. Please check the output below:


[root@plena007 ubuntu]# ceph version
ceph version 16.2.7-11.el8cp (5e4c3b0921faeeffa4863d1885e0d09eb96c2cfd) pacific (stable)
[root@plena007 ubuntu]# rbd --version
ceph version 16.2.7-3.el8cp (54410e69e153d229a04fb6acc388f7e4afdd05e7) pacific (stable)
[root@plena007 ubuntu]# ceph versions
{
    "mon": {
        "ceph version 16.2.7-11.el8cp (5e4c3b0921faeeffa4863d1885e0d09eb96c2cfd) pacific (stable)": 3
    },
    "mgr": {
        "ceph version 16.2.7-11.el8cp (5e4c3b0921faeeffa4863d1885e0d09eb96c2cfd) pacific (stable)": 3
    },
    "osd": {
        "ceph version 16.2.7-11.el8cp (5e4c3b0921faeeffa4863d1885e0d09eb96c2cfd) pacific (stable)": 27
    },
    "mds": {
        "ceph version 16.2.7-11.el8cp (5e4c3b0921faeeffa4863d1885e0d09eb96c2cfd) pacific (stable)": 2
    },
    "rbd-mirror": {
        "ceph version 16.2.7-11.el8cp (5e4c3b0921faeeffa4863d1885e0d09eb96c2cfd) pacific (stable)": 1
    },
    "rgw": {
        "ceph version 16.2.7-11.el8cp (5e4c3b0921faeeffa4863d1885e0d09eb96c2cfd) pacific (stable)": 2
    },
    "tcmu-runner": {
        "ceph version 16.2.7-11.el8cp (5e4c3b0921faeeffa4863d1885e0d09eb96c2cfd) pacific (stable)": 20
    },
    "overall": {
        "ceph version 16.2.7-11.el8cp (5e4c3b0921faeeffa4863d1885e0d09eb96c2cfd) pacific (stable)": 58
    }
}
[root@plena007 ubuntu]# ceph orch ls
NAME               PORTS        RUNNING  REFRESHED  AGE  PLACEMENT                                     
alertmanager       ?:9093,9094      1/1  3m ago     3M   count:1                                       
crash                               9/9  5m ago     3M   *                                             
grafana            ?:3000           1/1  3m ago     3M   count:1                                       
iscsi.iscsi                         4/4  5m ago     10w  depressa001;depressa002;depressa003;magna006  
mds.backup                          2/2  4m ago     11w  magna031;magna032;count:2                     
mgr                                 3/3  4m ago     3M   magna031;magna032;magna006;count:3            
mon                                 3/3  4m ago     3M   magna031;magna032;magna006;count:3            
node-exporter      ?:9100           9/9  5m ago     3M   *                                             
osd                                  15  5m ago     -    <unmanaged>                                   
osd.osd_with_nvme                    12  4m ago     10w  depressa00[4-6].ceph.redhat.com               
prometheus         ?:9095           1/1  3m ago     3M   count:1                                       
rbd-mirror                          1/1  3m ago     3M   magna031                                      
rgw.foo            ?:80             2/2  4m ago     10w  count:2                                       
[root@plena007 ubuntu]# ceph orch ps
NAME                            HOST                         PORTS        STATUS         REFRESHED  AGE  MEM USE  MEM LIM  VERSION          IMAGE ID      CONTAINER ID  
alertmanager.magna031           magna031                     *:9093,9094  running (10h)     3m ago   3M    33.0M        -  0.21.0           c6673b49f2f3  221d5bc9fcea  
crash.depressa001               depressa001                               running (10h)     6m ago   3M    9336k        -  16.2.7-11.el8cp  03b436d17cf0  750b1c1b90e3  
crash.depressa002               depressa002                               running (10h)     6m ago   3M    17.7M        -  16.2.7-11.el8cp  03b436d17cf0  b5cd14454e12  
crash.depressa003               depressa003                               running (10h)     4m ago   3M    9185k        -  16.2.7-11.el8cp  03b436d17cf0  c390528a1277  
crash.depressa004               depressa004.ceph.redhat.com               running (10h)     4m ago  10w    7138k        -  16.2.7-11.el8cp  03b436d17cf0  c774e69ee0f0  
crash.depressa005               depressa005.ceph.redhat.com               running (10h)     4m ago  10w    7147k        -  16.2.7-11.el8cp  03b436d17cf0  9d1846b5d67d  
crash.depressa006               depressa006.ceph.redhat.com               running (10h)     4m ago  10w    7155k        -  16.2.7-11.el8cp  03b436d17cf0  5342c0670ba7  
crash.magna006                  magna006                                  running (10h)     4m ago   3M    6958k        -  16.2.7-11.el8cp  03b436d17cf0  90ee67af7d39  
crash.magna031                  magna031                                  running (10h)     3m ago   3M    6958k        -  16.2.7-11.el8cp  03b436d17cf0  dd59e4a9529e  
crash.magna032                  magna032                                  running (10h)     4m ago   3M    6979k        -  16.2.7-11.el8cp  03b436d17cf0  72da64d7c7f4  
grafana.magna031                magna031                     *:3000       running (10h)     3m ago   3M    55.5M        -  6.7.4            09cf77100f6a  17a8bb11cff6  
iscsi.iscsi.depressa001.qcacgp  depressa001                               running (10h)     6m ago  10w     130M        -  3.5              03b436d17cf0  75bfcd15ced2  
iscsi.iscsi.depressa002.ziavgj  depressa002                               running (10h)     6m ago  10w     128M        -  3.5              03b436d17cf0  2bdb06c13238  
iscsi.iscsi.depressa003.jxtadh  depressa003                               running (10h)     4m ago  10w     131M        -  3.5              03b436d17cf0  1cd8ac08e385  
iscsi.iscsi.magna006.mcqvof     magna006                                  running (10h)     4m ago  10w     128M        -  3.5              03b436d17cf0  323f8982578c  
mds.backup.magna031.ahhfqy      magna031                                  running (10h)     3m ago  11w     203M        -  16.2.7-11.el8cp  03b436d17cf0  0e306c2910d1  
mds.backup.magna032.diqyit      magna032                                  running (10h)     4m ago  11w    25.1M        -  16.2.7-11.el8cp  03b436d17cf0  d8501147faac  
mgr.magna006.vxieja             magna006                     *:8443,9283  running (10h)     4m ago   3M     399M        -  16.2.7-11.el8cp  03b436d17cf0  ba825c4ade96  
mgr.magna031.xqwypm             magna031                     *:8443,9283  running (10h)     3m ago   3M     575M        -  16.2.7-11.el8cp  03b436d17cf0  d441c58a7062  
mgr.magna032.lzjsxg             magna032                     *:8443,9283  running (10h)     4m ago   3M     395M        -  16.2.7-11.el8cp  03b436d17cf0  123c68e8317c  
mon.magna006                    magna006                                  running (10h)     4m ago   6w    1638M    2048M  16.2.7-11.el8cp  03b436d17cf0  de9293d85c78  
mon.magna031                    magna031                                  running (10h)     3m ago   3M    3306M    2048M  16.2.7-11.el8cp  03b436d17cf0  df1c8c8ae217  
mon.magna032                    magna032                                  running (10h)     4m ago   3M    1453M    2048M  16.2.7-11.el8cp  03b436d17cf0  76083a94c5b1  
node-exporter.depressa001       depressa001                  *:9100       running (10h)     6m ago   3M    37.5M        -  1.0.1            7366f9f3053e  8974e10f8df0  
node-exporter.depressa002       depressa002                  *:9100       running (10h)     6m ago   3M    38.9M        -  1.0.1            7366f9f3053e  6d0ec4adc94a  
node-exporter.depressa003       depressa003                  *:9100       running (10h)     4m ago   3M    39.3M        -  1.0.1            7366f9f3053e  0b9df435e797  
node-exporter.depressa004       depressa004.ceph.redhat.com  *:9100       running (10h)     4m ago  10w    33.6M        -  1.0.1            7366f9f3053e  3718b48c7b2b  
node-exporter.depressa005       depressa005.ceph.redhat.com  *:9100       running (10h)     4m ago  10w    34.2M        -  1.0.1            7366f9f3053e  83e5a3f5a7a6  
node-exporter.depressa006       depressa006.ceph.redhat.com  *:9100       running (10h)     4m ago  10w    33.4M        -  1.0.1            7366f9f3053e  58893323133a  
node-exporter.magna006          magna006                     *:9100       running (10h)     4m ago   3M    28.8M        -  1.0.1            7366f9f3053e  04cee8449ef0  
node-exporter.magna031          magna031                     *:9100       running (10h)     3m ago   3M    27.6M        -  1.0.1            7366f9f3053e  d98244af1b2a  
node-exporter.magna032          magna032                     *:9100       running (10h)     4m ago   3M    29.7M        -  1.0.1            7366f9f3053e  0f1ca94f6913  
osd.0                           depressa001                               running (10h)     6m ago   3M     804M    4096M  16.2.7-11.el8cp  03b436d17cf0  fe4d9fe004e2  
osd.1                           depressa001                               running (10h)     6m ago   3M    1036M    4096M  16.2.7-11.el8cp  03b436d17cf0  1b7f067b3188  
osd.10                          depressa001                               running (10h)     6m ago   3M     817M    4096M  16.2.7-11.el8cp  03b436d17cf0  1f55bebc0301  
osd.11                          depressa002                               running (10h)     6m ago   3M     902M    4096M  16.2.7-11.el8cp  03b436d17cf0  bf2364245e51  
osd.12                          depressa002                               running (10h)     6m ago   3M    1281M    4096M  16.2.7-11.el8cp  03b436d17cf0  0d9d8d757e0e  
osd.13                          depressa003                               running (10h)     4m ago   3M    1320M    4096M  16.2.7-11.el8cp  03b436d17cf0  12443338e31f  
osd.14                          depressa003                               running (10h)     4m ago   3M     679M    4096M  16.2.7-11.el8cp  03b436d17cf0  47792f971325  
osd.15                          depressa004.ceph.redhat.com               running (10h)     4m ago  10w     786M    4096M  16.2.7-11.el8cp  03b436d17cf0  ff0b1cd3bad7  
osd.16                          depressa006.ceph.redhat.com               running (10h)     4m ago  10w     530M    4096M  16.2.7-11.el8cp  03b436d17cf0  ac3c2fa6ff7c  
osd.17                          depressa005.ceph.redhat.com               running (10h)     4m ago  10w     890M    4096M  16.2.7-11.el8cp  03b436d17cf0  8987a25bf50e  
osd.18                          depressa004.ceph.redhat.com               running (10h)     4m ago  10w     768M    4096M  16.2.7-11.el8cp  03b436d17cf0  28f516e6447c  
osd.19                          depressa006.ceph.redhat.com               running (10h)     4m ago  10w     620M    4096M  16.2.7-11.el8cp  03b436d17cf0  127b5eb8d493  
osd.2                           depressa001                               running (10h)     6m ago   3M    1516M    4096M  16.2.7-11.el8cp  03b436d17cf0  8385d2b789a2  
osd.20                          depressa005.ceph.redhat.com               running (10h)     4m ago  10w    1703M    4096M  16.2.7-11.el8cp  03b436d17cf0  d5ba9616677c  
osd.21                          depressa004.ceph.redhat.com               running (10h)     4m ago  10w     658M    4096M  16.2.7-11.el8cp  03b436d17cf0  e6955131e1a4  
osd.22                          depressa006.ceph.redhat.com               running (10h)     4m ago  10w    1318M    4096M  16.2.7-11.el8cp  03b436d17cf0  154282296ee8  
osd.23                          depressa005.ceph.redhat.com               running (10h)     4m ago  10w     678M    4096M  16.2.7-11.el8cp  03b436d17cf0  01b91c8dfd21  
osd.24                          depressa004.ceph.redhat.com               running (10h)     4m ago  10w     725M    4096M  16.2.7-11.el8cp  03b436d17cf0  c97137c5ecab  
osd.25                          depressa006.ceph.redhat.com               running (10h)     4m ago  10w    1316M    4096M  16.2.7-11.el8cp  03b436d17cf0  bed9d398bf4a  
osd.26                          depressa005.ceph.redhat.com               running (10h)     4m ago  10w    1108M    4096M  16.2.7-11.el8cp  03b436d17cf0  7b496ce34fc8  
osd.3                           depressa002                               running (10h)     6m ago   3M     495M    4096M  16.2.7-11.el8cp  03b436d17cf0  a733439f6751  
osd.4                           depressa002                               running (10h)     6m ago   3M    1460M    4096M  16.2.7-11.el8cp  03b436d17cf0  70ebffcf1884  
osd.5                           depressa002                               running (10h)     6m ago   3M    1045M    4096M  16.2.7-11.el8cp  03b436d17cf0  0961414fb91b  
osd.6                           depressa003                               running (10h)     4m ago   3M     836M    4096M  16.2.7-11.el8cp  03b436d17cf0  29a5821cf20a  
osd.7                           depressa003                               running (10h)     4m ago   3M    1417M    4096M  16.2.7-11.el8cp  03b436d17cf0  d8afcfeeabef  
osd.8                           depressa003                               running (10h)     4m ago   3M    1853M    4096M  16.2.7-11.el8cp  03b436d17cf0  98274a105fbf  
osd.9                           depressa001                               running (10h)     6m ago   3M     927M    4096M  16.2.7-11.el8cp  03b436d17cf0  2f935eef856b  
prometheus.magna031             magna031                     *:9095       running (10h)     3m ago   3M     491M        -  2.22.2           7b4433e7ee3e  8ecfb7225b5e  
rbd-mirror.magna031.gjkfff      magna031                                  running (10h)     3m ago   3M    37.2M        -  16.2.7-11.el8cp  03b436d17cf0  72f838cf1bcc  
rgw.foo.magna006.pxlljx         magna006                     *:80         running (10h)     4m ago  10w    66.4M        -  16.2.7-11.el8cp  03b436d17cf0  78a870c7e78d  
rgw.foo.magna032.yelvlg         magna032                     *:80         running (10h)     4m ago  10w    66.3M        -  16.2.7-11.el8cp  03b436d17cf0  4dcafdd3b977  
[root@plena007 ubuntu]#

Comment 14 Preethi 2021-12-23 17:27:14 UTC
ceph --version and rbd --version report the older ceph version from before the upgrade (a sketch for comparing the client packages with the running cluster follows the output below):
[root@plena007 ubuntu]# ceph --version
ceph version 16.2.7-3.el8cp (54410e69e153d229a04fb6acc388f7e4afdd05e7) pacific (stable)
[root@plena007 ubuntu]# ceph version
ceph version 16.2.7-11.el8cp (5e4c3b0921faeeffa4863d1885e0d09eb96c2cfd) pacific (stable)
[root@plena007 ubuntu]#
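
For context (a sketch, assuming an RPM-based client host): ceph --version and rbd --version report the locally installed client packages, while ceph version / ceph versions query the running daemons, so comparing the two shows whether the client-side packages still need updating.

# rpm -q ceph-common librbd1
# ceph --version     (local client packages)
# ceph versions      (daemons in the cluster)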

Comment 16 Preethi 2021-12-24 05:39:08 UTC
Thanks @Ilya. We are not seeing the issue after upgrading the client packages to the latest 16.2.7-11.el8cp. IOs are running and working as expected.

Comment 18 errata-xmlrpc 2022-04-04 10:23:34 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage 5.1 Security, Enhancement, and Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:1174

