Description of problem:
Persistent write-back cache status differs from the documentation.

[root@magna031 ubuntu]# rbd status rbd_pool/rbd_image
Watchers:
	watcher=10.8.128.31:0/3535497977 client.18024 cookie=140103108388592

Version-Release number of selected component (if applicable):
ceph version 16.2.0-4.el8cp

Steps to Reproduce:
1. Have a running Ceph cluster
2. # ceph config set client rbd_persistent_cache_mode ssd
3. # ceph config set client rbd_plugins pwl_cache
4. # ceph config set client rbd_persistent_cache_path /home/ubuntu/cephtest/write_back_cache
5. # ceph osd pool create rbd_pool 100 100
6. # rbd create rbd_image --size 1024 --pool rbd_pool
7. Write data to the image via librbd (image.write(data, 0)); see the Python sketch after the expected results below
8. # rbd status rbd_pool/rbd_image

Actual results:
[root@magna031 ubuntu]# rbd status rbd_pool/rbd_image
Watchers:
	watcher=10.8.128.31:0/3535497977 client.18024 cookie=140103108388592

Expected results as per documentation (https://docs.ceph.com/en/latest/rbd/rbd-persistent-write-back-cache/):
$ rbd status rbd/foo
Watchers: none
image cache state:
	clean: false	size: 1 GiB	host: sceph9	path: /tmp
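For step 7, a minimal sketch of the librbd write using the python-rbd bindings. It assumes the client config from steps 2-4 is already applied, uses the pool/image names from the report, and the payload contents are arbitrary:

import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('rbd_pool')
    try:
        with rbd.Image(ioctx, 'rbd_image') as image:
            data = b'x' * 4096        # arbitrary 4 KiB payload
            image.write(data, 0)      # step 7: write at offset 0
            image.flush()             # ensure the write reaches the cache
    finally:
        ioctx.close()
finally:
    cluster.shutdown()

After this, `rbd status rbd_pool/rbd_image` is expected to report the image cache state as described in the documentation linked above.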
*** Bug 1951984 has been marked as a duplicate of this bug. ***
Yes, Ilya. Will start working on this. Thanks.
Moving to verified state.

[root@intel-purley-02 pmem]# rbd status test/image1
Watchers:
	watcher=10.16.160.10:0/3474854596 client.44238 cookie=140237661645648
Image cache state: {"present":"true","empty":"true","clean":"true","cache_type":"rwl","pwl_host":"intel-purley-02","pwl_path":"/mnt/pmem//rbd-pwl.test.389aa68c0527.pool","pwl_size":1073741824}
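The "Image cache state:" value in the output above is itself JSON, so it can be parsed directly when scripting verification checks. A minimal sketch, assuming the exact output shown above (the parsing approach is illustrative, not part of the verification steps):

import json

# The "Image cache state:" value from the output above, verbatim.
cache_state_line = ('{"present":"true","empty":"true","clean":"true",'
                    '"cache_type":"rwl","pwl_host":"intel-purley-02",'
                    '"pwl_path":"/mnt/pmem//rbd-pwl.test.389aa68c0527.pool",'
                    '"pwl_size":1073741824}')

state = json.loads(cache_state_line)
# Note: the boolean-like fields are serialized as strings, so compare to "true".
assert state["cache_type"] == "rwl"
assert state["clean"] == "true"
print(state["pwl_host"], state["pwl_path"], state["pwl_size"])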
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3294