
Bug 1951982

Summary: [RBD] persistent write back cache status different from documentation
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Harish Munjulur <hmunjulu>
Component: RBD
Assignee: Ilya Dryomov <idryomov>
Status: CLOSED ERRATA
QA Contact: Harish Munjulur <hmunjulu>
Severity: high
Docs Contact:
Priority: unspecified
Version: 5.0
CC: ceph-eng-bugs, gpatta, idryomov, vereddy
Target Milestone: ---
Target Release: 5.0
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2021-08-30 08:29:49 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1878559

Description Harish Munjulur 2021-04-21 09:20:47 UTC
Description of problem: persistent write-back cache status differs from the documentation

[root@magna031 ubuntu]# rbd status rbd_pool/rbd_image
Watchers:
	watcher=10.8.128.31:0/3535497977 client.18024 cookie=140103108388592


Version-Release number of selected component (if applicable):
ceph version 16.2.0-4.el8cp


Steps to Reproduce:
1. Have a running Ceph cluster
2. # ceph config set client rbd_persistent_cache_mode ssd
3. # ceph config set client rbd_plugins pwl_cache
4. # ceph config set client rbd_persistent_cache_path /home/ubuntu/cephtest/write_back_cache
5. # ceph osd pool create rbd_pool 100 100
6. # rbd create rbd_image --size 1024 --pool rbd_pool
7. Write data to the image through librbd, e.g. image.write(data, 0) (see the Python sketch after this list)
8. # rbd status rbd_pool/rbd_image
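
A minimal sketch of step 7 using the Python rados/rbd bindings (assumes python3-rados and python3-rbd are installed and the client node has a working /etc/ceph/ceph.conf and keyring; the pool and image names match the steps above):

import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('rbd_pool')
    try:
        with rbd.Image(ioctx, 'rbd_image') as image:
            # Write a small payload at offset 0 so the persistent write-back
            # cache has data before "rbd status" is checked.
            image.write(b'pwl cache test data', 0)
    finally:
        ioctx.close()
finally:
    cluster.shutdown()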


Actual results:
[root@magna031 ubuntu]# rbd status rbd_pool/rbd_image
Watchers:
	watcher=10.8.128.31:0/3535497977 client.18024 cookie=140103108388592


Expected results as per documentation:
https://docs.ceph.com/en/latest/rbd/rbd-persistent-write-back-cache/

$ rbd status rbd/foo
Watchers: none
image cache state:
clean: false  size: 1 GiB  host: sceph9  path: /tmp

Comment 2 Ilya Dryomov 2021-05-10 19:27:12 UTC
*** Bug 1951984 has been marked as a duplicate of this bug. ***

Comment 5 Harish Munjulur 2021-06-06 12:44:14 UTC
Yes, Ilya. Will start working on this. Thanks.

Comment 6 Harish Munjulur 2021-07-21 00:01:31 UTC
Moving to verified state

[root@intel-purley-02 pmem]# rbd status test/image1
Watchers:
	watcher=10.16.160.10:0/3474854596 client.44238 cookie=140237661645648
Image cache state: {"present":"true","empty":"true","clean":"true","cache_type":"rwl","pwl_host":"intel-purley-02","pwl_path":"/mnt/pmem//rbd-pwl.test.389aa68c0527.pool","pwl_size":1073741824}

Comment 9 errata-xmlrpc 2021-08-30 08:29:49 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3294