Bug 1951982 - [RBD] persistent write back cache status different from documentation
Summary: [RBD] persistent write back cache status different from documentation
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RBD
Version: 5.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 5.0
Assignee: Ilya Dryomov
QA Contact: Harish Munjulur
URL:
Whiteboard:
Duplicates: 1951984 (view as bug list)
Depends On:
Blocks: 1878559
 
Reported: 2021-04-21 09:20 UTC by Harish Munjulur
Modified: 2021-08-30 08:30 UTC
CC: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-08-30 08:29:49 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHCEPH-1024 0 None None None 2021-08-26 17:31:47 UTC
Red Hat Product Errata RHBA-2021:3294 0 None None None 2021-08-30 08:30:05 UTC

Description Harish Munjulur 2021-04-21 09:20:47 UTC
Description of problem: persistent write-back cache status differs from the documentation

[root@magna031 ubuntu]# rbd status rbd_pool/rbd_image
Watchers:
	watcher=10.8.128.31:0/3535497977 client.18024 cookie=140103108388592


Version-Release number of selected component (if applicable):
ceph version 16.2.0-4.el8cp


Steps to Reproduce:
1. Have a running Ceph cluster
2. # ceph config set client rbd_persistent_cache_mode ssd
3. # ceph config set client rbd_plugins pwl_cache
4. # ceph config set client rbd_persistent_cache_path /home/ubuntu/cephtest/write_back_cache
5. # ceph osd pool create rbd_pool 100 100
6. # rbd create rbd_image --size 1024 --pool rbd_pool
7. Write data to the image via librbd (image.write(data, 0))
8. # rbd status rbd_pool/rbd_image
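Step 7 ("write data to the image via librbd") can be sketched with the librbd Python bindings; a minimal sketch, assuming a reachable cluster, a usable client keyring, and the pool/image created in steps 5-6 (the /etc/ceph/ceph.conf path is an assumption):

```python
import rados
import rbd

# Connect as the default client; the conf path is an assumption
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    with cluster.open_ioctx('rbd_pool') as ioctx:
        with rbd.Image(ioctx, 'rbd_image') as image:
            data = b'x' * 4096
            image.write(data, 0)  # step 7: write at offset 0
finally:
    cluster.shutdown()
```

With the pwl_cache plugin and persistent cache settings from steps 2-4 applied to the client, this write should populate the write-back cache, after which `rbd status` is expected to report the image cache state.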


Actual results:
[root@magna031 ubuntu]# rbd status rbd_pool/rbd_image
Watchers:
	watcher=10.8.128.31:0/3535497977 client.18024 cookie=140103108388592


Expected results as per documentation:
https://docs.ceph.com/en/latest/rbd/rbd-persistent-write-back-cache/

$ rbd status rbd/foo
Watchers: none
image cache state:
clean: false  size: 1 GiB  host: sceph9  path: /tmp

Comment 2 Ilya Dryomov 2021-05-10 19:27:12 UTC
*** Bug 1951984 has been marked as a duplicate of this bug. ***

Comment 5 Harish Munjulur 2021-06-06 12:44:14 UTC
Yes Ilya, will start working on this. Thanks

Comment 6 Harish Munjulur 2021-07-21 00:01:31 UTC
Moving to verified state

[root@intel-purley-02 pmem]# rbd status test/image1
Watchers:
	watcher=10.16.160.10:0/3474854596 client.44238 cookie=140237661645648
Image cache state: {"present":"true","empty":"true","clean":"true","cache_type":"rwl","pwl_host":"intel-purley-02","pwl_path":"/mnt/pmem//rbd-pwl.test.389aa68c0527.pool","pwl_size":1073741824}
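The "Image cache state" line in the verified output above is a JSON object, so it can be inspected programmatically. A minimal sketch, using the exact payload from the output above (note that the boolean-like fields are emitted as the strings "true"/"false", not JSON booleans):

```python
import json

# "Image cache state" payload as printed by `rbd status` in the verified run
state = json.loads(
    '{"present":"true","empty":"true","clean":"true","cache_type":"rwl",'
    '"pwl_host":"intel-purley-02",'
    '"pwl_path":"/mnt/pmem//rbd-pwl.test.389aa68c0527.pool",'
    '"pwl_size":1073741824}'
)

print(state["clean"])       # "true" (string, not a JSON boolean)
print(state["cache_type"])  # "rwl"
print(state["pwl_size"])    # 1073741824 bytes, i.e. 1 GiB
```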

Comment 9 errata-xmlrpc 2021-08-30 08:29:49 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3294

