Bug 2120497

Summary: cephfs-top: wrong/infinitely changing wsp values
Product: [Red Hat Storage] Red Hat Ceph Storage
Component: CephFS
Version: 5.3
Status: CLOSED ERRATA
Severity: medium
Priority: unspecified
Reporter: Jos Collin <jcollin>
Assignee: Jos Collin <jcollin>
QA Contact: julpark
CC: ceph-eng-bugs, cephqe-warriors, hyelloji, tserlin, vshankar
Target Milestone: ---
Target Release: 5.3z1
Hardware: All
OS: All
Fixed In Version: ceph-16.2.10-101.el8cp
Last Closed: 2023-02-28 10:05:14 UTC
Type: Bug
Bug Blocks: 2130117

Description Jos Collin 2022-08-23 05:50:06 UTC
Description of problem:
The wsp(MB/s) field in cephfs-top shows wrong, negative values that keep changing indefinitely.

Steps to reproduce:
1. Create two filesystems and mount them as client1 and client2.
2. Run cephfs-top.
3. Write something to client1 only.
4. Both client1 and client2 show changing wsp values, even though there is no IO on client2.

client1 shows fluctuating positive values and client2 shows fluctuating negative values. This starts as soon as the write begins, continues indefinitely, and does not stop even after the write ends. However, this issue is not observed in the 'perf stats' output.
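Since the raw 'perf stats' counters look correct, the wrong values appear to come from how cephfs-top derives the per-client rate rather than from the counters themselves. One way to cross-check while reproducing is to poll the raw output and compare it against the wsp column; the sketch below is only an illustration (not part of cephfs-top) and assumes the ceph CLI and an admin keyring are available on the node. client2's counters should stay flat while only client1 is writing.

import json
import subprocess
import time

# Poll 'ceph fs perf stats' a few times while the write to client1 is running,
# so the raw per-client counters can be compared with what cephfs-top displays.
for _ in range(5):
    out = subprocess.check_output(["ceph", "fs", "perf", "stats", "--format=json"])
    print(json.dumps(json.loads(out), indent=2))
    time.sleep(5)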

Version-Release number of selected component (if applicable):
5.3z1

How reproducible:
Always

Actual results:
The wsp(MB/s) field in cephfs-top shows wrong, negative values that keep changing indefinitely, including for a client with no IO.

Expected results:
The wsp(MB/s) field in cephfs-top should show correctly calculated values, and only while there is IO on the particular client.
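For reference, a minimal sketch of the arithmetic the expected behaviour implies: wsp is the delta of a client's cumulative write bytes between two samples, divided by the elapsed time, so a client with no new writes reports 0.0 rather than a drifting negative number. This is only an illustration, not the cephfs-top implementation, and the snapshot format is made up for the example.

def wsp_mbps(prev_sample, cur_sample):
    """Per-client write speed in MB/s from two snapshots of cumulative write bytes.

    Each sample is (timestamp_in_seconds, {client_id: total_bytes_written}).
    A client with no new writes gets 0.0, never a negative value.
    """
    (t0, prev_bytes), (t1, cur_bytes) = prev_sample, cur_sample
    elapsed = t1 - t0
    speeds = {}
    for client, total in cur_bytes.items():
        delta = total - prev_bytes.get(client, total)  # unseen client -> delta of 0
        speeds[client] = max(delta, 0) / elapsed / 1e6
    return speeds

# client1 wrote 50 MB over 5 seconds; client2 wrote nothing.
prev = (0.0, {"client1": 0, "client2": 0})
cur = (5.0, {"client1": 50_000_000, "client2": 0})
print(wsp_mbps(prev, cur))  # {'client1': 10.0, 'client2': 0.0}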

Additional info:

Comment 4 Venky Shankar 2023-01-20 11:06:49 UTC
https://gitlab.cee.redhat.com/ceph/ceph/-/merge_requests/187 merged.

Comment 8 errata-xmlrpc 2023-02-28 10:05:14 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat Ceph Storage 5.3 Bug fix and security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:0980