Bug 1398028

Summary: [RFE] rbd top
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Neil Levine <nlevine>
Component: RBD
Assignee: Jason Dillaman <jdillama>
Status: CLOSED ERRATA
QA Contact: Gopi <gpatta>
Severity: high
Docs Contact: Bara Ancincova <bancinco>
Priority: high
Version: 2.1
CC: anharris, ceph-eng-bugs, ceph-qe-bugs, edonnell, flucifre, gpatta, jdillama, scohen, tchandra, tserlin
Target Milestone: rc
Keywords: FutureFeature
Target Release: 4.0
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: ceph-14.2.0
Doc Type: Enhancement
Doc Text:
.RBD performance monitoring and metrics gathering tools
{product} {release} now incorporates new Ceph Block Device performance monitoring utilities for aggregated RBD image metrics for IOPS, throughput, and latency. Per-image RBD metrics are now available using the Ceph Manager Prometheus module, the Ceph Dashboard, and the `rbd` CLI using the `rbd perf image iostat` or `rbd perf image iotop` commands.
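A minimal sketch of how the per-image Prometheus metrics described above might be switched on. The `mgr/prometheus/rbd_stats_pools` option name reflects the Nautilus-era mgr module and should be verified against your release; the pool name `rbd` is a placeholder:

```shell
# Enable the Ceph Manager Prometheus exporter module.
ceph mgr module enable prometheus

# Publish per-image RBD metrics for selected pools.
# Option name per the Nautilus mgr Prometheus module (verify on your
# release); "rbd" is a placeholder pool name.
ceph config set mgr mgr/prometheus/rbd_stats_pools rbd
```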
Story Points: ---
Clone Of:
Environment:
Last Closed: 2020-01-31 12:44:52 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 1730176    

Description Neil Levine 2016-11-23 22:27:58 UTC
User Story:

As an admin, I want to see which RBD clients by IP address are generating the most IOPS so I can spot noisy hosts on my client network.

As an admin, I want to see which RBD clients by virtual machine on a hypervisor are generating the most IOPS so I can spot noisy neighbors in an OpenStack environment.

Notes:

It is expected that this will be implemented on top of ceph-mgr.
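The ceph-mgr-based implementation that eventually shipped is driven from the `rbd` CLI (the `rbd perf image iostat`/`iotop` commands are named in the Doc Text above). A hedged sketch, assuming a Nautilus-era cluster and a placeholder pool named `rbd`:

```shell
# Performance counters are gathered by the mgr "rbd_support" module
# (module name per Nautilus; it is typically enabled by default).
ceph mgr module enable rbd_support

# Aggregated per-image IOPS, throughput, and latency, iostat-style:
rbd perf image iostat rbd

# Top-N view of the busiest images, iotop-style:
rbd perf image iotop rbd
```

Both commands need a short warm-up period to collect initial samples, which matches the "rbd: waiting for initial image stats" line in the verification output below in this bug.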

Comment 5 Giridhar Ramaraju 2019-08-05 13:05:52 UTC
Updating the QA Contact to Hemant. Hemant will be rerouting them to the appropriate QE Associate.

Regards,
Giri

Comment 10 Gopi 2019-12-24 04:39:47 UTC
Verified the bug on the latest ceph version; the feature is working as expected.

rbd: waiting for initial image stats

NAME     WR   RD  WR_BYTES  RD_BYTES     WR_LAT   RD_LAT  
librbd 24/s  0/s  36 MiB/s     0 B/s  171.57 ms  0.00 ns  

NAME     WR   RD  WR_BYTES  RD_BYTES     WR_LAT   RD_LAT  
librbd 34/s  0/s  56 MiB/s     0 B/s  184.74 ms  0.00 ns  
  
NAME    WR     RD   WR_BYTES   RD_BYTES    WR_LAT   RD_LAT  
librbd 0/s  160/s  252 KiB/s  4.8 MiB/s  24.78 ms  1.42 ms  

NAME    WR     RD   WR_BYTES   RD_BYTES    WR_LAT     RD_LAT  
librbd 0/s  218/s  306 KiB/s  3.2 MiB/s  31.53 ms  785.37 us  

NAME    WR     RD   WR_BYTES   RD_BYTES     WR_LAT     RD_LAT  
librbd 2/s  311/s  4.3 MiB/s  3.1 MiB/s  114.77 ms  518.63 us  

NAME     WR    RD  WR_BYTES   RD_BYTES     WR_LAT   RD_LAT  
librbd 56/s  47/s  33 MiB/s  2.5 MiB/s  232.68 ms  4.82 ms


ceph version 14.2.4-91.el8cp (23607558df3b077b6190cdf96cd8d9043aa2a1c5) nautilus (stable)

ceph-mon-14.2.4-91.el8cp.x86_64
ansible-2.8.7-1.el8ae.noarch
ceph-ansible-4.0.6-1.el8cp.noarch

Comment 14 errata-xmlrpc 2020-01-31 12:44:52 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0312