Description of problem:
Cluster has a down OSD and is rebalancing data. Disk IOPS and disk throughput show negative values. Once the down OSD is returned to service (while still rebalancing), disk IOPS and throughput show normal values again.

Version-Release number of selected component (if applicable):
ceph-ansible-3.0.31-1.el7cp.noarch              Mon Apr 30 11:38:59 2018
ceph-common-12.2.4-6.el7cp.x86_64               Mon Apr 30 11:38:47 2018
ceph-fuse-12.2.4-6.el7cp.x86_64                 Mon Apr 30 11:38:49 2018
ceph-installer-1.3.0-1.el7scon.noarch           Tue Jul  4 09:31:42 2017
libcephfs2-12.2.4-6.el7cp.x86_64                Mon Apr 30 11:37:29 2018
librados2-12.2.4-6.el7cp.x86_64                 Mon Apr 30 11:37:27 2018
libradosstriper1-12.2.4-6.el7cp.x86_64          Mon Apr 30 11:37:30 2018
librbd1-12.2.4-6.el7cp.x86_64                   Mon Apr 30 11:37:27 2018
python-cephfs-12.2.4-6.el7cp.x86_64             Mon Apr 30 11:37:29 2018
python-rados-12.2.4-6.el7cp.x86_64              Mon Apr 30 11:37:27 2018
python-rbd-12.2.4-6.el7cp.x86_64                Mon Apr 30 11:37:29 2018
cephmetrics-1.0-8.el7cp.x86_64                  Tue Mar 13 15:38:03 2018
cephmetrics-ansible-1.0-8.el7cp.x86_64          Tue Mar 13 15:38:04 2018
cephmetrics-grafana-plugins-1.0-8.el7cp.x86_64  Tue Mar 13 15:37:59 2018

How reproducible:
First observation

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
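One plausible (unconfirmed) explanation for the symptom, offered here only as an assumption and not as a diagnosis of the cephmetrics code: if the dashboard derives IOPS/throughput as a raw delta between two samples of a monotonic disk counter, then a device dropping out and re-registering can make the counter appear to go backwards, producing a negative rate until the next clean sample pair. A minimal sketch of a reset-safe rate calculation (all names here are hypothetical, not from cephmetrics):

```python
def rate(prev_count, curr_count, interval_s):
    """Per-second rate from two samples of a monotonic counter.

    Hypothetical helper: clamps to 0.0 when the counter appears to have
    reset (delta < 0, e.g. a disk/OSD dropped out and came back), instead
    of reporting a negative rate on the graph.
    """
    delta = curr_count - prev_count
    if delta < 0 or interval_s <= 0:
        return 0.0
    return delta / interval_s
```

With such a guard, a sample pair straddling the OSD outage would render as 0 rather than as a negative spike.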
Updating the QA Contact to Hemant. Hemant will reroute this to the appropriate QE Associate. Regards, Giri