Bug 1461867 - rbd size is 0 even if there is 500MB
Summary: rbd size is 0 even if there is 500MB
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Storage Console
Classification: Red Hat Storage
Component: Ceph Integration
Version: 3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Shubhendu Tripathi
QA Contact: sds-qe-bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-06-15 13:28 UTC by Martin Kudlej
Modified: 2018-11-19 05:41 UTC
CC: 4 users

Fixed In Version: tendrl-ceph-integration-3.0-alpha.6.el7scon.noarch.rpm
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-11-19 05:41:35 UTC
Embargoed:



Description Martin Kudlej 2017-06-15 13:28:09 UTC
Description of problem:
I've created an RBD and copied a 500 MB file to it. The current RBD usage is not shown in the UI, the API, or etcd.

$ etcdctl --endpoints=http://${HOSTNAME}:2379 get /clusters/f491fbab-ea71-49d2-8c82-986f628ffb67/Pools/0/Rbds/rbd2/used
0
$ curl -s -X GET -k -H "Authorization: Bearer $(cat token.txt)" ${URL}/GetClusterList | jq '.clusters[0].pools["0"].rbds'

{
  "rbd2": {
    "hash": "c4a106ae8faa691b1e7e6e073747caa1",
    "name": "rbd2",
    "updated_at": "2017-06-14 18:42:59.130814+00:00",
    "pool_id": "0",
    "flags": "",
    "size": "1024",
    "provisioned": "1073741824",
    "used": "0"
  },
  "rbd3": {
    "name": "rbd3",
    "updated_at": "2017-06-14 18:42:59.883503+00:00",
    "pool_id": "0",
    "flags": "",
    "size": "1024",
    "provisioned": "1073741824",
    "used": "0",
    "hash": "5cefacdf4480c6bc6fc64ec30fd5aaf4"
  }
}
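For comparison, the actual allocation can be read directly from Ceph with `rbd du` (a sketch; the pool name `rbd` is an assumption, since the report only gives the pool id "0"):

```shell
# Show provisioned vs. actually used space for the image from the report.
# Requires cluster admin credentials; the pool name "rbd" is assumed.
rbd du rbd/rbd2
```

If `rbd du` reports ~500 MB used while the etcd key above reads 0, the discrepancy is on the Tendrl sync side rather than in Ceph itself.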


Version-Release number of selected component (if applicable):
ceph-ansible-2.2.11-1.el7scon.noarch
ceph-base-11.2.0-0.el7.x86_64
ceph-common-11.2.0-0.el7.x86_64
ceph-installer-1.3.0-1.el7scon.noarch
ceph-mon-11.2.0-0.el7.x86_64
ceph-osd-11.2.0-0.el7.x86_64
ceph-selinux-11.2.0-0.el7.x86_64
etcd-3.1.7-1.el7.x86_64
libcephfs2-11.2.0-0.el7.x86_64
python-cephfs-11.2.0-0.el7.x86_64
python-etcd-0.4.5-1.noarch
rubygem-etcd-0.3.0-1.el7.noarch
tendrl-alerting-3.0-alpha.3.el7scon.noarch
tendrl-api-3.0-alpha.4.el7scon.noarch
tendrl-api-doc-3.0-alpha.4.el7scon.noarch
tendrl-api-httpd-3.0-alpha.4.el7scon.noarch
tendrl-ceph-integration-3.0-alpha.5.el7scon.noarch
tendrl-commons-3.0-alpha.9.el7scon.noarch
tendrl-dashboard-3.0-alpha.4.el7scon.noarch
tendrl-node-agent-3.0-alpha.9.el7scon.noarch
tendrl-node-monitoring-3.0-alpha.5.el7scon.noarch
tendrl-performance-monitoring-3.0-alpha.7.el7scon.noarch

How reproducible:
100%

Steps to Reproduce:
1. Create an RBD with size 1 GB.
2. Map the RBD.
3. Mount the RBD.
4. Copy a 500 MB file to the mounted RBD.
5. Check the RBD list page.
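The steps above can be sketched as follows (a minimal sketch; the pool name `rbd`, the image name `rbd1`, and the mount point are assumptions, and the client needs kernel RBD support plus cluster credentials):

```shell
# 1. Create a 1 GB RBD image (pool name "rbd" assumed; --size is in MB).
rbd create rbd/rbd1 --size 1024

# 2. Map the image to a local block device; "rbd map" prints the device path.
DEV=$(rbd map rbd/rbd1)

# 3. Put a filesystem on the device and mount it.
mkfs.xfs "$DEV"
mkdir -p /mnt/rbd1
mount "$DEV" /mnt/rbd1

# 4. Write a 500 MB file into the mounted filesystem.
dd if=/dev/zero of=/mnt/rbd1/file500m bs=1M count=500

# 5. Then check the RBD list page in the console; "used" should not be 0.
```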

Actual results:
The RBD used size is not correct (nor is the info in the chart); it shows 0 even though 500 MB are in use.

Expected results:
The used size on the RBD list page reflects the actual utilization.

Comment 3 Shubhendu Tripathi 2017-06-19 16:19:36 UTC
There was an issue with utilization sync for mapped RBDs. It is resolved as part of tendrl-ceph-integration-3.0-alpha.6.el7scon.noarch.rpm.
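To verify the fix, the same etcd key from the report can be re-checked after updating the package (a sketch; endpoint and cluster id are taken verbatim from the report, and the exact repo setup is not shown):

```shell
# Update the integration package to the fixed build.
yum update tendrl-ceph-integration

# Re-read the utilization key; after a sync cycle it should be non-zero.
etcdctl --endpoints=http://${HOSTNAME}:2379 \
  get /clusters/f491fbab-ea71-49d2-8c82-986f628ffb67/Pools/0/Rbds/rbd2/used
```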

Comment 6 Shubhendu Tripathi 2018-11-19 05:41:35 UTC
This product is EOL now

