Bug 1351703
| Summary: | Misleading space available on a pool is reported when adding RBD into existing pool | | |
| --- | --- | --- | --- |
| Product: | [Red Hat Storage] Red Hat Storage Console | Reporter: | Martin Bukatovic <mbukatov> |
| Component: | UI | Assignee: | kamlesh <kaverma> |
| Status: | CLOSED ERRATA | QA Contact: | Martin Kudlej <mkudlej> |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 2 | CC: | mkudlej, nthomas, sankarshan, shtripat, vsarmila |
| Target Milestone: | --- | | |
| Target Release: | 2 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | rhscon-ui-0.0.48-1.el7scon.noarch | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2016-08-23 19:56:17 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1353450 | | |
| Attachments: | | | |
Description
Martin Bukatovic
2016-06-30 15:38:16 UTC
Created attachment 1174654 [details]
screenshot 1: Add Block Storage page
Created attachment 1174655 [details]
screenshot 2: Pools list page
Created attachment 1174920 [details]
screenshot 3: Add Block Storage - Quotas
A misleading value for the pool capacity is reported during Quotas setup when one
creates a new pool for an RBD - see attached screenshot 3.
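
For reference, the wizard's Quotas step presumably maps onto Ceph's per-pool quota mechanism. A minimal CLI sketch, assuming the pool name rbd_pool and the cluster config path /etc/ceph/alpha.conf that appear later in this report (the 5 GiB figure is purely illustrative):

~~~
# Show the quota currently set on the pool (max objects / max bytes);
# "N/A" means no quota is set and only raw cluster capacity limits the pool.
ceph -c /etc/ceph/alpha.conf osd pool get-quota rbd_pool

# Illustrative only: cap the pool at 5 GiB, comparable to the value shown by the wizard.
ceph -c /etc/ceph/alpha.conf osd pool set-quota rbd_pool max_bytes $((5 * 1024 * 1024 * 1024))
~~~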
Created attachment 1174923 [details]
screenshot 4: Add Block Storage - overview before submission

(In reply to Martin Bukatovic from Description)
> Where does the 5.0 GB value come from?

This seems to be the value reported as "Optimized For" in the overview (the last page of the Add Block Storage wizard). See attached screenshot 4. What is the purpose/meaning of this value? That said, this "Optimized For" value is clearly not the capacity or size of the pool, so the RHSC 2.0 UI should not use it that way anywhere.

Providing additional information as requested during today's meeting.

OSD configuration:

~~~
# ceph -c /etc/ceph/alpha.conf osd tree
ID  WEIGHT  TYPE NAME                         UP/DOWN REWEIGHT PRIMARY-AFFINITY
-10       0 root general
 -6       0     host dhcp-126-84.example.com
 -7       0     host dhcp-126-83.example.com
 -8       0     host dhcp-126-85.example.com
 -9       0     host dhcp-126-82.example.com
 -1 0.03998 root default
 -2 0.00999     host dhcp-126-82
  0 0.00999         osd.0                          up  1.00000          1.00000
 -3 0.00999     host dhcp-126-83
  1 0.00999         osd.1                          up  1.00000          1.00000
 -4 0.00999     host dhcp-126-84
  2 0.00999         osd.2                          up  1.00000          1.00000
 -5 0.00999     host dhcp-126-85
  3 0.00999         osd.3                          up  1.00000          1.00000
~~~

CRUSH and ruleset:

~~~
# ceph -c /etc/ceph/alpha.conf osd pool get rbd_pool crush_ruleset
crush_ruleset: 1
# ceph -c /etc/ceph/alpha.conf osd crush rule dump general
{
    "rule_id": 1,
    "rule_name": "general",
    "ruleset": 1,
    "type": 1,
    "min_size": 1,
    "max_size": 10,
    "steps": [
        {
            "op": "take",
            "item": -10,
            "item_name": "general"
        },
        {
            "op": "chooseleaf_firstn",
            "num": 0,
            "type": "host"
        },
        {
            "op": "emit"
        }
    ]
}
~~~

(In reply to Martin Bukatovic from comment #5)
> Providing additional information as requested during today's meeting.
> [OSD configuration and CRUSH ruleset output quoted above]

It turns out that comment #5 may be a little misleading. While this output is from the cluster I used to report this BZ 1351703, the problem that can be seen in the CRUSH cluster map based on this output is not related to BZ 1351703, because I was able to reproduce the bug on a cluster whose CRUSH cluster map has none of the problems shown here (I created a separate BZ 1354603 for that CRUSH problem).

Fixed in the rhscon-ui-0.0.48-1.el7scon.noarch UI build.

Tested with
ceph-ansible-1.0.5-31.el7scon.noarch
ceph-installer-1.0.14-1.el7scon.noarch
rhscon-ceph-0.0.39-1.el7scon.x86_64
rhscon-core-0.0.39-1.el7scon.x86_64
rhscon-core-selinux-0.0.39-1.el7scon.noarch
rhscon-ui-0.0.51-1.el7scon.noarch
and it works. Available sizes are correct.
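
As a rough cross-check of the available sizes shown in the UI, one can compare them with what Ceph itself reports. A sketch, assuming the same alpha cluster configuration used in the comments above; whether the UI value is meant to match MAX AVAIL exactly is an assumption, not something confirmed in this BZ:

~~~
# Global and per-pool usage as Ceph sees it; the MAX AVAIL column for the pool
# is the figure the UI's available space presumably corresponds to.
ceph -c /etc/ceph/alpha.conf df

# Per-pool object and space statistics from the RADOS layer.
rados -c /etc/ceph/alpha.conf df
~~~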
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2016:1754