Bug 1351703 - Misleading space available on a pool is reported when adding RBD into existing pool
Summary: Misleading space available on a pool is reported when adding RBD into existing pool
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Storage Console
Classification: Red Hat Storage
Component: UI
Version: 2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 2
Assignee: kamlesh
QA Contact: Martin Kudlej
URL:
Whiteboard:
Depends On:
Blocks: Console-2-GA
 
Reported: 2016-06-30 15:38 UTC by Martin Bukatovic
Modified: 2016-08-23 19:56 UTC
CC List: 5 users

Fixed In Version: rhscon-ui-0.0.48-1.el7scon.noarch
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-08-23 19:56:17 UTC
Embargoed:


Attachments
screenshot 1: Add Block Storage page (40.63 KB, image/png) - 2016-06-30 15:39 UTC, Martin Bukatovic
screenshot 2: Pools list page (20.65 KB, image/png) - 2016-06-30 15:40 UTC, Martin Bukatovic
screenshot 3: Add Block Storage - Quotas (48.34 KB, image/png) - 2016-07-01 13:07 UTC, Martin Bukatovic
screenshot 4: Add Block Storage - overview before submission (28.23 KB, image/png) - 2016-07-01 13:15 UTC, Martin Bukatovic


Links
Red Hat Product Errata RHEA-2016:1754 (normal, SHIPPED_LIVE): New packages: Red Hat Storage Console 2.0 - last updated 2017-04-18 19:09:06 UTC

Description Martin Bukatovic 2016-06-30 15:38:16 UTC
Description of problem
======================

When one tries to create a RADOS block device (RBD) on an already existing
pool, a misleading value for the available space is reported.

Version-Release
===============

On RHSC 2.0 machine:

rhscon-ui-0.0.42-1.el7scon.noarch
rhscon-core-0.0.28-1.el7scon.x86_64
rhscon-ceph-0.0.27-1.el7scon.x86_64
rhscon-core-selinux-0.0.28-1.el7scon.noarch
ceph-installer-1.0.12-3.el7scon.noarch
ceph-ansible-1.0.5-23.el7scon.noarch

On Ceph machines:

rhscon-agent-0.0.13-1.el7scon.noarch
libcephfs1-10.2.2-5.el7cp.x86_64
ceph-osd-10.2.2-5.el7cp.x86_64
python-cephfs-10.2.2-5.el7cp.x86_64
ceph-common-10.2.2-5.el7cp.x86_64
ceph-base-10.2.2-5.el7cp.x86_64
ceph-selinux-10.2.2-5.el7cp.x86_64

How reproducible
================

100 %

Steps to Reproduce
==================

1. Install RHSC 2.0 following the documentation.
2. Accept a few nodes for the Ceph cluster.
3. Create a new Ceph cluster named 'alpha'.
4. Create a new RBD on this cluster (so that a Ceph pool is created
   as part of this step).
5. Start creating another RBD: click the "Add Storage" button, select RBD,
   then select "Choose existing pool" and pick the pool created in the
   previous step - but don't click Next yet, stay on this page.
6. Check the stats about space available on this pool (a CLI cross-check
   sketch follows these steps).
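
For the cross-check, the same figures can be read from the Ceph CLI on a
monitor node (a minimal sketch, assuming the cluster config path
/etc/ceph/alpha.conf and the pool name rbd_pool used later in this report):

~~~
# Cluster-wide and per-pool usage, including MAX AVAIL per pool
ceph -c /etc/ceph/alpha.conf df

# RBD images already created in the pool
rbd -c /etc/ceph/alpha.conf -p rbd_pool ls
~~~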

Actual results
==============

The "Add Block Storage" wizard reports that there is (see screenshot #1):

 * 5.0 GB available
 * 1.0 GB to be added (this is the target size I have selected in the form)
 * 4.0 GB remaining

but on the Pools list page, I see that `115.0 B of 13.3 GB used` (see
screenshot #2).

Moreover `ceph df` shows:

~~~
# ceph -c /etc/ceph/alpha.conf df 
GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED 
    40915M     40774M         141M          0.34 
POOLS:
    NAME         ID     USED     %USED     MAX AVAIL     OBJECTS 
    rbd_pool     1       115         0        13591M           4
~~~

Where does the 5.0 GB value come from?
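
For reference, the 5.0 GB figure cannot be derived from any of the numbers
above: with 40774 MB of raw space available and (presumably) 3x replication,
the per-pool MAX AVAIL of 13591 MB matches the 13.3 GB shown on the Pools
list page. A minimal sketch of CLI checks that would rule out other sources
of a hard 5 GB limit (same config path and pool name as above):

~~~
# Replica count of the pool; per-pool MAX AVAIL is roughly raw AVAIL / size
# (40774 MB / 3 = ~13591 MB, which matches both `ceph df` and the Pools page)
ceph -c /etc/ceph/alpha.conf osd pool get rbd_pool size

# Any quota configured on the pool (a quota could explain a hard 5 GB limit)
ceph -c /etc/ceph/alpha.conf osd pool get-quota rbd_pool
~~~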

Expected results
================

The "Add Block Storage" wizard reports the same size as both Pools list page
and ceph df command.

Comment 1 Martin Bukatovic 2016-06-30 15:39:40 UTC
Created attachment 1174654 [details]
screenshot 1: Add Block Storage page

Comment 2 Martin Bukatovic 2016-06-30 15:40:29 UTC
Created attachment 1174655 [details]
screenshot 2: Pools list page

Comment 3 Martin Bukatovic 2016-07-01 13:07:32 UTC
Created attachment 1174920 [details]
screenshot 3: Add Block Storage - Quotas

The misleading pool capacity value is also reported during Quotas setup when
one creates a new pool for an RBD - see attached screenshot 3.

Comment 4 Martin Bukatovic 2016-07-01 13:15:55 UTC
Created attachment 1174923 [details]
screenshot 4: Add Block Storage - overview before submission

(In reply to Martin Bukatovic from Description)
> Where does the 5.0 GB value come from?

This seems to be the value reported as "Optimized For" in the overview (the
last page of the Add Block Storage wizard). See attached screenshot 4.

What is the purpose/meaning of this value?

That said, this 'Optimized For' value is clearly not the capacity or size of
the pool, so the RHSC 2.0 UI should not present it as such anywhere.

Comment 5 Martin Bukatovic 2016-07-07 14:56:02 UTC
Providing additional information as requested during today's meeting.

OSD configuration:

~~~
# ceph -c /etc/ceph/alpha.conf osd tree
ID  WEIGHT  TYPE NAME                                           UP/DOWN REWEIGHT PRIMARY-AFFINITY
-10       0 root general
 -6       0     host dhcp-126-84.example.com
 -7       0     host dhcp-126-83.example.com
 -8       0     host dhcp-126-85.example.com
 -9       0     host dhcp-126-82.example.com
 -1 0.03998 root default
 -2 0.00999     host dhcp-126-82
  0 0.00999         osd.0                                            up  1.00000          1.00000
 -3 0.00999     host dhcp-126-83
  1 0.00999         osd.1                                            up  1.00000          1.00000
 -4 0.00999     host dhcp-126-84
  2 0.00999         osd.2                                            up  1.00000          1.00000
 -5 0.00999     host dhcp-126-85
  3 0.00999         osd.3                                            up  1.00000          1.00000
~~~

CRUSH and ruleset:

~~~
# ceph -c /etc/ceph/alpha.conf osd pool get rbd_pool crush_ruleset
crush_ruleset: 1
# ceph -c /etc/ceph/alpha.conf osd crush rule dump general
{
    "rule_id": 1,
    "rule_name": "general",
    "ruleset": 1,
    "type": 1,
    "min_size": 1,
    "max_size": 10,
    "steps": [
        {
            "op": "take",
            "item": -10,
            "item_name": "general"
        },
        {
            "op": "chooseleaf_firstn",
            "num": 0,
            "type": "host"
        },
        {
            "op": "emit"
        }
    ]
}
~~~
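
Note that the ruleset above takes the "general" root, and in the osd tree all
hosts under that root have weight 0, i.e. no OSDs actually back it. A minimal
sketch of how to map the pool's ruleset back to its CRUSH root and the items
under it (the same commands as above, gathered for convenience):

~~~
# Which ruleset the pool uses
ceph -c /etc/ceph/alpha.conf osd pool get rbd_pool crush_ruleset

# Which root the matching rule takes (the "take" step in the dump above)
ceph -c /etc/ceph/alpha.conf osd crush rule dump general

# What sits under that root (here: four hosts, all with weight 0)
ceph -c /etc/ceph/alpha.conf osd tree
~~~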

Comment 6 Martin Bukatovic 2016-07-11 17:28:53 UTC
(In reply to Martin Bukatovic from comment #5)
> Providing additional information as requested during today's meeting.
> [OSD tree and CRUSH ruleset output quoted in full in comment #5 above]

It turns out that this comment may be a little misleading.

While this output is from the cluster I used to report this BZ 1351703, the
problem visible in the CRUSH cluster map based on this output is not related
to this BZ: I was able to reproduce the issue on a cluster whose CRUSH map
has no such problems (I filed a separate BZ 1354603 for the CRUSH map
problem).

Comment 7 kamlesh 2016-07-20 10:26:59 UTC
Fixed in the rhscon-ui-0.0.48-1.el7scon.noarch UI build.

Comment 8 Martin Kudlej 2016-08-04 11:34:15 UTC
Tested with
ceph-ansible-1.0.5-31.el7scon.noarch
ceph-installer-1.0.14-1.el7scon.noarch
rhscon-ceph-0.0.39-1.el7scon.x86_64
rhscon-core-0.0.39-1.el7scon.x86_64
rhscon-core-selinux-0.0.39-1.el7scon.noarch
rhscon-ui-0.0.51-1.el7scon.noarch
and it works. Available sizes are correct.

Comment 10 errata-xmlrpc 2016-08-23 19:56:17 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2016:1754

