Bug 1339533 - Number of PGs
Summary: Number of PGs
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Storage Console
Classification: Red Hat Storage
Component: core
Version: 2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 2
Assignee: Shubhendu Tripathi
QA Contact: Daniel Horák
URL:
Whiteboard:
Depends On:
Blocks: Console-2-DevFreeze
 
Reported: 2016-05-25 09:29 UTC by Lubos Trilety
Modified: 2016-08-23 19:51 UTC
CC List: 10 users

Fixed In Version: rhscon-ceph-0.0.36-1.el7scon.x86_64
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-08-23 19:51:58 UTC
Embargoed:


Attachments
  bad number PGs (138.14 KB, image/png) - 2016-05-25 09:44 UTC, Lubos Trilety
  text PG (132.46 KB, image/png) - 2016-05-25 09:44 UTC, Lubos Trilety
  Number of PGs for each pool is 2 (80.06 KB, image/png) - 2016-07-22 11:52 UTC, Daniel Horák
  PGNUM field is UI static and can't be modified (71.33 KB, image/tiff) - 2016-08-10 17:15 UTC, Jean-Charles Lopez


Links
  Gerrithub.io 284941 - 2016-07-22 17:15:26 UTC
  Red Hat Product Errata RHEA-2016:1754 (SHIPPED_LIVE) - New packages: Red Hat Storage Console 2.0 - 2017-04-18 19:09:06 UTC

Description Lubos Trilety 2016-05-25 09:29:20 UTC
Description of problem:
During pool creation there is a field where the number of placement groups can be filled in. The value of this field is not validated like the other numeric fields; moreover, the UI ignores the field completely.

Version-Release number of selected component (if applicable):
rhscon-core-0.0.19-1.el7scon.x86_64
rhscon-ceph-0.0.18-1.el7scon.x86_64
rhscon-ui-0.0.34-1.el7scon.noarch

How reproducible:
100%

Steps to Reproduce:
1. Start pool creation and look for 'Placement Groups'
2. Fill in some number and finish the pool creation.
3. Check the POST data

The POST looks like this:
POST http://<hostname>:80...rs/0ecc4703-4ea7-457d-8c39-ad227b594673/storages
{"name":"test_pool","profile":"general","size":"128GB","options":{"pgnum":"128"},"type":"replicated","replicas":3}


where 'pgnum' is still the automatically calculated value, regardless of what was entered in the field.

Actual results:
Any value can be entered as the number of placement groups, but it has no effect on what the UI posts. The UI always uses the automatically calculated number as 'pgnum'.


Expected results:
The UI posts the manually entered number of PGs as 'pgnum'.
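
For illustration, a minimal sketch of the expected request, assuming the user entered 512 in the field (the pgnum value is arbitrary, and the endpoint is left truncated exactly as in the captured request above, so the real path must be substituted before running):

  # Hypothetical expected POST: 'pgnum' carries the user-entered value
  curl -X POST "http://<hostname>:80...rs/0ecc4703-4ea7-457d-8c39-ad227b594673/storages" \
       -H "Content-Type: application/json" \
       -d '{"name":"test_pool","profile":"general","size":"128GB","options":{"pgnum":"512"},"type":"replicated","replicas":3}'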

Additional info:

Comment 1 Lubos Trilety 2016-05-25 09:44:02 UTC
Created attachment 1161361 [details]
bad number PGs

Filed bad number as number of PGs

Comment 2 Lubos Trilety 2016-05-25 09:44:47 UTC
Created attachment 1161362 [details]
text PG

filed text as number of PGs

Comment 3 Nishanth Thomas 2016-06-14 12:40:16 UTC
The design has changed; PGs are auto-populated.

Comment 4 Martin Kudlej 2016-07-22 11:34:48 UTC
The design is different now and it works:
1) all clusters get the default value of 128 PGs if there are fewer than 50 nodes
2) if there are more than 50 nodes, the user sets the number of PGs (see the sketch below)

It works on small clusters.
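
A minimal sketch of that selection rule, assuming the threshold of 50 applies to the node count reported for the cluster (illustrative shell, not the actual console code):

  # Assumed behaviour: small clusters always get the 128 PG default,
  # larger clusters pass the user's choice through unchanged.
  NODE_COUNT=40        # hypothetical cluster size
  USER_PGNUM=512       # value chosen in the UI
  if [ "$NODE_COUNT" -lt 50 ]; then
      PGNUM=128
  else
      PGNUM="$USER_PGNUM"
  fi
  echo "pgnum=$PGNUM"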

ceph-ansible-1.0.5-27.el7scon.noarch
ceph-installer-1.0.14-1.el7scon.noarch
rhscon-ceph-0.0.33-1.el7scon.x86_64
rhscon-core-0.0.34-1.el7scon.x86_64
rhscon-core-selinux-0.0.34-1.el7scon.noarch
rhscon-ui-0.0.48-1.el7scon.noarch

Comment 5 Daniel Horák 2016-07-22 11:52:18 UTC
Created attachment 1182839 [details]
Number of PGs for each pool is 2

Comment 6 Daniel Horák 2016-07-22 11:57:19 UTC
It is not working properly on a larger cluster with more than 50 OSDs.

I have a cluster with 40 OSD nodes, 3 OSDs per node = 120 OSDs total.

I'm able to choose the desired "Placement Groups/Optimal Pool Size" during pool creation (in my case between 128 and 4096 PGs), but no matter what value I choose, the created pool always has only 2 PGs (as you can see in attachment 1182839 [details]).

It is also visible in the output of the ceph status command - 4 pools, 8 PGs:
# ceph --cluster TestClusterA -s 
    cluster e53ac897-18c3-44e7-aac9-805d45f4bade
     health HEALTH_OK
     monmap e3: 3 mons at {dhcp42-124=10.70.42.124:6789/0,dhcp42-28=10.70.42.28:6789/0,dhcp42-36=10.70.42.36:6789/0}
            election epoch 10, quorum 0,1,2 dhcp42-28,dhcp42-36,dhcp42-124
     osdmap e751: 120 osds: 120 up, 120 in
            flags sortbitwise
      pgmap v3707: 8 pgs, 4 pools, 6650 bytes data, 16 objects
            8443 MB used, 118 TB / 118 TB avail
                   8 active+clean
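
For reference, the per-pool PG count can also be queried directly (the pool name below is a placeholder):

  # pg_num of a single pool
  ceph --cluster TestClusterA osd pool get <pool-name> pg_num
  # pg_num of all pools
  ceph --cluster TestClusterA osd dump | grep pg_num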

USM Server:
  ceph-ansible-1.0.5-31.el7scon.noarch
  ceph-installer-1.0.14-1.el7scon.noarch
  mongodb-2.6.5-4.1.el7.x86_64
  mongodb-server-2.6.5-4.1.el7.x86_64
  rhscon-ceph-0.0.33-1.el7scon.x86_64
  rhscon-core-0.0.34-1.el7scon.x86_64
  rhscon-core-selinux-0.0.34-1.el7scon.noarch
  rhscon-ui-0.0.48-1.el7scon.noarch

Ceph MON Server:
  calamari-server-1.4.7-1.el7cp.x86_64
  ceph-base-10.2.2-24.el7cp.x86_64
  ceph-common-10.2.2-24.el7cp.x86_64
  ceph-mon-10.2.2-24.el7cp.x86_64
  ceph-selinux-10.2.2-24.el7cp.x86_64
  libcephfs1-10.2.2-24.el7cp.x86_64
  python-cephfs-10.2.2-24.el7cp.x86_64
  rhscon-agent-0.0.15-1.el7scon.noarch
  rhscon-core-selinux-0.0.34-1.el7scon.noarch

Comment 7 Christina Meno 2016-07-22 19:40:01 UTC
This is not ceph-ansible and I don't know where it goes. Shubhendu, would you please fix the sub-component field?

Comment 8 Shubhendu Tripathi 2016-07-22 19:43:57 UTC
It was set to ceph-ansible by me by mistake. Corrected the same.

Comment 9 Daniel Horák 2016-08-05 06:13:40 UTC
Tested on a cluster with 40 OSD nodes / 3 OSDs per node (total 120 OSDs).

Tested with three pools with different numbers of PGs / optimal pool sizes:
  * 128PGs / 517GB
  * 512PGs / 2TB
  * 4096PGs / 16TB

All pools were created with the proper/configured number of PGs.
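
For context, these values are consistent with the common Ceph rule of thumb of roughly (number of OSDs x 100) / replica count placement groups in total, rounded to the nearest power of two; this is general upstream guidance, not necessarily the exact formula the console uses:

  # 120 OSDs, 3 replicas: 120 * 100 / 3 = 4000, nearest power of two = 4096
  echo $(( 120 * 100 / 3 ))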

USM Server (RHEL 7.2):
  ceph-ansible-1.0.5-32.el7scon.noarch
  ceph-installer-1.0.14-1.el7scon.noarch
  rhscon-ceph-0.0.39-1.el7scon.x86_64
  rhscon-core-0.0.39-1.el7scon.x86_64
  rhscon-core-selinux-0.0.39-1.el7scon.noarch
  rhscon-ui-0.0.51-1.el7scon.noarch

Ceph MON Server (RHEL 7.2):
  calamari-server-1.4.8-1.el7cp.x86_64
  ceph-base-10.2.2-33.el7cp.x86_64
  ceph-common-10.2.2-33.el7cp.x86_64
  ceph-mon-10.2.2-33.el7cp.x86_64
  ceph-selinux-10.2.2-33.el7cp.x86_64
  libcephfs1-10.2.2-33.el7cp.x86_64
  python-cephfs-10.2.2-33.el7cp.x86_64
  rhscon-agent-0.0.16-1.el7scon.noarch
  rhscon-core-selinux-0.0.39-1.el7scon.noarch

Ceph OSD Server (RHEL 7.2):
  ceph-base-10.2.2-33.el7cp.x86_64
  ceph-common-10.2.2-33.el7cp.x86_64
  ceph-osd-10.2.2-33.el7cp.x86_64
  ceph-selinux-10.2.2-33.el7cp.x86_64
  libcephfs1-10.2.2-33.el7cp.x86_64
  python-cephfs-10.2.2-33.el7cp.x86_64
  rhscon-agent-0.0.16-1.el7scon.noarch
  rhscon-core-selinux-0.0.39-1.el7scon.noarch

>> VERIFIED

Comment 10 Jean-Charles Lopez 2016-08-10 17:15:42 UTC
Created attachment 1189741 [details]
PGNUM field is UI static and can't be modified

Screen capture from 2016-08-09 showing the PGNUM field as static when creating a pool

Comment 12 errata-xmlrpc 2016-08-23 19:51:58 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2016:1754

