Bug 1339533
Summary: Number of PGs
Product: [Red Hat Storage] Red Hat Storage Console
Reporter: Lubos Trilety <ltrilety>
Component: core
Assignee: Shubhendu Tripathi <shtripat>
Status: CLOSED ERRATA
QA Contact: Daniel Horák <dahorak>
Severity: medium
Docs Contact:
Priority: unspecified
Version: 2
CC: adeza, aschoen, ceph-eng-bugs, gmeno, jelopez, mkudlej, nthomas, sankarshan, shtripat, vsarmila
Target Milestone: ---
Target Release: 2
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: rhscon-ceph-0.0.36-1.el7scon.x86_64
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-08-23 19:51:58 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1344195
Description
Lubos Trilety
2016-05-25 09:29:20 UTC
Created attachment 1161361 [details]
bad number PGs
Filled in a bad number as the number of PGs
Created attachment 1161362 [details]
text PG
Filled in text as the number of PGs
Design has changed, PGs are auto-populated.

The design is different and it works now:
1) all clusters have a default value of 128 PGs if there are fewer than 50 nodes
2) if there are more than 50 nodes, the user sets the number of PGs (see the sketch below)

It works on small clusters.

ceph-ansible-1.0.5-27.el7scon.noarch
ceph-installer-1.0.14-1.el7scon.noarch
rhscon-ceph-0.0.33-1.el7scon.x86_64
rhscon-core-0.0.34-1.el7scon.x86_64
rhscon-core-selinux-0.0.34-1.el7scon.noarch
rhscon-ui-0.0.48-1.el7scon.noarch

Created attachment 1182839 [details]
Number of PGs for each pool is 2
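A minimal shell sketch of the PG-selection rule described above; NODE_COUNT and USER_PG_NUM are illustrative names, not taken from the actual rhscon-ceph code:

# sketch only: pick the PG count the way the new design is described
if [ "$NODE_COUNT" -lt 50 ]; then
    PG_NUM=128              # fewer than 50 nodes: fixed default of 128 PGs
else
    PG_NUM="$USER_PG_NUM"   # otherwise: use the value set by the user
fi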
It is not working properly on a larger cluster with more than 50 OSDs.
I have a cluster with 40 OSD nodes, 3 OSDs per node = 120 OSDs total.
I'm able to choose the desired "Placement Groups/Optimal Pool Size" during pool creation (in my case between 128 and 4096 PGs), but no matter what value I choose, the created pool always has only 2 PGs (as you can see in attachment 1182839 [details]).
It is also visible in the output of the ceph status command - 4 pools, 8 pgs:
# ceph --cluster TestClusterA -s
cluster e53ac897-18c3-44e7-aac9-805d45f4bade
health HEALTH_OK
monmap e3: 3 mons at {dhcp42-124=10.70.42.124:6789/0,dhcp42-28=10.70.42.28:6789/0,dhcp42-36=10.70.42.36:6789/0}
election epoch 10, quorum 0,1,2 dhcp42-28,dhcp42-36,dhcp42-124
osdmap e751: 120 osds: 120 up, 120 in
flags sortbitwise
pgmap v3707: 8 pgs, 4 pools, 6650 bytes data, 16 objects
8443 MB used, 118 TB / 118 TB avail
8 active+clean
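For reference, the per-pool PG count can also be checked directly on the MON node with standard ceph commands (the pool name below is only a placeholder):

# ceph --cluster TestClusterA osd lspools
# ceph --cluster TestClusterA osd pool get <pool-name> pg_num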
USM Server:
ceph-ansible-1.0.5-31.el7scon.noarch
ceph-installer-1.0.14-1.el7scon.noarch
mongodb-2.6.5-4.1.el7.x86_64
mongodb-server-2.6.5-4.1.el7.x86_64
rhscon-ceph-0.0.33-1.el7scon.x86_64
rhscon-core-0.0.34-1.el7scon.x86_64
rhscon-core-selinux-0.0.34-1.el7scon.noarch
rhscon-ui-0.0.48-1.el7scon.noarch
Ceph MON Server:
calamari-server-1.4.7-1.el7cp.x86_64
ceph-base-10.2.2-24.el7cp.x86_64
ceph-common-10.2.2-24.el7cp.x86_64
ceph-mon-10.2.2-24.el7cp.x86_64
ceph-selinux-10.2.2-24.el7cp.x86_64
libcephfs1-10.2.2-24.el7cp.x86_64
python-cephfs-10.2.2-24.el7cp.x86_64
rhscon-agent-0.0.15-1.el7scon.noarch
rhscon-core-selinux-0.0.34-1.el7scon.noarch
Not ceph-ansible, and I don't know where it goes. Shubhendu, would you please fix the sub-component field?

By mistake it was set to ceph-ansible by me. Corrected the same.

Tested on a cluster with 40 OSD nodes / 3 OSDs per node (total 120 OSDs).
Tested with three pools with different numbers of PGs / optimal pool sizes:
* 128PGs / 517GB
* 512PGs / 2TB
* 4096PGs / 16TB
All pools were created with the proper/configured number of PGs.
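For comparison, creating a pool with an explicit PG count straight from the ceph CLI would look roughly like this (the pool name and the value 512 are illustrative only):

# ceph --cluster TestClusterA osd pool create <pool-name> 512 512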
USM Server (RHEL 7.2):
ceph-ansible-1.0.5-32.el7scon.noarch
ceph-installer-1.0.14-1.el7scon.noarch
rhscon-ceph-0.0.39-1.el7scon.x86_64
rhscon-core-0.0.39-1.el7scon.x86_64
rhscon-core-selinux-0.0.39-1.el7scon.noarch
rhscon-ui-0.0.51-1.el7scon.noarch
Ceph MON Server (RHEL 7.2):
calamari-server-1.4.8-1.el7cp.x86_64
ceph-base-10.2.2-33.el7cp.x86_64
ceph-common-10.2.2-33.el7cp.x86_64
ceph-mon-10.2.2-33.el7cp.x86_64
ceph-selinux-10.2.2-33.el7cp.x86_64
libcephfs1-10.2.2-33.el7cp.x86_64
python-cephfs-10.2.2-33.el7cp.x86_64
rhscon-agent-0.0.16-1.el7scon.noarch
rhscon-core-selinux-0.0.39-1.el7scon.noarch
Ceph OSD Server (RHEL 7.2):
ceph-base-10.2.2-33.el7cp.x86_64
ceph-common-10.2.2-33.el7cp.x86_64
ceph-osd-10.2.2-33.el7cp.x86_64
ceph-selinux-10.2.2-33.el7cp.x86_64
libcephfs1-10.2.2-33.el7cp.x86_64
python-cephfs-10.2.2-33.el7cp.x86_64
rhscon-agent-0.0.16-1.el7scon.noarch
rhscon-core-selinux-0.0.39-1.el7scon.noarch
>> VERIFIED
Created attachment 1189741 [details]
PGNUM field is static in the UI and can't be modified
Screen capture from 2016-08-09 showing the PGNUM field as static when creating a pool
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2016:1754