Bug 1840721 - gwcli reports wrong ec k+m value of EC pool
Summary: gwcli reports wrong ec k+m value of EC pool
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: iSCSI
Version: 4.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 5.0z4
Assignee: Xiubo Li
QA Contact: Gopi
URL:
Whiteboard:
Depends On:
Blocks: 1959686
 
Reported: 2020-05-27 13:44 UTC by Heðin
Modified: 2022-02-08 13:01 UTC (History)
CC: 7 users

Fixed In Version: ceph-iscsi-3.5-2.el8cp
Doc Type: Bug Fix
Doc Text:
.The `gwcli` tool now shows the correct erasure coded pool profile
Previously, the `gwcli` tool would show incorrect `k+m` values for an erasure coded pool. With this release, the `gwcli` tool pulls the erasure coded pool settings from the associated erasure coded profile, and the {storage-product} cluster shows the correct erasure coded pool profile.
Clone Of:
Environment:
Last Closed: 2022-02-08 13:01:20 UTC
Embargoed:




Links
Github ceph ceph-iscsi pull 240 (Merged): gwcli: fix the erasure pool profile's display info (last updated 2022-01-07 09:58:05 UTC)
Red Hat Product Errata RHBA-2022:0466 (last updated 2022-02-08 13:01:44 UTC)

Description Heðin 2020-05-27 13:44:14 UTC
Description of problem:
#1 Create ec-data pool, with k=8, m=3
#2 Enter gwcli
#3 ls /cluster/ceph/pools
...
o- ec83_data ......... [(9+2),
...

Version-Release number of selected component (if applicable):
$ ceph versions
{
    "mon": {
        "ceph version 14.2.4-125.el8cp (db63624068590e593c47150c7574d08c1ec0d3e4) nautilus (stable)": 3
    },
    "mgr": {
        "ceph version 14.2.4-125.el8cp (db63624068590e593c47150c7574d08c1ec0d3e4) nautilus (stable)": 3
    },
    "osd": {
        "ceph version 14.2.4-125.el8cp (db63624068590e593c47150c7574d08c1ec0d3e4) nautilus (stable)": 57
    },
    "mds": {},
    "tcmu-runner": {
        "ceph version 14.2.4-125.el8cp (db63624068590e593c47150c7574d08c1ec0d3e4) nautilus (stable)": 4
    },
    "overall": {
        "ceph version 14.2.4-125.el8cp (db63624068590e593c47150c7574d08c1ec0d3e4) nautilus (stable)": 67
    }
}


How reproducible:
Always

Steps to Reproduce:
#1 Create ec-data pool, with k=8, m=3
#2 Enter gwcli
#3 ls /cluster/ceph/pools
...
o- ec83_data ......... [(9+2),
...

Actual results:
(9+2)

Expected results:
(8+3)

Additional info:
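For reference, a pool matching the one in this report (k=8, m=3, rbd application, EC overwrites enabled, per the pool dump in comment 1) can be created roughly as follows; this is a sketch, and the profile name is illustrative:

# Create an 8+3 erasure-code profile (profile name is illustrative)
ceph osd erasure-code-profile set ec83profile k=8 m=3

# Create the data pool from that profile (pg_num/pgp_num as in comment 1)
ceph osd pool create ec83_data 128 128 erasure ec83profile

# Allow RBD to use the pool as an erasure-coded data pool
ceph osd pool set ec83_data allow_ec_overwrites true
ceph osd pool application enable ec83_data rbd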

Comment 1 Heðin 2020-05-27 13:53:15 UTC
size=k+m=8+3=11
min_size=k+1=8+1=9

The pool I have created atm for this is seen below.

pool X 'ec83_data' erasure size 11 min_size 9 crush_rule 1 object_hash rjenkins pg_num 128 pgp_num 128 autoscale_mode warn last_change 957 flags hashpspool,ec_overwrites,selfmanaged_snaps stripe_width 32768 application rbd
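The (9+2) that gwcli prints lines up with these pool-level values rather than with the profile, which can be checked directly (purely illustrative):

# Pool values from the dump above
size=11; min_size=9
echo "($min_size+$((size - min_size)))"   # prints (9+2): min_size and size-min_size, not the profile's 8 and 3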

Comment 2 Jason Dillaman 2020-05-28 00:37:06 UTC
'gwcli' really should pull the EC pool settings from the associated EC profile and not from the size/min-size.
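A minimal sketch of that lookup with the ceph CLI and jq (the pool name is taken from this report; the actual fix lives in gwcli itself, see the linked pull request):

# Find the pool's erasure-code profile, then read k and m from the profile itself
profile=$(ceph osd pool get ec83_data erasure_code_profile -f json | jq -r '.erasure_code_profile')
ceph osd erasure-code-profile get "$profile" -f json | jq -r '"(\(.k)+\(.m))"'   # -> (8+3)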

Comment 14 Gopi 2022-01-18 11:02:31 UTC
Working as expected.

sh-4.4# rpm -qa | grep  ceph-iscsi*
ceph-iscsi-3.5-2.el8cp.noarch

[ceph: root@ceph-rbd1-5-1gpatta-gz7tuk-node1-installer ~]# ceph osd erasure-code-profile set myprofile \
> k=8 \
> m=3 \
> crush-failure-domain=rack

[ceph: root@ceph-rbd1-5-1gpatta-gz7tuk-node1-installer ~]# ceph osd pool create ec_pool erasure myprofile
pool 'ec_pool' created

[root@ceph-rbd1-5-1gpatta-gz7tuk-node1-installer cephuser]# podman exec -it 14d8477a8314 sh
sh-4.4# gwcli
/> ls /cluster/ceph/pools/ec_pool
o- ec_pool ..................................................................... [(8+3), Commit: 0.00Y/129773816K (0%), Used: 0.00Y]
/>
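As an extra cross-check (not part of the transcript above), the values gwcli prints can be compared against the profile directly:

ceph osd erasure-code-profile get myprofile   # should list k=8, m=3 and crush-failure-domain=rack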

Comment 19 errata-xmlrpc 2022-02-08 13:01:20 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.0 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:0466

