Description of problem:
#1 Create ec-data pool, with k=8, m=3
#2 Enter gwcli
#3 ls /cluster/ceph/pools
...
o- ec83_data ......... [(9+2), ...

Version-Release number of selected component (if applicable):
$ ceph versions
{
    "mon": {
        "ceph version 14.2.4-125.el8cp (db63624068590e593c47150c7574d08c1ec0d3e4) nautilus (stable)": 3
    },
    "mgr": {
        "ceph version 14.2.4-125.el8cp (db63624068590e593c47150c7574d08c1ec0d3e4) nautilus (stable)": 3
    },
    "osd": {
        "ceph version 14.2.4-125.el8cp (db63624068590e593c47150c7574d08c1ec0d3e4) nautilus (stable)": 57
    },
    "mds": {},
    "tcmu-runner": {
        "ceph version 14.2.4-125.el8cp (db63624068590e593c47150c7574d08c1ec0d3e4) nautilus (stable)": 4
    },
    "overall": {
        "ceph version 14.2.4-125.el8cp (db63624068590e593c47150c7574d08c1ec0d3e4) nautilus (stable)": 67
    }
}

How reproducible:
Always

Steps to Reproduce:
#1 Create ec-data pool, with k=8, m=3
#2 Enter gwcli
#3 ls /cluster/ceph/pools
...
o- ec83_data ......... [(9+2), ...

Actual results:
gwcli lists the ec83_data pool as (9+2).

Expected results:
(8+3)

Additional info:
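For convenience, a concrete form of the reproduction steps (the profile name below is a placeholder chosen for this sketch, not taken from the report):

ceph osd erasure-code-profile set ec83profile k=8 m=3
ceph osd pool create ec83_data 128 128 erasure ec83profile
ceph osd pool set ec83_data allow_ec_overwrites true
ceph osd pool application enable ec83_data rbd
gwcli
/> ls /cluster/ceph/pools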
size = k + m = 8 + 3 = 11
min_size = k + 1 = 8 + 1 = 9

Note that the (9+2) shown by gwcli is consistent with it deriving the chunk counts from the pool itself rather than the profile: 9 = min_size and 2 = size - min_size.

The pool I have created at the moment for this is seen below:

pool X 'ec83_data' erasure size 11 min_size 9 crush_rule 1 object_hash rjenkins pg_num 128 pgp_num 128 autoscale_mode warn last_change 957 flags hashpspool,ec_overwrites,selfmanaged_snaps stripe_width 32768 application rbd
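These relationships can be checked directly against the pool (the pool name is the one from this report; the expected values in the comments follow from k=8, m=3):

ceph osd pool get ec83_data size        # -> size: 11      (k + m)
ceph osd pool get ec83_data min_size    # -> min_size: 9   (k + 1)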
'gwcli' really should pull the EC pool settings (k and m) from the associated erasure-code profile rather than deriving them from the pool's size/min_size.
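A minimal sketch of that approach using only existing ceph CLI commands (the pool name is from this report; the awk parsing is an illustration, not the actual gwcli/ceph-iscsi code):

POOL=ec83_data
# resolve the erasure-code profile the pool was created from
PROFILE=$(ceph osd pool get "$POOL" erasure_code_profile | awk '{print $2}')
# read k and m as recorded in that profile (k=8, m=3 here)
K=$(ceph osd erasure-code-profile get "$PROFILE" | awk -F= '$1=="k"{print $2}')
M=$(ceph osd erasure-code-profile get "$PROFILE" | awk -F= '$1=="m"{print $2}')
echo "(${K}+${M})"    # -> (8+3), matching the profile rather than (9+2)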
Working as expected.

sh-4.4# rpm -qa | grep ceph-iscsi*
ceph-iscsi-3.5-2.el8cp.noarch

[ceph: root@ceph-rbd1-5-1gpatta-gz7tuk-node1-installer ~]# ceph osd erasure-code-profile set myprofile \
> k=8 \
> m=3 \
> crush-failure-domain=rack
[ceph: root@ceph-rbd1-5-1gpatta-gz7tuk-node1-installer ~]# ceph osd pool create ec_pool erasure myprofile
pool 'ec_pool' created

[root@ceph-rbd1-5-1gpatta-gz7tuk-node1-installer cephuser]# podman exec -it 14d8477a8314 sh
sh-4.4# gwcli
/> ls /cluster/ceph/pools/ec_pool
o- ec_pool ..................................................................... [(8+3), Commit: 0.00Y/129773816K (0%), Used: 0.00Y]
/>
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat Ceph Storage 5.0 Bug Fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:0466