Bug 1339480 - EC pool creation succeeds in backend but shows failed in UI
Summary: EC pool creation succeeds in backend but shows failed in UI
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Storage Console
Classification: Red Hat Storage
Component: core
Version: 2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: 2
Assignee: Shubhendu Tripathi
QA Contact: sds-qe-bugs
URL:
Whiteboard:
Duplicates: 1335525 1348652
Depends On: 1329190
Blocks: Console-2-Feature-Freeze
 
Reported: 2016-05-25 07:32 UTC by Martin Kudlej
Modified: 2018-11-19 05:32 UTC
CC List: 4 users

Fixed In Version: rhscon-ceph-0.0.29-1.el7scon
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-11-19 05:32:27 UTC
Embargoed:


Attachments


Links:
Gerrithub.io 281972 (last updated 2016-06-28 06:52:15 UTC)

Description Martin Kudlej 2016-05-25 07:32:27 UTC
Created attachment 1161326
logs from monitor node with calamari

Description of problem:
I tried to create an erasure-coded pool in USM. The Calamari task failed (/var/log/calamari/cthulhu.log):
2016-05-25 01:52:21,818 - ERROR - calamari.request_collection Request 92cf45c6-db36-40ad-b2b2-3cffe6c6b28c experienced an error: can not change the size of an erasure-coded pool
2016-05-25 01:52:21,818 - ERROR - calamari.request_collection Request 92cf45c6-db36-40ad-b2b2-3cffe6c6b28c experienced an error: can not change the size of an erasure-coded pool

But it seems that pool has been created:
$ ceph -c /etc/ceph/testcluster.conf osd pool ls detail
pool 1 'testpool' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 33 flags hashpspool stripe_width 0
	removed_snaps [1~3]
--> pool 2 'test2pool' erasure size 3 min_size 2 crush_ruleset 1 object_hash rjenkins pg_num 128 pgp_num 128 last_change 35 flags hashpspool stripe_width 4096

$ ceph -c /etc/ceph/testcluster.conf osd pool stats
pool testpool id 1
  nothing is going on

--> pool test2pool id 2
      nothing is going on

Version-Release number of selected component (if applicable):
calamari-server-1.4.0-0.9.rc12.el7cp.x86_64
ceph-base-10.2.1-6.el7cp.x86_64
ceph-common-10.2.1-6.el7cp.x86_64
ceph-mon-10.2.1-6.el7cp.x86_64
ceph-selinux-10.2.1-6.el7cp.x86_64
libcephfs1-10.2.1-6.el7cp.x86_64
python-cephfs-10.2.1-6.el7cp.x86_64
rhscon-agent-0.0.8-1.el7scon.noarch

Steps to Reproduce:
1. install USM
2. create cluster
3. try to create erasure-coded pool

Actual results:
Calamari reports an error, but the pool is created and, according to the ceph commands, is also in a good state.
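
For context, the "size" of an erasure-coded pool is derived from its erasure-code profile (k data chunks plus m coding chunks) and cannot be changed after creation, which is what the Calamari error is complaining about. A minimal way to inspect this from the CLI (the profile name "default" below is an assumption, not taken from the logs):

$ ceph -c /etc/ceph/testcluster.conf osd pool get test2pool erasure_code_profile
$ ceph -c /etc/ceph/testcluster.conf osd erasure-code-profile get default
# the size 3 reported for test2pool above corresponds to k + m of its profile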

Comment 2 Christina Meno 2016-05-25 20:12:51 UTC
root@vagrant-cent7:~# ceph osd pool create ecpool 12 12 erasure
pool 'ecpool' created
root@vagrant-cent7:~# ceph osd pool set ecpool size 10
Error ENOTSUP: can not change the size of an erasure-coded pool
root@vagrant-cent7:~#
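
A minimal sketch of the supported flow, where the redundancy of an EC pool is chosen through the erasure-code profile at creation time rather than via "osd pool set size" afterwards (the profile name and k/m values are illustrative, not taken from this bug):

root@vagrant-cent7:~# ceph osd erasure-code-profile set ecprofile k=2 m=1
root@vagrant-cent7:~# ceph osd pool create ecpool2 12 12 erasure ecprofile
pool 'ecpool2' created
root@vagrant-cent7:~# ceph osd pool get ecpool2 size
size: 3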

Comment 4 Nishanth Thomas 2016-06-06 15:58:49 UTC
Could you please provide more information about this bug? The description, the comments, and the heading are confusing.

Comment 5 Shubhendu Tripathi 2016-06-10 06:59:53 UTC
@Martin, there is already BZ#1329190 in calamari, where the EC pool creation actually succeeds in the back-end but the task is reported as failed.

Once the calamari issue is resolved, this should be taken care of.

Comment 6 Sharmilla Abhilash 2016-06-23 06:56:26 UTC
*** Bug 1348652 has been marked as a duplicate of this bug. ***

Comment 7 Nishanth Thomas 2016-06-23 12:23:23 UTC
*** Bug 1335525 has been marked as a duplicate of this bug. ***

Comment 8 Lubos Trilety 2016-07-01 15:33:14 UTC
Tested on:
rhscon-core-0.0.29-1.el7scon.x86_64
rhscon-ceph-0.0.29-1.el7scon.x86_64
rhscon-core-selinux-0.0.29-1.el7scon.noarch
rhscon-ui-0.0.43-1.el7scon.noarch

Seems to be working.
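
For reference, a quick CLI cross-check equivalent to the checks in the original description (the cluster conf path and pool name are carried over from that report and may differ in the retest environment):

$ ceph -c /etc/ceph/testcluster.conf osd pool ls detail   # the EC pool should be listed as 'erasure'
$ ceph -c /etc/ceph/testcluster.conf osd pool stats       # and should report no issues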

