Bug 1397720 - Can't import the existing Ceph cluster via Storage Console web UI
Summary: Can't import the existing Ceph cluster via Storage Console web UI
Keywords:
Status: CLOSED EOL
Alias: None
Product: Red Hat Storage Console
Classification: Red Hat Storage
Component: Ceph Integration
Version: 2
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Assignee: Nishanth Thomas
QA Contact: sds-qe-bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-11-23 09:07 UTC by liuwei
Modified: 2023-09-15 00:00 UTC

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed:
Embargoed:


Attachments (Terms of Use)
the error picture (149.83 KB, image/png)
2016-11-23 09:07 UTC, liuwei

Description liuwei 2016-11-23 09:07:50 UTC
Created attachment 1223075 [details]
the error picture

Description of problem:
According to the "Red Hat Storage Console 2.0 Quick Start Guide", the customer can't import the existing Ceph cluster. The error is below:

Failed to retrive cluster information from the selected host 'rhcs hostname'. Please select a monitor host and try again.

In the browser's debug console, the error is below:

Failed to load resource: the server responded with a status of 500 (Internal Server Error) 

"Error getting import cluster details"

The guide link is below:

https://access.redhat.com/documentation/en/red-hat-storage-console/2.0/single/quick-start-guide

Version-Release number of selected component (if applicable):

OS version: RHEL 7.2/7.3

ceph rpm version:

1. Ceph Storage Console side:

rpm -qa |grep ceph-
ceph-ansible-1.0.5-45.el7scon.noarch
ceph-installer-1.0.15-2.el7scon.noarch
rhscon-ceph-0.0.43-1.el7scon.x86_64

2. Ceph monitor side:

rpm -qa |grep ceph-
ceph-common-10.2.2-41.el7cp.x86_64
ceph-test-10.2.2-41.el7cp.x86_64
ceph-ansible-1.0.5-34.el7scon.noarch
ceph-osd-10.2.2-41.el7cp.x86_64
ceph-selinux-10.2.2-41.el7cp.x86_64
ceph-mon-10.2.2-41.el7cp.x86_64
ceph-base-10.2.2-41.el7cp.x86_64
rpm -qa |grep calamari-
calamari-server-1.4.9-1.el7cp.x86_64

How reproducible:

Following the guide, it is 100% reproducible.

Steps to Reproduce:
1.
2.
3.

Actual results:

Importing the existing Ceph cluster fails.

Expected results:

The cluster is imported successfully.

Additional info:

Comment 7 Tupper Cole 2017-01-25 09:19:18 UTC
I'm sitting on site with a customer, hitting the same issue. The inability to perform the *primary task of RHSC* is a pretty big blocker, no? I would expect some kind of discussion or workaround in the BZ notes. As is, I'm staring at a customer who would like to use RHSC and Ceph, but can't. I hope this doesn't push him to the competing product he is evaluating, which has a working GUI management system.

Comment 10 Cyril Lopez 2017-03-27 09:07:01 UTC
+1

Please share the password ceph-console is expecting for the calamari-lite user:

Mar 27 05:05:59 ceph1-mon calamari-lite: 172.20.1.16 - - [2017-03-27 04:05:59] "GET /api/v2/cluster?format=json HTTP/1.1" 403 242 0.206186
Mar 27 05:05:59 ceph1-mon calamari-lite: 172.20.1.16 - - [2017-03-27 04:05:59] "GET /api/v2/auth/login/ HTTP/1.1" 200 552 0.029658
Mar 27 05:05:59 ceph1-mon calamari-lite: 172.20.1.16 - - [2017-03-27 04:05:59] "POST /api/v2/auth/login/ HTTP/1.1" 401 462 0.118966
Mar 27 05:05:59 ceph1-mon calamari-lite: 172.20.1.16 - - [2017-03-27 04:05:59] "GET /api/v2/cluster?format=json HTTP/1.1" 403 242 0.003048
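The failing login POST above can be reproduced outside the console UI. A minimal sketch with curl, assuming calamari-lite listens on port 8002 and the default admin/admin credentials; MON_HOST and the port are placeholders, not values confirmed in this bug:

```shell
# Hypothetical check of the credentials calamari-lite expects, mirroring
# the POST that returns 401 in the log above.
MON_HOST="${MON_HOST:-ceph1-mon}"
# -s: quiet, -o /dev/null: discard the body, -w: print only the HTTP status.
status=$(curl -s -o /dev/null -w '%{http_code}' \
    -X POST "http://${MON_HOST}:8002/api/v2/auth/login/" \
    --data 'username=admin&password=admin')
echo "login status: ${status}"
# 200 means the credentials work; 401 matches the failure in the log
# (curl prints 000 if the host is unreachable).
```

A 401 here, with the console still failing, points at the missing/misconfigured calamari user discussed in the later comments.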

Comment 12 Fabian Dammekens 2017-03-31 08:41:45 UTC
It seems the default 'admin' user is not created. Create it using calamari-ctl add_user, and you should be able to continue the import process. For the record, the password should also be 'admin'.
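A sketch of that workaround, to be run on the monitor node hosting calamari-lite. The exact add_user flags are an assumption; check `calamari-ctl add_user --help` on your node before running:

```shell
# Recreate the default 'admin' user that the console expects.
# The --password/--email flags are assumed, not confirmed in this bug.
if command -v calamari-ctl >/dev/null 2>&1; then
    calamari-ctl add_user admin --password admin --email admin@example.com
    result="attempted"
else
    # Not on a monitor node, or calamari-server is not installed.
    echo "calamari-ctl not found; run this on the Calamari monitor node"
    result="skipped"
fi
```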

Comment 13 Stuart James 2017-05-25 15:06:18 UTC
To fix this, do the following on all the monitor nodes:

yum install calamari-server
calamari-ctl clear --yes-i-am-sure
calamari-ctl initialize --admin-username admin --admin-password admin --admin-email RTFM


From my testing, you must then select the monitor node that has calamari-server installed; otherwise the cluster import fails. This raises a question: if this monitor node (with calamari-server) fails, how does RHSCON handle that scenario? I am assuming it relies on the calamari-server for its associated cluster metrics. Assuming you did install calamari-server on all monitor nodes, does RHSCON know to utilize the others in the event that the one you associated with during import fails?

https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/2/html-single/installation_guide_for_red_hat_enterprise_linux/#calamari_server_installation

Comment 14 Stuart James 2017-05-25 20:34:57 UTC
I've tested this further, and there is a single point of failure here: the monitor node you select (you are restricted to selecting only one when you import) becomes a single point of failure within the RHSCON interface, as it is used exclusively for the Calamari server. This does not relate to the original ticket; I will raise a new one.


I have created bug 1455693.

Comment 17 Shubhendu Tripathi 2018-11-19 05:43:51 UTC
This product is EOL now.

Comment 18 Red Hat Bugzilla 2023-09-15 00:00:38 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 500 days

