| Summary: | Can't import the existing ceph cluster via storage console web UI | | |
|---|---|---|---|
| Product: | Red Hat Storage Console | Reporter: | liuwei <wliu> |
| Component: | Ceph Integration | Assignee: | Nishanth Thomas <nthomas> |
| Status: | CLOSED EOL | QA Contact: | sds-qe-bugs |
| Severity: | high | Docs Contact: | |
| Priority: | high | | |
| Version: | 2 | CC: | ceph-eng-bugs, cylopez, fabian, gmeno, mhackett, mkarnik, nthomas, sankarshan, stuartjames, tcole |
| Target Milestone: | --- | Flags: | gmeno: needinfo? (nthomas) |
| Target Release: | 3 | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Attachments: | | | |
Description
liuwei
2016-11-23 09:07:50 UTC

I'm sitting on site with a customer, hitting the same issue. The inability to perform the *primary task of RHSC* is a pretty big blocker, no? I would expect some kind of discussion or workaround in the BZ notes. As it is, I'm staring at a customer who would like to use RHSC and Ceph but can't. I hope this doesn't push him to the competing product he is evaluating, which has a working GUI management system.

+1. Please share the password ceph-console expects for the calamari-lite user:

```
Mar 27 05:05:59 ceph1-mon calamari-lite: 172.20.1.16 - - [2017-03-27 04:05:59] "GET /api/v2/cluster?format=json HTTP/1.1" 403 242 0.206186
Mar 27 05:05:59 ceph1-mon calamari-lite: 172.20.1.16 - - [2017-03-27 04:05:59] "GET /api/v2/auth/login/ HTTP/1.1" 200 552 0.029658
Mar 27 05:05:59 ceph1-mon calamari-lite: 172.20.1.16 - - [2017-03-27 04:05:59] "POST /api/v2/auth/login/ HTTP/1.1" 401 462 0.118966
Mar 27 05:05:59 ceph1-mon calamari-lite: 172.20.1.16 - - [2017-03-27 04:05:59] "GET /api/v2/cluster?format=json HTTP/1.1" 403 242 0.003048
```

It seems the default 'admin' user is not created. Create it using `calamari-ctl add_user` and you should be able to continue the import process. For the record, the password should also be 'admin'.

To fix this, do the following on all the monitor nodes:

```
yum install calamari-server
calamari-ctl clear --yes-i-am-sure
calamari-ctl initialize --admin-username admin --admin-password admin --admin-email RTFM
```

From my testing, you must then select the monitor node that has calamari-server installed, otherwise the cluster import fails. This raises a question: if the monitor node with calamari-server fails, how does RHSCON handle that scenario? I assume it relies on the calamari-server for its associated cluster metrics. Assuming you did install calamari-server on all the monitor nodes, does RHSCON know to use the others in the event that the one you associated with during import fails?
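The 403/401 pattern in the calamari-lite log above is what indicates a credentials problem rather than a network one: the login endpoint answers, but rejects the password. As a minimal, hypothetical sketch (not part of RHSC or Calamari), one could scan such access-log lines and pull out the authentication failures:

```python
import re

# Matches the method, path and HTTP status in a calamari-lite access-log entry,
# e.g. '... "GET /api/v2/cluster?format=json HTTP/1.1" 403 242 0.206186'
LOG_RE = re.compile(r'"(?P<method>\w+) (?P<path>\S+) HTTP/[\d.]+" (?P<status>\d{3})')

def auth_failures(lines):
    """Return (method, path, status) for entries that failed with 401 or 403."""
    failures = []
    for line in lines:
        m = LOG_RE.search(line)
        if m and m.group("status") in ("401", "403"):
            failures.append((m.group("method"), m.group("path"), m.group("status")))
    return failures

log = [
    'calamari-lite: 172.20.1.16 - - "GET /api/v2/cluster?format=json HTTP/1.1" 403 242 0.206186',
    'calamari-lite: 172.20.1.16 - - "GET /api/v2/auth/login/ HTTP/1.1" 200 552 0.029658',
    'calamari-lite: 172.20.1.16 - - "POST /api/v2/auth/login/ HTTP/1.1" 401 462 0.118966',
]

for method, path, status in auth_failures(log):
    print(method, path, status)
```

A 401 on `POST /api/v2/auth/login/` means the supplied credentials were rejected, which is exactly what re-running `calamari-ctl initialize` with known admin credentials resolves.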
See the Calamari server installation section of the installation guide: https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/2/html-single/installation_guide_for_red_hat_enterprise_linux/#calamari_server_installation

I've tested this further and there is a single point of failure here: the monitor node you select (you are restricted to selecting only one when you import) becomes a single point of failure within the RHSCON interface, since it is used exclusively for the Calamari server. This does not relate to the original ticket; I will raise a new one. I have created bug 1455693.

This product is EOL now.