Bug 1375899
| Field | Value |
| --- | --- |
| Summary | When new disk devices are added into storage nodes, RHSC creates new OSD(s) and adds them into the cluster automatically without any admin intervention |
| Product | [Red Hat Storage] Red Hat Storage Console |
| Component | core |
| Sub component | provisioning |
| Status | CLOSED WONTFIX |
| Severity | high |
| Priority | unspecified |
| Version | 2 |
| Target Release | 2 |
| Hardware | Unspecified |
| OS | Unspecified |
| Reporter | Martin Bukatovic <mbukatov> |
| Assignee | Nishanth Thomas <nthomas> |
| QA Contact | sds-qe-bugs |
| CC | linuxkidd, nthomas, sankarshan, tpetr |
| Type | Bug |
| Last Closed | 2018-11-19 05:42:07 UTC |
Description (Martin Bukatovic, 2016-09-14 08:18:02 UTC)
Created attachment 1200755 [details]
screenshot 1: OSDs tab (after adding a few new OSDs)
Hello, I am experiencing the same issue: auto-expansion is picking up new disks and causes trouble, because the changes then have to be reverted manually.

Installed versions:

```
rhscon-ceph-0.0.43-1.el7scon.x86_64
rhscon-core-0.0.45-1.el7scon.x86_64
rhscon-core-selinux-0.0.45-1.el7scon.noarch
rhscon-ui-0.0.60-1.el7scon.noarch
ansible-2.2.1.0-1.el7.noarch
ceph-ansible-2.1.9-1.el7scon.noarch
kernel-3.10.0-514.2.2.el7.x86_64
rhscon-agent-0.0.19-1.el7scon.noarch
rhscon-core-selinux-0.0.45-1.el7scon.noarch
```

Issue 1: 1x SSD disk and 5x HDD were attached to an already running Ceph Storage node. On the SSD, 10 Ceph journal partitions were created even though only 5 of them are used. Partial output of `ceph-disk list` for the newly attached disks:

```
# ceph-disk list
....
/dev/sdm :
 /dev/sdm1 ceph data, active, cluster ceph, osd.108, journal /dev/sdr6
/dev/sdn :
 /dev/sdn1 ceph data, active, cluster ceph, osd.109, journal /dev/sdr7
/dev/sdo :
 /dev/sdo1 ceph data, active, cluster ceph, osd.110, journal /dev/sdr8
/dev/sdp :
 /dev/sdp1 ceph data, active, cluster ceph, osd.111, journal /dev/sdr9
/dev/sdq :
 /dev/sdq1 ceph data, active, cluster ceph, osd.107, journal /dev/sdr5
/dev/sdr :
 /dev/sdr1 ceph journal
 /dev/sdr2 ceph journal
 /dev/sdr3 ceph journal
 /dev/sdr4 ceph journal
 /dev/sdr6 ceph journal, for /dev/sdm1
 /dev/sdr7 ceph journal, for /dev/sdn1
 /dev/sdr8 ceph journal, for /dev/sdo1
 /dev/sdr9 ceph journal, for /dev/sdp1
 /dev/sdr5 ceph journal, for /dev/sdq1
```

Issue 2: Only 2 new 300 GB SSD disks, intended to be used as journal holders, were attached to a second already running Ceph Storage node. The Storage Console created a journal partition on the first SSD and a Ceph OSD data partition on the second SSD. This is not the wanted behavior.

Question 1: In BZ#1342969 (https://bugzilla.redhat.com/show_bug.cgi?id=1342969), https://review.gerrithub.io/#/c/294928/1/provider/import_cluster.go disabled auto-expansion when OSDs with co-located journals are detected. Would it be possible to disable auto-expansion by default in all cases? For example, an option to enable auto-expansion manually during cluster import/creation, with it otherwise disabled?

Question 2: Related to https://review.gerrithub.io/#/c/294928/1/provider/import_cluster.go: in MongoDB, would manually changing the autoexpand flag to false be enough and work, or are there other dependencies?

```
db.storage_clusters.update({"autoexpand":true},{$set:{"autoexpand":false}})
```

Question 3: If I want to prevent the Storage Console from taking any action when new disks are attached, because we want to use a different tool for adding new OSDs, is disabling and masking all the Skyring-related services enough?

Thanks, Tomas
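Regarding Question 3, a minimal sketch of what disabling and masking the Console services could look like with systemd; the unit names used here (skyring, salt-master) are assumptions and should be verified on the Storage Console node before running anything:

```
# Hypothetical sketch only -- the exact set of Console-related units
# (skyring, salt-master, ...) is an assumption; list them first:
systemctl list-units --type=service | grep -iE 'skyring|salt'

# Stop the services, prevent them from starting at boot, and mask them
# so they cannot be started even manually:
systemctl stop skyring salt-master
systemctl disable skyring salt-master
systemctl mask skyring salt-master
```

Whether this is sufficient to keep the Console from acting on newly attached disks is exactly what Question 3 asks; the sketch only shows the systemd side.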
(In reply to Tomas Petr from comment #3)
> Question 2: Related to https://review.gerrithub.io/#/c/294928/1/provider/import_cluster.go: in MongoDB, would manually changing the autoexpand flag to false be enough and work, or are there other dependencies?
> db.storage_clusters.update({"autoexpand":true},{$set:{"autoexpand":false}})

I got an answer from Darshan on Question 2 in a private email thread, so I am adding the steps to disable auto-expand, plus outputs from my test environment.

The flag can be changed with API calls, either via a REST client in the browser or using curl:

- The username and password are the same as for the Storage Console dashboard login.
- The clusterid can be obtained either from the "ceph -s" output of the specific Ceph cluster, or from "https://<FQDN-skyring-server>:10443/api/v1/clusters" in a browser, which lists all Ceph clusters in the Storage Console (details in the Diagnostic steps part).
- "disableautoexpand":true disables auto-expand.
- "disableautoexpand":false enables auto-expand; it will not work if a co-located journal is detected, even when "disableautoexpand":false is set.

```
# curl --cacert <path-to-skyring-certificate> -X POST --data '{"username":"<user>","password":"<password>"}' https://<FQDN-skyring-server>:10443/api/v1/auth/login -i
# curl --cacert <path-to-skyring-certificate> -X PATCH --data '{"disableautoexpand":true}' https://<FQDN-skyring-server>:10443/api/v1/clusters/<ceph-clusterid/uuid> -b session-key=<cookie-session-key-returned-in-previous-step> -i
```

Example of disabling auto-expand:

```
# curl --cacert /etc/pki/tls/skyring.crt -X POST --data '{"username":"admin","password":"admin"}' https://rhscon.subman:10443/api/v1/auth/login -i
HTTP/1.1 200 OK
Set-Cookie: session-key=MTUwMDk2NjAwNnxHd3dBR0RVNU56WmxZemMyTkRRMU0yRXlNRGhrWlRrd00ySXhOdz09fKRAopAblhzl3Iw1tPeNOveaii7XFiWYcM9FI1DkvYth; Path=/; Expires=Tue, 01 Aug 2017 07:00:06 GMT; Max-Age=604800
Date: Tue, 25 Jul 2017 07:00:06 GMT
Content-Length: 24
Content-Type: text/plain; charset=utf-8

# curl --cacert /etc/pki/tls/skyring.crt -X PATCH --data '{"disableautoexpand":true}' https://rhscon.subman:10443/api/v1/clusters/a93b8b8c-a7fe-4103-8434-6a490f641a66 -b session-key=MTUwMDk2NjAwNnxHd3dBR0RVNU56WmxZemMyTkRRMU0yRXlNRGhrWlRrd00ySXhOdz09fKRAopAblhzl3Iw1tPeNOveaii7XFiWYcM9FI1DkvYth -i
HTTP/1.1 200 OK
Date: Tue, 25 Jul 2017 07:08:52 GMT
Content-Length: 0
Content-Type: text/plain; charset=utf-8
```

Alternatively, disable auto-expand by changing the record in MongoDB directly. Log in to MongoDB on the rhscon node; the admin password is by default in the /etc/skyring/skyring.conf file:

```
# mongo 127.0.0.1:27017/skyring -u admin -p <passwd>
use skyring
show collections
db.storage_clusters.find()
db.storage_clusters.find().forEach(printjson)
db.storage_clusters.find({},{autoexpand:1})
# disable auto-expand
db.storage_clusters.update({"autoexpand":true},{$set:{"autoexpand":false}})
exit
```

This product is EOL now.

The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days.
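For reference, the login and PATCH calls documented above can be chained in a small shell sketch; the certificate path, server name, credentials, and cluster UUID below are the placeholder values from the example and need to be adjusted for a real environment:

```
#!/bin/sh
# Sketch only: chains the two API calls shown in the comments above.
CACERT=/etc/pki/tls/skyring.crt              # path to the skyring CA certificate
SERVER=rhscon.subman:10443                   # FQDN:port of the skyring server
CLUSTER=a93b8b8c-a7fe-4103-8434-6a490f641a66 # Ceph cluster UUID

# 1. Log in and extract the session cookie from the Set-Cookie header.
SESSION=$(curl -s -i --cacert "$CACERT" -X POST \
    --data '{"username":"admin","password":"admin"}' \
    "https://$SERVER/api/v1/auth/login" \
  | sed -n 's/^Set-Cookie: session-key=\([^;]*\).*/\1/p')

# 2. Disable auto-expansion for the given cluster.
curl --cacert "$CACERT" -X PATCH \
    --data '{"disableautoexpand":true}' \
    "https://$SERVER/api/v1/clusters/$CLUSTER" \
    -b "session-key=$SESSION" -i
```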