Bug 1974441 - Document upgrade procedure from 4.7 to 4.8 for external mode with external object store
Summary: Document upgrade procedure from 4.7 to 4.8 for external mode with external object store
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat OpenShift Container Storage
Classification: Red Hat Storage
Component: documentation
Version: 4.8
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: urgent
Target Milestone: ---
Target Release: OCS 4.8.0
Assignee: Erin Donnelly
QA Contact: Sidhant Agrawal
URL:
Whiteboard:
Depends On:
Blocks: 1921784
 
Reported: 2021-06-21 17:08 UTC by Sébastien Han
Modified: 2023-09-15 01:10 UTC
CC List: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-01-24 15:26:21 UTC
Embargoed:



Description Sébastien Han 2021-06-21 17:08:26 UTC
This is ONLY needed when the OCS cluster consumes an external object store too.

Similarly to https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/4.6/html-single/updating_openshift_container_storage/index#enabling-monitoring-for-the-object-service-dashboard_rhocs, we must document the upgrade from 4.7 to 4.8 for external mode when an object store is used.
The script "ceph-external-cluster-details-exporter.py" must run again, with the same identical arguments used the first time.

The output will contain the newly created RGW admin ops user that Rook will use to call the RGW admin ops API.

The JSON output will then be used to update the rook-ceph-external-cluster-details Secret. For this we will use the UI.

Example output of the new JSON:

[{"name": "rook-ceph-mon-endpoints", "kind": "ConfigMap", "data": {"data": "a=10.100.223.211:6789", "maxMonId": "0", "mapping": "{}"}}, {"name": "rook-ceph-mon", "kind": "Secret", "data": {"admin-secret": "admin-secret", "fsid": "5cd5684c-8b8b-4d59-a98a-525f63ae69a3", "mon-secret": "mon-secret"}}, {"name": "rook-ceph-operator-creds", "kind": "Secret", "data": {"userID": "client.healthchecker", "userKey": "AQCzxdBg5bB5LhAANBqGVmWn+4dsPESBCWkcLw=="}}, {"name": "rook-csi-rbd-node", "kind": "Secret", "data": {"userID": "csi-rbd-node", "userKey": "AQAe0bhgVvahJxAAGJLuP/Tajh2fz6ExqxYT/A=="}}, {"name": "ceph-rbd", "kind": "StorageClass", "data": {"pool": "device_health_metrics"}}, {"name": "monitoring-endpoint", "kind": "CephCluster", "data": {"MonitoringEndpoint": "172.17.0.11", "MonitoringPort": "9283"}}, {"name": "rook-csi-rbd-provisioner", "kind": "Secret", "data": {"userID": "csi-rbd-provisioner", "userKey": "AQAe0bhgkpHWGhAAJNPrBBD5OfbQvGRAYUrbCg=="}}, {"name": "ceph-rgw", "kind": "StorageClass", "data": {"endpoint": "10.108.182.47:80", "poolPrefix": "my-store"}}, {"name": "rgw-admin-ops-user", "kind": "Secret", "data": {"accessKey": "ESV683AZEPJ13BC5IO9R", "secretKey": "YzL0veXMUBvXGPCKmpA83ud7Vm57zB8fxK17Vewz"}}]

Comment 9 Sébastien Han 2021-07-07 08:53:59 UTC
Just FYI I'm meeting with Erin today to clarify the doc.

Comment 16 Sébastien Han 2021-07-19 15:07:17 UTC
Answered in the doc.

Sidhant, this is not expected; at the least, "enabled" should be true and "rulesNamespace" should be set.
Arun will have more insight since this is ocs-op code.

However, it's strange that the endpoint is being updated if the spec is empty; this should not happen.
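
A quick way to check what the operator actually set in the external CephCluster CR is sketched below; the namespace and the exact field names under spec.monitoring are assumptions on my part:

# Inspect the monitoring section of the external CephCluster CR
oc -n openshift-storage get cephcluster -o jsonpath='{.items[0].spec.monitoring}{"\n"}'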

Comment 17 Sidhant Agrawal 2021-07-22 04:51:40 UTC
Thanks Sebastien, I have raised another bug 1984735 for the unexpected behaviour observed in Comment #15

Comment 18 Sidhant Agrawal 2021-07-22 05:11:33 UTC
The outcome of bug 1984735 should help us decide whether we need to document updating the secret (for the monitoring endpoint) after upgrade.

The main aim of this bug was to document steps to create a new object store user, and a corresponding new section titled "3.1. Creating a new object store user to interact with the Ceph Object Store Administrative API" has been added to the docs.
Hence, I'm moving this bug to Verified.
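
For context, the kind of command such a section typically documents would look roughly like the sketch below; the uid, display name, and capability string shown here are assumptions and are not taken from this bug:

# Sketch: create an object store user with admin ops capabilities on the external RGW
radosgw-admin user create --uid rgw-admin-ops-user \
  --display-name "Rook RGW Admin Ops user" \
  --caps "buckets=*;users=*;usage=read;metadata=read;zone=read"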

Comment 19 Red Hat Bugzilla 2023-09-15 01:10:15 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 500 days

