Bug 2044983 - modify upgrade flag in external cluster
Summary: modify upgrade flag in external cluster
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: rook
Version: 4.10
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: ODF 4.10.0
Assignee: Parth Arora
QA Contact: Vijay Avuthu
URL:
Whiteboard:
Depends On:
Blocks: 2056571
 
Reported: 2022-01-25 12:34 UTC by Parth Arora
Modified: 2023-08-09 17:03 UTC
CC: 8 users

Fixed In Version: 4.10.0-171
Doc Type: Bug Fix
Doc Text:
.Adding an upgrade flag to grant new permissions
With this update, you can upgrade the `cephCSIKeyrings` (for example, client.csi-cephfs-provisioner) with new permission caps. To upgrade all the `cephCSIKeyrings`, run `python3 /etc/ceph/create-external-cluster-resources.py --upgrade`. The upgrade flag is required when you already have an ODF deployment with RHCS (an external Red Hat Ceph Storage system) and you are either upgrading it or adding a new ODF deployment (multi-tenant) to the RHCS cluster. The upgrade flag is not required when you are freshly creating an ODF deployment with an RHCS cluster.
Clone Of:
Environment:
Last Closed: 2022-04-21 09:12:44 UTC
Embargoed:




Links
System ID | Status | Summary | Last Updated
Github red-hat-storage rook pull 348 | open | Bug 2044983: csi: modify upgrade flag in external cluster | 2022-02-22 15:04:47 UTC
Github rook rook pull 9609 | open | csi: modify upgrade flag in external cluster | 2022-01-25 12:37:17 UTC

Internal Links: 2070510

Description Parth Arora 2022-01-25 12:34:20 UTC
Description of problem (please be as detailed as possible and provide log
snippets):

The upgrade function is currently not smart enough to merge the newly listed auth caps with the existing ones; it only compares the value of the current cap with MIN_USER_CAP_PERMISSIONS.
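A minimal sketch of the behavior the bug asks for, assuming the script represents caps as an entity-to-clause mapping (the function and variable names here are hypothetical, not the actual script's helpers): merge newly required cap clauses into the existing ones instead of only comparing against MIN_USER_CAP_PERMISSIONS.

```python
def merge_caps(existing: dict, required: dict) -> dict:
    """Append required cap clauses missing from the existing caps,
    rather than only checking equality against a fixed minimum."""
    merged = dict(existing)
    for entity, clause in required.items():  # e.g. {"mon": "allow command 'osd blocklist'"}
        current = merged.get(entity, "")
        if clause not in current:
            merged[entity] = f"{current}, {clause}" if current else clause
    return merged

# Example: a pre-4.10 mon cap gains the blocklist clause on upgrade.
old = {"mon": "allow r", "osd": "allow rw tag cephfs metadata=*"}
new = merge_caps(old, {"mon": "allow command 'osd blocklist'"})
# new["mon"] == "allow r, allow command 'osd blocklist'"
```

Re-running the merge with the same required clauses leaves the caps unchanged, so the upgrade stays idempotent.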

Version of all relevant components (if applicable): 4.10


Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)? No


Is there any workaround available to the best of your knowledge? Yes, recreate the clients.


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)? 3


Is this issue reproducible? Yes


Can this issue be reproduced from the UI? Yes


If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. 
2.
3.


Actual results:


Expected results: Existing users should be modified with updated caps.


Additional info:

Comment 2 Parth Arora 2022-01-25 12:35:58 UTC
Part of https://github.com/rook/rook/pull/9609

Comment 5 Travis Nielsen 2022-02-21 16:47:33 UTC
Parth please go ahead and open the backport PR for 4.10, thanks

Comment 6 Parth Arora 2022-02-22 15:00:43 UTC
Travis, created it: https://github.com/red-hat-storage/rook/pull/348, thanks :)

Comment 7 Parth Arora 2022-03-07 06:33:25 UTC
Adding Doc text:

Upgrade flag:
Upgrades the older caps of a CSI user (for example, client.csi-cephfs-provisioner) to newer ones with new permissions.

Sample run: `python3 /etc/ceph/create-external-cluster-resources.py --upgrade`; this upgrades all the default CSI users.

PS: The upgrade flag should only be used to append new permissions to users; it should not be used to change permissions a user has already been granted, for example, which pools the user has access to.

Upgrade scenarios where the upgrade flag is needed:

1) The customer already has an RHCS deployment with ODF.

i) The CSI users were already created (in 4.9 or earlier), so running the python script from 4.10 or later leaves the caps unchanged.
ii) To get the upgraded caps of the 4.10 script, the script needs to be run with the --upgrade flag.

2) The customer doesn't have an RHCS cluster and creates one for the first time.

In this case there are no existing CSI users; they are created for the first time with the upgraded caps, so there is no need to run the upgrade flag.
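The two scenarios above boil down to one decision, sketched here with hypothetical names (not the actual script's logic): the flag matters only when CSI users already exist and at least one of them lacks a cap clause the newer script would grant.

```python
def upgrade_flag_needed(existing_csi_users: dict, expected_caps: dict) -> bool:
    """Return True when --upgrade is needed: CSI users already exist
    (pre-4.10 deployment) and one of them is missing an expected cap clause.
    existing_csi_users maps user name -> {entity: cap_string}."""
    if not existing_csi_users:  # fresh deployment: users get the new caps anyway
        return False
    for caps in existing_csi_users.values():
        for entity, clause in expected_caps.items():
            if clause not in caps.get(entity, ""):
                return True
    return False
```

For example, a 4.9-era `client.csi-rbd-node` whose mon cap is only `profile rbd` would trigger the flag, while a freshly created cluster (no users yet) would not.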

Comment 9 Vijay Avuthu 2022-04-05 14:21:27 UTC
Verified the below scenarios:

1. upgrade from ocs-registry:4.9.5-4 to ocs-registry:4.10.0-210

 https://ocs4-jenkins-csb-odf-qe.apps.ocp-c1.prod.psi.redhat.com/job/qe-deploy-ocs-cluster/11349/consoleFull

After upgrade, below are caps

client.csi-cephfs-node
key: AQCYz0piYgu/IRAAipji4C8+Lfymu9vOrox3zQ==
caps: [mds] allow rw
caps: [mgr] allow rw
caps: [mon] allow r, allow command 'osd blocklist'
caps: [osd] allow rw tag cephfs *=*
client.csi-cephfs-provisioner
key: AQCYz0piDUMSIxAARuGUyhLXFO9u4zQeRG65pQ==
caps: [mgr] allow rw
caps: [mon] allow r, allow command 'osd blocklist'
caps: [osd] allow rw tag cephfs metadata=*
client.csi-rbd-node
key: AQCYz0pi88IKHhAAvzRN4fD90nkb082ldrTaHA==
caps: [mon] profile rbd, allow command 'osd blocklist'
caps: [osd] profile rbd
client.csi-rbd-provisioner
key: AQCYz0pi6W8IIBAAgRJfrAW7kZfucNdqJqS9dQ==
caps: [mgr] allow rw
caps: [mon] profile rbd, allow command 'osd blocklist'
caps: [osd] profile rbd

2. New ODF 4.10 

 https://ocs4-jenkins-csb-odf-qe.apps.ocp-c1.prod.psi.redhat.com/job/qe-deploy-ocs-cluster-prod/3747/console 

3. Deploy ODF 4.9, then deploy ODF 4.10, and check caps

client.csi-cephfs-node
key: AQCd5EtihCCRCRAAnnlXomaIiI8E7tsSrNShyw==
caps: [mds] allow rw
caps: [mgr] allow rw
caps: [mon] allow r
caps: [osd] allow rw tag cephfs *=*
client.csi-cephfs-provisioner
key: AQCd5EtiguedCxAANydeIB7z3Q6EBW9subYDHA==
caps: [mgr] allow rw
caps: [mon] allow r
caps: [osd] allow rw tag cephfs metadata=*
client.csi-rbd-node
key: AQCd5Etis9CpBRAA9FB/xDqRyGxnRC3SL7gLhg==
caps: [mon] profile rbd
caps: [osd] profile rbd
client.csi-rbd-provisioner
key: AQCd5EtihlWMBxAAI/2D8dbF1uF78s9PHOeQcQ==
caps: [mgr] allow rw
caps: [mon] profile rbd
caps: [osd] profile rbd

Then ran the exporter script with --upgrade and checked whether the caps were upgraded:

client.csi-cephfs-node
key: AQCd5EtihCCRCRAAnnlXomaIiI8E7tsSrNShyw==
caps: [mds] allow rw
caps: [mgr] allow rw
caps: [mon] allow r, allow command 'osd blocklist'
caps: [osd] allow rw tag cephfs *=*
client.csi-cephfs-provisioner
key: AQCd5EtiguedCxAANydeIB7z3Q6EBW9subYDHA==
caps: [mgr] allow rw
caps: [mon] allow r, allow command 'osd blocklist'
caps: [osd] allow rw tag cephfs metadata=*
client.csi-rbd-node
key: AQCd5Etis9CpBRAA9FB/xDqRyGxnRC3SL7gLhg==
caps: [mon] profile rbd, allow command 'osd blocklist'
caps: [osd] profile rbd
client.csi-rbd-provisioner
key: AQCd5EtihlWMBxAAI/2D8dbF1uF78s9PHOeQcQ==
caps: [mgr] allow rw
caps: [mon] profile rbd, allow command 'osd blocklist'
caps: [osd] profile rbd
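This kind of check can also be scripted. A rough sketch (hypothetical helper names, assuming `ceph auth`-style output as pasted above) that parses the listing and confirms every client's mon cap gained the blocklist clause:

```python
def parse_auth_output(text: str) -> dict:
    """Parse `ceph auth`-style output into {client: {entity: cap_string}}."""
    users, current = {}, None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("client."):
            current = line
            users[current] = {}
        elif line.startswith("caps:") and current:
            # e.g. caps: [mon] allow r, allow command 'osd blocklist'
            body = line[len("caps:"):].strip()
            entity = body[body.index("[") + 1 : body.index("]")]
            users[current][entity] = body[body.index("]") + 1 :].strip()
    return users

def blocklist_upgraded(users: dict) -> bool:
    """True when every parsed client's mon cap contains the blocklist clause."""
    return all("allow command 'osd blocklist'" in caps.get("mon", "")
               for caps in users.values())
```

Run against the pre-upgrade listing from scenario 3 it would return False; against the post-upgrade listing above, True.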

Moving to verified

