Description of problem (please be as detailed as possible and provide log snippets):
The upgrade function is not yet smart enough to merge the newly listed auth caps with a user's existing caps; it only compares the current cap value against MIN_USER_CAP_PERMISSIONS.

Version of all relevant components (if applicable): 4.10

Does this issue impact your ability to continue to work with the product (please explain in detail what is the user impact)? No

Is there any workaround available to the best of your knowledge? Yes, recreate the clients.

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)? 3

Can this issue be reproduced? Yes

Can this issue be reproduced from the UI? Yes

If this is a regression, please provide more details to justify this:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results: Existing users should be modified with the updated caps.

Additional info:
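The expected behavior is a merge: keep every clause the user already has and append only the clauses that are missing, instead of comparing against a fixed minimum. A minimal sketch of such merge logic, assuming caps are modeled as plain dicts of daemon to comma-separated clause strings (as in `ceph auth` output); `merge_caps` and the sample values are hypothetical and not taken from the actual script:

```python
# Hypothetical sketch: append missing cap clauses per daemon without
# dropping any clause the user already has. Names are illustrative only.

def merge_caps(existing, required):
    """Return existing caps with any missing required clauses appended.

    Caps are modeled as {"mon": "allow r", ...}; each value is a
    comma-separated list of clauses, as printed by `ceph auth`.
    """
    merged = dict(existing)
    for daemon, clause_str in required.items():
        have = [c.strip() for c in merged.get(daemon, "").split(",") if c.strip()]
        for clause in (c.strip() for c in clause_str.split(",")):
            if clause and clause not in have:
                have.append(clause)
        merged[daemon] = ", ".join(have)
    return merged

existing = {"mon": "allow r", "osd": "allow rw tag cephfs metadata=*"}
required = {"mon": "allow r, allow command 'osd blocklist'"}
print(merge_caps(existing, required))
# mon gains the blocklist clause; the osd caps are left untouched
```

This keeps the operation idempotent: running it again on already-merged caps changes nothing.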
Part of https://github.com/rook/rook/pull/9609
Parth please go ahead and open the backport PR for 4.10, thanks
Travis, created it: https://github.com/red-hat-storage/rook/pull/348, thanks :)
Adding Doc text:

Upgrade flag: upgrades the older caps of a CSI user (for example, client.csi-cephfs-provisioner) to the newer ones with new permissions.

Sample run: `python3 /etc/ceph/create-external-cluster-resources.py --upgrade`; this upgrades all the default CSI users.

PS: The upgrade flag should only be used to append new permissions to users; it should not be used to change permissions a user already has. For example, it should not change which pools a user has access to.

Scenarios where the upgrade flag would be needed:
1) The customer already has an RHCS deployment with ODF.
   i) The CSI users were already created (in 4.9 or earlier), so running the python script from 4.10 or later leaves the caps unchanged.
   ii) To get the upgraded caps of the 4.10 script, the script must be run with the --upgrade flag.
2) The customer doesn't have an RHCS cluster and is creating one for the first time.
   There are no CSI users yet; they are created for the first time with the upgraded cap permissions, so there is no need to run the upgrade flag in this case.
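The documented workflow can be sketched as the following commands, assuming a working `ceph` CLI against the external cluster and the default CSI user names mentioned above:

```shell
# Inspect the current caps of a CSI user before upgrading
ceph auth get client.csi-cephfs-provisioner

# Run the exporter script with the upgrade flag to append the new caps
python3 /etc/ceph/create-external-cluster-resources.py --upgrade

# Confirm the new clause (e.g. allow command 'osd blocklist') was appended
# and that the previously granted caps are still present
ceph auth get client.csi-cephfs-provisioner
```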
Verified the below scenarios:

1. Upgrade from ocs-registry:4.9.5-4 to ocs-registry:4.10.0-210
https://ocs4-jenkins-csb-odf-qe.apps.ocp-c1.prod.psi.redhat.com/job/qe-deploy-ocs-cluster/11349/consoleFull
After upgrade, below are the caps:

client.csi-cephfs-node
    key: AQCYz0piYgu/IRAAipji4C8+Lfymu9vOrox3zQ==
    caps: [mds] allow rw
    caps: [mgr] allow rw
    caps: [mon] allow r, allow command 'osd blocklist'
    caps: [osd] allow rw tag cephfs *=*
client.csi-cephfs-provisioner
    key: AQCYz0piDUMSIxAARuGUyhLXFO9u4zQeRG65pQ==
    caps: [mgr] allow rw
    caps: [mon] allow r, allow command 'osd blocklist'
    caps: [osd] allow rw tag cephfs metadata=*
client.csi-rbd-node
    key: AQCYz0pi88IKHhAAvzRN4fD90nkb082ldrTaHA==
    caps: [mon] profile rbd, allow command 'osd blocklist'
    caps: [osd] profile rbd
client.csi-rbd-provisioner
    key: AQCYz0pi6W8IIBAAgRJfrAW7kZfucNdqJqS9dQ==
    caps: [mgr] allow rw
    caps: [mon] profile rbd, allow command 'osd blocklist'
    caps: [osd] profile rbd

2. New ODF 4.10
https://ocs4-jenkins-csb-odf-qe.apps.ocp-c1.prod.psi.redhat.com/job/qe-deploy-ocs-cluster-prod/3747/console

3.
Deploy ODF 4.9, then deploy ODF 4.10, and check the caps:

client.csi-cephfs-node
    key: AQCd5EtihCCRCRAAnnlXomaIiI8E7tsSrNShyw==
    caps: [mds] allow rw
    caps: [mgr] allow rw
    caps: [mon] allow r
    caps: [osd] allow rw tag cephfs *=*
client.csi-cephfs-provisioner
    key: AQCd5EtiguedCxAANydeIB7z3Q6EBW9subYDHA==
    caps: [mgr] allow rw
    caps: [mon] allow r
    caps: [osd] allow rw tag cephfs metadata=*
client.csi-rbd-node
    key: AQCd5Etis9CpBRAA9FB/xDqRyGxnRC3SL7gLhg==
    caps: [mon] profile rbd
    caps: [osd] profile rbd
client.csi-rbd-provisioner
    key: AQCd5EtihlWMBxAAI/2D8dbF1uF78s9PHOeQcQ==
    caps: [mgr] allow rw
    caps: [mon] profile rbd
    caps: [osd] profile rbd

Then ran the exporter script with --upgrade and checked whether the caps were upgraded:

client.csi-cephfs-node
    key: AQCd5EtihCCRCRAAnnlXomaIiI8E7tsSrNShyw==
    caps: [mds] allow rw
    caps: [mgr] allow rw
    caps: [mon] allow r, allow command 'osd blocklist'
    caps: [osd] allow rw tag cephfs *=*
client.csi-cephfs-provisioner
    key: AQCd5EtiguedCxAANydeIB7z3Q6EBW9subYDHA==
    caps: [mgr] allow rw
    caps: [mon] allow r, allow command 'osd blocklist'
    caps: [osd] allow rw tag cephfs metadata=*
client.csi-rbd-node
    key: AQCd5Etis9CpBRAA9FB/xDqRyGxnRC3SL7gLhg==
    caps: [mon] profile rbd, allow command 'osd blocklist'
    caps: [osd] profile rbd
client.csi-rbd-provisioner
    key: AQCd5EtihlWMBxAAI/2D8dbF1uF78s9PHOeQcQ==
    caps: [mgr] allow rw
    caps: [mon] profile rbd, allow command 'osd blocklist'
    caps: [osd] profile rbd

Moving to verified.
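The before/after comparison above can also be automated. A hypothetical helper (not part of the exporter script) that parses `ceph auth`-style output and checks that the upgrade only appended clauses, never removed any:

```python
# Hypothetical verification helper: parse lines like
#   "caps: [mon] allow r, allow command 'osd blocklist'"
# and confirm every pre-upgrade clause survived the upgrade.

import re

def parse_caps(dump):
    """Map daemon name to the set of cap clauses found in a ceph auth dump."""
    caps = {}
    for line in dump.splitlines():
        m = re.match(r"\s*caps:\s*\[(\w+)\]\s*(.+)", line)
        if m:
            caps[m.group(1)] = {c.strip() for c in m.group(2).split(",")}
    return caps

def only_appended(before, after):
    """True if every pre-upgrade clause is still present post-upgrade."""
    b, a = parse_caps(before), parse_caps(after)
    return all(clauses <= a.get(daemon, set()) for daemon, clauses in b.items())

before = "caps: [mon] profile rbd\ncaps: [osd] profile rbd"
after = "caps: [mon] profile rbd, allow command 'osd blocklist'\ncaps: [osd] profile rbd"
print(only_appended(before, after))  # True
```

Running it with the before/after arguments swapped returns False, since the post-upgrade caps contain a clause the pre-upgrade caps lack.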