Description of problem:
Clusterrole storage-admin should have access rights to all APIs in the snapshot.storage.k8s.io group.

Version-Release number of selected component (if applicable):
$ oc get clusterversion --context=admin
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.5.0-0.nightly-2020-05-30-025738   True        False         5h41m   Cluster version is 4.5.0-0.nightly-2020-05-30-025738

How reproducible:
Always

Steps to Reproduce:
1. Log in with a regular user (testuser-1)
2. Give this user the clusterrole storage-admin:
oc adm policy add-cluster-role-to-user storage-admin testuser-1 --context=admin
oc --context=admin config rename-context /xxxx:6443/testuser-1 storage-admin
3. Run some commands as testuser-1

Actual results:
$ oc get pv --context=storage-admin
NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM          STORAGECLASS   REASON   AGE
local-pv-193d2447   1Gi        RWO            Delete           Available                  local-sc                122m
local-pv-5535fd7e   1Gi        RWO            Delete           Available                  local-sc                122m
local-pv-d020cf98   1500Gi     RWO            Delete           Bound       default/pvc1   local-sc                81m

$ oc get VolumeSnapshotContent --context=storage-admin
Error from server (Forbidden): volumesnapshotcontents.snapshot.storage.k8s.io is forbidden: User "testuser-1" cannot list resource "volumesnapshotcontents" in API group "snapshot.storage.k8s.io" at the cluster scope

$ oc get VolumeSnapshot --context=storage-admin
Error from server (Forbidden): volumesnapshots.snapshot.storage.k8s.io is forbidden: User "testuser-1" cannot list resource "volumesnapshots" in API group "snapshot.storage.k8s.io" in the namespace "default"

$ oc get VolumeSnapshotClass --context=storage-admin
Error from server (Forbidden): volumesnapshotclasses.snapshot.storage.k8s.io is forbidden: User "testuser-1" cannot list resource "volumesnapshotclasses" in API group "snapshot.storage.k8s.io" at the cluster scope

Expected results:
testuser-1 should have access rights to all APIs in the snapshot.storage.k8s.io group.

Master Log:

Node Log (of failed PODs):

PV Dump:

PVC Dump:

StorageClass Dump (if StorageClass used by PV/PVC):

Additional info:
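The Forbidden errors above name exactly three resources in the snapshot.storage.k8s.io group. A fix would need to extend the storage-admin clusterrole with a rule covering them; the following is only a sketch of such a rule (the verb list is an assumption chosen to mirror the access storage-admin already has to other storage resources, not the actual patch):

```yaml
# Hypothetical rule to append to the storage-admin ClusterRole's rules list.
# Resource names come from the error messages above; verbs are an assumption.
- apiGroups: ["snapshot.storage.k8s.io"]
  resources: ["volumesnapshots", "volumesnapshotcontents", "volumesnapshotclasses"]
  verbs: ["get", "list", "watch", "create", "delete", "patch", "update"]
```

Whether the fix lands as a direct rule or via clusterrole aggregation labels is an implementation detail of the patch; either way the effective permissions should cover these three resources.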
It looks like the kubelet actually stores its iscsi.json file on the mounted volume! If the volume is read-only, it cannot store the data there at all. This is wrong; the kubelet should not touch data on the volume at all.
Please disregard comment #1, wrong bug.
Verified pass

[wduan@MINT azuredisk]$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.6.0-0.nightly-2020-07-16-211200   True        False         7h6m    Cluster version is 4.6.0-0.nightly-2020-07-16-211200

After giving this user the clusterrole storage-admin:

Could create/list VolumeSnapshotClass:
[wduan@MINT azuredisk]$ oc create -f VolumeSnapshotClass.yaml
volumesnapshotclass.snapshot.storage.k8s.io/csi-snapclass-1 created
[wduan@MINT azuredisk]$ oc get volumesnapshotclass
NAME              DRIVER               DELETIONPOLICY   AGE
csi-snapclass     disk.csi.azure.com   Delete           82m
csi-snapclass-1   disk.csi.azure.com   Delete           14s

Could list volumesnapshot:
[wduan@MINT azuredisk]$ oc get volumesnapshot
NAME           READYTOUSE   SOURCEPVC   SOURCESNAPSHOTCONTENT   RESTORESIZE   SNAPSHOTCLASS   SNAPSHOTCONTENT                                    CREATIONTIME   AGE
mysnapshot01   true         pvc-ori                             2Gi           csi-snapclass   snapcontent-a9c6fb76-17f8-44e2-85e5-684d952a1962   72m            74m

Could list volumesnapshotcontent:
[wduan@MINT azuredisk]$ oc get volumesnapshotcontent snapcontent-a9c6fb76-17f8-44e2-85e5-684d952a1962
NAME                                               READYTOUSE   RESTORESIZE   DELETIONPOLICY   DRIVER               VOLUMESNAPSHOTCLASS   VOLUMESNAPSHOT   AGE
snapcontent-a9c6fb76-17f8-44e2-85e5-684d952a1962   true         2147483648    Delete           disk.csi.azure.com   csi-snapclass         mysnapshot01     72m
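The VolumeSnapshotClass.yaml used in the verification above is not attached to the report; the following is a minimal sketch consistent with the `oc get volumesnapshotclass` output (name, driver, and deletionPolicy taken from the listing; the v1beta1 apiVersion is an assumption based on the snapshot API version available in OCP 4.6):

```yaml
# Hypothetical reconstruction of VolumeSnapshotClass.yaml from the listed output.
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotClass
metadata:
  name: csi-snapclass-1
driver: disk.csi.azure.com
deletionPolicy: Delete
```

Creating this object succeeding for testuser-1 confirms the storage-admin role now grants write access, not just list, on snapshot.storage.k8s.io resources.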
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (OpenShift Container Platform 4.6 GA Images), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:4196