Created attachment 1881282 [details]
standalone_mcg_kms

Description of problem (please be detailed as possible and provide log snippets):

When a standalone MCG cluster is deployed with KMS enabled using Vault, the cluster does not use Vault to store the encryption keys and instead stores them in the noobaa-root-master-key secret.

$ oc get noobaa
NAME     MGMT-ENDPOINTS                  S3-ENDPOINTS                     STS-ENDPOINTS                    IMAGE                                                                                                              PHASE   AGE
noobaa   ["https://10.0.165.59:31466"]   ["https://10.0.204.254:32544"]   ["https://10.0.204.254:30389"]   quay.io/rhceph-dev/odf4-mcg-core-rhel8@sha256:c994b32b55a98deaeaae0a46d3b474299d1b5a1600ac8e622b00af0b0bca5678   Ready   5h4m

$ oc get secret noobaa-root-master-key -o yaml
apiVersion: v1
data:
  cipher_key_b64: cnRWY25WTithRXdPcStzS0JzdWg0emFhOE1vMWZPbkJYaFRaaDFHcGZGQT0=
kind: Secret
metadata:
  creationTimestamp: "2022-05-19T05:18:36Z"
  name: noobaa-root-master-key
  namespace: openshift-storage
  resourceVersion: "47918"
  uid: c637428b-46e5-4106-9285-7e375f0f7fed
type: Opaque

$ oc get noobaa noobaa -o yaml
apiVersion: noobaa.io/v1alpha1
kind: NooBaa
[...]
  security:
    kms: {}

$ oc get cm ocs-kms-connection-details -o yaml
apiVersion: v1
data:
  KMS_PROVIDER: vault
  KMS_SERVICE_NAME: vault-token
  VAULT_ADDR: https://vault.qe.rh-ocs.com:8200
  VAULT_AUTH_METHOD: token
  VAULT_BACKEND_PATH: odf
  VAULT_CACERT: ocs-kms-ca-secret-lcz05f
  VAULT_CLIENT_CERT: ocs-kms-client-cert-jq8aso
  VAULT_CLIENT_KEY: ocs-kms-client-key-cq16jk
  VAULT_NAMESPACE: ""
  VAULT_TLS_SERVER_NAME: ""
kind: ConfigMap

Version of all relevant components (if applicable):
---------------------------------------------------
OCP: 4.11.0-0.nightly-2022-05-18-171831
ODF: odf-operator.v4.11.0 (full_version=4.11.0-75)

Does this issue impact your ability to continue to work with the product (please explain in detail what is the user impact)?
Yes, the user is not able to use KMS to store encryption keys on standalone MCG clusters.

Is there any workaround available to the best of your knowledge?
Not that I am aware of.

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?
2

Is this issue reproducible?
Yes

Can this issue be reproduced from the UI?
Yes

If this is a regression, please provide more details to justify this:

Steps to Reproduce:
-------------------
1. Deploy a standalone MCG cluster with KMS enabled.
2. Verify whether encryption is enabled using KMS for MCG.

Actual results:
---------------
Vault KMS is not used for storing the encryption keys.

Expected results:
-----------------
When encryption is enabled using an external KMS, the keys should be stored in the external KMS.
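For comparison, when the Vault wiring does take effect, the expected end state is a NooBaa CR whose spec.security.kms carries the connection details from the ocs-kms-connection-details ConfigMap rather than an empty kms: {}. Below is a minimal, illustrative sketch of roughly what that would look like; the connectionDetails/tokenSecretName field names are taken from the NooBaa CRD's KMS spec as I understand it, and the token secret name is a placeholder rather than a value observed on this cluster.

```
# Illustrative sketch only -- values mirror the ConfigMap shown above;
# the token secret name is a placeholder.
apiVersion: noobaa.io/v1alpha1
kind: NooBaa
metadata:
  name: noobaa
  namespace: openshift-storage
spec:
  security:
    kms:
      connectionDetails:
        KMS_PROVIDER: vault
        KMS_SERVICE_NAME: vault-token
        VAULT_ADDR: https://vault.qe.rh-ocs.com:8200
        VAULT_AUTH_METHOD: token
        VAULT_BACKEND_PATH: odf
        VAULT_CACERT: ocs-kms-ca-secret-lcz05f
      tokenSecretName: <vault-token-secret>   # placeholder name
```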
Arun, I think the support was added by you in ocs-operator. Can you please take a look?
Since this is a regression found by the QE team, providing QA ack.
On further checks, KMS is enabled only when one of two conditions holds:

a. sc.Spec.Encryption.Enable is true, OR
b. sc.Spec.Encryption.ClusterWide is true

Unfortunately, here BOTH flags are false: sc.Spec.Encryption.Enable is a deprecated flag (so we are not setting it), and we are not setting "ClusterWide" encryption either. As a result, the KMS-enabling code is never reached (see the sketch below).

PR: https://github.com/red-hat-storage/ocs-operator/pull/1719, submitted

________________________________________

PS: We are failing in the reconcile loop while updating the status; the log lines are below. This is not relevant for KMS, since it happens only at the end of reconciliation (at that point every resource should already be set).

```
2022-05-19T05:18:36.779835600Z {"level":"info","ts":1652937516.7797787,"logger":"controllers.StorageCluster","msg":"Could not update StorageCluster status.","Request.Namespace":"openshift-storage","Request.Name":"ocs-storagecluster","StorageCluster":{"name":"ocs-storagecluster","namespace":"openshift-storage"}}
2022-05-19T05:18:36.779879415Z {"level":"error","ts":1652937516.779834,"logger":"controller.storagecluster","msg":"Reconciler error","reconciler group":"ocs.openshift.io","reconciler kind":"StorageCluster","name":"ocs-storagecluster","namespace":"openshift-storage","error":"Operation cannot be fulfilled on storageclusters.ocs.openshift.io \"ocs-storagecluster\": the object has been modified; please apply your changes to the latest version and try again","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/remote-source/app/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:266\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/remote-source/app/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:227"}
```
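To make the gating described above concrete, here is a minimal Go sketch of the check as it is described in this comment. It is illustrative only: the type, field, and function names are stand-ins, not the actual ocs-operator source.

```
package main

import "fmt"

// Minimal stand-in for the relevant StorageCluster encryption fields
// (illustrative; not the real ocs-operator types).
type EncryptionSpec struct {
	Enable      bool // deprecated flag, no longer set
	ClusterWide bool // not set for a standalone (NooBaa-only) MCG deployment
}

// shouldEnableKMS mirrors the gating described above: the KMS/Vault wiring
// is reached only when one of the two flags is true.
func shouldEnableKMS(enc EncryptionSpec) bool {
	return enc.Enable || enc.ClusterWide
}

func main() {
	// Standalone MCG deployment: both flags are false, so the Vault
	// connection details are never propagated to the NooBaa CR.
	fmt.Println(shouldEnableKMS(EncryptionSpec{})) // prints "false"
}
```

The actual fix for the standalone/NooBaa-only case is in the PR linked above.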
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.11.0 security, enhancement, & bugfix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2022:6156