Bug 2189984 - [KMS][VAULT] Storage cluster remains in 'Progressing' state during deployment with storage class encryption, despite all pods being up and running.
Summary: [KMS][VAULT] Storage cluster remains in 'Progressing' state during deployment...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: ocs-operator
Version: 4.13
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ODF 4.13.0
Assignee: arun kumar mohan
QA Contact: Parag Kamble
URL:
Whiteboard:
Depends On:
Blocks: 2192596 2209254
 
Reported: 2023-04-26 16:54 UTC by Parag Kamble
Modified: 2023-08-09 17:00 UTC
CC List: 5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: NooBaa encryption was enabled when either cluster-wide encryption was turned on OR KMS was enabled.
Consequence: NooBaa encryption could be turned on whenever KMS was enabled, even with cluster-wide encryption turned off. This erroneous state left the storage cluster stuck in the 'Progressing' state indefinitely.
Fix: NooBaa encryption is turned on only when KMS is enabled AND either cluster-wide encryption is turned on or NooBaa is deployed in standalone mode.
Result: NooBaa encryption is now enabled only with cluster-wide encryption or a standalone NooBaa deployment (provided KMS is enabled), and the storage cluster moves to the 'Ready' state as expected.
Clone Of:
: 2192596 (view as bug list)
Environment:
Last Closed: 2023-06-21 15:25:28 UTC
Embargoed:




Links
Github red-hat-storage ocs-operator pull 2044 (open): Bug 2189984: [release-4.13] Fix encryption enablement in Noobaa (last updated 2023-05-02 12:25:10 UTC)
Red Hat Product Errata RHBA-2023:3742 (last updated 2023-06-21 15:25:44 UTC)

Description Parag Kamble 2023-04-26 16:54:13 UTC
Created attachment 1960169 [details]
must gather logs

Description of problem (please be as detailed as possible and provide log snippets):


Version of all relevant components (if applicable): 4.13


Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
I can continue to work without any issue.


Is there any workaround available to the best of your knowledge?


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?


Is this issue reproducible? YES


Can this issue be reproduced from the UI? YES


If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Install the ODF operator.
2. Configure the Kubernetes auth method as mentioned in the doc: https://access.redhat.com/documentation/en-us/red_hat_openshift_data_foundation/4.11/html/deploying_openshift_data_foundation_using_bare_metal_infrastructure/deploy-using-local-storage-devices-bm#enabling-cluster-wide-encryprtion-with-the-kubernetes-authentication-using-kms_local-bare-metal
3. Create the storage system.
4. Select "Enable data encryption for block and file".
5. Select "StorageClass encryption" (refer to the attached screenshot).
6. Click Next and complete the storage system creation.
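
For reference, after step 6 the cluster phase and the KMS connection details created by the wizard can be checked from the CLI. A minimal sketch; the ConfigMap name ocs-kms-connection-details is an assumption based on a typical Vault KMS setup, not taken from this report:

# Watch the StorageCluster phase after the wizard finishes
oc get storagecluster -n openshift-storage -w

# Inspect the KMS connection details created by the wizard
# (ConfigMap name is an assumption for a typical Vault/KMS setup)
oc get configmap ocs-kms-connection-details -n openshift-storage -o yaml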


Actual results:
Storage cluster does not move out of the 'Progressing' phase.

Expected results:
Storage cluster should be in 'Ready' state.

Additional info:

The storage cluster has been enabled with storage class encryption and the 'ocs-storagecluster-ceph-rbd-encrypted' storage class has been created. However, the storage cluster remains in a 'Progressing' state even though all pods are up and running.

However, I am able to use all of the functionality without any issue.

StorageCluster Details
==============================================
❯ oc get storagecluster -n openshift-storage
NAME                 AGE   PHASE         EXTERNAL   CREATED AT             VERSION
ocs-storagecluster   23m   Progressing              2023-04-26T16:36:26Z   4.13.0
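
When the cluster sits in 'Progressing', the StorageCluster status conditions and the ocs-operator log usually indicate which component is still reconciling. A minimal sketch using the resource name shown above; deploy/ocs-operator is the usual operator deployment name and is assumed here:

# List the status conditions of the StorageCluster
oc get storagecluster ocs-storagecluster -n openshift-storage \
  -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\t"}{.message}{"\n"}{end}'

# Check the operator log for the blocking condition
oc logs -n openshift-storage deploy/ocs-operator | tail -n 50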

Storageclass Output
===============================================
❯ oc get storageclass
NAME                                    PROVISIONER                             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
gp2-csi                                 ebs.csi.aws.com                         Delete          WaitForFirstConsumer   true                   3h54m
gp3-csi (default)                       ebs.csi.aws.com                         Delete          WaitForFirstConsumer   true                   3h54m
ocs-storagecluster-ceph-rbd             openshift-storage.rbd.csi.ceph.com      Delete          Immediate              true                   19m
ocs-storagecluster-ceph-rbd-encrypted   openshift-storage.rbd.csi.ceph.com      Delete          Immediate              false                  19m
ocs-storagecluster-cephfs               openshift-storage.cephfs.csi.ceph.com   Delete          Immediate              true                   19m
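
To confirm that the encrypted class is actually wired to the KMS, its parameters can be dumped. The parameter names encrypted and encryptionKMSID are the usual ceph-csi RBD ones and are an assumption here, not taken from this report:

# Show the encryption-related parameters of the encrypted storage class
oc get storageclass ocs-storagecluster-ceph-rbd-encrypted -o yaml | grep -E 'encrypted|encryptionKMSID'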

Comment 7 arun kumar mohan 2023-04-27 10:43:23 UTC
PR up for review: https://github.com/red-hat-storage/ocs-operator/pull/2040
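
For context, the corrected behaviour described in the Doc Text (NooBaa encryption only when KMS is enabled AND either cluster-wide encryption is on or NooBaa runs standalone) can be checked against a live cluster roughly as below. The spec field paths and the standalone value are assumptions based on the StorageCluster CRD, not copied from the PR:

SC=ocs-storagecluster; NS=openshift-storage
kms=$(oc get storagecluster "$SC" -n "$NS" -o jsonpath='{.spec.encryption.kms.enable}')
cw=$(oc get storagecluster "$SC" -n "$NS" -o jsonpath='{.spec.encryption.clusterWide}')
mcg=$(oc get storagecluster "$SC" -n "$NS" -o jsonpath='{.spec.multiCloudGateway.reconcileStrategy}')

# Fixed behaviour: NooBaa encryption only when KMS is enabled AND
# (cluster-wide encryption is on OR NooBaa is deployed standalone)
if [ "$kms" = "true" ] && { [ "$cw" = "true" ] || [ "$mcg" = "standalone" ]; }; then
  echo "NooBaa encryption expected to be ON"
else
  echo "NooBaa encryption expected to be OFF"
fi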

Comment 8 Sanjal Katiyar 2023-04-27 10:46:00 UTC
This will move to MODIFIED once the PR is merged into 4.13... we need acks for 4.13 on this BZ as well...

Comment 9 arun kumar mohan 2023-05-02 11:03:57 UTC
@ebenahar, can you please provide us with the QA_ACK+ flag?

Comment 18 errata-xmlrpc 2023-06-21 15:25:28 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat OpenShift Data Foundation 4.13.0 enhancement and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:3742

