Bug 2099592

Summary: [RFE] Provide a way to debug failed StorageClassClaims
Product: [Red Hat Storage] Red Hat OpenShift Data Foundation
Reporter: Filip Balák <fbalak>
Component: odf-managed-service
Assignee: Ohad <omitrani>
Status: ON_QA
QA Contact: Jilju Joy <jijoy>
Severity: medium
Priority: unspecified
Version: 4.11
CC: aeyal, dbindra, jijoy, odf-bz-bot, omitrani
Target Milestone: ---
Target Release: ---
Keywords: FutureFeature, RFE
Hardware: Unspecified
OS: Unspecified
Type: Bug

Description Filip Balák 2022-06-21 09:55:18 UTC
Description of problem:
It is difficult to debug what went wrong with a StorageClassClaim when its phase is Failed. There should be a message describing the error that occurred.

Version-Release number of selected component (if applicable):
odf-operator.v4.11.0

Steps to Reproduce:
1. This was discovered during testing of https://bugzilla.redhat.com/show_bug.cgi?id=2099581, but any incorrect KMS configuration should put the StorageClassClaim into a Failed state; a minimal manifest is sketched below.
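
For reference, a minimal manifest reconstructed from the last-applied-configuration annotation shown below (the encryptionMethod value is the invalid one from the original test) that can be applied with oc apply -f to reproduce the Failed phase:

apiVersion: ocs.openshift.io/v1alpha1
kind: StorageClassClaim
metadata:
  name: encrypted-rbd-test
  namespace: kms-test
spec:
  # intentionally invalid KMS encryption method, per the reproducer above
  encryptionMethod: aws-sts-metadata-test
  type: blockpool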

Actual results:
Example of the created StorageClassClaim:

apiVersion: ocs.openshift.io/v1alpha1
kind: StorageClassClaim
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"ocs.openshift.io/v1alpha1","kind":"StorageClassClaim","metadata":{"annotations":{},"name":"encrypted-rbd-test","namespace":"kms-test"},"spec":{"encryptionMethod":"aws-sts-metadata-test","type":"blockpool"}}
  resourceVersion: '911497'
  name: encrypted-rbd-test
  uid: 380a05d7-173b-4b79-9ceb-dab99073cd0f
  creationTimestamp: '2022-06-21T09:03:33Z'
  generation: 1
  managedFields:
    - apiVersion: ocs.openshift.io/v1alpha1
      fieldsType: FieldsV1
      fieldsV1:
        'f:metadata':
          'f:annotations':
            .: {}
            'f:kubectl.kubernetes.io/last-applied-configuration': {}
        'f:spec':
          .: {}
          'f:encryptionMethod': {}
          'f:type': {}
      manager: kubectl-client-side-apply
      operation: Update
      time: '2022-06-21T09:03:33Z'
    - apiVersion: ocs.openshift.io/v1alpha1
      fieldsType: FieldsV1
      fieldsV1:
        'f:metadata':
          'f:finalizers':
            .: {}
            'v:"storageclassclaim.ocs.openshift.io"': {}
      manager: ocs-operator
      operation: Update
      time: '2022-06-21T09:03:33Z'
    - apiVersion: ocs.openshift.io/v1alpha1
      fieldsType: FieldsV1
      fieldsV1:
        'f:status':
          .: {}
          'f:phase': {}
      manager: ocs-operator
      operation: Update
      subresource: status
      time: '2022-06-21T09:03:33Z'
  namespace: kms-test
  finalizers:
    - storageclassclaim.ocs.openshift.io
spec:
  encryptionMethod: aws-sts-metadata-test
  type: blockpool
status:
  phase: Failed

In "status" section is only "phase" attribute with "Failed" value.

Expected results:
There should be a way to debug a failed StorageClassClaim. For example, the "status" section could contain an additional attribute besides "phase" that describes what went wrong during creation of the StorageClassClaim; a possible shape is sketched below.
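
One possible shape, following the standard Kubernetes conditions convention (the condition type, reason, and message here are illustrative, not an implemented API):

status:
  phase: Failed
  conditions:
    # illustrative condition; the exact types and reasons would be defined by the operator
    - type: StorageClassClaimReady
      status: 'False'
      reason: InvalidEncryptionMethod
      message: 'encryption method "aws-sts-metadata-test" is not a known KMS configuration'
      lastTransitionTime: '2022-06-21T09:03:33Z'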

Additional info:

Comment 4 Dhruv Bindra 2022-09-20 07:45:42 UTC
Moving the bug to ON_QA, as the relevant errors are now visible in the logs of ocs-operator or the provider-api-server.
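
For example, the operator logs can be pulled with something like the following (the openshift-storage namespace and the exact deployment name are assumptions and may differ per deployment; the provider-api-server logs can be checked the same way against its deployment):

oc logs -n openshift-storage deployment/ocs-operator | grep -i storageclassclaim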

Comment 11 Ohad 2023-07-03 14:04:39 UTC
@fbalak
In my opinion, we should move this bug to the OCS-operator component. There is nothing we can do from the deployer/service side to accommodate this request.