Bug 2175327 - couldn't find key CSI_ENABLE_READ_AFFINITY in ConfigMap openshift-storage/ocs-operator-config
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: ocs-operator
Version: 4.13
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Malay Kumar Parida
QA Contact: Elad
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-03-03 22:30 UTC by Petr Balogh
Modified: 2023-08-09 17:00 UTC
CC: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-03-09 10:55:59 UTC
Embargoed:



Description Petr Balogh 2023-03-03 22:30:25 UTC
Description of problem (please be as detailed as possible and provide log
snippets):

        message: couldn't find key CSI_ENABLE_READ_AFFINITY in ConfigMap openshift-storage/ocs-operator-config
        reason: CreateContainerConfigError
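For context, CreateContainerConfigError is what the kubelet reports when a container references a ConfigMap key that does not exist at pod-creation time. A minimal sketch of the pattern presumably involved, assuming the rook-ceph-operator deployment sources the key through an env var (the exact deployment spec is not shown in this report):

    # Sketch only -- assumed shape of the reference, not the exact
    # rook-ceph-operator deployment spec.
    env:
    - name: CSI_ENABLE_READ_AFFINITY
      valueFrom:
        configMapKeyRef:
          name: ocs-operator-config
          key: CSI_ENABLE_READ_AFFINITY
          # setting optional: true here would let the container start
          # even while the key is still missing from the ConfigMap

With optional unset (it defaults to false), the kubelet refuses to create the container until the key appears, which matches the CreateContainerConfigError above.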
I see it in one of the upgrade executions to 4.13 here:
http://magna002.ceph.redhat.com/ocsci-jenkins/openshift-clusters/j-060vu1cs33-uba/j-060vu1cs33-uba_20230301T135226/logs/failed_testcase_ocs_logs_1677682178/test_upgrade_ocs_logs/ocs_must_gather/quay-io-rhceph-dev-ocs-must-gather-sha256-67192f075627d70af0c39a81b6b045025c5d6c129b36f413abe35bd2564a1492/namespaces/openshift-storage/pods/rook-ceph-operator-6495f967c-94f2g/rook-ceph-operator-6495f967c-94f2g.yaml

CSV: http://magna002.ceph.redhat.com/ocsci-jenkins/openshift-clusters/j-060vu1cs33-uba/j-060vu1cs33-uba_20230301T135226/logs/failed_testcase_ocs_logs_1677682178/test_upgrade_ocs_logs/ocs_must_gather/quay-io-rhceph-dev-ocs-must-gather-sha256-67192f075627d70af0c39a81b6b045025c5d6c129b36f413abe35bd2564a1492/namespaces/openshift-storage/oc_output/csv


After a discussion with Travis here: https://chat.google.com/room/AAAAREGEba8/i3d1e8Pn5Ag

he thinks it might be related to this PR:
https://github.com/red-hat-storage/ocs-operator/pull/1939
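If that PR added the key, one plausible sequence is that the upgraded rook-ceph-operator deployment referenced the key before the ocs-operator had reconciled it into the ConfigMap. For reference, the ConfigMap named in the error would need the key present, along the lines of the sketch below (the value is a placeholder assumption, not taken from the cluster):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: ocs-operator-config
      namespace: openshift-storage
    data:
      # placeholder value -- the actual default written by the operator
      # is not shown in this report
      CSI_ENABLE_READ_AFFINITY: "false"

Once the operator reconciles the key into place, the pod should start on the kubelet's next retry without manual intervention.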

Version of all relevant components (if applicable):


Does this issue impact your ability to continue to work with the product
(please explain in detail what the user impact is)?


Is there any workaround available to the best of your knowledge?


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?


Is this issue reproducible?
Not sure; trying to reproduce here:
https://ocs4-jenkins-csb-odf-qe.apps.ocp-c1.prod.psi.redhat.com/job/qe-trigger-vsphere-upi-1az-rhcos-vsan-3m-3w-upgrade-ocp-ocs-auto/63/

Can this issue be reproduced from the UI?
Haven't tried.

If this is a regression, please provide more details to justify this:
It worked before without hitting such an issue.

Steps to Reproduce:
1. Upgrade to 4.13


Actual results:
The error described above.

Expected results:
No issues during the upgrade.

Additional info:
This is the job:

https://ocs4-jenkins-csb-odf-qe.apps.ocp-c1.prod.psi.redhat.com/job/qe-trigger-vsphere-upi-1az-rhcos-vsan-3m-3w-upgrade-ocp-ocs-auto/60/

And full must gather link: http://magna002.ceph.redhat.com/ocsci-jenkins/openshift-clusters/j-060vu1cs33-uba/j-060vu1cs33-uba_20230301T135226/logs/failed_testcase_ocs_logs_1677682178/test_upgrade_ocs_logs/

Comment 2 Petr Balogh 2023-03-09 10:40:24 UTC
Both upgrades succeeded without issue; I haven't reproduced it yet.

