Bug 1877339 - Incorrect use of logr
Summary: Incorrect use of logr
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Container Storage
Classification: Red Hat Storage
Component: ocs-operator
Version: 4.6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: low
Target Milestone: ---
Target Release: OCS 4.6.0
Assignee: N Balachandran
QA Contact: Svetlana Avetisyan
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-09-09 12:19 UTC by N Balachandran
Modified: 2020-12-17 06:24 UTC
CC List: 7 users

Fixed In Version: 4.6.0-102.ci
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-12-17 06:24:01 UTC
Embargoed:


Attachments: None


Links:
  Github: openshift/ocs-operator pull 779 (last updated 2020-09-30 06:28:21 UTC)
  Red Hat Product Errata: RHSA-2020:5605 (last updated 2020-12-17 06:24:26 UTC)

Description N Balachandran 2020-09-09 12:19:16 UTC
Description of problem (please be as detailed as possible and provide log snippets):


Incorrect logr usage. Use structured logging for messages.



Some log messages do not follow the structured key/value format, which causes logr to emit errors in the log file:

r.reqLogger.Info("reconciling metrics exporter service %v", namespacedName)

As this call is not structured, the following message shows up in the ocs-operator pod log:

Output:
{"level":"dpanic","ts":"2020-09-08T14:45:48.477Z","logger":"controller_storagecluster","msg":"odd number of arguments passed as key-value pairs for logging","ignored key":{"namespace":"openshift-storage","name":"ocs-metrics-exporter"},"stacktrace":"github.com/go-logr/zapr.handleFields\n\t/home/nibalach/go/pkg/mod/github.com/go-logr/zapr.1/zapr.go:106\ngithub.com/go-logr/zapr.(*infoLogger).Info\n\t/home/nibalach/go/pkg/mod/github.com/go-logr/zapr.1/zapr.go:70\ngithub.com/openshift/ocs-operator/pkg/controller/storagecluster.CreateOrUpdateService\n\t/home/nibalach/go/src/github.com/ocs-operator/ocs-operator/pkg/controller/storagecluster/exporter.go:93\ngithub.com/openshift/ocs-operator/pkg/controller/storagecluster.(*ReconcileStorageCluster).enableMetricsExporter\n\t/home/nibalach/go/src/github.com/ocs-operator/ocs-operator/pkg/controller/storagecluster/exporter.go:32\ngithub.com/openshift/ocs-operator/pkg/controller/storagecluster.(*ReconcileStorageCluster).Reconcile\n\t/home/nibalach/go/src/github.com/ocs-operator/ocs-operator/pkg/controller/storagecluster/reconcile.go:350\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/home/nibalach/go/pkg/mod/sigs.k8s.io/controller-runtime.2/pkg/internal/controller/controller.go:256\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/home/nibalach/go/pkg/mod/sigs.k8s.io/controller-runtime.2/pkg/internal/controller/controller.go:232\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/home/nibalach/go/pkg/mod/sigs.k8s.io/controller-runtime.2/pkg/internal/controller/controller.go:211\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/home/nibalach/go/pkg/mod/k8s.io/apimachinery.4/pkg/util/wait/wait.go:152\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/home/nibalach/go/pkg/mod/k8s.io/apimachinery.4/pkg/util/wait/wait.go:153\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/home/nibalach/go/pkg/mod/k8s.io/apimachinery.4/pkg/util/wait/wait.go:88"} {"level":"info","ts":"2020-09-08T14:45:48.477Z","logger":"controller_storagecluster","msg":"reconciling metrics exporter service %v"}



Version of all relevant components (if applicable):


Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?


Is there any workaround available to the best of your knowledge?


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
1

Is this issue reproducible?
yes

Can this issue be reproduced from the UI?
Yes

If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Install the OCS operator
2. Create a storage cluster
3. oc logs <ocs-operator-pod-name> -n openshift-storage


Actual results:


Expected results:


Additional info:

Comment 6 Svetlana Avetisyan 2020-10-27 05:14:27 UTC
Deploying a new cluster to validate this bug.
https://ocs4-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/qe-destroy-ocs-cluster/10929/

Comment 7 Svetlana Avetisyan 2020-10-27 05:36:18 UTC
On version 4.6.0-0.nightly-2020-10-13-064047 I can see the normal way of logging:
(.venv) [savetisy@savetisy ocs-ci]$ oc logs pod/ocs-operator-8565974c99-lws9j

{"level":"error","ts":"2020-10-27T04:27:15.722Z","logger":"controller-runtime.controller","msg":"Reconciler error","controller":"storagecluster-controller","request":"openshift-storage/ocs-storagecluster","error":"failed to update service openshift-storage/ocs-metrics-exporter. Service \"ocs-metrics-exporter\" is invalid: spec.clusterIP: Invalid value: \"\": field is immutable","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/remote-source/app/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/remote-source/app/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:258\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/remote-source/app/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:232\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/remote-source/app/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:211\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/remote-source/app/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/remote-source/app/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/remote-source/app/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
{"level":"info","ts":"2020-10-27T04:27:16.722Z","logger":"controller_storagecluster","msg":"Reconciling StorageCluster","Request.Namespace":"openshift-storage","Request.Name":"ocs-storagecluster"}
{"level":"info","ts":"2020-10-27T04:27:16.745Z","logger":"controller_storagecluster","msg":"No component operator reported negatively","Request.Namespace":"openshift-storage","Request.Name":"ocs-storagecluster"}
{"level":"info","ts":"2020-10-27T04:27:16.768Z","logger":"controller_storagecluster","msg":"Reconciling metrics exporter service","NamespacedName":{"namespace":"openshift-storage","name":"ocs-metrics-exporter"}}
{"level":"error","ts":"2020-10-27T04:27:16.774Z","logger":"controller_storagecluster","msg":"failed to reconcile metrics exporter","Request.Namespace":"openshift-storage","Request.Name":"ocs-storagecluster","error":"failed to update service openshift-storage/ocs-metrics-exporter. Service \"ocs-metrics-exporter\" is invalid: spec.clusterIP: Invalid value: \"\": field is immutable","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/remote-source/app/vendor/github.com/go-logr/zapr/zapr.go:128\ngithub.com/openshift/ocs-operator/pkg/controller/storagecluster.(*ReconcileStorageCluster).Reconcile\n\t/remote-source/app/pkg/controller/storagecluster/reconcile.go:332\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/remote-source/app/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:256\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/remote-source/app/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:232\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/remote-source/app/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:211\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/remote-source/app/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/remote-source/app/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/remote-source/app/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
{"level":"error","ts":"2020-10-27T04:27:16.775Z","logger":"controller-runtime.controller","msg":"Reconciler error","controller":"storagecluster-controller","request":"openshift-storage/ocs-storagecluster","error":"failed to update service openshift-storage/ocs-metrics-exporter. Service \"ocs-metrics-exporter\" is invalid: spec.clusterIP: Invalid value: \"\": field is immutable","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/remote-source/app/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/remote-source/app/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:258\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/remote-source/app/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:232\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/remote-source/app/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:211\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/remote-source/app/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/remote-source/app/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/remote-source/app/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
{"level":"info","ts":"2020-10-27T04:27:17.775Z","logger":"controller_storagecluster","msg":"Reconciling StorageCluster","Request.Namespace":"openshift-storage","Request.Name":"ocs-storagecluster"}
{"level":"info","ts":"2020-10-27T04:27:17.789Z","logger":"controller_storagecluster","msg":"No component operator reported negatively","Request.Namespace":"openshift-storage","Request.Name":"ocs-storagecluster"}
{"level":"info","ts":"2020-10-27T04:27:17.806Z","logger":"controller_storagecluster","msg":"Reconciling metrics exporter service","NamespacedName":{"namespace":"openshift-storage","name":"ocs-metrics-exporter"}}

Comment 9 errata-xmlrpc 2020-12-17 06:24:01 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat OpenShift Container Storage 4.6.0 security, bug fix, enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:5605

