Bug 1738421

Summary: [sriov] sriov operator should be able to restore sriovnetworknodepolicy if the default policy is deleted
Product: OpenShift Container Platform
Reporter: zhaozhanqi <zzhao>
Component: Networking
Assignee: Peng Liu <pliu>
Status: CLOSED ERRATA
QA Contact: Weibin Liang <weliang>
Severity: medium
Docs Contact:
Priority: medium
Version: 4.2.0
CC: aos-bugs, piqin, wsun, zshi
Target Milestone: ---
Target Release: 4.2.0
Hardware: All
OS: All
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2019-10-16 06:35:02 UTC
Type: Bug

Description zhaozhanqi 2019-08-07 06:54:40 UTC
Description of problem:

The sriov operator does not restore the default sriovnetworknodepolicy CR after it has been deleted.

Version-Release number of selected component (if applicable):
quay.io/openshift-release-dev/ocp-v4.0-art-dev:v4.2.0-201908051019-ose-sriov-network-operator

How reproducible:
always

Steps to Reproduce:
1. Install sriov operator
2. Delete the default policy in sriovnetworknodepolicy from the web console (an equivalent CLI command is shown after this list)
3. Observe that all DaemonSets in the sriov namespace are deleted
4. Wait for the sriov operator to restore the default policy
5. Check the operator logs
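For reference, step 2 can equivalently be done from the CLI; the namespace and object name below are taken from the operator logs in the actual results:

oc delete sriovnetworknodepolicy default -n sriov-network-operator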
Actual results:
4. The sriov operator cannot restore the default policy.
5. The operator logs show:
{"level":"info","ts":1565156966.3371418,"logger":"controller_caconfig","msg":"Reconciling CA config map","Request.Namespace":"sriov-network-operator","Request.Name":"sriov-network-operator-lock"}
{"level":"info","ts":1565157387.1095076,"logger":"controller_sriovnetworknodepolicy","msg":"Reconciling SriovNetworkNodePolicy","Request.Namespace":"sriov-network-operator","Request.Name":"default"}
{"level":"error","ts":1565157387.109652,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"sriovnetworknodepolicy-controller","request":"sriov-network-operator/default","error":"SriovNetworkNodePolicy.sriovnetwork.openshift.io \"default\" not found","stacktrace":"github.com/openshift/sriov-network-operator/vendor/github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/sriov-network-operator/vendor/github.com/go-logr/zapr/zapr.go:128\ngithub.com/openshift/sriov-network-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/sriov-network-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\ngithub.com/openshift/sriov-network-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/sriov-network-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\ngithub.com/openshift/sriov-network-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/sriov-network-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\ngithub.com/openshift/sriov-network-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/sriov-network-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\ngithub.com/openshift/sriov-network-operator/vendor/k8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/sriov-network-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
{"level":"info","ts":1565157387.3210003,"logger":"controller_caconfig","msg":"Reconciling CA config map","Request.Namespace":"sriov-network-operator","Request.Name":"openshift-service-ca"}
{"level":"error","ts":1565157387.3210638,"logger":"controller_caconfig","msg":"Couldn't get caBundle ConfigMap","Request.Namespace":"sriov-network-operator","Request.Name":"openshift-service-ca","error":"ConfigMap \"openshift-service-ca\" not found","stacktrace":"github.com/openshift/sriov-network-operator/vendor/github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/sriov-network-operator/vendor/github.com/go-logr/zapr/zapr.go:128\ngithub.com/openshift/sriov-network-operator/pkg/controller/caconfig.(*ReconcileCAConfigMap).Reconcile\n\t/go/src/github.com/openshift/sriov-network-operator/pkg/controller/caconfig/caconfig_controller.go:81\ngithub.com/openshift/sriov-network-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/sriov-network-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:215\ngithub.com/openshift/sriov-network-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/sriov-network-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\ngithub.com/openshift/sriov-network-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/sriov-network-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\ngithub.com/openshift/sriov-network-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/sriov-network-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\ngithub.com/openshift/sriov-network-operator/vendor/k8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/sriov-network-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"

Expected results:

The sriov operator should restore the default policy and continue to work.

Additional info:
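In the logs above, the first error shows the sriovnetworknodepolicy reconciler returning the NotFound error for the deleted "default" object; returning an error from Reconcile only requeues the request, so every retry fails the same way instead of re-creating the object. The second error (ConfigMap "openshift-service-ca" not found) comes from the separate caconfig controller and is not the restore failure itself.

Below is a minimal sketch, in Go, of the usual controller-runtime pattern for making a default object self-healing. It is not the operator's actual code: the CRD import path, the ReconcileSriovNetworkNodePolicy receiver, and the newDefaultPolicy helper are illustrative assumptions.

// Sketch only: re-create a deleted "default" CR instead of returning
// the NotFound error from Reconcile.
package policy

import (
	"context"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/reconcile"

	// Assumed import path for the CRD types; adjust to the real one.
	sriovnetworkv1 "github.com/openshift/sriov-network-operator/pkg/apis/sriovnetwork/v1"
)

type ReconcileSriovNetworkNodePolicy struct {
	client client.Client
}

func (r *ReconcileSriovNetworkNodePolicy) Reconcile(req reconcile.Request) (reconcile.Result, error) {
	policy := &sriovnetworkv1.SriovNetworkNodePolicy{}
	err := r.client.Get(context.TODO(), req.NamespacedName, policy)
	if apierrors.IsNotFound(err) {
		if req.Name == "default" {
			// The default policy was deleted: re-create it instead of
			// returning the error (which only requeues the same failure).
			return reconcile.Result{}, r.client.Create(context.TODO(), newDefaultPolicy(req.Namespace))
		}
		// Other policies are user-owned; nothing to restore.
		return reconcile.Result{}, nil
	}
	if err != nil {
		return reconcile.Result{}, err
	}
	// ... normal reconciliation of an existing policy ...
	return reconcile.Result{}, nil
}

// newDefaultPolicy is a hypothetical helper; the real default spec
// lives in the operator.
func newDefaultPolicy(ns string) *sriovnetworkv1.SriovNetworkNodePolicy {
	return &sriovnetworkv1.SriovNetworkNodePolicy{
		ObjectMeta: metav1.ObjectMeta{Name: "default", Namespace: ns},
	}
}

With this pattern the delete event itself triggers the reconcile that restores the object, so no extra watcher is needed.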

Comment 3 zhaozhanqi 2019-08-22 07:39:05 UTC
Verified this bug on quay.io/openshift-release-dev/ocp-v4.0-art-dev:v4.2.0-201908192219-ose-sriov-network-operator

Comment 4 errata-xmlrpc 2019-10-16 06:35:02 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2922