Bug 2100507 - Remove redundant log lines from obj_retry.go
Summary: Remove redundant log lines from obj_retry.go
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: 4.11
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: low
Target Milestone: ---
Target Release: 4.11.0
Assignee: Riccardo Ravaioli
QA Contact: jechen
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-06-23 14:49 UTC by Riccardo Ravaioli
Modified: 2022-08-10 11:19 UTC
CC List: 2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-08-10 11:19:12 UTC
Target Upstream Version:
Embargoed:
jechen: needinfo-




Links
Github openshift/ovn-kubernetes pull 1162 (open): Bug 2100507: Remove redundant log lines in obj_retry.go (last updated 2022-06-23 14:51:25 UTC)
Red Hat Product Errata RHSA-2022:5069 (last updated 2022-08-10 11:19:22 UTC)

Description Riccardo Ravaioli 2022-06-23 14:49:31 UTC
Remove the redundant log lines that obj_retry.go emits for update events ("Updating ..."); an excerpt follows, with a short illustrative sketch after it:

2022-06-20T03:50:09.671682495+00:00 stderr F I0620 03:50:09.671670       1 obj_retry.go:1465] Updating *factory.egressIPPod http-scale-reencrypt/http-perf-62-6c7fc89784-9cnxl
2022-06-20T03:50:09.671697363+00:00 stderr F I0620 03:50:09.671684       1 obj_retry.go:1465] Updating *v1.Pod http-scale-reencrypt/http-perf-62-6c7fc89784-9cnxl
2022-06-20T03:50:09.671704653+00:00 stderr F I0620 03:50:09.671696       1 obj_retry.go:1465] Updating *v1.Pod http-scale-reencrypt/http-perf-62-6c7fc89784-9cnxl
2022-06-20T03:50:09.671711823+00:00 stderr F I0620 03:50:09.671705       1 obj_retry.go:1465] Updating *factory.egressIPPod http-scale-reencrypt/http-perf-62-6c7fc89784-9cnxl
2022-06-20T03:50:09.826404756+00:00 stderr F I0620 03:50:09.826362       1 obj_retry.go:1364] Creating *v1.Pod http-scale-reencrypt/http-perf-100-5d54c7bcd5-8mbmf took: 500ns
2022-06-20T03:50:09.826404756+00:00 stderr F I0620 03:50:09.826397       1 obj_retry.go:1364] Creating *factory.egressIPPod http-scale-reencrypt/http-perf-100-5d54c7bcd5-8mbmf took: 10.437µs
2022-06-20T03:50:09.834601714+00:00 stderr F I0620 03:50:09.834574       1 services_controller.go:248] Processing sync for service http-scale-reencrypt/http-perf-100
2022-06-20T03:50:09.834787354+00:00 stderr F I0620 03:50:09.834771       1 kube.go:303] Getting endpoints for slice http-scale-reencrypt/http-perf-100-ljj5j
2022-06-20T03:50:09.834800991+00:00 stderr F I0620 03:50:09.834785       1 kube.go:346] LB Endpoints for http-scale-reencrypt/http-perf-100 are: [] / [] on port: 0
2022-06-20T03:50:09.836846395+00:00 stderr F I0620 03:50:09.836818       1 obj_retry.go:1465] Updating *v1.Pod http-scale-reencrypt/http-perf-100-5d54c7bcd5-8mbmf
2022-06-20T03:50:09.836846395+00:00 stderr F I0620 03:50:09.836840       1 pods.go:364] [http-scale-reencrypt/http-perf-100-5d54c7bcd5-8mbmf] creating logical port for pod on switch ip-10-0-202-130.us-east-2.compute.internal
2022-06-20T03:50:09.836952110+00:00 stderr F I0620 03:50:09.836936       1 kube.go:73] Setting annotations map[k8s.ovn.org/pod-networks:{"default":{"ip_addresses":["10.129.56.41/23"],"mac_address":"0a:58:0a:81:38:29","gateway_ips":["10.129.56.1"],"ip_address":"10.129.56.41/23","gateway_ip":"10.129.56.1"}}] on pod http-scale-reencrypt/http-perf-100-5d54c7bcd5-8mbmf
2022-06-20T03:50:09.840156866+00:00 stderr F I0620 03:50:09.840123       1 services_controller.go:312] Service http-scale-reencrypt/http-perf-100 has 1 cluster-wide and 1 per-node configs, making 1 and 127 load balancers
2022-06-20T03:50:09.840464057+00:00 stderr F I0620 03:50:09.840449       1 services_controller.go:321] Skipping no-op change for service http-scale-reencrypt/http-perf-100
2022-06-20T03:50:09.840464057+00:00 stderr F I0620 03:50:09.840459       1 services_controller.go:252] Finished syncing service http-perf-100 on namespace http-scale-reencrypt : 5.8906ms
2022-06-20T03:50:09.866961658+00:00 stderr F I0620 03:50:09.866887       1 transact.go:41] Configuring OVN: [{Op:mutate Table:Address_Set Row:map[] Rows:[] Columns:[] Mutations:[{Column:addresses Mutator:insert Value:{GoSet:[10.129.56.41]}}] Timeout:<nil> Where:[where column _uuid == {96ebab1d-2a59-47d6-9a88-5505bc857906}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:} {Op:insert Table:NAT Row:map[external_ip:10.0.202.130 logical_ip:10.129.56.41 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:u2597217125} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:u2597217125}]}}] Timeout:<nil> Where:[where column _uuid == {b93ee365-02c3-4050-972e-0a7e790fab8a}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:} {Op:insert Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:81:38:29 10.129.56.41]} external_ids:{GoMap:map[namespace:http-scale-reencrypt pod:true]} name:http-scale-reencrypt_http-perf-100-5d54c7bcd5-8mbmf options:{GoMap:map[iface-id-ver:bac2d32a-9a5b-425c-a37b-51b203ef596b requested-chassis:ip-10-0-202-130.us-east-2.compute.internal]} port_security:{GoSet:[0a:58:0a:81:38:29 10.129.56.41]}] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:u2597217126} {Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:u2597217126}]}}] Timeout:<nil> Where:[where column _uuid == {578ddb3b-2e4e-41dc-a0c4-f7f650f125d0}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:}]
2022-06-20T03:50:09.872333792+00:00 stderr F I0620 03:50:09.872312       1 pods.go:353] [http-scale-reencrypt/http-perf-100-5d54c7bcd5-8mbmf] addLogicalPort took 35.473456ms, libovsdb time 5.414785ms, annotation time: 29.413274ms
2022-06-20T03:50:09.872347575+00:00 stderr F I0620 03:50:09.872339       1 obj_retry.go:1465] Updating *factory.egressIPPod http-scale-reencrypt/http-perf-100-5d54c7bcd5-8mbmf
2022-06-20T03:50:09.872374352+00:00 stderr F I0620 03:50:09.872362       1 obj_retry.go:1465] Updating *v1.Pod http-scale-reencrypt/http-perf-100-5d54c7bcd5-8mbmf
2022-06-20T03:50:09.872382218+00:00 stderr F I0620 03:50:09.872375       1 obj_retry.go:1465] Updating *factory.egressIPPod http-scale-reencrypt/http-perf-100-5d54c7bcd5-8mbmf
2022-06-20T03:50:09.872401355+00:00 stderr F I0620 03:50:09.872387       1 obj_retry.go:1465] Updating *v1.Pod http-scale-reencrypt/http-perf-100-5d54c7bcd5-8mbmf
2022-06-20T03:50:09.872401355+00:00 stderr F I0620 03:50:09.872398       1 obj_retry.go:1465] Updating *factory.egressIPPod http-scale-reencrypt/http-perf-100-5d54c7bcd5-8mbmf
2022-06-20T03:50:10.025734247+00:00 stderr F I0620 03:50:10.025702       1 obj_retry.go:1364] Creating *v1.Pod http-scale-reencrypt/http-perf-209-748cc8f8ff-fvq9q took: 361ns
2022-06-20T03:50:10.025776901+00:00 stderr F I0620 03:50:10.025733       1 obj_retry.go:1364] Creating *factory.egressIPPod http-scale-reencrypt/http-perf-209-748cc8f8ff-fvq9q took: 9.756µs
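
To make the pattern concrete, here is a minimal Go sketch (not the actual ovn-kubernetes code; the retryObjs type and processUpdate method are illustrative): each registered object type handles the same pod update and logs its own "Updating ..." line at the default verbosity, which is what produces the duplicates above. The fix tracked by this bug removes such redundant lines; demoting them to a higher klog verbosity, as shown here, is another way to keep the default output clean.

// Minimal sketch, not the actual obj_retry.go code: two handler types
// process the same pod update and each logs "Updating ..." unconditionally,
// producing the repeated lines shown in the excerpt above.
package main

import "k8s.io/klog/v2"

// retryObjs is an illustrative stand-in for the per-type retry handler.
type retryObjs struct {
    objType string // e.g. "*v1.Pod" or "*factory.egressIPPod"
}

func (r *retryObjs) processUpdate(key string) {
    // Before: an unconditional Info-level line per handler per update event.
    //   klog.Infof("Updating %s %s", r.objType, key)
    // One option is to demote it so it only shows with -v=5 or higher;
    // the change for this bug simply removes redundant lines like this.
    klog.V(5).Infof("Updating %s %s", r.objType, key)
    // ... actual update handling would go here ...
}

func main() {
    handlers := []*retryObjs{{objType: "*v1.Pod"}, {objType: "*factory.egressIPPod"}}
    for _, h := range handlers {
        h.processUpdate("http-scale-reencrypt/http-perf-100-5d54c7bcd5-8mbmf")
    }
}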

Comment 5 jechen 2022-07-12 19:45:58 UTC
Verified with 4.11.0-0.nightly-2022-07-11-080250

$ oc get clusterversion
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.11.0-0.nightly-2022-07-11-080250   True        False         31m     Cluster version is 4.11.0-0.nightly-2022-07-11-080250

$ oc get -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'  -n openshift-ovn-kubernetes  cm ovn-kubernetes-master
{"holderIdentity":"jechen-0712b-77l7t-master-1.c.openshift-qe.internal","leaseDurationSeconds":60,"acquireTime":"2022-07-12T18:52:13Z","renewTime":"2022-07-12T19:18:35Z","leaderTransitions":0}


$ oc get pod -n openshift-ovn-kubernetes -l app=ovnkube-master --field-selector=spec.nodeName=jechen-0712b-77l7t-master-1.c.openshift-qe.internal -o jsonpath={.items[*].metadata.name}
ovnkube-master-rln2t

$ oc -n openshift-ovn-kubernetes logs ovnkube-master-rln2t -c ovnkube-master 

.......

E0712 18:52:14.242729       1 obj_retry.go:1358] Failed to create *v1.Pod object openshift-network-diagnostics/network-check-target-w5n4z, error: addLogicalPort failed for openshift-network-diagnostics/network-check-target-w5n4z: unable to parse node L3 gw annotation: k8s.ovn.org/l3-gateway-config annotation not found for node "jechen-0712b-77l7t-master-1.c.openshift-qe.internal"
I0712 18:52:14.242771       1 obj_retry.go:1364] Creating *v1.Pod openshift-multus/multus-additional-cni-plugins-2nbst took: 449ns
I0712 18:52:14.242789       1 obj_retry.go:1364] Creating *v1.Pod openshift-ovn-kubernetes/ovnkube-node-tn57q took: 556ns
I0712 18:52:14.242804       1 obj_retry.go:1364] Creating *v1.Pod openshift-ingress-operator/ingress-operator-5c659bd8cd-c62jb took: 212ns
I0712 18:52:14.242868       1 obj_retry.go:1465] Updating *v1.Pod openshift-network-diagnostics/network-check-target-92h28
I0712 18:52:14.242904       1 pods.go:369] [openshift-network-diagnostics/network-check-target-92h28] creating logical port for pod on switch jechen-0712b-77l7t-master-2.c.openshift-qe.internal
I0712 18:52:14.243088       1 pods.go:358] [openshift-network-diagnostics/network-check-target-92h28] addLogicalPort took 186.771µs, libovsdb time 0s, annotation time: 0s
E0712 18:52:14.243119       1 obj_retry.go:1468] Failed to update resource *v1.Pod, old=openshift-network-diagnostics/network-check-target-92h28, new=openshift-network-diagnostics/network-check-target-92h28, error: addLogicalPort failed for openshift-network-diagnostics/network-check-target-92h28: unable to parse node L3 gw annotation: k8s.ovn.org/l3-gateway-config annotation not found for node "jechen-0712b-77l7t-master-2.c.openshift-qe.internal"
I0712 18:52:14.243151       1 event.go:285] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-network-diagnostics", Name:"network-check-target-w5n4z", UID:"5d4386e3-2747-4410-a102-38d47fb741bc", APIVersion:"v1", ResourceVersion:"3326", FieldPath:""}): type: 'Warning' reason: 'ErrorAddingLogicalPort' addLogicalPort failed for openshift-network-diagnostics/network-check-target-w5n4z: unable to parse node L3 gw annotation: k8s.ovn.org/l3-gateway-config annotation not found for node "jechen-0712b-77l7t-master-1.c.openshift-qe.internal"
I0712 18:52:14.243165       1 event.go:285] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-network-diagnostics", Name:"network-check-target-92h28", UID:"b125e0ec-1688-413e-b01a-dac5f6967190", APIVersion:"v1", ResourceVersion:"4009", FieldPath:""}): type: 'Warning' reason: 'ErrorAddingLogicalPort' addLogicalPort failed for openshift-network-diagnostics/network-check-target-92h28: unable to parse node L3 gw annotation: k8s.ovn.org/l3-gateway-config annotation not found for node "jechen-0712b-77l7t-master-2.c.openshift-qe.internal"
I0712 18:52:14.243207       1 obj_retry.go:1465] Updating *v1.Pod openshift-multus/network-metrics-daemon-skxzl
I0712 18:52:14.243237       1 pods.go:369] [openshift-multus/network-metrics-daemon-skxzl] creating logical port for pod on switch jechen-0712b-77l7t-master-0.c.openshift-qe.internal
I0712 18:52:14.243279       1 obj_retry.go:1364] Creating *factory.egressIPNamespace openshift-cluster-version took: 65.121µs
I0712 18:52:14.243307       1 obj_retry.go:1364] Creating *factory.egressIPNamespace openshift-dns-operator took: 5.365µs
I0712 18:52:14.243322       1 obj_retry.go:1364] Creating *factory.egressIPNamespace openshift-ovirt-infra took: 2.736µs
I0712 18:52:14.243358       1 pods.go:358] [openshift-multus/network-metrics-daemon-skxzl] addLogicalPort took 123.721µs, libovsdb time 0s, annotation time: 0s
E0712 18:52:14.243387       1 obj_retry.go:1468] Failed to update resource *v1.Pod, old=openshift-multus/network-metrics-daemon-skxzl, new=openshift-multus/network-metrics-daemon-skxzl, error: addLogicalPort failed for openshift-multus/network-metrics-daemon-skxzl: unable to parse node L3 gw annotation: k8s.ovn.org/l3-gateway-config annotation not found for node "jechen-0712b-77l7t-master-0.c.openshift-qe.internal"
I0712 18:52:14.243394       1 obj_retry.go:1364] Creating *factory.egressIPNamespace openshift-cloud-credential-operator took: 21.156µs
I0712 18:52:14.243406       1 event.go:285] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-multus", Name:"network-metrics-daemon-skxzl", UID:"a0b99553-5d2e-4d55-9971-8150673ab0b2", APIVersion:"v1", ResourceVersion:"4002", FieldPath:""}): type: 'Warning' reason: 'ErrorAddingLogicalPort' addLogicalPort failed for openshift-multus/network-metrics-daemon-skxzl: unable to parse node L3 gw annotation: k8s.ovn.org/l3-gateway-config annotation not found for node "jechen-0712b-77l7t-master-0.c.openshift-qe.internal"
I0712 18:52:14.243413       1 obj_retry.go:1364] Creating *factory.egressIPNamespace openshift-network-operator took: 4.495µs
I0712 18:52:14.243439       1 obj_retry.go:1364] Creating *factory.egressIPNamespace openshift-kni-infra took: 8.35µs
I0712 18:52:14.243448       1 obj_retry.go:1364] Creating *factory.egressIPNamespace openshift-kube-controller-manager took: 23.054µs
I0712 18:52:14.243454       1 obj_retry.go:1364] Creating *factory.egressIPNamespace openshift-kube-apiserver-operator took: 2.809µs
I0712 18:52:14.243466       1 obj_retry.go:1364] Creating *factory.egressIPNamespace openshift-etcd took: 3.03µs
I0712 18:52:14.243468       1 obj_retry.go:1364] Creating *factory.egressIPNamespace kube-public took: 1.959µs
I0712 18:52:14.243482       1 obj_retry.go:1364] Creating *factory.egressIPNamespace openshift-cloud-network-config-controller took: 2.4µs


.....


==> Verified that there are no redundant log lines from obj_retry.go
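
As a side note (not part of the QA verification above), a small illustrative helper, assuming a hypothetical file name such as finddups.go, could be used to double-check a log dump for repeated obj_retry.go "Updating ..." messages, for example: oc -n openshift-ovn-kubernetes logs ovnkube-master-rln2t -c ovnkube-master | go run finddups.go

// Illustrative helper (not part of the verification steps above): reads
// ovnkube-master log output on stdin and reports obj_retry.go "Updating ..."
// messages that appear more than once.
package main

import (
    "bufio"
    "fmt"
    "os"
    "strings"
)

func main() {
    scanner := bufio.NewScanner(os.Stdin)
    // Raise the buffer limit: OVN transact lines in these logs can be very long.
    scanner.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
    counts := map[string]int{}
    for scanner.Scan() {
        line := scanner.Text()
        if !strings.Contains(line, "obj_retry.go") || !strings.Contains(line, "] Updating ") {
            continue
        }
        // Key on the message from "Updating" onward (type + namespace/name),
        // ignoring timestamps and source line numbers. Note this counts any
        // repeated identical message, not only back-to-back duplicates.
        msg := line[strings.Index(line, "Updating"):]
        counts[msg]++
    }
    for msg, n := range counts {
        if n > 1 {
            fmt.Printf("%d x %s\n", n, msg)
        }
    }
}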

Comment 6 errata-xmlrpc 2022-08-10 11:19:12 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Important: OpenShift Container Platform 4.11.0 bug fix and security update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:5069

