Bug 2027485 - [4.9z] AddressManager should not call sync() from ErrorCallback
Summary: [4.9z] AddressManager should not call sync() from ErrorCallback
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: 4.9
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: high
Target Milestone: ---
Target Release: 4.9.z
Assignee: ffernand
QA Contact: Anurag saxena
URL:
Whiteboard:
Depends On: 2009873
Blocks: 2022042 2027487
 
Reported: 2021-11-29 18:56 UTC by ffernand
Modified: 2021-12-13 12:06 UTC
CC List: 2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 2027487
Environment:
Last Closed: 2021-12-13 12:06:24 UTC
Target Upstream Version:
Embargoed:




Links:
  Github openshift ovn-kubernetes pull 852 (open): Bug 2027485 [4.9z]: addressManager should not call sync() from ErrorCallback (last updated 2021-11-29 19:15:15 UTC)
  Red Hat Product Errata RHBA-2021:5003 (last updated 2021-12-13 12:06:53 UTC)

Description ffernand 2021-11-29 18:56:06 UTC
This is a follow-up issue for bug 2022042, which is a clone of
bug 2009873 (4.10).

If the addressManager's ErrorCallback fires after stopChan has been
closed, it is not safe to call sync(), because c.watchFactory is no
longer usable (an illustrative sketch follows the trace below):

I1119 17:12:47.755099   18058 node_ip_handler_linux.go:213] Skipping invalid IP address found on host: %s::1
I1119 17:12:47.755140   18058 node_ip_handler_linux.go:213] Skipping invalid IP address found on host: %sfe80::20d:3aff:fe8e:860c
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x48 pc=0x251de6c]

goroutine 85 [running]:
github.com/ovn-org/ovn-kubernetes/go-controller/pkg/node.(*addressManager).doesNodeHostAddressesMatch(0xc000099590, 0xc0005e1b00)
	/home/runner/work/ovn-kubernetes/ovn-kubernetes/go-controller/pkg/node/node_ip_handler_linux.go:157 +0xec
github.com/ovn-org/ovn-kubernetes/go-controller/pkg/node.(*addressManager).sync(0xc000099590)
	/home/runner/work/ovn-kubernetes/ovn-kubernetes/go-controller/pkg/node/node_ip_handler_linux.go:220 +0x8c5
github.com/ovn-org/ovn-kubernetes/go-controller/pkg/node.(*addressManager).Run.func1(0x2d139e0, 0xc00031b4d0)
	/home/runner/work/ovn-kubernetes/ovn-kubernetes/go-controller/pkg/node/node_ip_handler_linux.go:76 +0x165
github.com/vishvananda/netlink.addrSubscribeAt.func2(0xc000689980, 0xc0000495c0, 0xc00031af20)
	/home/runner/work/ovn-kubernetes/ovn-kubernetes/go-controller/vendor/github.com/vishvananda/netlink/addr_linux.go:360 +0x7ab
created by github.com/vishvananda/netlink.addrSubscribeAt
	/home/runner/work/ovn-kubernetes/ovn-kubernetes/go-controller/vendor/github.com/vishvananda/netlink/addr_linux.go:354 +0x130
make: *** [Makefile:46: check] Error 2
Error: Process completed with exit code 2.
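
For illustration only, here is a minimal, self-contained Go sketch of making an error callback shutdown-aware via the stop channel, so it never reaches sync() after teardown. This is not the actual change from the pull request linked above (whose title suggests sync() is simply no longer called from the ErrorCallback); the names addressManager, stopChan and sync are modeled on the stack trace but simplified.

package main

import "fmt"

// Sketch only; not the real ovn-kubernetes addressManager.
type addressManager struct {
	stopChan chan struct{}
}

// sync stands in for the real resync logic that must not run after shutdown,
// because it dereferences state (such as the watch factory) that is torn down.
func (c *addressManager) sync() {
	fmt.Println("resyncing node host addresses")
}

// errorCallback shows one way a subscription error handler can be guarded:
// once stopChan is closed, it logs and returns instead of calling sync().
func (c *addressManager) errorCallback(err error) {
	select {
	case <-c.stopChan:
		fmt.Printf("ignoring subscription error after shutdown: %v\n", err)
		return
	default:
	}
	fmt.Printf("subscription error while running: %v\n", err)
	c.sync()
}

func main() {
	c := &addressManager{stopChan: make(chan struct{})}
	c.errorCallback(fmt.Errorf("transient netlink error")) // still running: resyncs
	close(c.stopChan)
	c.errorCallback(fmt.Errorf("error after stop")) // guarded: no sync(), no panic
}

In this sketch the first callback resyncs while the second, issued after close(c.stopChan), only logs and returns, which is the behavior that avoids the nil pointer dereference shown in the trace above.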

Comment 7 Lalatendu Mohanty 2021-12-10 17:56:45 UTC
We're asking the following questions to evaluate whether or not this bug warrants blocking an upgrade edge from either the previous X.Y or X.Y.Z. The ultimate goal is to avoid delivering an update which introduces new risk or reduces cluster functionality in any way. Sample answers are provided to give more context and the ImpactStatementRequested label has been added to this bug.

When responding, please remove ImpactStatementRequested and set the ImpactStatementProposed label. The expectation is that the assignee answers these questions.

Who is impacted? If we have to block upgrade edges based on this issue, which edges would need blocking?

    example: Customers upgrading from 4.y.Z to 4.y+1.z running on GCP with thousands of namespaces, approximately 5% of the subscribed fleet
    example: All customers upgrading from 4.y.z to 4.y+1.z fail approximately 10% of the time

What is the impact? Is it serious enough to warrant blocking edges?

    example: Up to 2 minute disruption in edge routing
    example: Up to 90 seconds of API downtime
    example: etcd loses quorum and you have to restore from backup

How involved is remediation (even moderately serious impacts might be acceptable if they are easy to mitigate)?

    example: Issue resolves itself after five minutes
    example: Admin uses oc to fix things
    example: Admin must SSH to hosts, restore from backups, or other non standard admin activities

Is this a regression (if all previous versions were also vulnerable, updating to the new, vulnerable version does not increase exposure)?

    example: No, it has always been like this we just never noticed
    example: Yes, from 4.y.z to 4.y+1.z Or 4.y.z to 4.y.z+1

Comment 8 ffernand 2021-12-10 18:53:31 UTC
(In reply to Lalatendu Mohanty from comment #7)
> We're asking the following questions to evaluate whether or not this bug
> warrants blocking an upgrade edge from either the previous X.Y or X.Y.Z. The
> ultimate goal is to avoid delivering an update which introduces new risk or
> reduces cluster functionality in any way. Sample answers are provided to
> give more context and the ImpactStatementRequested label has been added to
> this bug.
> 
> When responding, please remove ImpactStatementRequested and set the
> ImpactStatementProposed label. The expectation is that the assignee
> answers these questions.
> 
> Who is impacted? If we have to block upgrade edges based on this issue,
> which edges would need blocking?

I do not think this bz should block upgrade edges. It is just a fix for the test code.


Comment 9 W. Trevor King 2021-12-10 19:32:03 UTC
The backport story for bug 2009873 is a bit complicated.  I'll ask for an overall summary / impact-statement over there.

Comment 11 errata-xmlrpc 2021-12-13 12:06:24 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.9.11 bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:5003

