Bug 1914053

Summary: Pods assigned a Multus Whereabouts IP get stuck in the ContainerCreating state after a node reboot.
Product: OpenShift Container Platform
Component: Networking
Networking sub component: multus
Reporter: Xingbin Li <xingli>
Assignee: Douglas Smith <dosmith>
QA Contact: Weibin Liang <weliang>
Status: CLOSED ERRATA
Severity: urgent
Priority: urgent
CC: acandelp, bbennett, bmehra, dborkows, dosmith, gferrazs, midzik, pnarkhed, rhowe, r.martinez, satripat, sushil.suresh, wrussell
Version: 4.6
Target Milestone: ---
Target Release: 4.10.0
Hardware: Unspecified
OS: Unspecified
Fixed In Version:
Doc Type: Enhancement
Doc Text:
Feature: Added a reconciliation job that periodically cleans up stranded IP addresses.
Reason: IP address allocations made by Whereabouts can be left stranded by cluster events such as node reboots, which prevent the CNI DEL from running and properly releasing the IP address. When too many stranded addresses exhaust a range, pods that reference a Whereabouts-managed secondary interface on that range can end up in an unrecoverable crash loop.
Result: Stranded IP addresses are periodically cleaned up.
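The cleanup described above can be illustrated with a minimal sketch. This is a hypothetical data model in Python, not the actual Whereabouts Go implementation: an allocation counts as stranded when no live pod still references it, and stranded entries are released so reboots that skipped CNI DEL cannot exhaust the range.

```python
def reconcile(allocations, live_pods):
    """Return (kept, released) allocation maps.

    allocations: dict mapping IP -> pod reference (e.g. "namespace/name")
    live_pods:   set of pod references currently known to the cluster
    """
    # An allocation is kept only while its owning pod still exists.
    kept = {ip: pod for ip, pod in allocations.items() if pod in live_pods}
    # Everything else was stranded (e.g. by a reboot) and is released.
    released = {ip: pod for ip, pod in allocations.items() if pod not in live_pods}
    return kept, released


if __name__ == "__main__":
    allocations = {
        "10.10.0.2": "default/db-0",       # pod still running
        "10.10.0.3": "default/old-web-1",  # pod lost in a node reboot
    }
    live_pods = {"default/db-0"}
    kept, released = reconcile(allocations, live_pods)
    print(sorted(released))  # stranded IPs to return to the pool
```

Run periodically (Whereabouts ships this as a scheduled reconciliation job), this keeps the usable range from shrinking over time.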
Story Points: ---
Last Closed: 2022-03-10 16:02:37 UTC
Type: Bug

Comment 17 Weibin Liang 2021-11-03 15:45:54 UTC
Tested and verified in 4.10.0-0.nightly-2021-11-02-19163
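For context, the pods affected by this bug reference a Whereabouts-managed secondary interface through a NetworkAttachmentDefinition along these lines (the name, master interface, and range below are illustrative, not taken from the reported environment):

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-whereabouts   # illustrative name
  namespace: default
spec:
  config: |-
    {
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth1",
      "ipam": {
        "type": "whereabouts",
        "range": "192.168.2.0/24"
      }
    }
```

When enough IPs in such a range are stranded, new pods referencing the attachment cannot be assigned an address and stay in ContainerCreating.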

Comment 25 errata-xmlrpc 2022-03-10 16:02:37 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.10.3 security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.


Comment 31 Adrian 2023-01-30 07:26:03 UTC
Hello Team,

I have another customer with the same issue on RHOCP 4.11.6 --> 03412816

Could you please confirm whether the fix was also backported to 4.11?

Many thanks in advance,

Best Regards,