Bug 1848419
| Summary: | Root cause desired: Nodes are going into NotReady state intermittently. | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | manisha <mdhanve> |
| Component: | Node | Assignee: | Peter Hunt <pehunt> |
| Node sub component: | CRI-O | QA Contact: | Sunil Choudhary <schoudha> |
| Status: | CLOSED INSUFFICIENT_DATA | Docs Contact: | |
| Severity: | medium | | |
| Priority: | medium | CC: | aos-bugs, dornelas, jokerman, mpatel, pehunt |
| Version: | 3.11.0 | | |
| Target Milestone: | --- | | |
| Target Release: | 3.11.z | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2020-11-13 21:20:33 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
manisha 2020-06-18 10:56:39 UTC
Setting the target release to the development branch so we can investigate and fix. Once we understand the issue we can consider a backport.

Network error messages aren't related; they're normal and uninteresting. The PLEG failing is what matters. Over to Node.

What size disks are you running? I suspect you are hitting an IOPS throttle. Do you see high I/O wait times? High memory utilization?

> GenericPLEG: Unable to retrieve pods: rpc error: code = ResourceExhausted desc = grpc: trying to send message larger than max (8396465 vs. 8388608)

The customer hit the new gRPC message size limit. We increased this a while back:

https://github.com/kubernetes/kubernetes/pull/63977
https://access.redhat.com/solutions/3803411

Not sure how to avoid hitting it unless we figure out some way to do more aggressive GC.

I assume that cleaning up dead containers reduced the size of the message and restored connectivity between the kubelet and CRI-O. Given the above root cause (un-GC'd pods causing gRPC overflow), is there more action needed for this bug?
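For reference, a minimal Go sketch of where a cap like the 8388608-byte (8 MiB) one in the error above lives on each side of the kubelet-to-CRI-O gRPC connection. This is an illustration under assumptions, not either project's actual source: the socket paths are made up, and the send-side vs. receive-side attribution is inferred only from grpc-go's error wording ("trying to send message larger than max" is produced on the sending side).

```go
// Sketch only: where gRPC message-size caps are set on each side of a
// kubelet <-> CRI-O style connection. Paths and constants are assumptions.
package main

import (
	"log"
	"net"

	"google.golang.org/grpc"
)

const maxMsgSize = 8 * 1024 * 1024 // 8388608 bytes, the cap in the PLEG error

func main() {
	// Server side (CRI-O's role): a response exceeding MaxSendMsgSize fails
	// with grpc-go's "trying to send message larger than max (... vs. 8388608)",
	// the exact wording quoted in the bug.
	lis, err := net.Listen("unix", "/tmp/demo.sock") // made-up socket path
	if err != nil {
		log.Fatal(err)
	}
	srv := grpc.NewServer(
		grpc.MaxSendMsgSize(maxMsgSize),
		grpc.MaxRecvMsgSize(maxMsgSize),
	)
	go srv.Serve(lis)
	defer srv.Stop()

	// Client side (the kubelet's role): MaxCallRecvMsgSize bounds how large a
	// ListPodSandbox/ListContainers reply the client will accept.
	conn, err := grpc.Dial(
		"unix:///tmp/demo.sock",
		grpc.WithInsecure(),
		grpc.WithDefaultCallOptions(grpc.MaxCallRecvMsgSize(maxMsgSize)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
}
```

kubernetes/kubernetes#63977, linked in the comment above, is where this limit was previously raised on the kubelet side.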
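And for the "cleaning up dead containers" observation: a hypothetical sketch of that cleanup at the CRI level, assuming the k8s.io/cri-api client and a made-up CRI-O socket path. Every exited-but-unremoved container adds an entry to the ListContainers response, so pruning them shrinks the message back under the cap; the kubelet's own container GC is what normally does this, which is why "more aggressive GC" is the suggested avoidance.

```go
// Hypothetical sketch: prune exited containers over CRI so that List*
// responses shrink back under the gRPC message cap.
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// Assumed CRI-O socket path; adjust for the actual node.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock", grpc.WithInsecure())
	if err != nil {
		log.Fatalf("dial CRI-O: %v", err)
	}
	defer conn.Close()
	client := runtimeapi.NewRuntimeServiceClient(conn)

	// Each un-GC'd container contributes metadata, labels, and annotations to
	// this response; enough of them pushes it past the 8 MiB message limit.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatalf("list containers: %v", err)
	}
	for _, c := range resp.Containers {
		if c.State != runtimeapi.ContainerState_CONTAINER_EXITED {
			continue
		}
		// The kind of cleanup the comment credits with restoring connectivity
		// between the kubelet and CRI-O.
		if _, err := client.RemoveContainer(ctx,
			&runtimeapi.RemoveContainerRequest{ContainerId: c.Id}); err != nil {
			log.Printf("remove %s: %v", c.Id, err)
		}
	}
}
```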