Description of problem:
Nodes unexpectedly entered a NodeNotReady state, with no information explaining why other than "Kubelet stopped posting node status". The customer has confirmed that monitoring is in place between the hosts and the masters via the LB; no timeouts or problems were recorded over roughly the last 16 hours.

Version-Release number of selected component (if applicable):
OCP 3.3.1.20-1

How reproducible:
Partially, on the customer environment

Steps to Reproduce:
1.
2.
3.

Actual results:
Nodes suddenly going into a NodeNotReady status

Expected results:
Cluster working properly

Additional info:
Added within the comments.
@Seth, at the same time, in case we still think the problem is in the LB, I'm wondering if we could simply bypass it by replacing it with a native HAProxy instance: change the DNS entries to point to the new LB, and adjust haproxy.cfg in the new LB so the backend servers point to the cluster masters. Is the above something that might help us isolate the source of this issue?
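For illustration, a minimal haproxy.cfg sketch of that setup could look like the following; the master hostnames, the port, and the balance mode are assumptions for the example, not the customer's actual values:

    # Native HAProxy fronting the masters directly, bypassing the current LB.
    frontend openshift-api
        bind *:8443
        mode tcp
        default_backend masters

    backend masters
        mode tcp
        balance source
        # Hypothetical master hostnames; replace with the real cluster masters.
        server master1 master1.example.com:8443 check
        server master2 master2.example.com:8443 check
        server master3 master3.example.com:8443 check

With something like this in place (plus the DNS change), the current LB would be out of the path entirely, so any remaining NodeNotReady events could no longer be attributed to it.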
Description of problem (refocus):
A node stops posting status updates to the master because the connection is severed. If the node never receives a TCP FIN for the request, it waits on the default 15-minute timeout set in net/http; only after that timeout does the node try to update its status again.

Version-Release number of selected component (if applicable):
OCP 3.3.1.20-1

Actual results:
Thu, 22 Jun 2017 15:46:03 +0200   NodeStatusUnknown   Kubelet stopped posting node status.
Thu, 22 Jun 2017 16:01:05 +0200   KubeletReady        kubelet is posting ready status

Expected results:
A timeout on the node that generates an error and retries sending the status update to the master. This timeout should fall somewhere between the node's node-status-update-frequency (default 10s) and the master controllers' node-monitor-grace-period (default 40s).
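For illustration only (this is not the kubelet's actual code), a minimal Go sketch of the expected behaviour: capping a net/http client's per-request time so a severed connection surfaces as an error instead of hanging for the default ~15 minutes. The endpoint URL and the 30s value are assumptions, chosen only to fall between the two defaults above:

    // Sketch: a Go net/http client with an overall per-request deadline.
    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            // Hypothetical 30s cap, between node-status-update-frequency (10s)
            // and node-monitor-grace-period (40s).
            Timeout: 30 * time.Second,
        }

        // Hypothetical master endpoint, for illustration only.
        resp, err := client.Get("https://master.example.com:8443/healthz")
        if err != nil {
            // A hung connection now fails here within 30s, so the caller
            // can log the error and retry the status update immediately.
            fmt.Println("status update failed, will retry:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("status:", resp.Status)
    }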
Hi Seth, do you think that replacing the current LB with a native HAProxy and setting the server/client connection timeouts to a lower value (default 5m) would cause any side effects on the cluster behaviour? i.e.:

    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          300s  # 5min --> set it to 60s
    timeout server          300s  # 5min --> set it to 60s
    timeout http-keep-alive 10s
Not fixed in the rebase; fixed in 1.7.8 upstream, still pending cherry-pick.
Nicolas Nosenzo: Could you please provide more details about the LB? Thanks.
Nicolas Nosenzo, Maciej Szulik: With the old version (3.7.0-0.127.0) and HAProxy HA, I couldn't reproduce the issue by stopping the master's API service. Could you please provide more details? Thanks.
It might be hard to reproduce; you'll need to generate load big enough to hit those limits. I'll defer to Nicolas for the reproducer.
Need to test when 3.8 scalability lab is available.
This area was regression tested on a 300-node AWS cluster on 3.7.0-0.190.0. The originally reported problem was not reproduced. During the test, a cluster horizontal stress test was run and high-stress logging testing was performed at rates over 75 million messages/hour; no NotReady nodes were seen during this testing. Additionally, SVT has run its suite of network performance tests for 3.7 and no issues were seen. Marking this bug VERIFIED for 3.7 and creating a card (internal board) for SVT to create a test case to explicitly test this area again in 3.8.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2017:3188
Hi, re-opening this BZ; the affected customers are concerned about whether this will be backported to 3.4-3.6. Are there any plans to do so? Thanks.
(In reply to Nicolas Nosenzo from comment #44)
> Hi, re-opening this BZ; the affected customers are concerned about whether
> this will be backported to 3.4-3.6. Are there any plans to do so?
>
> Thanks.

Can you please open a separate bug or clone this bug for 3.6?
(In reply to Michal Fojtik from comment #45)
> (In reply to Nicolas Nosenzo from comment #44)
> > Hi, re-opening this BZ; the affected customers are concerned about whether
> > this will be backported to 3.4-3.6. Are there any plans to do so?
> >
> > Thanks.
>
> Can you please open a separate bug or clone this bug for 3.6?

Done: https://bugzilla.redhat.com/show_bug.cgi?id=1527389. Closing this one.