Bug 1576398 - IP failover doesn't react on router's pod being scaled down
Summary: IP failover doesn't react on router's pod being scaled down
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Routing
Version: 3.7.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 3.10.0
Assignee: Ivan Chavero
QA Contact: zhaozhanqi
URL:
Whiteboard:
Duplicates: 1517723 (view as bug list)
Depends On:
Blocks: 1607538
 
Reported: 2018-05-09 11:14 UTC by Vladislav Walek
Modified: 2018-10-11 08:56 UTC (History)
CC List: 13 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1607538 (view as bug list)
Environment:
Last Closed: 2018-07-30 19:14:54 UTC
Target Upstream Version:


Attachments


Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2018:1816 None None None 2018-07-30 19:15:17 UTC
Origin (Github) 19890 None None None 2018-05-31 19:13:40 UTC

Description Vladislav Walek 2018-05-09 11:14:20 UTC
Description of problem:

When the router deployment is scaled down by one replica (from 3 to 2), ipfailover does not react and keeps the VIP on a node where no router pod is running. The VIP therefore remains assigned to that node but is not served: any http/https attempt ends with connection refused.
The VIP should be removed from that node instead of remaining there.

Version-Release number of selected component (if applicable):
OpenShift Container Platform 3.7.23
IPFailover image version 3.7

How reproducible:
- deploy 3 replicas of router
- deploy 3 replicas of ipfailover
- scale down the router to 2 replicas
- monitor the ipfailover (watch oc get pods -o wide, ip -4 addr sh eth0 | grep inet) 
- see that the VIP still remains on the node where the router pod is no longer running


Actual results:
The VIP remains on the node even though no router pod is running there.

Expected results:
The VIP should be removed when no router pod is running on the node; the ipfailover pod should also be removed.

Additional info:

Comment 1 Phil Cameron 2018-05-11 17:45:53 UTC
Looking into this. When the router scales down, haproxy remains until current sessions complete. This also happens when router reloads occur. As long as haproxy accepts connections, ipfailover thinks it is still alive.

Comment 2 Miciah Dashiel Butler Masters 2018-05-11 19:01:47 UTC
Continuing the line of reasoning in comment 1, can you run the following commands on the host where the router has been scaled down, substituting the router's address for $router_addr and the router service's address for $service_addr?

    </dev/tcp/$router_addr/80
    echo $?
    </dev/tcp/$service_addr/80
    echo $?

If the first command prints 0, then the router is still accepting connections.  If the first command does not print 0 but the second does, then it may be that we need to change the router check to use the router's address instead of the service's.
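The `$?` semantics of those redirection checks can be exercised locally without a cluster. A minimal sketch, assuming bash (the `/dev/tcp` pseudo-path is a bash feature, not a real file) and assuming nothing is listening on port 1 of 127.0.0.1; the helper name `tcp_check` is hypothetical:

```shell
#!/bin/bash
# tcp_check ADDR PORT: exit status 0 if a TCP connection to ADDR:PORT
# succeeds, non-zero otherwise. Same idea as `</dev/tcp/$addr/$port`,
# but with the address and port passed as arguments.
tcp_check() {
    local addr="$1" port="$2"
    # The redirection is interpreted by bash itself; the subshell
    # opens fd 3 to the target and closes it again when it exits.
    (exec 3<>"/dev/tcp/${addr}/${port}") 2>/dev/null
}

# Port 1 on localhost is assumed closed, so this reports "closed".
if tcp_check 127.0.0.1 1; then
    echo "open"
else
    echo "closed"
fi
```

Running the function against the router's address versus the service's address, as suggested above, would distinguish the two cases the same way the inline `</dev/tcp/...` commands do.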

I also saw that in the case associated with this Bugzilla report, it is reported that failover is not happening "when the check script should fail (verified by manual test)".  Are you not seeing failover even when OPENSHIFT_HA_CHECK_SCRIPT is set to a command that exits non-zero?

Comment 3 Weibin Liang 2018-05-22 20:53:25 UTC
@Phil,

I saw the same problem in the latest 3.10 code: the virtual IP address stays on the node even after the router pod is removed from that node.

Another finding: even without deploying router pods to any nodes, ipfailover pods can be deployed on the nodes, and virtual IP addresses can be assigned to those nodes.

Comment 4 Weibin Liang 2018-05-23 12:55:27 UTC
Saw an "Unable to access script `</dev/tcp/172.17.0.4/80`" message when running oc log on the ipfailover pod.

[root@qe-weliang-3master-etcd-nfs-1 keepalived]# oc log ipf-har-1-w89dk
log is DEPRECATED and will be removed in a future version. Use logs instead.
  - Loading ip_vs module ...
  - Checking if ip_vs module is available ...
ip_vs                 141432  0 
  - Module ip_vs is loaded.
  - check for iptables rule for keepalived multicast (224.0.0.18) ...
  - Generating and writing config to /etc/keepalived/keepalived.conf
  - Starting failover services ...
Starting Keepalived v1.3.5 (03/19,2017), git commit v1.3.5-6-g6fa32f2
Opening file '/etc/keepalived/keepalived.conf'.
Starting Healthcheck child process, pid=96
Initializing ipvs
Opening file '/etc/keepalived/keepalived.conf'.
Starting VRRP child process, pid=97
Registering Kernel netlink reflector
Registering Kernel netlink command channel
Registering gratuitous ARP shared channel
Opening file '/etc/keepalived/keepalived.conf'.
WARNING - default user 'keepalived_script' for script execution does not exist - please create.
Unable to access script `</dev/tcp/172.17.0.4/80`
Disabling track script chk_ipf_har since not found

Comment 5 Ben Bennett 2018-05-23 13:09:22 UTC
The key error is:
  Unable to access script `</dev/tcp/172.17.0.4/80`

That seems to kill the monitoring and disable the script.

Comment 6 Ben Bennett 2018-05-23 17:12:37 UTC
This was fixed in keepalived with https://github.com/acassen/keepalived/commit/5cd5fff78de11178c51ca245ff5de61a86b85049

The question is when the security checks were added and whether we can work out an alternative... or whether we should write a check script that does the same thing and takes the IP and port as args (if that works).
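Wiring such a standalone check script into keepalived might look like the fragment below. This is only a sketch of the idea in comment 6, not what the eventual PR does; the script path, the name `chk_router`, and the address are hypothetical:

```
vrrp_script chk_router {
    # Hypothetical standalone script taking the IP and port as
    # arguments, replacing the inline `</dev/tcp/...` one-liner
    # that keepalived's script-access check refuses to execute.
    script "/etc/keepalived/chk_router.sh 172.17.0.4 80"
    interval 2
    fall 2
    rise 2
}
```

Because the `script` value is then a plain executable path with arguments, keepalived's "Unable to access script" check can stat the file instead of choking on a bash redirection.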

Comment 8 Ben Bennett 2018-05-31 19:13:40 UTC
PR https://github.com/openshift/origin/pull/19890

Comment 10 Ben Bennett 2018-06-05 17:39:10 UTC
*** Bug 1517723 has been marked as a duplicate of this bug. ***

Comment 11 zhaozhanqi 2018-06-06 02:08:10 UTC
Verified this bug on v3.10.0-0.58.0

steps
1. Create two routers
2. Create ipfailover pods
 oc adm ipfailover --create --replicas=2 -w 80 --virtual-ips=10.10.10.10-11

3. Check the logs. Once the two pods are running, no logs like "Unable to access script `</dev/tcp/172.17.0.4/80`" are found.

4. Stop one router pod by setting replicas=1.

5. Check that the VIP has been removed and switched to another node.

Comment 15 errata-xmlrpc 2018-07-30 19:14:54 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:1816

Comment 16 seferovic 2018-08-16 08:33:28 UTC
Hi,

would it be possible to implement this change in the next 3.9 release as well? Thank you!

Kind regards,
E.

