Bug 1810996 - Keepalived: Ingress VIP Cluster password collisions
Summary: Keepalived: Ingress VIP Cluster password collisions
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Machine Config Operator
Version: 4.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 4.4.0
Assignee: Antoni Segura Puimedon
QA Contact: Victor Voronkov
URL:
Whiteboard:
Depends On: 1803232
Blocks:
 
Reported: 2020-03-06 11:30 UTC by Antoni Segura Puimedon
Modified: 2020-05-04 11:45 UTC
CC: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-05-04 11:45:08 UTC
Target Upstream Version:




Links
System ID Priority Status Summary Last Updated
Github openshift machine-config-operator pull 1550 None closed Bug 1810996: [release-4.4] Fixed hardcoded ingress VIP password 2020-08-25 13:49:17 UTC
Red Hat Product Errata RHBA-2020:0581 None None None 2020-05-04 11:45:29 UTC

Description Antoni Segura Puimedon 2020-03-06 11:30:50 UTC
This bug was initially created as a copy of Bug #1803232

I am copying this bug because: 



Description of problem:
If two clusters happen to end up with the same Ingress virtual router ID, membership issues arise because the Ingress VIP password is static. The other two VIPs avoid this by including the cluster name in their passwords.

Version-Release number of selected component (if applicable): 4.4 and 4.3


How reproducible: The issue occurs when two clusters are deployed in the same multicast domain such that:
* fletcher8(CLUSTER_A_NAME-vip) = x = fletcher8(CLUSTER_B_NAME-vip)


Steps to Reproduce:
1. Find cluster names that collide under fletcher8
2. Deploy both clusters with those names
3. Check the Ingress VIP membership status in the keepalived logs (crictl logs for the keepalived container)
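The collision condition depends on the fletcher8 checksum used to derive the virtual router ID from the cluster name. A minimal Python sketch of one common Fletcher-style 8-bit variant (the exact helper used by the templates may differ in detail; the brute-force search and cluster names below are purely illustrative):

```python
def fletcher8(data: str) -> int:
    # Fletcher-style checksum folded into one byte: two running
    # nibble sums, packed as (high << 4) | low. Only ~225 distinct
    # values are possible, so collisions are easy to find.
    ck_a = ck_b = 0
    for byte in data.encode():
        ck_a = (ck_a + byte) % 0xF
        ck_b = (ck_b + ck_a) % 0xF
    return (ck_b << 4) | ck_a

# Brute-force search for two distinct (hypothetical) cluster names
# whose "<name>-vip" strings collide under the checksum:
seen = {}
for name in (f"cluster-{i}" for i in range(500)):
    h = fletcher8(f"{name}-vip")
    if h in seen:
        print(f"collision: {seen[h]!r} and {name!r} -> {h}")
        break
    seen[h] = name
```

Because the checksum has so few possible values, any two candidate name pools of a few hundred entries are essentially guaranteed to contain a colliding pair, which is why the static Ingress password turns a harmless ID collision into a cross-cluster membership problem.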

Actual results:
Nodes from both clusters are seen as belonging to a single virtual router

Expected results:
Keepalived reports wrong-password messages and each cluster retains its own virtual router membership.


Additional info:
To simplify reproduction, one can simply check that the keepalived configuration (the Ingress auth_pass) does not change when deploying with different cluster names.

Comment 2 Antonio Murdaca 2020-03-11 14:10:05 UTC
The PR for 4.4 is also up and needs review: https://github.com/openshift/machine-config-operator/pull/1550

Comment 3 Steve Milner 2020-03-12 13:17:38 UTC
Once https://bugzilla.redhat.com/show_bug.cgi?id=1803232 is VERIFIED this can move forward.

Updating to POST since a PR has been posted.

Comment 4 Antoni Segura Puimedon 2020-03-16 10:18:35 UTC
Waiting for the 4.5-targeted BZ to be verified.

Comment 7 Victor Voronkov 2020-04-01 06:23:40 UTC
[kni@provisionhost-0-0 ~]$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.4.0-0.nightly-2020-03-31-053841   True        False         17h     Cluster version is 4.4.0-0.nightly-2020-03-31-053841


[core@master-0-0 ~]$ cat /etc/keepalived/keepalived.conf | grep auth_pass
        auth_pass ocp-edge-cluster-0_api_vip
        auth_pass ocp-edge-cluster-0_dns_vip
        auth_pass ocp-edge-cluster-0_ingress_vip
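The verified output above shows the fixed naming scheme: all three VIP passwords now embed the cluster name. A minimal sketch of that scheme (the function name is hypothetical; the actual template rendering lives in the machine-config-operator):

```python
def vip_auth_pass(cluster_name: str, vip_name: str) -> str:
    # Scheme inferred from the verified keepalived.conf output:
    #   auth_pass <cluster-name>_<vip>_vip
    # Note: keepalived itself only uses the first 8 characters of
    # auth_pass, so names sharing a long common prefix would still
    # yield the same effective password.
    return f"{cluster_name}_{vip_name}_vip"

for vip in ("api", "dns", "ingress"):
    print(f"auth_pass {vip_auth_pass('ocp-edge-cluster-0', vip)}")
```

With this scheme two clusters that collide on the virtual router ID still present different passwords, so keepalived rejects the foreign advertisements instead of merging the memberships.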

Comment 9 errata-xmlrpc 2020-05-04 11:45:08 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0581

