Bug 1810996

Summary: Keepalived: Ingress VIP Cluster password collisions
Product: OpenShift Container Platform
Reporter: Antoni Segura Puimedon <asegurap>
Component: Machine Config Operator
Assignee: Antoni Segura Puimedon <asegurap>
Status: CLOSED ERRATA
QA Contact: Victor Voronkov <vvoronko>
Severity: high
Docs Contact:
Priority: unspecified
Version: 4.4
CC: amurdaca, bperkins, smilner
Target Milestone: ---
Target Release: 4.4.0
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2020-05-04 11:45:08 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1803232
Bug Blocks:

Description Antoni Segura Puimedon 2020-03-06 11:30:50 UTC
This bug was initially created as a copy of Bug #1803232

Description of problem:
If two clusters happen, by chance, to end up with the same Ingress virtual router id, there are membership issues because the Ingress VIP password is static, in contrast with the other two VIPs, whose passwords include the cluster name.

Version-Release number of selected component (if applicable): 4.4 and 4.3


How reproducible: To hit the issue, deploy two clusters in the same multicast domain whose names collide under the fletcher8 checksum used to derive the virtual router id:
* fletcher8(CLUSTER_A_NAME-vip) = fletcher8(CLUSTER_B_NAME-vip)
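For illustration, the collision condition above can be checked with a minimal sketch of an 8-bit Fletcher checksum (two 4-bit running sums modulo 15). The `fletcher8` helper and the sample cluster names here are assumptions; the exact checksum routine the machine-config-operator uses to derive the virtual router id may differ.

```go
package main

import "fmt"

// fletcher8 sketches an 8-bit Fletcher checksum: two 4-bit running
// sums modulo 15, packed into a single byte. Illustrative only; the
// real routine used to derive VRRP virtual router ids may differ.
func fletcher8(data string) uint8 {
	sum1, sum2 := 0, 0
	for i := 0; i < len(data); i++ {
		sum1 = (sum1 + int(data[i])) % 15
		sum2 = (sum2 + sum1) % 15
	}
	return uint8(sum2<<4 | sum1)
}

func main() {
	// Hypothetical cluster names: if two names checksum to the same
	// value, both clusters derive the same virtual router id.
	for _, name := range []string{"cluster-a", "cluster-b"} {
		fmt.Printf("fletcher8(%q) = %d\n", name, fletcher8(name))
	}
}
```

With only 256 possible checksum values, two independently named clusters on one multicast domain can collide by chance, which is why the password must also disambiguate them.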


Steps to Reproduce:
1. Find cluster names that collide under fletcher8
2. Deploy both clusters with those names
3. Look at the Ingress VIP membership status in the keepalived logs (crictl logs for the keepalived container)

Actual results:
Nodes from both clusters are seen as belonging to a single virtual router

Expected results:
Keepalived reports wrong password messages, and each cluster's nodes remain in their own virtual router.


Additional info:
To simplify reproduction, one can simply check that the Ingress VIP password in the generated keepalived configuration does not change even when deploying with different cluster names.
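As a hedged sketch of what the fixed configuration looks like, the Ingress VIP's vrrp_instance should carry a cluster-derived auth_pass (the password pattern matches the verification output in comment 7; the instance name, interface, router id, and VIP address below are illustrative assumptions, not the exact generated config):

```
vrrp_instance ocp-edge-cluster-0_INGRESS {
    interface ens3               # interface name is an assumption
    virtual_router_id 101        # derived from the cluster name checksum
    authentication {
        auth_type PASS
        auth_pass ocp-edge-cluster-0_ingress_vip   # now includes the cluster name
    }
    virtual_ipaddress {
        192.168.123.10           # example Ingress VIP
    }
}
```

Before the fix, only the API and DNS instances embedded the cluster name in auth_pass, so two colliding clusters would also accept each other's Ingress advertisements.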

Comment 2 Antonio Murdaca 2020-03-11 14:10:05 UTC
The PR for 4.4 is already up as well and needs review: https://github.com/openshift/machine-config-operator/pull/1550

Comment 3 Steve Milner 2020-03-12 13:17:38 UTC
Once https://bugzilla.redhat.com/show_bug.cgi?id=1803232 is VERIFIED this can move forward.

Updating to POST since a PR has been posted.

Comment 4 Antoni Segura Puimedon 2020-03-16 10:18:35 UTC
Waiting for the 4.5-targeted BZ to be verified.

Comment 7 Victor Voronkov 2020-04-01 06:23:40 UTC
[kni@provisionhost-0-0 ~]$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.4.0-0.nightly-2020-03-31-053841   True        False         17h     Cluster version is 4.4.0-0.nightly-2020-03-31-053841


[core@master-0-0 ~]$ cat /etc/keepalived/keepalived.conf | grep auth_pass
        auth_pass ocp-edge-cluster-0_api_vip
        auth_pass ocp-edge-cluster-0_dns_vip
        auth_pass ocp-edge-cluster-0_ingress_vip

Comment 9 errata-xmlrpc 2020-05-04 11:45:08 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0581