
Bug 1619599

Summary: In HA env the node label node-role.kubernetes.io/master=true is lost after instance stop and start
Product: OpenShift Container Platform
Reporter: Xingxing Xia <xxia>
Component: Node
Assignee: Seth Jennings <sjenning>
Status: CLOSED DUPLICATE
QA Contact: DeShuai Ma <dma>
Severity: medium
Docs Contact:
Priority: medium
Version: 3.9.0
CC: aos-bugs, jokerman, mmccomas
Target Milestone: ---
Target Release: 3.9.z
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-08-22 14:05:39 UTC
Type: Bug

Description Xingxing Xia 2018-08-21 09:40:20 UTC
Description of problem:
In an HA environment, the node label node-role.kubernetes.io/master=true is lost after the master instance is stopped and started.
A non-HA environment (e.g. one master and one node) does not reproduce the issue after the master is stopped and started.
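
For reference, whether the master label is present on a given node can also be checked directly (a minimal example using standard oc syntax; the node name is taken from the reproduction steps below):

oc get node ip-172-18-7-74.ec2.internal --show-labels | grep node-role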

Version-Release number of selected component (if applicable):
openshift v3.9.41

How reproducible:
Always

Steps to Reproduce:
1. First, check nodes
[root@ip-172-18-7-74 ~]# oc get no
NAME                            STATUS    ROLES     AGE       VERSION
ip-172-18-1-221.ec2.internal    Ready     master    52m       v1.9.1+a0ce1bc657
ip-172-18-13-114.ec2.internal   Ready     compute   52m       v1.9.1+a0ce1bc657
ip-172-18-2-68.ec2.internal     Ready     compute   52m       v1.9.1+a0ce1bc657
ip-172-18-4-42.ec2.internal     Ready     master    52m       v1.9.1+a0ce1bc657
ip-172-18-7-74.ec2.internal     Ready     master    52m       v1.9.1+a0ce1bc657
2. Stop the instance ip-172-18-7-74.ec2.internal in the AWS console.
3. A few minutes later, start the instance again in the AWS console.
4. Check nodes again.
[root@ip-172-18-7-74 ~]# oc get no
NAME                            STATUS    ROLES     AGE       VERSION
ip-172-18-1-221.ec2.internal    Ready     master    1h        v1.9.1+a0ce1bc657
ip-172-18-13-114.ec2.internal   Ready     compute   1h        v1.9.1+a0ce1bc657
ip-172-18-2-68.ec2.internal     Ready     compute   1h        v1.9.1+a0ce1bc657
ip-172-18-4-42.ec2.internal     Ready     master    1h        v1.9.1+a0ce1bc657
ip-172-18-7-74.ec2.internal     Ready     <none>    18m       v1.9.1+a0ce1bc657

[root@ip-172-18-7-74 ~]# oc label --list no ip-172-18-7-74.ec2.internal | grep role
role=node
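
As a presumed workaround (not verified in this bug), the lost label can be re-applied manually with the standard oc label command, which would only mask the issue until the next stop/start:

oc label node ip-172-18-7-74.ec2.internal node-role.kubernetes.io/master=true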

Actual results:
4. The node label node-role.kubernetes.io/master=true is lost; the restarted master shows ROLES <none> and only role=node.

Expected results:
4. The node label node-role.kubernetes.io/master=true still exists after the instance is started again.

Additional info:

Comment 1 Seth Jennings 2018-08-22 14:05:39 UTC

*** This bug has been marked as a duplicate of bug 1559271 ***