
Bug 1331829

Summary: node IP shown as <nil>
Product: OpenShift Container Platform
Component: Node
Version: 3.1.0
Hardware: Unspecified
OS: Unspecified
Reporter: Miheer Salunke <misalunk>
Assignee: Seth Jennings <sjenning>
QA Contact: DeShuai Ma <dma>
CC: aos-bugs, danw, jokerman, mmccomas, mwysocki
Status: CLOSED DUPLICATE
Severity: unspecified
Priority: unspecified
Target Milestone: ---
Target Release: ---
Doc Type: Bug Fix
Type: Bug
Last Closed: 2016-05-02 11:35:41 UTC

Description Miheer Salunke 2016-04-29 17:29:26 UTC
Description of problem:
The node IP is not shown properly; it appears as <nil>:

$ oc describe po/jenkins-2-5w8pp
Name:				jenkins-2-5w8pp
Namespace:			maci
Image(s):			registry.access.redhat.com/openshift3/jenkins-1-rhel7:latest
Node:				node-a1.example.com/<nil>
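
The <nil> suffix suggests the node object itself has no recorded addresses. One way to check that directly (a sketch; status.addresses is the relevant field in the v1 Node schema):

$ oc get node node-a1.example.com -o yaml | grep -A 3 'addresses:'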

This happens regardless of which node, namespace, or project is involved. Here is the node config; it was not touched after the Ansible installer finished.
The cluster is running in a containerized environment.

-bash-4.2# cat /etc/origin/node/node-config.yaml 
allowDisabledDocker: false
apiVersion: v1
dnsDomain: cluster.local
dockerConfig:
  execHandlerName: ""
iptablesSyncPeriod: "5s"
imageConfig:
  format: openshift3/ose-${component}:${version}
  latest: false
kind: NodeConfig
kubeletArguments: 
  cloud-config:
  - /etc/origin/node/cloud.conf
  cloud-provider:
  - openstack
  image-gc-high-threshold:
  - '75'
  image-gc-low-threshold:
  - '60'
  max-pods:
  - '40'
masterKubeConfig: system:node:node-a1.example.com.kubeconfig
networkPluginName: redhat/openshift-ovs-multitenant
# networkConfig struct introduced in origin 1.0.6 and OSE 3.0.2 which
# deprecates networkPluginName above. The two should match.
networkConfig:
   mtu: 1350
   networkPluginName: redhat/openshift-ovs-multitenant
nodeName: node-a1.example.com
podManifestConfig:
servingInfo:
  bindAddress: 0.0.0.0:10250
  certFile: server.crt
  clientCA: ca.crt
  keyFile: server.key
volumeDirectory: /var/lib/origin/openshift.local.volumes
proxyArguments:
  proxy-mode:
     - iptables



I checked other pods as well:

$ oc get pod docker-registry-2-4m2is -o yaml |grep -i ip
        ih4/ZV2iPyZsOk54ww5lfiZc1feV+vq484FUDUL0L6JBEpXHLuHNS2PWIHeUePmn
        MIIEowIBAAKCAQEAvOHY8uTdPV27XAoxtHOk1p92IP7agorsZTJm+VzApUEsXvOn
  hostIP: <nil>
  podIP: 10.1.x.x
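
A tighter query than grepping the whole YAML (which, as seen above, also matches base64-encoded certificate data) — a sketch, assuming this oc version supports jsonpath output:

$ oc get pod docker-registry-2-4m2is -o jsonpath='{.status.hostIP} {.status.podIP}'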




node-a1.example.com resolves on both the node and the master. In fact, it resolves on all hosts in the cluster and also from within the containers that run the master and node services.
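
For reference, the resolution checks were along these lines (a sketch; dig requires bind-utils):

$ getent hosts node-a1.example.com   # via the system resolver
$ dig +short node-a1.example.com     # directly against DNS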


From time to time we do see scheduler errors, but the pods eventually deploy and run successfully.

We tried to set nodeIP manually in the node-config.yaml for one of the hosts, but the host IP is still reported as <nil>:

 
$ oc describe po router-1-viawu|head -11
Name:				router-1-viawu
Namespace:			default
Image(s):			openshift3/ose-haproxy-router:v3.1.1.6
Node:				infra-a1-74feik83.test.osp.sfa.se/<nil>
Start Time:			Wed, 27 Apr 2016 15:48:32 +0200
Labels:				deployment=router-1,deploymentconfig=router,router=router
Status:				Running
Reason:				
Message:			
IP:				<nil>
Replication Controllers:	router-1 (3/3 replicas created)

$ sudo grep nodeIP /etc/origin/node/node-config.yaml
nodeIP: 192.168.x.x
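
For completeness, a sketch of how the override was applied (the value is the elided address above; the unit name assumes the containerized atomic-openshift-node service):

$ echo 'nodeIP: 192.168.x.x' | sudo tee -a /etc/origin/node/node-config.yaml
$ sudo systemctl restart atomic-openshift-node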



Version-Release number of selected component (if applicable):
OpenShift Enterprise 3.1 in a containerized environment

How reproducible:
Always

Steps to Reproduce:
1. Install a few RHEL Atomic Host systems on OpenStack, place a cloud.conf file with OpenStack credentials on the systems, and set the cloud provider settings in the Ansible hosts file before installing (see the sketch below).
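
For reference, a hypothetical sketch of the cloud provider setup from step 1. The cloud.conf fields follow the Kubernetes OpenStack provider format and the inventory variables follow openshift-ansible conventions; all values are placeholders:

# /etc/origin/node/cloud.conf
[Global]
auth-url = https://keystone.example.com:5000/v2.0
username = openshift
password = secret
tenant-name = openshift
region = RegionOne

# "cloud bits" in the ansible hosts file ([OSEv3:vars] section):
openshift_cloudprovider_kind=openstack
openshift_cloudprovider_openstack_auth_url=https://keystone.example.com:5000/v2.0
openshift_cloudprovider_openstack_username=openshift
openshift_cloudprovider_openstack_password=secret
openshift_cloudprovider_openstack_tenant_name=openshift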

Actual results:
The node IP is shown as <nil>.

Expected results:
The node's actual IP address should be shown.

Additional info:

I suspect that during QA this was only tested by creating the cloud config after the initial installation rather than beforehand, in which case everything of course looks fine.

When the cloud-config and cloud-provider entries are left out of node-config.yaml, everything works just fine.
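
A sketch of that workaround: delete these two entries from kubeletArguments in /etc/origin/node/node-config.yaml, then restart the node service (the unit name assumes a containerized install):

  cloud-config:
  - /etc/origin/node/cloud.conf
  cloud-provider:
  - openstack

$ sudo systemctl restart atomic-openshift-node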
There is also another issue that may or may not be related: when cloud-config is present, adding a new node doesn't create a hostsubnet ( https://bugzilla.redhat.com/show_bug.cgi?id=1320959 )
Mailing list -
http://post-office.corp.redhat.com/archives/openshift-sme/2016-April/msg01077.html

Comment 1 Jason DeTiberus 2016-04-29 17:37:47 UTC
This sounds like it might be related to the hard-coded network names used by the OpenStack cloud provider in Kubernetes 1.1/OpenShift Enterprise 3.1 (this has been addressed in Kubernetes 1.2/OpenShift Enterprise 3.2).
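
For reference, one way to check which network name the instance's addresses appear under (a sketch, assuming the nova client is available):

$ nova show node-a1.example.com | grep -i network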

Comment 2 Dan Winship 2016-04-29 17:50:27 UTC
Isn't this (like bug 1320959) just another visible symptom of bug 1303085?

Comment 3 Marcel Wysocki 2016-05-02 08:41:24 UTC
Yes, most likely the same issue.

Comment 4 Dan McPherson 2016-05-02 11:35:41 UTC

*** This bug has been marked as a duplicate of bug 1303085 ***