Bug 1288045

Summary: OpenShift refresh broken on undefined memory capacity
Product: Red Hat CloudForms Management Engine Reporter: Federico Simoncelli <fsimonce>
Component: Providers    Assignee: zakiva
Status: CLOSED ERRATA QA Contact: Einat Pacifici <epacific>
Severity: high Docs Contact:
Priority: high    
Version: 5.5.0    CC: azellner, bazulay, cpelland, dron, fsimonce, jfrey, jhardy, jprause, obarenbo, zakiva
Target Milestone: GA   
Target Release: 5.6.0   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard: container
Fixed In Version: 5.6.0.0 Doc Type: Bug Fix
Doc Text:
Currently, if an inventory refresh takes place before OpenShift nodes have reported their capacity, the inventory refresh cannot process the entities. This typically happens if self-registration is disabled on the nodes, or if there are stale or unneeded nodes defined in the system. To work around this, remove the stale nodes from the system. This issue will be fixed in a future release by removing the strict requirement on the presence of node capacity.
Story Points: ---
Clone Of:
Clones: 1288549, 1289747 (view as bug list)    Environment:
Last Closed: 2016-06-29 15:16:25 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 1288549, 1289747    

Description Federico Simoncelli 2015-12-03 11:23:31 UTC
Description of problem:
According to:

https://github.com/ManageIQ/manageiq/issues/5678

it may be possible for a refresh to take place before a node has communicated its capacity (or when the entry is stale and invalid: a node that was never actually used).

As far as I know, this should never happen when node self-registration is used (but this should be verified).

Steps to Reproduce:
1. Create a new node (without capacity)
2. Refresh the inventory from ManageIQ before the node has reported its capacity (see the capacity check below)

Actual results:
The inventory refresh crashes.

Expected results:
Refresh should succeed.
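
One way to confirm the precondition before step 2, assuming a reasonably recent oc client with jsonpath output support (the node name is a placeholder):

# oc get node <node-name> -o jsonpath='{.status.capacity}'

Empty output means the node has not reported any capacity (memory/cpu) yet, which is the state that triggers the crash during refresh.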

Comment 2 zakiva 2015-12-06 12:02:17 UTC
bug fix: 
https://github.com/ManageIQ/manageiq/pull/5726

Comment 3 John Prause 2015-12-08 21:19:40 UTC
*** Bug 1289747 has been marked as a duplicate of this bug. ***

Comment 10 Einat Pacifici 2016-05-15 12:10:41 UTC
Steps to reproduce: 
# oc create -f - <<EOF
{"kind": "Node", "apiVersion": "v1", "metadata": { "name": "mydummynode", "labels": { "name": "my-first-k8s-node" } } }
EOF

node "mydummynode" created

# oc get nodes
NAME                               LABELS                                                                              STATUS     AGE
10.240.79.157                      name=my-first-k8s-node                                                              NotReady   5m
mydummynode                        name=my-first-k8s-node                                                              Unknown    5s
ose-master.qa.lab.tlv.redhat.com   kubernetes.io/hostname=ose-master.qa.lab.tlv.redhat.com,region=infra,zone=default   Ready      26d
ose-node1.qa.lab.tlv.redhat.com    kubernetes.io/hostname=ose-node1.qa.lab.tlv.redhat.com,region=infra,zone=default    Ready      26d
ose-node2.qa.lab.tlv.redhat.com    kubernetes.io/hostname=ose-node2.qa.lab.tlv.redhat.com,region=primary,zone=east     Ready      26d


In CFME, go to Providers, select a provider, and select Configuration > Refresh Items and Relationships.
Expected: the number of nodes should reflect the additional new node.
In CFME, go to Containers > Container Nodes.
Expected: all nodes are visible (including the newly added node).
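
After verification, the dummy node can be removed again (this also matches the documented workaround of deleting stale nodes from the system):

# oc delete node mydummynode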

Comment 12 errata-xmlrpc 2016-06-29 15:16:25 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1348