Bug 1288045 - OpenShift refresh broken on undefined memory capacity
Summary: OpenShift refresh broken on undefined memory capacity
Alias: None
Product: Red Hat CloudForms Management Engine
Classification: Red Hat
Component: Providers
Version: 5.5.0
Hardware: Unspecified
OS: Unspecified
Target Milestone: GA
Target Release: 5.6.0
Assignee: zakiva
QA Contact: Einat Pacifici
Whiteboard: container
Duplicates: 1289747
Depends On:
Blocks: 1288549 1289747
Reported: 2015-12-03 11:23 UTC by Federico Simoncelli
Modified: 2016-06-29 15:16 UTC
CC: 10 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
If an inventory refresh runs before an OpenShift node has reported its capacity, the refresh cannot process the node and fails. This typically happens when self-registration is disabled on the nodes, or when stale or unneeded node entries remain in the system. To work around the issue, remove the stale nodes from the system. A future release will fix this by removing the strict requirement that node capacity be present.
Clone Of:
Clones: 1288549 1289747
Last Closed: 2016-06-29 15:16:25 UTC
Category: ---
Cloudforms Team: ---
Target Upstream Version:

Attachments: None

System ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2016:1348 normal SHIPPED_LIVE CFME 5.6.0 bug fixes and enhancement update 2016-06-29 18:50:04 UTC

Description Federico Simoncelli 2015-12-03 11:23:31 UTC
Description of problem:
According to:


a refresh may take place before a node has communicated its capacity, or against a stale, invalid entry (a node that was never actually used).

As far as I know this should never happen with node self-registration (but it should be checked).

Steps to Reproduce:
1. Create a new node (without capacity)
2. Refresh the inventory from ManageIQ before the node reports its capacity

Actual results:
The inventory refresh crashes.

Expected results:
Refresh should succeed.
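
The fix direction (dropping the hard requirement on node capacity) can be sketched as below. This is an illustrative sketch only, not the actual CFME refresh parser; the helper names and hash keys are hypothetical, though the `capacity` field layout and quantity suffixes mirror the Kubernetes node status:

```ruby
# Illustrative sketch: parse a Kubernetes node's capacity while tolerating
# nodes that have not yet reported it. Method and key names are hypothetical.
def parse_node_capacity(node_status)
  capacity = node_status["capacity"] || {}
  {
    # Default to nil (unknown) instead of raising when a field is absent.
    :cpu_total_cores => capacity.key?("cpu") ? capacity["cpu"].to_i : nil,
    :memory_bytes    => capacity.key?("memory") ? parse_quantity(capacity["memory"]) : nil
  }
end

# Convert a Kubernetes binary quantity string such as "16310344Ki" to bytes.
def parse_quantity(quantity)
  units = { "Ki" => 1024, "Mi" => 1024**2, "Gi" => 1024**3 }
  return nil unless quantity =~ /\A(\d+)(Ki|Mi|Gi)?\z/
  $1.to_i * (units[$2] || 1)
end
```

A node created as in the reproduction steps has no `status.capacity`, so both fields come back `nil` and the refresh can record the node instead of crashing.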

Comment 2 zakiva 2015-12-06 12:02:17 UTC
bug fix: 

Comment 3 John Prause 2015-12-08 21:19:40 UTC
*** Bug 1289747 has been marked as a duplicate of this bug. ***

Comment 10 Einat Pacifici 2016-05-15 12:10:41 UTC
Steps to reproduce: 
# oc create -f - <<EOF
{"kind": "Node", "apiVersion": "v1", "metadata": { "name": "mydummynode", "labels": { "name": "my-first-k8s-node" } } }
EOF

node "mydummynode" created

# oc get nodes
NAME                               LABELS                                                                              STATUS     AGE
                                   name=my-first-k8s-node                                                              NotReady   5m
mydummynode                        name=my-first-k8s-node                                                              Unknown    5s
ose-master.qa.lab.tlv.redhat.com   kubernetes.io/hostname=ose-master.qa.lab.tlv.redhat.com,region=infra,zone=default   Ready      26d
ose-node1.qa.lab.tlv.redhat.com    kubernetes.io/hostname=ose-node1.qa.lab.tlv.redhat.com,region=infra,zone=default    Ready      26d
ose-node2.qa.lab.tlv.redhat.com    kubernetes.io/hostname=ose-node2.qa.lab.tlv.redhat.com,region=primary,zone=east     Ready      26d

In CFME, go to Providers, select a provider, and select Configuration > Refresh Items and Relationships.
Expected: the number of nodes reflects the newly added node.
In CFME, go to Containers > Container Nodes.
Expected: all nodes are visible, including the newly added node.
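
The expected result above can also be checked from a script. A minimal sketch, assuming `oc get nodes` output in the column layout shown earlier; the helper is hypothetical:

```ruby
# Hypothetical check: extract node names from `oc get nodes` output and
# verify the newly created dummy node appears in the list.
def node_names(oc_output)
  oc_output.strip.split("\n")
           .drop(1)                          # drop the header row
           .map { |line| line.split(/\s+/).first }
end

output = <<~NODES
  NAME                              STATUS     AGE
  mydummynode                       Unknown    5s
  ose-master.qa.lab.tlv.redhat.com  Ready      26d
NODES

raise "new node missing" unless node_names(output).include?("mydummynode")
```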

Comment 12 errata-xmlrpc 2016-06-29 15:16:25 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

