Bug 1305131 - Metrics do not behave as expected if nodeIP is not set in node.yaml
Status: CLOSED ERRATA
Product: OpenShift Container Platform
Classification: Red Hat
Component: Metrics
Version: 3.1.0
Priority: medium
Severity: medium
Assigned To: Matt Wringe
QA Contact: chunchen
Duplicates: 1305100
Reported: 2016-02-05 13:19 EST by Boris Kurktchiev
Modified: 2016-09-29 22:16 EDT
Doc Type: Bug Fix
Type: Bug
Last Closed: 2016-06-27 11:05:30 EDT
Attachments: None
Description Boris Kurktchiev 2016-02-05 13:19:55 EST
Description of problem:
In my multi-node setup (3 nodes, 2 masters), each of the 3 nodes is labeled differently and hosts containers with different purposes. Metrics do not show up for all nodes unless nodeIP is defined in the node configuration file. Without it defined, see the attached bug report for what happens and what the behavior is.
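For reference, a minimal check of whether nodeIP is present in the node configuration (default OSE path; the value shown is just an example):

grep nodeIP /etc/origin/node/node-config.yaml
# expected when set, e.g.: nodeIP: 152.19.229.207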

During installation, there does not seem to be any mention of this dependency on nodeIP in either the Advanced Install steps (what I use for deployment) or the Metrics section. The installer example file https://github.com/openshift/openshift-ansible/blob/master/inventory/byo/hosts.ose.example has the following:
# Enable cluster metrics
#use_cluster_metrics=true
# Configure metricsPublicURL in the master config for cluster metrics
# See: https://docs.openshift.com/enterprise/latest/install_config/cluster_metrics.html
#openshift_master_metrics_public_url=https://hawkular-metrics.example.com/hawkular/metrics
# Configure nodeIP in the node config
# This is needed in cases where node traffic is desired to go over an
# interface other than the default network interface.
#openshift_node_set_node_ip=True
# Force setting of system hostname when configuring OpenShift
# This works around issues related to installations that do not have valid dns
# entries for the interfaces attached to the host.
#openshift_set_hostname=True

None of the comments mention that Metrics depends on nodeIP, so it is not obvious that the nodeIP and hostname options should be enabled during install.
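For illustration, the corresponding settings uncommented in the inventory (typically placed under the [OSEv3:vars] section):

openshift_node_set_node_ip=True
openshift_set_hostname=True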

Also, another note on the Ansible side: based on the name, I initially assumed that "# Enable cluster metrics" would actually deploy the metrics pods/etc. for me, but that's more of a wording issue than anything, I think.

Version-Release number of selected component (if applicable):
3.1.1

How reproducible:
Install OSE, go through the Metrics setup, then try to view metrics on two different nodes that host different types of projects (resource-limited and unlimited).

Actual results:
In my case, there were no metrics for my Limited projects

Expected results:
Metrics are displayed for projects regardless of which node they run on.

Additional info:
https://bugzilla.redhat.com/show_bug.cgi?id=1305100
Comment 1 Matt Wringe 2016-02-08 09:54:40 EST
*** Bug 1305100 has been marked as a duplicate of this bug. ***
Comment 2 Boris Kurktchiev 2016-02-08 09:56:45 EST
Just to put it here: Clayton asked me to create the second bug and tag him in it. I just couldn't figure out how to :)
Comment 4 Brenton Leanhardt 2016-02-09 10:48:03 EST
Hi Boris,

There's definitely some confusion around nodeIP.  In your environment, if you don't set nodeIP, what IP are the Nodes getting?  I don't need to know the actual value; I'm just curious whether it's the IP of the host from the Master's perspective and not the IP that heapster should be using.

This may be a more general problem.  I've seen environments that have the concept of internal and public hostnames, where inside the cluster both hostnames resolve to an IP that is physically present on the Node.

I've also seen environments where querying a Node's public hostname from the Node itself returns an IP that is not actually present on the Node.  In that case I could easily see the Master setting the wrong IP, which could lead to this and other problems.
Comment 5 Boris Kurktchiev 2016-02-09 10:55:55 EST
There is only a single IP on the box, nothing fancy. The nodes are getting the correct IP without nodeIP being set, but for whatever reason heapster and friends do not like NOT having the setting in there. Incidentally, I just re-ran ansible with openshift_node_set_node_ip=True, but my node.yaml does not have it set, and heapster is not happy. I am about to add it back in and see how it goes.
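For what it's worth, a minimal sketch of adding the setting by hand and restarting the node (the IP is an example; atomic-openshift-node is the OSE node service; this assumes nodeIP is not already present in the file):

echo 'nodeIP: 152.19.229.207' >> /etc/origin/node/node-config.yaml
systemctl restart atomic-openshift-node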
Comment 6 Matt Wringe 2016-02-09 13:38:38 EST
Would it be possible for you to output the results before and after the change of:

`curl --insecure -H "Authorization: Bearer `oc whoami -t`" -X GET https://localhost:8443/api/v1/nodes`

[assuming you are logged into oc as an admin user and the kubernetes master is running on localhost:8443]

This is the call that Heapster uses to get the list of nodes from the system, and I suspect there might be a difference with the `externalID` value or some other parameter.
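For comparison, a minimal sketch (assuming jq is available; it is not something Heapster itself uses) to pull out just the fields most likely to differ:

curl --insecure -s -H "Authorization: Bearer `oc whoami -t`" -X GET https://localhost:8443/api/v1/nodes | jq '.items[] | {name: .metadata.name, externalID: .spec.externalID, addresses: .status.addresses}'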
Comment 7 Boris Kurktchiev 2016-02-09 13:42:06 EST
Yeah, will do. It seems something is going on and I have managed to corrupt the metrics deployment. Let me get it going/redeployed, and I will play with having nodeIP set and not set and send the output.
Comment 8 Boris Kurktchiev 2016-02-10 10:59:35 EST
----> curl --insecure -H "Authorization: Bearer `oc whoami -t`" -X GET https://console.ose.devapps.unc.edu/api/v1/nodes
{
  "kind": "NodeList",
  "apiVersion": "v1",
  "metadata": {
    "selfLink": "/api/v1/nodes",
    "resourceVersion": "1892"
  },
  "items": [
    {
      "metadata": {
        "name": "osmaster0s.devapps.unc.edu",
        "selfLink": "/api/v1/nodes/osmaster0s.devapps.unc.edu",
        "uid": "795bf9ba-d00b-11e5-b7ab-005056a6874f",
        "resourceVersion": "1890",
        "creationTimestamp": "2016-02-10T15:32:20Z",
        "labels": {
          "kubernetes.io/hostname": "osmaster0s.devapps.unc.edu",
          "region": "infra",
          "zone": "master"
        }
      },
      "spec": {
        "externalID": "osmaster0s.devapps.unc.edu",
        "unschedulable": true
      },
      "status": {
        "capacity": {
          "cpu": "2",
          "memory": "8011464Ki",
          "pods": "50"
        },
        "conditions": [
          {
            "type": "Ready",
            "status": "True",
            "lastHeartbeatTime": "2016-02-10T15:58:35Z",
            "lastTransitionTime": "2016-02-10T15:32:20Z",
            "reason": "KubeletReady",
            "message": "kubelet is posting ready status"
          }
        ],
        "addresses": [
          {
            "type": "LegacyHostIP",
            "address": "152.19.229.206"
          },
          {
            "type": "InternalIP",
            "address": "152.19.229.206"
          }
        ],
        "daemonEndpoints": {
          "kubeletEndpoint": {
            "Port": 10250
          }
        },
        "nodeInfo": {
          "machineID": "8d15d3996ac2456189ef02a4827d455c",
          "systemUUID": "42262FBF-076F-30C3-8D42-E180EA6B93FA",
          "bootID": "86726e62-c382-4985-b73e-24f44863dff0",
          "kernelVersion": "3.10.0-327.4.5.el7.x86_64",
          "osImage": "Red Hat Enterprise Linux",
          "containerRuntimeVersion": "docker://1.8.2-el7",
          "kubeletVersion": "v1.1.0-origin-1107-g4c8e6f4",
          "kubeProxyVersion": "v1.1.0-origin-1107-g4c8e6f4"
        }
      }
    },
    {
      "metadata": {
        "name": "osmaster1s.devapps.unc.edu",
        "selfLink": "/api/v1/nodes/osmaster1s.devapps.unc.edu",
        "uid": "7e13ebd6-d00b-11e5-b7ab-005056a6874f",
        "resourceVersion": "1889",
        "creationTimestamp": "2016-02-10T15:32:28Z",
        "labels": {
          "kubernetes.io/hostname": "osmaster1s.devapps.unc.edu",
          "region": "infra",
          "zone": "master"
        }
      },
      "spec": {
        "externalID": "osmaster1s.devapps.unc.edu",
        "unschedulable": true
      },
      "status": {
        "capacity": {
          "cpu": "2",
          "memory": "8011464Ki",
          "pods": "50"
        },
        "conditions": [
          {
            "type": "Ready",
            "status": "True",
            "lastHeartbeatTime": "2016-02-10T15:58:33Z",
            "lastTransitionTime": "2016-02-10T15:32:28Z",
            "reason": "KubeletReady",
            "message": "kubelet is posting ready status"
          }
        ],
        "addresses": [
          {
            "type": "LegacyHostIP",
            "address": "152.19.229.203"
          },
          {
            "type": "InternalIP",
            "address": "152.19.229.203"
          }
        ],
        "daemonEndpoints": {
          "kubeletEndpoint": {
            "Port": 10250
          }
        },
        "nodeInfo": {
          "machineID": "a73244d83d0e4363a5a1ad7d9b7fba7c",
          "systemUUID": "42266F9C-BF9F-AB3D-D290-DEF1CA5FC361",
          "bootID": "8392c1fb-d0e9-418d-b2db-b362c32fec1e",
          "kernelVersion": "3.10.0-327.4.5.el7.x86_64",
          "osImage": "Red Hat Enterprise Linux",
          "containerRuntimeVersion": "docker://1.8.2-el7",
          "kubeletVersion": "v1.1.0-origin-1107-g4c8e6f4",
          "kubeProxyVersion": "v1.1.0-origin-1107-g4c8e6f4"
        }
      }
    },
    {
      "metadata": {
        "name": "osnode0s.devapps.unc.edu",
        "selfLink": "/api/v1/nodes/osnode0s.devapps.unc.edu",
        "uid": "7b4d40a9-d00b-11e5-b7ab-005056a6874f",
        "resourceVersion": "1892",
        "creationTimestamp": "2016-02-10T15:32:23Z",
        "labels": {
          "kubernetes.io/hostname": "osnode0s.devapps.unc.edu",
          "region": "primary",
          "zone": "cloudapps"
        }
      },
      "spec": {
        "externalID": "osnode0s.devapps.unc.edu"
      },
      "status": {
        "capacity": {
          "cpu": "2",
          "memory": "12140216Ki",
          "pods": "50"
        },
        "conditions": [
          {
            "type": "Ready",
            "status": "True",
            "lastHeartbeatTime": "2016-02-10T15:58:38Z",
            "lastTransitionTime": "2016-02-10T15:32:23Z",
            "reason": "KubeletReady",
            "message": "kubelet is posting ready status"
          }
        ],
        "addresses": [
          {
            "type": "LegacyHostIP",
            "address": "152.19.229.207"
          },
          {
            "type": "InternalIP",
            "address": "152.19.229.207"
          }
        ],
        "daemonEndpoints": {
          "kubeletEndpoint": {
            "Port": 10250
          }
        },
        "nodeInfo": {
          "machineID": "e13400afd535404784c15ce0a7965556",
          "systemUUID": "4226CBD2-C7AA-92CF-B572-E8BBEBBF111A",
          "bootID": "29c97ed1-d3ce-47e0-b3bf-97d755769c3a",
          "kernelVersion": "3.10.0-327.4.5.el7.x86_64",
          "osImage": "Red Hat Enterprise Linux",
          "containerRuntimeVersion": "docker://1.8.2-el7",
          "kubeletVersion": "v1.1.0-origin-1107-g4c8e6f4",
          "kubeProxyVersion": "v1.1.0-origin-1107-g4c8e6f4"
        }
      }
    },
    {
      "metadata": {
        "name": "osnode1s.devapps.unc.edu",
        "selfLink": "/api/v1/nodes/osnode1s.devapps.unc.edu",
        "uid": "7fd6ad42-d00b-11e5-89fa-005056a68fb4",
        "resourceVersion": "1891",
        "creationTimestamp": "2016-02-10T15:32:31Z",
        "labels": {
          "kubernetes.io/hostname": "osnode1s.devapps.unc.edu",
          "region": "primary",
          "zone": "vipdapps"
        }
      },
      "spec": {
        "externalID": "osnode1s.devapps.unc.edu"
      },
      "status": {
        "capacity": {
          "cpu": "2",
          "memory": "12140216Ki",
          "pods": "50"
        },
        "conditions": [
          {
            "type": "Ready",
            "status": "True",
            "lastHeartbeatTime": "2016-02-10T15:58:36Z",
            "lastTransitionTime": "2016-02-10T15:32:31Z",
            "reason": "KubeletReady",
            "message": "kubelet is posting ready status"
          }
        ],
        "addresses": [
          {
            "type": "LegacyHostIP",
            "address": "152.19.229.208"
          },
          {
            "type": "InternalIP",
            "address": "152.19.229.208"
          }
        ],
        "daemonEndpoints": {
          "kubeletEndpoint": {
            "Port": 10250
          }
        },
        "nodeInfo": {
          "machineID": "8e142ae5d75c4e6f9972e645b7131695",
          "systemUUID": "42264EBD-B6AA-0203-A6B2-22A39701AFA5",
          "bootID": "cf3af9ba-2f26-4c76-974f-3493302b8e97",
          "kernelVersion": "3.10.0-327.4.5.el7.x86_64",
          "osImage": "Red Hat Enterprise Linux",
          "containerRuntimeVersion": "docker://1.8.2-el7",
          "kubeletVersion": "v1.1.0-origin-1107-g4c8e6f4",
          "kubeProxyVersion": "v1.1.0-origin-1107-g4c8e6f4"
        }
      }
    },
    {
      "metadata": {
        "name": "osnode2s.devapps.unc.edu",
        "selfLink": "/api/v1/nodes/osnode2s.devapps.unc.edu",
        "uid": "7cb257e5-d00b-11e5-89fa-005056a68fb4",
        "resourceVersion": "1888",
        "creationTimestamp": "2016-02-10T15:32:25Z",
        "labels": {
          "kubernetes.io/hostname": "osnode2s.devapps.unc.edu",
          "region": "infra",
          "zone": "support"
        }
      },
      "spec": {
        "externalID": "osnode2s.devapps.unc.edu"
      },
      "status": {
        "capacity": {
          "cpu": "2",
          "memory": "12140216Ki",
          "pods": "50"
        },
        "conditions": [
          {
            "type": "Ready",
            "status": "True",
            "lastHeartbeatTime": "2016-02-10T15:58:31Z",
            "lastTransitionTime": "2016-02-10T15:32:25Z",
            "reason": "KubeletReady",
            "message": "kubelet is posting ready status"
          }
        ],
        "addresses": [
          {
            "type": "LegacyHostIP",
            "address": "152.19.229.209"
          },
          {
            "type": "InternalIP",
            "address": "152.19.229.209"
          }
        ],
        "daemonEndpoints": {
          "kubeletEndpoint": {
            "Port": 10250
          }
        },
        "nodeInfo": {
          "machineID": "4a46171ef4b54fbc91d2bfdac88b21e6",
          "systemUUID": "42266108-670E-26CE-3EED-654025581DAE",
          "bootID": "4f574744-293f-408f-bcc3-1240e0b9141f",
          "kernelVersion": "3.10.0-327.4.5.el7.x86_64",
          "osImage": "Red Hat Enterprise Linux",
          "containerRuntimeVersion": "docker://1.8.2-el7",
          "kubeletVersion": "v1.1.0-origin-1107-g4c8e6f4",
          "kubeProxyVersion": "v1.1.0-origin-1107-g4c8e6f4"
        }
      }
    }
  ]
}

It does look like externalID is set to the hostname and not the nodeIP.
Comment 9 Boris Kurktchiev 2016-02-10 11:00:45 EST
The LegacyHostIP and InternalIP fields are correct. This is the output with nodeIP NOT set in node-config.yaml.
Comment 10 Boris Kurktchiev 2016-02-10 11:23:13 EST
Here is the output after adding nodeIP to node-config.yaml... I am not seeing a difference :/

----> curl --insecure -H "Authorization: Bearer `oc whoami -t`" -X GET https://console.ose.devapps.unc.edu/api/v1/nodes
{
  "kind": "NodeList",
  "apiVersion": "v1",
  "metadata": {
    "selfLink": "/api/v1/nodes",
    "resourceVersion": "2716"
  },
  "items": [
    {
      "metadata": {
        "name": "osmaster0s.devapps.unc.edu",
        "selfLink": "/api/v1/nodes/osmaster0s.devapps.unc.edu",
        "uid": "795bf9ba-d00b-11e5-b7ab-005056a6874f",
        "resourceVersion": "2716",
        "creationTimestamp": "2016-02-10T15:32:20Z",
        "labels": {
          "kubernetes.io/hostname": "osmaster0s.devapps.unc.edu",
          "region": "infra",
          "zone": "master"
        }
      },
      "spec": {
        "externalID": "osmaster0s.devapps.unc.edu",
        "unschedulable": true
      },
      "status": {
        "capacity": {
          "cpu": "2",
          "memory": "8011464Ki",
          "pods": "50"
        },
        "conditions": [
          {
            "type": "Ready",
            "status": "True",
            "lastHeartbeatTime": "2016-02-10T16:22:00Z",
            "lastTransitionTime": "2016-02-10T15:32:20Z",
            "reason": "KubeletReady",
            "message": "kubelet is posting ready status"
          }
        ],
        "addresses": [
          {
            "type": "LegacyHostIP",
            "address": "152.19.229.206"
          },
          {
            "type": "InternalIP",
            "address": "152.19.229.206"
          }
        ],
        "daemonEndpoints": {
          "kubeletEndpoint": {
            "Port": 10250
          }
        },
        "nodeInfo": {
          "machineID": "8d15d3996ac2456189ef02a4827d455c",
          "systemUUID": "42262FBF-076F-30C3-8D42-E180EA6B93FA",
          "bootID": "86726e62-c382-4985-b73e-24f44863dff0",
          "kernelVersion": "3.10.0-327.4.5.el7.x86_64",
          "osImage": "Red Hat Enterprise Linux",
          "containerRuntimeVersion": "docker://1.8.2-el7",
          "kubeletVersion": "v1.1.0-origin-1107-g4c8e6f4",
          "kubeProxyVersion": "v1.1.0-origin-1107-g4c8e6f4"
        }
      }
    },
    {
      "metadata": {
        "name": "osmaster1s.devapps.unc.edu",
        "selfLink": "/api/v1/nodes/osmaster1s.devapps.unc.edu",
        "uid": "7e13ebd6-d00b-11e5-b7ab-005056a6874f",
        "resourceVersion": "2714",
        "creationTimestamp": "2016-02-10T15:32:28Z",
        "labels": {
          "kubernetes.io/hostname": "osmaster1s.devapps.unc.edu",
          "region": "infra",
          "zone": "master"
        }
      },
      "spec": {
        "externalID": "osmaster1s.devapps.unc.edu",
        "unschedulable": true
      },
      "status": {
        "capacity": {
          "cpu": "2",
          "memory": "8011464Ki",
          "pods": "50"
        },
        "conditions": [
          {
            "type": "Ready",
            "status": "True",
            "lastHeartbeatTime": "2016-02-10T16:21:58Z",
            "lastTransitionTime": "2016-02-10T15:32:28Z",
            "reason": "KubeletReady",
            "message": "kubelet is posting ready status"
          }
        ],
        "addresses": [
          {
            "type": "LegacyHostIP",
            "address": "152.19.229.203"
          },
          {
            "type": "InternalIP",
            "address": "152.19.229.203"
          }
        ],
        "daemonEndpoints": {
          "kubeletEndpoint": {
            "Port": 10250
          }
        },
        "nodeInfo": {
          "machineID": "a73244d83d0e4363a5a1ad7d9b7fba7c",
          "systemUUID": "42266F9C-BF9F-AB3D-D290-DEF1CA5FC361",
          "bootID": "8392c1fb-d0e9-418d-b2db-b362c32fec1e",
          "kernelVersion": "3.10.0-327.4.5.el7.x86_64",
          "osImage": "Red Hat Enterprise Linux",
          "containerRuntimeVersion": "docker://1.8.2-el7",
          "kubeletVersion": "v1.1.0-origin-1107-g4c8e6f4",
          "kubeProxyVersion": "v1.1.0-origin-1107-g4c8e6f4"
        }
      }
    },
    {
      "metadata": {
        "name": "osnode0s.devapps.unc.edu",
        "selfLink": "/api/v1/nodes/osnode0s.devapps.unc.edu",
        "uid": "7b4d40a9-d00b-11e5-b7ab-005056a6874f",
        "resourceVersion": "2712",
        "creationTimestamp": "2016-02-10T15:32:23Z",
        "labels": {
          "kubernetes.io/hostname": "osnode0s.devapps.unc.edu",
          "region": "primary",
          "zone": "cloudapps"
        }
      },
      "spec": {
        "externalID": "osnode0s.devapps.unc.edu"
      },
      "status": {
        "capacity": {
          "cpu": "2",
          "memory": "12140216Ki",
          "pods": "50"
        },
        "conditions": [
          {
            "type": "Ready",
            "status": "True",
            "lastHeartbeatTime": "2016-02-10T16:21:58Z",
            "lastTransitionTime": "2016-02-10T15:32:23Z",
            "reason": "KubeletReady",
            "message": "kubelet is posting ready status"
          }
        ],
        "addresses": [
          {
            "type": "LegacyHostIP",
            "address": "152.19.229.207"
          },
          {
            "type": "InternalIP",
            "address": "152.19.229.207"
          }
        ],
        "daemonEndpoints": {
          "kubeletEndpoint": {
            "Port": 10250
          }
        },
        "nodeInfo": {
          "machineID": "e13400afd535404784c15ce0a7965556",
          "systemUUID": "4226CBD2-C7AA-92CF-B572-E8BBEBBF111A",
          "bootID": "29c97ed1-d3ce-47e0-b3bf-97d755769c3a",
          "kernelVersion": "3.10.0-327.4.5.el7.x86_64",
          "osImage": "Red Hat Enterprise Linux",
          "containerRuntimeVersion": "docker://1.8.2-el7",
          "kubeletVersion": "v1.1.0-origin-1107-g4c8e6f4",
          "kubeProxyVersion": "v1.1.0-origin-1107-g4c8e6f4"
        }
      }
    },
    {
      "metadata": {
        "name": "osnode1s.devapps.unc.edu",
        "selfLink": "/api/v1/nodes/osnode1s.devapps.unc.edu",
        "uid": "7fd6ad42-d00b-11e5-89fa-005056a68fb4",
        "resourceVersion": "2715",
        "creationTimestamp": "2016-02-10T15:32:31Z",
        "labels": {
          "kubernetes.io/hostname": "osnode1s.devapps.unc.edu",
          "region": "primary",
          "zone": "vipdapps"
        }
      },
      "spec": {
        "externalID": "osnode1s.devapps.unc.edu"
      },
      "status": {
        "capacity": {
          "cpu": "2",
          "memory": "12140216Ki",
          "pods": "50"
        },
        "conditions": [
          {
            "type": "Ready",
            "status": "True",
            "lastHeartbeatTime": "2016-02-10T16:21:58Z",
            "lastTransitionTime": "2016-02-10T15:32:31Z",
            "reason": "KubeletReady",
            "message": "kubelet is posting ready status"
          }
        ],
        "addresses": [
          {
            "type": "LegacyHostIP",
            "address": "152.19.229.208"
          },
          {
            "type": "InternalIP",
            "address": "152.19.229.208"
          }
        ],
        "daemonEndpoints": {
          "kubeletEndpoint": {
            "Port": 10250
          }
        },
        "nodeInfo": {
          "machineID": "8e142ae5d75c4e6f9972e645b7131695",
          "systemUUID": "42264EBD-B6AA-0203-A6B2-22A39701AFA5",
          "bootID": "cf3af9ba-2f26-4c76-974f-3493302b8e97",
          "kernelVersion": "3.10.0-327.4.5.el7.x86_64",
          "osImage": "Red Hat Enterprise Linux",
          "containerRuntimeVersion": "docker://1.8.2-el7",
          "kubeletVersion": "v1.1.0-origin-1107-g4c8e6f4",
          "kubeProxyVersion": "v1.1.0-origin-1107-g4c8e6f4"
        }
      }
    },
    {
      "metadata": {
        "name": "osnode2s.devapps.unc.edu",
        "selfLink": "/api/v1/nodes/osnode2s.devapps.unc.edu",
        "uid": "7cb257e5-d00b-11e5-89fa-005056a68fb4",
        "resourceVersion": "2713",
        "creationTimestamp": "2016-02-10T15:32:25Z",
        "labels": {
          "kubernetes.io/hostname": "osnode2s.devapps.unc.edu",
          "region": "infra",
          "zone": "support"
        }
      },
      "spec": {
        "externalID": "osnode2s.devapps.unc.edu"
      },
      "status": {
        "capacity": {
          "cpu": "2",
          "memory": "12140216Ki",
          "pods": "50"
        },
        "conditions": [
          {
            "type": "Ready",
            "status": "True",
            "lastHeartbeatTime": "2016-02-10T16:21:58Z",
            "lastTransitionTime": "2016-02-10T15:32:25Z",
            "reason": "KubeletReady",
            "message": "kubelet is posting ready status"
          }
        ],
        "addresses": [
          {
            "type": "LegacyHostIP",
            "address": "152.19.229.209"
          },
          {
            "type": "InternalIP",
            "address": "152.19.229.209"
          }
        ],
        "daemonEndpoints": {
          "kubeletEndpoint": {
            "Port": 10250
          }
        },
        "nodeInfo": {
          "machineID": "4a46171ef4b54fbc91d2bfdac88b21e6",
          "systemUUID": "42266108-670E-26CE-3EED-654025581DAE",
          "bootID": "4f574744-293f-408f-bcc3-1240e0b9141f",
          "kernelVersion": "3.10.0-327.4.5.el7.x86_64",
          "osImage": "Red Hat Enterprise Linux",
          "containerRuntimeVersion": "docker://1.8.2-el7",
          "kubeletVersion": "v1.1.0-origin-1107-g4c8e6f4",
          "kubeProxyVersion": "v1.1.0-origin-1107-g4c8e6f4"
        }
      }
    }
  ]
}
Comment 11 Brenton Leanhardt 2016-02-22 08:07:31 EST
I'm resetting this to the component owner.  Unless setting the nodeIP a certain way (that is still valid) somehow results in a different Node configuration that breaks Metrics, I don't think the nodeIP setting is the root of the problem.
Comment 12 Matt Wringe 2016-02-22 11:39:46 EST
@Brenton: have you verified that the certificates being generated in both cases are valid for the same hostnames and IP addresses?

If the certificates used on the nodes are not valid for the IP address of the host, then it is an invalid certificate for that node and Heapster will not connect to it.
Comment 13 Matt Wringe 2016-02-22 12:15:13 EST
@Boris: are you seeing any difference in the SAN of the node's certificate before and after adding the nodeIP value?

It should be under the 'X509v3 Subject Alternative Name' section when performing 'openssl x509 -text -in /etc/origin/node/server.crt'
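For example, to print just the SAN entries from that certificate:

openssl x509 -noout -text -in /etc/origin/node/server.crt | grep -A1 'Subject Alternative Name'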
Comment 14 Boris Kurktchiev 2016-02-22 12:52:59 EST
Nope, no difference. I don't see how the certificate would change, as the cert is generated during install and contains the DNS name and IP address of the node.
Comment 15 Brenton Leanhardt 2016-02-22 14:02:57 EST
Boris, what IPs are in the 'X509v3 Subject Alternative Name' section of your certificate?  Is the IP you set in nodeIP, or the default one selected when you don't set it, missing from the Node certificate?
Comment 16 Boris Kurktchiev 2016-02-22 14:04:55 EST
The IPs in the SAN list are the correct IPs matching the individual nodes. I only have one IP per node, so yes, the IP in nodeIP matches the one in the SAN.
Comment 17 Matt Wringe 2016-02-25 16:46:20 EST
Ok, so I am just trying to make sure I am not missing anything here.

With 'nodeIP' set in the node's node-config.yaml, metrics work. If you stop the node, edit node-config.yaml so that 'nodeIP' is empty, and then restart, the gathering of metrics does not work.

The certificates shouldn't matter here, as they shouldn't change with just a restart like this. The certificates contain both the hostname and the IP address of the node they belong to.

And the output from /api/v1/nodes with or without the nodeIP is exactly the same.

Is there anything else weird that you notice when you don't have the nodeIP specified?

Is the hostname of each node resolvable to that IP address from every other node in the cluster? (A quick check is sketched below.)

As far as I can tell, this shouldn't be affecting anything with metrics, but there is obviously something happening here.
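For example, a quick check from one node (the hostname and expected IP are examples from this cluster):

getent hosts osnode0s.devapps.unc.edu
# expected output: 152.19.229.207  osnode0s.devapps.unc.edu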
Comment 18 Boris Kurktchiev 2016-02-26 08:51:04 EST
With nodeIP not defined, what I get is what's in my original ticket (the one marked as a duplicate of this one): my resource-limited projects do not get metrics. In order to get that working, I have to add nodeIP and nodeIP only.
Comment 19 Matt Wringe 2016-02-26 09:53:21 EST
Yes; I believe what was determined was that this was due to the limited pods running on a separate node.

Are you able to run non-resource-limited pods on that node and see metrics for them?

Are there any other differences between this node and your other nodes? Or were they all set up in the same manner?
Comment 20 Boris Kurktchiev 2016-02-26 09:59:54 EST
They are the same nodes, part of the same cluster, set up at the same time. The only difference is that some are labeled for default users (resource limited) and some are labeled for "vip" users (no limits). Changing the labels around doesn't make a difference.
Comment 21 Matt Wringe 2016-02-26 10:10:40 EST
Can you confirm whether non-limited pods running on that node can or cannot see metrics?

I am trying to determine if it is an issue limited to that particular node, or something specific to limited resources.

The metrics components are essentially oblivious as to whether a pod is limited or not. If it is something with limited resources, then it is probably something going on on the OpenShift side when exposing resources.
Comment 22 Boris Kurktchiev 2016-02-26 10:14:26 EST
Swapping labels and moving projects to the node produces the same results. Limited projects get no metrics.
Comment 24 Solly Ross 2016-03-23 14:34:00 EDT
Can we get logs from the heapster container, especially logs with heapster run at a high verbosity? (You can increase the verbosity by adding `--v=6` as an argument to the heapster command in the heapster ReplicationController, and then deleting the existing heapster pod.)
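A possible sequence, for illustration (this assumes the ReplicationController is named heapster in the openshift-infra project; the pod name below is just an example):

oc edit rc heapster -n openshift-infra          # add --v=6 to the heapster container's arguments
oc get pods -n openshift-infra                  # note the current heapster pod name
oc delete pod heapster-rx1xm -n openshift-infra # the RC recreates the pod with the new verbosity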
Comment 25 Boris Kurktchiev 2016-03-23 14:52:06 EDT
As soon as I can restore my POC infra which broke pretty splendidly with the latest RPM release. Probably tomorrow sometime.
Comment 26 Boris Kurktchiev 2016-04-04 13:52:43 EDT
OK, so I am trying to get this going again and I am running into a problem. I have followed the instructions here: https://docs.openshift.com/enterprise/3.1/install_config/cluster_metrics.html#metrics-reencrypting-route and now I am getting a 503 when going to my metrics URL. The router is working just fine and routing, so it's not a router problem. The heapster logs are filled with:
E0404 13:47:57.015164       1 driver.go:234] Could not update tags: Put https://hawkular-metrics:443/hawkular/metrics/counters/simple%2F9040909b-f67e-11e5-baef-005056a68fb4%2Fuptime/tags: net/http: request canceled while waiting for connection

All the pods in the metrics project are up and running and reporting just fine, so I am not sure how it is unable to connect.
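For reference, one way to confirm the hawkular-metrics service actually has endpoints behind it (names assume the default metrics deployment in the openshift-infra project):

oc get svc hawkular-metrics -n openshift-infra
oc get endpoints hawkular-metrics -n openshift-infra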
Comment 27 Matt Wringe 2016-04-04 14:12:57 EDT
Can you please attach the Hawkular Metrics and Cassandra logs?

What deployer options and secrets were used, and what versions of the components?
Comment 28 Boris Kurktchiev 2016-04-04 14:19:53 EDT
https://gist.github.com/ebalsumgo/db7ae39f4f58659c33ca6562a68f7d83 hawkular logs
https://gist.github.com/ebalsumgo/02f4c6ca626c7d4e037fc65af0b1c5ee Cassandra logs


registry.access.redhat.com/openshift3/metrics-cassandra:3.1.1
registry.access.redhat.com/openshift3/metrics-hawkular-metrics:3.1.1
registry.access.redhat.com/openshift3/metrics-heapster:3.1.1
root@osmaster0s:~:
----> oc version
oc v3.1.1.6-33-g81eabcc
kubernetes v1.1.0-origin-1107-g4c8e6f4

The deployment was done as described in the documentation: first with /dev/null in order to generate the self-signed certs, and then I followed https://docs.openshift.com/enterprise/3.1/install_config/cluster_metrics.html#metrics-reencrypting-route in order to add my own front-end certs.

The actual steps:
oc secrets new metrics-deployer nothing=/dev/null

oc process -f /usr/share/openshift/examples/infrastructure-templates/enterprise/metrics-deployer.yaml -v HAWKULAR_METRICS_HOSTNAME=ose-metrics.ose.devapps.unc.edu | oc create -f -
Comment 29 Matt Wringe 2016-04-04 14:25:48 EDT
Ok, there doesn't seem to be anything in the logs for the Hawkular Metrics or Cassandra that would be terribly concerning.

Can you connect directly to the IP address for the Hawkular Metrics instances? I believe it should be something like https://$IP_ADDRESS:8444/hawkular/metrics/status

If you could output the result for that it would be helpful.
Comment 30 Boris Kurktchiev 2016-04-04 14:27:38 EDT
root@osmaster0s:~:
----> curl -k https://10.1.1.2:8444/hawkular/metrics/status
{"MetricsService":"STARTED","Implementation-Version":"0.8.0.Final-redhat-1","Built-From-Git-SHA1":"826f08dd34912ad455a4cb2b34f2e79cd79ace9a"}
Comment 31 Matt Wringe 2016-04-04 14:40:59 EDT
Ok, that is good since it means that Hawkular Metrics and Cassandra are properly running.

I suspect your new route is not properly configured. It can be a bit picky about how it's configured. You may want to look at the docs a bit more: https://docs.openshift.com/enterprise/3.1/architecture/core_concepts/routes.html#secured-routes

Is there anything in the OpenShift logs about the route?
Comment 32 Boris Kurktchiev 2016-04-04 14:48:30 EDT
Again, the route was created using the instructions supplied in the documentation. Here is the full output:
root@osmaster0s:~:
----> oc get route hawkular-metrics-reencrypt -o yaml
apiVersion: v1
kind: Route
metadata:
  creationTimestamp: 2016-03-24T20:04:17Z
  name: hawkular-metrics-reencrypt
  namespace: openshift-infra
  resourceVersion: "2779"
  selfLink: /oapi/v1/namespaces/openshift-infra/routes/hawkular-metrics-reencrypt
  uid: 96b241ba-f1fb-11e5-9b14-005056a6874f
spec:
  host: ose-metrics.ose.devapps.unc.edu
  port:
    targetPort: 8443
  tls:
    caCertificate: |-
      -----BEGIN CERTIFICATE-----
      edited
      -----END CERTIFICATE-----
    certificate: |-
      -----BEGIN CERTIFICATE-----
      edited
      -----END CERTIFICATE-----
    destinationCACertificate: |-
      -----BEGIN CERTIFICATE-----
      edited
      -----END CERTIFICATE-----
    key: |-
      -----BEGIN PRIVATE KEY-----
      edited
      -----END PRIVATE KEY-----
    termination: reencrypt
  to:
    kind: Service
    name: hawkular-metrics
status: {}

As far as I can tell it looks right.
Comment 33 Matt Wringe 2016-04-04 14:53:49 EDT
If you are running on OSE, can you try changing the targetPort to 8444? It looks like we might need to update the documentation.
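For illustration, one way to make that change (using the route name and namespace shown above):

oc edit route hawkular-metrics-reencrypt -n openshift-infra
# change spec.port.targetPort from 8443 to 8444, then save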
Comment 34 Boris Kurktchiev 2016-04-04 14:58:21 EDT
Alright! That seems to have fixed the 503 problem. However, the heapster logs are still throwing:
E0404 14:57:57.069869       1 driver.go:234] Could not update tags: Put https://hawkular-metrics:443/hawkular/metrics/gauges/simple%2F9040909b-f67e-11e5-baef-005056a68fb4%2Fmemory%2Frequest/tags: net/http: request canceled while waiting for connection
Comment 35 Boris Kurktchiev 2016-04-04 15:03:40 EDT
I just restarted the pod to be sure, and got the same result:
Starting Heapster with the following arguments: --source=kubernetes:https://kubernetes.default.svc:443?useServiceAccount=true&kubeletHttps=true&kubeletPort=10250 --sink=hawkular:https://hawkular-metrics:443?tenant=_system&labelToTenant=pod_namespace&caCert=/hawkular-cert/hawkular-metrics-ca.certificate&user=hawkular&pass=RUpKImZiAdmIArE&filter=label(container_name:^/system.slice.*|^/user.slice) --logtostderr=true --tls_cert=/secrets/heapster.cert --tls_key=/secrets/heapster.key --tls_client_ca=/secrets/heapster.client-ca --allowed_users=system:master-proxy
I0404 15:02:44.931542       1 heapster.go:60] heapster --source=kubernetes:https://kubernetes.default.svc:443?useServiceAccount=true&kubeletHttps=true&kubeletPort=10250 --sink=hawkular:https://hawkular-metrics:443?tenant=_system&labelToTenant=pod_namespace&caCert=/hawkular-cert/hawkular-metrics-ca.certificate&user=hawkular&pass=RUpKImZiAdmIArE&filter=label(container_name:^/system.slice.*|^/user.slice) --logtostderr=true --tls_cert=/secrets/heapster.cert --tls_key=/secrets/heapster.key --tls_client_ca=/secrets/heapster.client-ca --allowed_users=system:master-proxy
I0404 15:02:45.001973       1 heapster.go:61] Heapster version 0.18.0
I0404 15:02:45.002608       1 kube_factory.go:168] Using Kubernetes client with master "https://kubernetes.default.svc:443" and version "v1"
I0404 15:02:45.002624       1 kube_factory.go:169] Using kubelet port 10250
I0404 15:02:45.002976       1 driver.go:491] Initialised Hawkular Sink with parameters {_system https://hawkular-metrics:443?tenant=_system&labelToTenant=pod_namespace&caCert=/hawkular-cert/hawkular-metrics-ca.certificate&user=hawkular&pass=RUpKImZiAdmIArE&filter=label(container_name:^/system.slice.*|^/user.slice) 0xc20817ec60 }
F0404 15:03:15.004547       1 heapster.go:67] Get https://hawkular-metrics:443/hawkular/metrics/metrics?type=gauge: net/http: request canceled while waiting for connection
Comment 36 Boris Kurktchiev 2016-04-04 15:30:08 EDT
And now, seemingly, the pod terminates itself after the first dropped request.
Comment 37 Matt Wringe 2016-04-04 15:52:23 EDT
For the OSE 3.1 release it was common for Heapster to restart itself a few times if Hawkular Metrics was not started (due to Heapster starting before Hawkular Metrics was fully started and available). This issue has been fixed since then and should not be an issue past the 3.1 release.

Does Heapster continuously fail to start? Or does it eventually stay up once Hawkular Metrics is fully running?
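For reference, the restart counts and pod states can be checked with:

oc get pods -n openshift-infra
# the RESTARTS column shows how often Heapster has restarted; STATUS shows whether hawkular-metrics is Running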
Comment 38 Boris Kurktchiev 2016-04-04 15:56:53 EDT
Well, Heapster is continuously restarting, but now the Hawkular pod is refusing to fire up... the pod is sitting in Pending with no errors in the OSE logs or event logs.
Comment 39 Matt Wringe 2016-04-04 16:19:33 EDT
Can you check the docker logs for Heapster? Is Cassandra fully started up? I thought a while ago everything was working with Hawkular Metrics and Cassandra; what changed?
Comment 40 Boris Kurktchiev 2016-04-04 16:21:38 EDT
The only thing that changed was me giving the pods a restart to make sure that was not the problem. Up until then I had not restarted all the pods. Cassandra is up, heapster just keeps restarting itself with the error from above, and hawkular now just sits in Pending; the only way to get it to do anything is to run: oc delete pod --grace-period=0 hawkular-metrics-asdfa
Comment 41 Matt Wringe 2016-04-04 16:38:34 EDT
Can you please check the docker logs of the Hawkular Metrics instance, not the oc logs.
Comment 42 Boris Kurktchiev 2016-04-05 09:04:40 EDT
docker logs nets me nothing because, as I said, the pod ends up sitting in Pending for no particular reason I can find. The docker daemon log doesn't contain anything either. Here is the output of some docker commands:
root@osnode2s:~:
----> docker inspect 534179c12ef6
[
{
    "Id": "534179c12ef643156a8d521f68e0cb5d1b1f14c365596e01fc6984cc1991dc15",
    "Created": "2016-04-05T12:58:56.277422086Z",
    "Path": "/pod",
    "Args": [],
    "State": {
        "Running": true,
        "Paused": false,
        "Restarting": false,
        "OOMKilled": false,
        "Dead": false,
        "Pid": 15946,
        "ExitCode": 0,
        "Error": "",
        "StartedAt": "2016-04-05T12:58:57.087960564Z",
        "FinishedAt": "0001-01-01T00:00:00Z"
    },
    "Image": "ecfbff48161cfa7acb4e1aba20243e9f70084bce66100e8e3e92bea356018e68",
    "NetworkSettings": {
        "Bridge": "",
        "EndpointID": "59d4828bb5dfefed6b750a1fe02ef31a7fa2b462e2cd36a6e89540371fc0b0b6",
        "Gateway": "10.1.1.1",
        "GlobalIPv6Address": "",
        "GlobalIPv6PrefixLen": 0,
        "HairpinMode": false,
        "IPAddress": "10.1.1.15",
        "IPPrefixLen": 24,
        "IPv6Gateway": "",
        "LinkLocalIPv6Address": "",
        "LinkLocalIPv6PrefixLen": 0,
        "MacAddress": "02:42:0a:01:01:0f",
        "NetworkID": "caf1defa999d82b256dd03c701ce1e5edc7b70b7c47d311c959df2f1820e7199",
        "PortMapping": null,
        "Ports": {},
        "SandboxKey": "/var/run/docker/netns/534179c12ef6",
        "SecondaryIPAddresses": null,
        "SecondaryIPv6Addresses": null
    },
    "ResolvConfPath": "/var/lib/docker/containers/534179c12ef643156a8d521f68e0cb5d1b1f14c365596e01fc6984cc1991dc15/resolv.conf",
    "HostnamePath": "/var/lib/docker/containers/534179c12ef643156a8d521f68e0cb5d1b1f14c365596e01fc6984cc1991dc15/hostname",
    "HostsPath": "/var/lib/docker/containers/534179c12ef643156a8d521f68e0cb5d1b1f14c365596e01fc6984cc1991dc15/hosts",
    "LogPath": "/var/lib/docker/containers/534179c12ef643156a8d521f68e0cb5d1b1f14c365596e01fc6984cc1991dc15/534179c12ef643156a8d521f68e0cb5d1b1f14c365596e01fc6984cc1991dc15-json.log",
    "Name": "/k8s_POD.f0242a85_hawkular-metrics-ws9dw_openshift-infra_27eb9198-fb2e-11e5-a919-005056a6874f_96e4ac70",
    "RestartCount": 0,
    "Driver": "devicemapper",
    "ExecDriver": "native-0.2",
    "MountLabel": "system_u:object_r:svirt_sandbox_file_t:s0:c5,c0",
    "ProcessLabel": "system_u:system_r:svirt_lxc_net_t:s0:c5,c0",
    "AppArmorProfile": "",
    "ExecIDs": null,
    "HostConfig": {
        "Binds": null,
        "ContainerIDFile": "",
        "LxcConf": null,
        "Memory": 0,
        "MemorySwap": -1,
        "CpuShares": 2,
        "CpuPeriod": 0,
        "CpusetCpus": "",
        "CpusetMems": "",
        "CpuQuota": 0,
        "BlkioWeight": 0,
        "OomKillDisable": false,
        "MemorySwappiness": null,
        "Privileged": false,
        "PortBindings": null,
        "Links": null,
        "PublishAllPorts": false,
        "Dns": [
            "172.30.0.1",
            "152.19.230.66",
            "152.19.240.8",
            "152.2.21.1",
            "152.2.253.100"
        ],
        "DnsSearch": [
            "openshift-infra.svc.cluster.local",
            "svc.cluster.local",
            "cluster.local",
            "devapps.unc.edu",
            "isis.unc.edu",
            "its.unc.edu",
            "unc.edu"
        ],
        "ExtraHosts": null,
        "VolumesFrom": null,
        "Devices": null,
        "NetworkMode": "default",
        "IpcMode": "",
        "PidMode": "",
        "UTSMode": "",
        "CapAdd": null,
        "CapDrop": null,
        "GroupAdd": null,
        "RestartPolicy": {
            "Name": "",
            "MaximumRetryCount": 0
        },
        "SecurityOpt": [
            "label:level:s0:c5,c0"
        ],
        "ReadonlyRootfs": false,
        "Ulimits": null,
        "LogConfig": {
            "Type": "json-file",
            "Config": {}
        },
        "CgroupParent": "",
        "ConsoleSize": [
            0,
            0
        ]
    },
    "GraphDriver": {
        "Name": "devicemapper",
        "Data": {
            "DeviceId": "208",
            "DeviceName": "docker-253:3-100809738-534179c12ef643156a8d521f68e0cb5d1b1f14c365596e01fc6984cc1991dc15",
            "DeviceSize": "107374182400"
        }
    },
    "Mounts": [],
    "Config": {
        "Hostname": "hawkular-metrics-ws9dw",
        "Domainname": "",
        "User": "",
        "AttachStdin": false,
        "AttachStdout": false,
        "AttachStderr": false,
        "ExposedPorts": null,
        "PublishService": "",
        "Tty": false,
        "OpenStdin": false,
        "StdinOnce": false,
        "Env": [
            "HAWKULAR_METRICS_SERVICE_PORT=443",
            "KUBERNETES_PORT_53_TCP=tcp://172.30.0.1:53",
            "KUBERNETES_PORT_53_TCP_PROTO=tcp",
            "HAWKULAR_CASSANDRA_PORT_9042_TCP_PORT=9042",
            "HAWKULAR_CASSANDRA_SERVICE_HOST=172.30.152.170",
            "HAWKULAR_CASSANDRA_SERVICE_PORT=9042",
            "HAWKULAR_CASSANDRA_PORT_9160_TCP_PORT=9160",
            "HAWKULAR_METRICS_PORT_443_TCP_PROTO=tcp",
            "HEAPSTER_PORT=tcp://172.30.106.27:80",
            "HEAPSTER_PORT_80_TCP=tcp://172.30.106.27:80",
            "HEAPSTER_PORT_80_TCP_PORT=80",
            "HAWKULAR_CASSANDRA_SERVICE_PORT_SSL_PORT=7001",
            "HAWKULAR_METRICS_SERVICE_PORT_HTTPS_ENDPOINT=443",
            "HEAPSTER_SERVICE_HOST=172.30.106.27",
            "KUBERNETES_PORT_443_TCP_PROTO=tcp",
            "KUBERNETES_PORT_53_TCP_PORT=53",
            "HAWKULAR_CASSANDRA_PORT_7000_TCP_PROTO=tcp",
            "HAWKULAR_CASSANDRA_PORT_7001_TCP=tcp://172.30.152.170:7001",
            "KUBERNETES_SERVICE_PORT_DNS_TCP=53",
            "KUBERNETES_PORT_443_TCP=tcp://172.30.0.1:443",
            "KUBERNETES_PORT_53_UDP_PORT=53",
            "KUBERNETES_PORT_53_TCP_ADDR=172.30.0.1",
            "HAWKULAR_CASSANDRA_PORT_9042_TCP=tcp://172.30.152.170:9042",
            "HAWKULAR_CASSANDRA_PORT_9160_TCP_ADDR=172.30.152.170",
            "HAWKULAR_CASSANDRA_PORT_7000_TCP_ADDR=172.30.152.170",
            "KUBERNETES_PORT_53_UDP_PROTO=udp",
            "HAWKULAR_CASSANDRA_SERVICE_PORT_CQL_PORT=9042",
            "HAWKULAR_CASSANDRA_SERVICE_PORT_TCP_PORT=7000",
            "HAWKULAR_CASSANDRA_PORT_9160_TCP=tcp://172.30.152.170:9160",
            "HAWKULAR_CASSANDRA_PORT_7001_TCP_ADDR=172.30.152.170",
            "HAWKULAR_METRICS_PORT_443_TCP_PORT=443",
            "KUBERNETES_PORT_443_TCP_ADDR=172.30.0.1",
            "KUBERNETES_PORT_53_UDP_ADDR=172.30.0.1",
            "HAWKULAR_CASSANDRA_PORT_9042_TCP_PROTO=tcp",
            "HAWKULAR_CASSANDRA_PORT_7001_TCP_PROTO=tcp",
            "HEAPSTER_PORT_80_TCP_PROTO=tcp",
            "KUBERNETES_SERVICE_PORT_HTTPS=443",
            "HAWKULAR_CASSANDRA_PORT_7000_TCP=tcp://172.30.152.170:7000",
            "KUBERNETES_SERVICE_PORT_DNS=53",
            "KUBERNETES_PORT=tcp://172.30.0.1:443",
            "HAWKULAR_CASSANDRA_PORT_7000_TCP_PORT=7000",
            "HAWKULAR_CASSANDRA_SERVICE_PORT_THIFT_PORT=9160",
            "HAWKULAR_METRICS_PORT=tcp://172.30.1.46:443",
            "HEAPSTER_PORT_80_TCP_ADDR=172.30.106.27",
            "KUBERNETES_PORT_53_UDP=udp://172.30.0.1:53",
            "HAWKULAR_CASSANDRA_PORT_9042_TCP_ADDR=172.30.152.170",
            "HAWKULAR_CASSANDRA_PORT_9160_TCP_PROTO=tcp",
            "KUBERNETES_PORT_443_TCP_PORT=443",
            "HAWKULAR_CASSANDRA_PORT=tcp://172.30.152.170:9042",
            "HAWKULAR_METRICS_PORT_443_TCP=tcp://172.30.1.46:443",
            "HAWKULAR_METRICS_PORT_443_TCP_ADDR=172.30.1.46",
            "KUBERNETES_SERVICE_HOST=172.30.0.1",
            "HAWKULAR_CASSANDRA_PORT_7001_TCP_PORT=7001",
            "HEAPSTER_SERVICE_PORT=80",
            "HAWKULAR_METRICS_SERVICE_HOST=172.30.1.46",
            "KUBERNETES_SERVICE_PORT=443",
            "container=docker",
            "PATH=/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin"
        ],
        "Cmd": null,
        "Image": "openshift3/ose-pod:v3.1.1.6",
        "Volumes": null,
        "VolumeDriver": "",
        "WorkingDir": "",
        "Entrypoint": [
            "/pod"
        ],
        "NetworkDisabled": false,
        "MacAddress": "",
        "OnBuild": null,
        "Labels": {
            "io.kubernetes.pod.name": "openshift-infra/hawkular-metrics-ws9dw",
            "io.kubernetes.pod.terminationGracePeriod": "30"
        }
    }
}
]
root@osnode2s:~:
----> docker logs 534179c12ef6
root@osnode2s:~:
----> docker ps
CONTAINER ID        IMAGE                                                                  COMMAND                  CREATED              STATUS              PORTS               NAMES
f3dbf1198e27        registry.access.redhat.com/openshift3/metrics-hawkular-metrics:3.1.1   "/opt/hawkular/script"   About a minute ago   Up About a minute                       k8s_hawkular-metrics.a349fda5_hawkular-metrics-ws9dw_openshift-infra_27eb9198-fb2e-11e5-a919-005056a6874f_e6454a18
534179c12ef6        openshift3/ose-pod:v3.1.1.6                                            "/pod"                   About a minute ago   Up About a minute                       k8s_POD.f0242a85_hawkular-metrics-ws9dw_openshift-infra_27eb9198-fb2e-11e5-a919-005056a6874f_96e4ac70
7f9a15fff5a0        openshift3/ose-pod:v3.1.1.6                                            "/pod"                   2 minutes ago        Up 2 minutes                            k8s_POD.9f460279_heapster-rx1xm_openshift-infra_0009c51e-fb2e-11e5-a919-005056a6874f_602fc948
9a1134128c96        registry.access.redhat.com/openshift3/metrics-cassandra:3.1.1          "/opt/apache-cassandr"   17 hours ago         Up 17 hours                             k8s_hawkular-cassandra-1.5a8cfde5_hawkular-cassandra-1-g7bcf_openshift-infra_b3e7ff00-fa9f-11e5-a919-005056a6874f_d6a14326
f432f9f19295        openshift3/ose-pod:v3.1.1.6                                            "/pod"                   17 hours ago         Up 17 hours                             k8s_POD.450271b2_hawkular-cassandra-1-g7bcf_openshift-infra_b3e7ff00-fa9f-11e5-a919-005056a6874f_d02e1540
cb50322632fb        openshift3/ose-haproxy-router:v3.1.1.6                                 "/usr/bin/openshift-r"   11 days ago          Up 11 days                              k8s_router.5e2a2e75_router-4-gzkw5_default_f154e1ed-f1fe-11e5-a0b4-005056a6874f_67b9f5e6
6f95bad6869c        openshift3/ose-pod:v3.1.1.6                                            "/pod"                   11 days ago          Up 11 days                              k8s_POD.e071dbf6_router-4-gzkw5_default_f154e1ed-f1fe-11e5-a0b4-005056a6874f_78a516e0
db52b896c7ba        openshift3/ose-docker-registry:v3.1.1.6                                "/bin/sh -c 'DOCKER_R"   11 days ago          Up 11 days                              k8s_registry.e2c08a9f_docker-registry-3-zevgm_default_2438891a-f1f8-11e5-9b14-005056a6874f_6bdf2e1e
fde4b2d65517        openshift3/ose-pod:v3.1.1.6                                            "/pod"                   11 days ago          Up 11 days                              k8s_POD.449bfd0f_docker-registry-3-zevgm_default_2438891a-f1f8-11e5-9b14-005056a6874f_007791be
root@osnode2s:~:
---->
Comment 43 Matt Wringe 2016-04-05 09:14:48 EDT
docker logs `docker ps | grep -i k8s_hawkular-metrics | awk '{print $1}'`

I don't need the pod logs, just the logs for the container itself.
Comment 44 Boris Kurktchiev 2016-04-05 09:16:03 EDT
OK, after a lot of futzing and waiting, I got docker to see it as up (OSE still doesn't see it as up):
root@osnode2s:~:
----> docker logs f3dbf1198e27
/opt/hawkular/auth ~
Picked up JAVA_TOOL_OPTIONS: -Duser.home=/home/jboss -Duser.name=jboss
Certificate was added to keystore
[Storing hawkular-metrics.truststore]
~
=========================================================================

  JBoss Bootstrap Environment

  JBOSS_HOME: /opt/eap

  JAVA: /usr/lib/jvm/java-1.8.0/bin/java

  JAVA_OPTS:  -server -XX:+UseCompressedOops -verbose:gc -Xloggc:"/opt/eap/standalone/log/gc.log" -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=3M -XX:-TraceClassUnloading -Xms1303m -Xmx1303m -XX:MaxPermSize=256m -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.logmanager -Djava.awt.headless=true -Djboss.modules.policy-permissions=true -Xbootclasspath/p:/opt/eap/jboss-modules.jar:/opt/eap/modules/system/layers/base/org/jboss/logmanager/main/jboss-logmanager-1.5.4.Final-redhat-1.jar:/opt/eap/modules/system/layers/base/org/jboss/logmanager/ext/main/javax.json-1.0.4.jar:/opt/eap/modules/system/layers/base/org/jboss/logmanager/ext/main/jboss-logmanager-ext-1.0.0.Alpha2-redhat-1.jar -Djava.util.logging.manager=org.jboss.logmanager.LogManager -javaagent:/opt/eap/jolokia.jar=port=8778,protocol=https,caCert=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt,clientPrincipal=cn=system:master-proxy,useSslClientAuthentication=true,extraClientCheck=true,host=0.0.0.0,discoveryEnabled=false -Djava.security.egd=file:/dev/./urandom

=========================================================================

Picked up JAVA_TOOL_OPTIONS: -Duser.home=/home/jboss -Duser.name=jboss
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
I> No access restrictor found, access to all MBean is allowed
Jolokia: Agent started with URL https://10.1.1.15:8778/jolokia/
08:59:14,780 INFO  [org.jboss.modules] (main) JBoss Modules version 1.3.7.Final-redhat-1
08:59:15,476 INFO  [org.jboss.msc] (main) JBoss MSC version 1.1.5.Final-redhat-1
08:59:15,572 INFO  [org.jboss.as] (MSC service thread 1-4) JBAS015899: JBoss EAP 6.4.4.GA (AS 7.5.4.Final-redhat-4) starting
08:59:15,579 DEBUG [org.jboss.as.config] (MSC service thread 1-4) Configured system properties:
	KUBERNETES_MASTER_URL = https://kubernetes.default.svc:443
	[Standalone] =
	awt.toolkit = sun.awt.X11.XToolkit
	file.encoding = ANSI_X3.4-1968
	file.encoding.pkg = sun.io
	file.separator = /
	hawkular-metrics.cassandra-nodes = hawkular-cassandra
	hawkular-metrics.cassandra-use-ssl = true
	hawkular-metrics.openshift.auth-methods = openshift-oauth,htpasswd
	hawkular-metrics.openshift.htpasswd-file = /secrets/hawkular-metrics.htpasswd.file
	hawkular.metrics.allowed-cors-access-control-allow-headers = authorization
	hawkular.metrics.default-ttl = 7
	java.awt.graphicsenv = sun.awt.X11GraphicsEnvironment
	java.awt.headless = true
	java.awt.printerjob = sun.print.PSPrinterJob
	java.class.path = /opt/eap/jboss-modules.jar:/opt/eap/jolokia.jar
	java.class.version = 52.0
	java.endorsed.dirs = /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.71-2.b15.el7_2.x86_64/jre/lib/endorsed
	java.ext.dirs = /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.71-2.b15.el7_2.x86_64/jre/lib/ext:/usr/java/packages/lib/ext
	java.home = /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.71-2.b15.el7_2.x86_64/jre
	java.io.tmpdir = /tmp
	java.library.path = /usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
	java.net.preferIPv4Stack = true
	java.runtime.name = OpenJDK Runtime Environment
	java.runtime.version = 1.8.0_71-b15
	java.security.egd = file:/dev/./urandom
	java.specification.name = Java Platform API Specification
	java.specification.vendor = Oracle Corporation
	java.specification.version = 1.8
	java.util.logging.manager = org.jboss.logmanager.LogManager
	java.vendor = Oracle Corporation
	java.vendor.url = http://java.oracle.com/
	java.vendor.url.bug = http://bugreport.sun.com/bugreport/
	java.version = 1.8.0_71
	java.vm.info = mixed mode
	java.vm.name = OpenJDK 64-Bit Server VM
	java.vm.specification.name = Java Virtual Machine Specification
	java.vm.specification.vendor = Oracle Corporation
	java.vm.specification.version = 1.8
	java.vm.vendor = Oracle Corporation
	java.vm.version = 25.71-b15
	javax.management.builder.initial = org.jboss.as.jmx.PluggableMBeanServerBuilder
	javax.net.ssl.keyStore = /opt/hawkular/auth/hawkular-metrics.keystore
	javax.net.ssl.keyStorePassword = <redacted>
	javax.net.ssl.trustStore = /opt/hawkular/auth/hawkular-metrics.truststore
	javax.net.ssl.trustStorePassword = <redacted>
	javax.xml.datatype.DatatypeFactory = __redirected.__DatatypeFactory
	javax.xml.parsers.DocumentBuilderFactory = __redirected.__DocumentBuilderFactory
	javax.xml.parsers.SAXParserFactory = __redirected.__SAXParserFactory
	javax.xml.stream.XMLEventFactory = __redirected.__XMLEventFactory
	javax.xml.stream.XMLInputFactory = __redirected.__XMLInputFactory
	javax.xml.stream.XMLOutputFactory = __redirected.__XMLOutputFactory
	javax.xml.transform.TransformerFactory = __redirected.__TransformerFactory
	javax.xml.validation.SchemaFactory:http://www.w3.org/2001/XMLSchema = __redirected.__SchemaFactory
	javax.xml.xpath.XPathFactory:http://java.sun.com/jaxp/xpath/dom = __redirected.__XPathFactory
	jboss.bind.address = 0.0.0.0
	jboss.home.dir = /opt/eap
	jboss.host.name = hawkular-metrics-ws9dw
	jboss.modules.dir = /opt/eap/modules
	jboss.modules.policy-permissions = true
	jboss.modules.system.pkgs = org.jboss.logmanager
	jboss.node.name = hawkular-metrics-ws9dw
	jboss.qualified.host.name = hawkular-metrics-ws9dw
	jboss.server.base.dir = /opt/eap/standalone
	jboss.server.config.dir = /opt/eap/standalone/configuration
	jboss.server.data.dir = /opt/eap/standalone/data
	jboss.server.deploy.dir = /opt/eap/standalone/data/content
	jboss.server.log.dir = /opt/eap/standalone/log
	jboss.server.name = hawkular-metrics-ws9dw
	jboss.server.persist.config = true
	jboss.server.temp.dir = /opt/eap/standalone/tmp
	jolokia.agent = https://10.1.1.15:8778/jolokia/
	line.separator =

	logging.configuration = file:/opt/eap/standalone/configuration/logging.properties
	module.path = /opt/eap/modules
	org.apache.catalina.connector.CoyoteAdapter.ALLOW_BACKSLASH = true
	org.apache.tomcat.util.buf.UDecoder.ALLOW_ENCODED_SLASH = true
	org.jboss.boot.log.file = /opt/eap/standalone/log/server.log
	org.jboss.resolver.warning = true
	org.xml.sax.driver = __redirected.__XMLReaderFactory
	os.arch = amd64
	os.name = Linux
	os.version = 3.10.0-327.10.1.el7.x86_64
	path.separator = :
	sun.arch.data.model = 64
	sun.boot.class.path = /opt/eap/jboss-modules.jar:/opt/eap/modules/system/layers/base/org/jboss/logmanager/main/jboss-logmanager-1.5.4.Final-redhat-1.jar:/opt/eap/modules/system/layers/base/org/jboss/logmanager/ext/main/javax.json-1.0.4.jar:/opt/eap/modules/system/layers/base/org/jboss/logmanager/ext/main/jboss-logmanager-ext-1.0.0.Alpha2-redhat-1.jar:/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.71-2.b15.el7_2.x86_64/jre/lib/resources.jar:/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.71-2.b15.el7_2.x86_64/jre/lib/rt.jar:/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.71-2.b15.el7_2.x86_64/jre/lib/sunrsasign.jar:/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.71-2.b15.el7_2.x86_64/jre/lib/jsse.jar:/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.71-2.b15.el7_2.x86_64/jre/lib/jce.jar:/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.71-2.b15.el7_2.x86_64/jre/lib/charsets.jar:/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.71-2.b15.el7_2.x86_64/jre/lib/jfr.jar:/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.71-2.b15.el7_2.x86_64/jre/classes
	sun.boot.library.path = /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.71-2.b15.el7_2.x86_64/jre/lib/amd64
	sun.cpu.endian = little
	sun.cpu.isalist =
	sun.io.unicode.encoding = UnicodeLittle
	sun.java.command = /opt/eap/jboss-modules.jar -mp /opt/eap/modules -jaxpmodule javax.xml.jaxp-provider org.jboss.as.standalone -Djboss.home.dir=/opt/eap -Djboss.server.base.dir=/opt/eap/standalone -Djavax.net.ssl.keyStore=/opt/hawkular/auth/hawkular-metrics.keystore -Djavax.net.ssl.keyStorePassword=<redacted> -Djavax.net.ssl.trustStore=/opt/hawkular/auth/hawkular-metrics.truststore -Djavax.net.ssl.trustStorePassword=<redacted> -b 0.0.0.0 -Dhawkular-metrics.cassandra-nodes=hawkular-cassandra -Dhawkular-metrics.cassandra-use-ssl -Dorg.apache.tomcat.util.buf.UDecoder.ALLOW_ENCODED_SLASH=true -Dorg.apache.catalina.connector.CoyoteAdapter.ALLOW_BACKSLASH=true -Dhawkular-metrics.openshift.auth-methods=openshift-oauth,htpasswd -Dhawkular-metrics.openshift.htpasswd-file=/secrets/hawkular-metrics.htpasswd.file -Dhawkular.metrics.allowed-cors-access-control-allow-headers=authorization -Dhawkular.metrics.default-ttl=7 -DKUBERNETES_MASTER_URL=https://kubernetes.default.svc:443
	sun.java.launcher = SUN_STANDARD
	sun.jnu.encoding = ANSI_X3.4-1968
	sun.management.compiler = HotSpot 64-Bit Tiered Compilers
	sun.os.patch.level = unknown
	user.country = US
	user.dir = /home/jboss
	user.home = /home/jboss
	user.language = en
	user.name = jboss
	user.timezone = America/New_York
08:59:15,579 DEBUG [org.jboss.as.config] (MSC service thread 1-4) VM Arguments: -Duser.home=/home/jboss -Duser.name=jboss -D[Standalone] -XX:+UseCompressedOops -verbose:gc -Xloggc:/opt/eap/standalone/log/gc.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=3M -XX:-TraceClassUnloading -Xms1303m -Xmx1303m -XX:MaxPermSize=256m -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.logmanager -Djava.awt.headless=true -Djboss.modules.policy-permissions=true -Xbootclasspath/p:/opt/eap/jboss-modules.jar:/opt/eap/modules/system/layers/base/org/jboss/logmanager/main/jboss-logmanager-1.5.4.Final-redhat-1.jar:/opt/eap/modules/system/layers/base/org/jboss/logmanager/ext/main/javax.json-1.0.4.jar:/opt/eap/modules/system/layers/base/org/jboss/logmanager/ext/main/jboss-logmanager-ext-1.0.0.Alpha2-redhat-1.jar -Djava.util.logging.manager=org.jboss.logmanager.LogManager -javaagent:/opt/eap/jolokia.jar=port=8778,protocol=https,caCert=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt,clientPrincipal=cn=system:master-proxy,useSslClientAuthentication=true,extraClientCheck=true,host=0.0.0.0,discoveryEnabled=false -Djava.security.egd=file:/dev/./urandom -Dorg.jboss.boot.log.file=/opt/eap/standalone/log/server.log -Dlogging.configuration=file:/opt/eap/standalone/configuration/logging.properties
08:59:17,675 INFO  [org.xnio] (MSC service thread 1-3) XNIO Version 3.0.14.GA-redhat-1
08:59:17,683 INFO  [org.xnio.nio] (MSC service thread 1-3) XNIO NIO Implementation Version 3.0.14.GA-redhat-1
08:59:17,691 INFO  [org.jboss.as.server] (Controller Boot Thread) JBAS015888: Creating http management service using socket-binding (management-http)
08:59:17,704 INFO  [org.jboss.remoting] (MSC service thread 1-3) JBoss Remoting version 3.3.5.Final-redhat-1
Comment 45 Boris Kurktchiev 2016-04-05 10:07:10 EDT
Alright! So after rebooting the entire infrastructure a few times, it all finally came back up. The good news is that, as it sits right now with the re-encrypt route, the original issue in this ticket seems to be solved. I do not have nodeIP defined in my nodes' config files.

I have not tested what happens if I revert to the normally created route, or whether one of the last 3.1.* releases fixed the problem. For reference, a minimal sketch of the two pieces being discussed is below; the IP, hostname, and certificate are placeholders rather than my real values, and the route snippet only shows the part relevant to metrics.
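
# node-config.yaml - pin the address metrics should use for this node (placeholder IP)
nodeIP: 192.0.2.10

# re-encrypt route in front of hawkular-metrics (names per the standard deployer; may differ)
apiVersion: v1
kind: Route
metadata:
  name: hawkular-metrics
spec:
  host: hawkular-metrics.example.com
  to:
    kind: Service
    name: hawkular-metrics
  tls:
    termination: reencrypt
    destinationCACertificate: |
      -----BEGIN CERTIFICATE-----
      (CA that signed the hawkular-metrics service certificate)
      -----END CERTIFICATE-----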
Comment 46 Boris Kurktchiev 2016-04-07 09:06:30 EDT
Well... things were working, and then I walked in this morning to this message: Error fetching cpu/usage for container hawkular-cassandra-1. Failed to perform operation due to an error: All host(s) tried for query failed (no host was tried)

Nothing has been touched on the platform other than normal usage generating data. Infrastructure-wise it's the same as it was...
Comment 47 Boris Kurktchiev 2016-04-07 09:15:14 EDT
The heapster pod's log is filled with:
E0407 08:46:33.977906       1 driver.go:311] Hawkular returned status code 500, error message: Failed to perform operation due to an error: All host(s) tried for query failed (no host was tried)
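
In case it helps, this is roughly how I am poking at it; the heapster pod name is a placeholder, and the project is whatever the metrics deployer was run in (openshift-infra in my case):

# see whether the metrics pods are still running (project name may differ)
oc get pods -n openshift-infra
# heapster shows the 500s quoted above
oc logs heapster-xxxxx -n openshift-infra
# hawkular-metrics and cassandra logs show whether the backend itself is unhappy
oc logs hawkular-metrics-ws9dw -n openshift-infra
oc logs hawkular-cassandra-1-g7bcf -n openshift-infra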
Comment 48 Boris Kurktchiev 2016-04-07 09:17:10 EDT
Looks like this might be caused by Cassandra:
ERROR 04:15:03 Exception in thread Thread[MemtableFlushWriter:140,5,main]
java.lang.RuntimeException: Insufficient disk space to write 8932212 bytes
	at org.apache.cassandra.io.util.DiskAwareRunnable.getWriteDirectory(DiskAwareRunnable.java:29) ~[apache-cassandra-2.2.1.redhat-2.jar:2.2.1.redhat-2]
	at org.apache.cassandra.db.Memtable$FlushRunnable.runMayThrow(Memtable.java:349) ~[apache-cassandra-2.2.1.redhat-2.jar:2.2.1.redhat-2]
	at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) ~[apache-cassandra-2.2.1.redhat-2.jar:2.2.1.redhat-2]
	at com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297) ~[guava-16.0.jar:na]
	at org.apache.cassandra.db.ColumnFamilyStore$Flush.run(ColumnFamilyStore.java:1173) ~[apache-cassandra-2.2.1.redhat-2.jar:2.2.1.redhat-2]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_71]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[na:1.8.0_71]
	at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_71]
ERROR 04:25:02 Stopping gossiper
WARN  04:25:02 Stopping gossip by operator request
INFO  04:25:02 Announcing shutdown
INFO  04:25:02 Node hawkular-cassandra-1-g7bcf/10.1.1.4 state jump to normal
ERROR 04:25:04 Stopping RPC server
INFO  04:25:04 Stop listening to thrift clients
ERROR 04:25:04 Stopping native transport
INFO  04:25:04 Stop listening for CQL clients
ERROR 04:25:04 Failed to persist commits to disk. Commit disk failure policy is stop; terminating thread
org.apache.cassandra.io.FSWriteError: java.io.IOException: No space left on device
	at org.apache.cassandra.db.commitlog.MemoryMappedSegment.write(MemoryMappedSegment.java:100) ~[apache-cassandra-2.2.1.redhat-2.jar:2.2.1.redhat-2]
	at org.apache.cassandra.db.commitlog.CommitLogSegment.sync(CommitLogSegment.java:281) ~[apache-cassandra-2.2.1.redhat-2.jar:2.2.1.redhat-2]
	at org.apache.cassandra.db.commitlog.CommitLog.sync(CommitLog.java:238) ~[apache-cassandra-2.2.1.redhat-2.jar:2.2.1.redhat-2]
	at org.apache.cassandra.db.commitlog.AbstractCommitLogService$1.run(AbstractCommitLogService.java:93) ~[apache-cassandra-2.2.1.redhat-2.jar:2.2.1.redhat-2]
	at java.lang.Thread.run(Thread.java:745) [na:1.8.0_71]
Caused by: java.io.IOException: No space left on device
	at java.nio.MappedByteBuffer.force0(Native Method) ~[na:1.8.0_71]
	at java.nio.MappedByteBuffer.force(MappedByteBuffer.java:203) ~[na:1.8.0_71]
	at org.apache.cassandra.utils.SyncUtil.force(SyncUtil.java:93) ~[apache-cassandra-2.2.1.redhat-2.jar:2.2.1.redhat-2]
	at org.apache.cassandra.db.commitlog.MemoryMappedSegment.write(MemoryMappedSegment.java:96) ~[apache-cassandra-2.2.1.redhat-2.jar:2.2.1.redhat-2]
	... 4 common frames omitted
INFO  05:02:53 Enqueuing flush of size_estimates: 34202 (0%) on-heap, 0 (0%) off-heap
INFO  05:02:53 Enqueuing flush of peers: 152 (0%) on-heap, 0 (0%) off-heap
INFO  05:02:53 Enqueuing flush of sstable_activity: 45152 (0%) on-heap, 0 (0%) off-heap


Shouldn't it be cleaning up/overwriting old data? I can give it a new PV with more space, but it would be nice to know how to capacity plan for this.
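
For what it's worth, the hawkular-metrics command line earlier in this ticket shows -Dhawkular.metrics.default-ttl=7, so I assume data is supposed to expire after about a week, with Cassandra reclaiming the space later during compaction; either way, the volume filled up first. This is roughly how I am watching usage from outside the pod (nodetool may not be on the PATH in every image):

# how full the volumes mounted in the Cassandra pod are
oc exec hawkular-cassandra-1-g7bcf -- df -h
# Cassandra's own view of how much data it is carrying
oc exec hawkular-cassandra-1-g7bcf -- nodetool status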
Comment 49 Boris Kurktchiev 2016-04-07 16:08:06 EDT
Yes, so this is definitely the cause of metrics breaking. What is the suggested way to replace the current PVC with a new one that has more storage? I am OK with losing data at this stage of the game, I just want to have metrics running for longer than a day :)
Comment 50 Steve Speicher 2016-04-12 16:45:50 EDT
(In reply to Boris Kurktchiev from comment #49)
> Yes, so this is definitely the cause of metrics breaking. What is the
> suggested way to replace the current PVC with a new one that has more
> storage? I am OK with losing data at this stage of the game, I just want to
> have metrics running for longer than a day :)

Dealing with Cassandra filling up its PVs is currently being worked on in another bug: https://bugzilla.redhat.com/show_bug.cgi?id=1316275
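
In the meantime, one rough way to swap in a larger claim, with the caveat that it throws away the stored metrics and that the rc/PVC names below are what the standard deployer uses and may not match your cluster (check oc get pvc first):

# stop Cassandra so nothing holds the volume (project and rc name may differ)
oc scale rc hawkular-cassandra-1 --replicas=0 -n openshift-infra
# delete the old claim and recreate it with more storage under the same name
oc delete pvc metrics-cassandra-1 -n openshift-infra
oc create -n openshift-infra -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: metrics-cassandra-1
spec:
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 50Gi
EOF
# bring Cassandra back up; it will start with an empty data volume
oc scale rc hawkular-cassandra-1 --replicas=1 -n openshift-infra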
Comment 51 chunchen 2016-06-07 23:28:08 EDT
Per comment #45, and since the issue is not reproduced with the latest metrics images, marking this as verified.
Comment 53 errata-xmlrpc 2016-06-27 11:05:30 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1343
