Bug 1581844 - Hypervisor show not showing load average on certain compute nodes
Summary: Hypervisor show not showing load average on certain compute nodes
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: python-openstackclient
Version: 10.0 (Newton)
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: z9
Target Release: 10.0 (Newton)
Assignee: Julie Pichon
QA Contact: Shai Revivo
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-05-23 18:33 UTC by jhardee
Modified: 2021-12-10 16:18 UTC
CC List: 17 users

Fixed In Version: python-openstackclient-3.2.1-4.el7ost
Doc Type: Bug Fix
Doc Text:
Previously, the regular expression used to match and extract the load average from the Nova hypervisor uptime API response was incorrect. As a result, the load average was not displayed when the number of logged-in users was 1. With this update, the regular expression matches all expected formats of the uptime output, and the load average displays correctly in all cases.
Clone Of:
Environment:
Last Closed: 2018-09-17 16:59:20 UTC
Target Upstream Version:
Embargoed:


Attachments:


Links:
Red Hat Issue Tracker OSP-11398 (last updated 2021-12-10 16:18:21 UTC)
Red Hat Product Errata RHBA-2018:2671 (last updated 2018-09-17 17:00:21 UTC)

Description jhardee 2018-05-23 18:33:12 UTC
Description of problem:

Running "openstack hypervisor show" outputs differ between compute nodes.  Some compute nodes showing uptime data, some compute nodes are not showing uptime data output.


Version-Release number of selected component (if applicable):
openstack-nova-compute-14.1.0-3.el7ost.noarch


How reproducible:
Reproducible on 2 compute nodes out of many, in a single environment.


Steps to Reproduce:
1. Deploy the overcloud
2. Run "openstack hypervisor show" against multiple compute nodes

Actual results: Uptime data is not shown for certain compute nodes.


Expected results: Uptime data is shown for all compute nodes.


Additional info:

Comment 2 Jose 2018-05-23 19:08:18 UTC
Any idea on what is causing this bug?

Comment 5 Artom Lifshitz 2018-05-24 19:26:03 UTC
The load_average field in the openstackclient output comes from the hypervisor-uptime API [1].

The first thing I would do to narrow down the exact cause is to query the same API with the nova client:

  $ nova hypervisor-uptime <hypervisor uuid>

Internally, all the libvirt driver does is run the 'uptime' command on the compute host. I would also try SSHing to the compute host and running 'uptime' as a non-root user:

  $ uptime

Once we have those two pieces of information, we can investigate further based on the results.

Cheers!

[1] https://developer.openstack.org/api-ref/compute/#show-hypervisor-uptime
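
For completeness, the same raw uptime string can also be pulled from Python via python-novaclient. A minimal sketch, assuming keystoneauth1 password credentials and placeholder endpoint/account values (none of these values come from this report):

  # Placeholder credentials for illustration only.
  from keystoneauth1.identity import v3
  from keystoneauth1 import session
  from novaclient import client

  auth = v3.Password(
      auth_url='http://controller:5000/v3',   # placeholder
      username='admin', password='secret',    # placeholder
      project_name='admin',
      user_domain_name='Default',
      project_domain_name='Default',
  )
  nova = client.Client('2.1', session=session.Session(auth=auth))

  # Same call that 'nova hypervisor-uptime' makes; it returns the raw
  # 'uptime' string that openstackclient later parses into separate fields.
  hypervisor_id = '30'                        # example id; substitute your own
  print(nova.hypervisors.uptime(hypervisor_id).uptime)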

Comment 6 Jose 2018-05-25 13:20:52 UTC
[stack@cldvcpdrc ~]$ nova hypervisor-uptime cldvcpsrva001.cloudmandic.com.br
+---------------------+-----------------------------------------------------------------------+
| Property            | Value                                                                 |
+---------------------+-----------------------------------------------------------------------+
| hypervisor_hostname | cldvcpsrva001.cloudmandic.com.br                                      |
| id                  | 30                                                                    |
| state               | up                                                                    |
| status              | enabled                                                               |
| uptime              |  10:19:45 up 72 days, 17:04,  1 user,  load average: 1.30, 1.36, 1.28 |
|                     |                                                                       |
+---------------------+-----------------------------------------------------------------------+

[stack@cldvcpdrc ~]$ openstack hypervisor show cldvcpsrva001.cloudmandic.com.br
+----------------------+-------------------------------------------------------------------------------------------------------------------------------------+
| Field                | Value                                                                                                                               |
+----------------------+-------------------------------------------------------------------------------------------------------------------------------------+
| aggregates           | [u'xeon-E5-2620', u'windows', u'aurek']                                                                                             |
| cpu_info             | {"vendor": "Intel", "model": "Broadwell", "arch": "x86_64", "features": ["pge", "avx", "xsaveopt", "clflush", "sep", "rtm",         |
|                      | "tsc_adjust", "vme", "dtes64", "invpcid", "tsc", "fsgsbase", "xsave", "smap", "bmi2", "vmx", "erms", "xtpr", "cmov", "hle", "smep", |
|                      | "pcid", "est", "pat", "monitor", "smx", "pbe", "lm", "msr", "adx", "3dnowprefetch", "nx", "fxsr", "syscall", "tm", "sse4.1", "pae", |
|                      | "sse4.2", "pclmuldq", "acpi", "fma", "pni", "tsc-deadline", "popcnt", "mmx", "osxsave", "cx8", "mce", "de", "rdtscp", "ht", "dca",  |
|                      | "lahf_lm", "abm", "rdseed", "pdcm", "mca", "pdpe1gb", "mbm_local", "sse", "f16c", "pse", "ds", "invtsc", "mbm_total", "tm2",        |
|                      | "avx2", "aes", "sse2", "ss", "ds_cpl", "arat", "bmi1", "apic", "ssse3", "fpu", "cx16", "pse36", "mtrr", "movbe", "rdrand", "cmt",   |
|                      | "x2apic"], "topology": {"cores": 8, "cells": 2, "threads": 2, "sockets": 1}}                                                        |
| current_workload     | 0                                                                                                                                   |
| disk_available_least | 17831                                                                                                                               |
| free_disk_gb         | 19509                                                                                                                               |
| free_ram_mb          | 323362                                                                                                                              |
| host_ip              | 10.252.2.25                                                                                                                         |
| hypervisor_hostname  | cldvcpsrva001.cloudmandic.com.br                                                                                                    |
| hypervisor_type      | QEMU                                                                                                                                |
| hypervisor_version   | 2009000                                                                                                                             |
| id                   | 30                                                                                                                                  |
| local_gb             | 20009                                                                                                                               |
| local_gb_used        | 500                                                                                                                                 |
| memory_mb            | 524066                                                                                                                              |
| memory_mb_used       | 200704                                                                                                                              |
| running_vms          | 15                                                                                                                                  |
| service_host         | cldvcpsrva001.cloudmandic.com.br                                                                                                    |
| service_id           | 135                                                                                                                                 |
| state                | up                                                                                                                                  |
| status               | enabled                                                                                                                             |
| vcpus                | 32                                                                                                                                  |
| vcpus_used           | 74                                                                                                                                  |
+----------------------+-------------------------------------------------------------------------------------------------------------------------------------+

[heat-admin@cldvcpsrva001 ~]$ uptime
 10:20:25 up 72 days, 17:05,  2 users,  load average: 0,99, 1,27, 1,25

Comment 7 Artom Lifshitz 2018-05-25 17:30:04 UTC
This is a bug in openstackclient that was fixed upstream in the Ocata (OSP11) cycle [1]. I've posted a backport to OSP10 [2].

[1] https://review.openstack.org/#/c/353555/
[2] https://code.engineering.redhat.com/gerrit/#/c/139897/
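
To illustrate the failure mode against the two uptime strings from comment 6 (the patterns below are simplified stand-ins, not the exact expression shipped in python-openstackclient): a regex that requires the literal plural "users" never matches output reporting "1 user", while making the trailing "s" optional matches both forms.

  import re

  # Simplified stand-in patterns for illustration; not the exact expressions
  # used in python-openstackclient.
  broken = re.compile(r"(.+)\sup\s+(.+),\s+(.+)\susers,\s+load average:\s(.+)")
  fixed = re.compile(r"(.+)\sup\s+(.+),\s+(.+)\susers?,\s+load average:\s(.+)")

  samples = [
      # nova hypervisor-uptime output (1 user) and the host's own uptime (2 users)
      " 10:19:45 up 72 days, 17:04,  1 user,  load average: 1.30, 1.36, 1.28",
      " 10:20:25 up 72 days, 17:05,  2 users,  load average: 0,99, 1,27, 1,25",
  ]

  for line in samples:
      print(bool(broken.match(line)), bool(fixed.match(line)))
  # broken: False for the "1 user" line, True for "2 users"
  # fixed:  True for both, so the load average can always be extracted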

Comment 15 Alex McLeod 2018-09-03 08:01:48 UTC
Hi there,

If this bug requires doc text for errata release, please set the 'Doc Type' and provide draft text according to the template in the 'Doc Text' field.

The documentation team will review, edit, and approve the text.

If this bug does not require doc text, please set the 'requires_doc_text' flag to -.

Thanks,
Alex

Comment 17 errata-xmlrpc 2018-09-17 16:59:20 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:2671

