Bug 1846881 - [RFE] pCPU usage information should be included in Horizon dashboard and openstack CLI
Summary: [RFE] pCPU usage information should be included in Horizon dashboard and openstack CLI
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: python-django-horizon
Version: 16.0 (Train)
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: z3
Target Release: 17.1
Assignee: Tatiana Ovchinnikova
QA Contact: Jan Jasek
URL:
Whiteboard:
Duplicates: 2016832
Depends On:
Blocks:
 
Reported: 2020-06-15 08:00 UTC by Meiyan Zheng
Modified: 2024-10-01 16:39 UTC
CC: 36 users

Fixed In Version: python-django-horizon-19.4.1-17.1.20230621085935.el9ost
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2024-05-22 20:39:11 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
OpenStack gerrit 890525 0 None MERGED Use Placement API along with the hypervisor stats 2024-01-04 17:06:26 UTC
OpenStack gerrit 906004 0 None MERGED Add allocation ratios to Placement stats 2024-03-04 19:46:18 UTC
Red Hat Issue Tracker OSP-31205 0 None None None 2024-01-17 21:06:30 UTC
Red Hat Issue Tracker OSP-795 0 None None None 2021-11-18 15:20:06 UTC
Red Hat Knowledge Base (Solution) 7017657 0 None None None 2023-06-07 12:06:51 UTC
Red Hat Product Errata RHBA-2024:2741 0 None None None 2024-05-22 20:39:16 UTC

Description Meiyan Zheng 2020-06-15 08:00:35 UTC
Description of problem:

When checking Hypervisor status under Admin --> Compute --> Hypervisors in the dashboard, pCPU usage information is not shown when cpu_dedicated_set is configured in nova.conf, for example:

[compute]
cpu_shared_set = 0,1,4,5
cpu_dedicated_set = 2,3,6,7

https://specs.openstack.org/openstack/nova-specs/specs/train/implemented/cpu-resources.html
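
For illustration only (this helper is not part of Nova), a minimal sketch of how the comma-separated CPU set strings above partition host cores. With this split, Placement reports the shared set as VCPU inventory and the dedicated set as PCPU inventory:

```python
def parse_cpu_set(spec: str) -> set[int]:
    """Parse a nova.conf CPU set string such as "0,1,4,5" or "0-3,8-11"."""
    cpus: set[int] = set()
    for part in spec.split(","):
        part = part.strip()
        if "-" in part:
            # Inclusive range, e.g. "8-11" -> {8, 9, 10, 11}
            start, end = part.split("-")
            cpus.update(range(int(start), int(end) + 1))
        elif part:
            cpus.add(int(part))
    return cpus

shared = parse_cpu_set("0,1,4,5")     # cores backing VCPU inventory
dedicated = parse_cpu_set("2,3,6,7")  # cores backing PCPU inventory

# The two sets must not overlap on a real compute node
assert shared.isdisjoint(dedicated)
print(len(shared), len(dedicated))  # 4 4
```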


Version-Release number of selected component (if applicable):
- RHOSP16 

How reproducible:

Steps to Reproduce:
1. Configure vCPU and pCPU settings in nova.conf:
[compute]
cpu_shared_set = 0,1,4,5
cpu_dedicated_set = 2,3,6,7

2. Create instances on the above compute node

3. Check Hypervisors status in dashboard with Admin --> Compute --> Hypervisors

Actual results:
Only the total vCPU count is shown

Expected results:
pCPU usage should also be included

Additional info:

Comment 7 Artom Lifshitz 2021-11-01 13:44:00 UTC
From the Compute point of view, information about vCPUs and pCPUs is consumable via the Placement API and is accessible on the command line with the `openstack resource provider` series of commands, as Francois pointed out in comment #5.

We've made a point of removing usage information from the `hypervisors` API [1] as it was "frequently misleading or outright wrong [and] can be better queried from placement" - and do not intend to re-add anything back to it.

It is up to the Horizon team if and how they want to present the information from the Placement resource providers APIs in the UI.

[1] https://docs.openstack.org/nova/latest/reference/api-microversion-history.html#maximum-in-wallaby
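
As a sketch of the approach comment 7 describes, the per-class CPU data can be queried from the CLI via Placement (this assumes the osc-placement plugin is installed; `<provider-uuid>` is a placeholder for the compute node's resource provider UUID):

```shell
# List resource providers (one per compute node)
openstack resource provider list

# Show one provider's inventories; with cpu_dedicated_set configured,
# both VCPU and PCPU resource classes appear here
openstack resource provider inventory list <provider-uuid>

# Show current allocations against that provider
openstack resource provider usage show <provider-uuid>
```

With the nova.conf example from the description, the VCPU inventory would correspond to the shared set and the PCPU inventory to the dedicated set.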

Comment 8 Radomir Dopieralski 2021-11-03 13:38:52 UTC
*** Bug 2016832 has been marked as a duplicate of this bug. ***

Comment 10 XinhuaLi 2022-04-18 03:42:45 UTC
Hi Team,

Good day.
So may we understand that the fix will be included in RHOSP 18.0?

Regards
Sam

Comment 11 Radomir Dopieralski 2022-04-19 07:45:18 UTC
Sadly, it's not a fix; it's a feature that requires new code.

We have it initially planned for osp18, but as you can see, it hasn't been ack-ed by PM yet, so it may still change.

Comment 38 errata-xmlrpc 2024-05-22 20:39:11 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat OpenStack Platform 17.1.3 bug fix and enhancement advisory), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2024:2741

