Bug 1851797 - Failed to create VM with watchdog by setting flavor metadata
Summary: Failed to create VM with watchdog by setting flavor metadata
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-glance
Version: 16.1 (Train)
Hardware: x86_64
OS: Linux
Importance: high / medium
Target Milestone: beta
Target Release: 16.2 (Train on RHEL 8.4)
Assignee: Cyril Roelandt
QA Contact: Mike Abrams
Docs Contact: RHOS Documentation Team
URL:
Whiteboard: libvirt_OSP_INT
Depends On:
Blocks: 1929840
 
Reported: 2020-06-29 02:35 UTC by chhu
Modified: 2022-09-05 13:21 UTC (History)
CC: 14 users

Fixed In Version: openstack-glance-19.0.4-2.20210212110604.5bbd356.el8
Doc Type: Bug Fix
Doc Text:
This update fixes an Image service (glance) metadata definition error that prevented users from creating a virtual machine with a watchdog device by setting flavor metadata.
Clone Of:
Clones: 1929840
Environment:
Last Closed: 2021-09-15 07:08:44 UTC
Target Upstream Version:
Embargoed:


Attachments
nova-conductor.log (6.10 KB, text/plain), 2020-07-06 09:46 UTC, chhu
nova-scheduler.log (898 bytes, text/plain), 2020-07-06 09:47 UTC, chhu
flavor_watchdog.png (95.03 KB, image/png), 2020-07-07 01:45 UTC, chhu
flavor-watchdog-update.png (47.30 KB, image/png), 2020-07-07 01:50 UTC, chhu
flavor-watchdog-update-step2.png (46.16 KB, image/png), 2020-07-07 01:52 UTC, chhu


Links
Launchpad 1887099: None, last updated 2020-07-10 00:44:25 UTC
OpenStack gerrit 740384: MERGED, Fix metadefs for compute-watchdog, last updated 2021-02-17 10:53:39 UTC
OpenStack gerrit 749091: MERGED, Fix metadefs for compute-watchdog, last updated 2021-02-17 10:53:39 UTC
Red Hat Issue Tracker OSP-745: None, last updated 2022-09-05 13:21:30 UTC
Red Hat Product Errata RHEA-2021:3483: None, last updated 2021-09-15 07:09:06 UTC

Description chhu 2020-06-29 02:35:56 UTC
Description of problem:
Failed to create VM with watchdog by setting flavor metadata

Version-Release number of selected component (if applicable):
openstack-nova-compute-20.2.1-0.20200528080027.1e95025.el8ost.noarch
libvirt-daemon-kvm-6.0.0-23.module+el8.2.1+6955+1e1fca42.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Create an image/volume in OSP16.1, set the image/volume metadata hw_watchdog_action: pause, create flavor m2, and start the VM from the image/volume with the watchdog successfully; the XML is as below:
    <watchdog model='i6300esb' action='pause'>
      <alias name='watchdog0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </watchdog>

2. Set the flavor m2 with hw_watchdog_action: pause (see the CLI sketch below), then try to start the VM from the same image/volume with m2 on the same compute node; it fails with the message:
   "No valid host was found. There are not enough hosts available"
-----------------------------------------------------------------------
[heat-admin@overcloud-controller-0 ~]$ sudo tail -f /var/log/containers/nova/nova-scheduler.log
2020-06-29 02:24:10.513 47 INFO nova.filters [req-a8e4515d-44ef-4020-83d5-9888a5776b8b 317f1c1fe439476dbfb991ec7768d23f 63a5b71d92f74e0ba553e6acca2c747c - default default] Filter AggregateInstanceExtraSpecsFilter returned 0 hosts
2020-06-29 02:24:10.514 47 INFO nova.filters [req-a8e4515d-44ef-4020-83d5-9888a5776b8b 317f1c1fe439476dbfb991ec7768d23f 63a5b71d92f74e0ba553e6acca2c747c - default default] Filtering removed all hosts for the request with instance ID '554daf2f-88f1-4171-ad01-fc236958219d'. Filter results: ['AvailabilityZoneFilter: (start: 4, end: 1)', 'ComputeFilter: (start: 1, end: 1)', 'ComputeCapabilitiesFilter: (start: 1, end: 1)', 'ImagePropertiesFilter: (start: 1, end: 1)', 'ServerGroupAntiAffinityFilter: (start: 1, end: 1)', 'ServerGroupAffinityFilter: (start: 1, end: 1)', 'NUMATopologyFilter: (start: 1, end: 1)', 'AggregateInstanceExtraSpecsFilter: (start: 1, end: 0)']  
------------------------------------------------------------------------

Actual results:
In step 2, the VM fails to start with flavor metadata hw_watchdog_action: pause

Expected results:
In step 2, the VM starts successfully with flavor metadata hw_watchdog_action: pause
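
(A CLI equivalent of steps 1 and 2, for reference. This is a hedged sketch: the image name r8 and flavor name m2 are taken from this report, and the network name is a placeholder.)

$ # step 1: set the image-level property (the un-namespaced key is correct for images)
$ openstack image set --property hw_watchdog_action=pause r8
$ # step 2: the flavor property as the dashboard adds it (un-namespaced key)
$ openstack flavor set --property hw_watchdog_action=pause m2
$ openstack server create --image r8 --flavor m2 --network <network> test-vm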

Comment 1 smooney 2020-07-03 14:35:08 UTC
There is not enough info in this bug to triage it.
When filing a bug you should always include sosreports if available.
The current failure is due to the AggregateInstanceExtraSpecsFilter, which is likely related to how it is configured.

At a minimum we need the metadata for the aggregate and the flavor info, but really we need the controller logs.

Comment 2 chhu 2020-07-06 09:45:48 UTC
Please see the information below:

1.  Create an image in OSP16.1, set the image metadata hw_watchdog_action: pause, create flavor m2, 
    and start the VM from the image/volume with the watchdog successfully; the XML is as below:
    <watchdog model='i6300esb' action='pause'>
      <alias name='watchdog0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </watchdog>

- image: r8 metadata
| properties       | direct_url='file:///var/lib/glance/images/181ea5e5-c345-4200-b205-8a6dd8d66d91', hw_watchdog_action='pause', os_hash_algo='sha512', os_hash_value='527d0a141dc1968b39d5528cf10223bb7849c77d0bc3cc67ed8024789b4c61aec6feaa444cd3ad0a305d80fd4ed1ebdc8f164527dc3d1437d3c04321fd76a78d', os_hidden='False', stores='default_backend' |

- flavor: m2 metadata (no properties)
$ openstack flavor show m2
+----------------------------+-------+
| Field                      | Value |
+----------------------------+-------+
| OS-FLV-DISABLED:disabled   | False |
......
| properties                 |       |
......
  
2. Set the flavor m2 with hw_watchdog_action: pause, 
   then try to start the VM from the same image with m2 on the same compute node; it fails with this message in nova-conductor.log:
   "No valid host was found. There are not enough hosts available"

- flavor: m2 metadata
$ openstack flavor show m2
+----------------------------+---------------------------------+
| Field                      | Value                           |
+----------------------------+---------------------------------+
| OS-FLV-DISABLED:disabled   | False                           |
| OS-FLV-EXT-DATA:ephemeral  | 0                               |
| access_project_ids         | None                            |
| description                | None                            |
| disk                       | 10                              |
| extra_specs                | {'hw_watchdog_action': 'pause'} |
| id                         | 7                               |
| name                       | m2                              |
| os-flavor-access:is_public | True                            |
| properties                 | hw_watchdog_action='pause'      |
| ram                        | 2048                            |
| rxtx_factor                | 1.0                             |
| swap                       | 0                               |
| vcpus                      | 2                               |
+----------------------------+---------------------------------+

$ cat nova-scheduler.log 
2020-07-06 09:39:38.916 50 INFO nova.filters [req-b271c72f-99f6-4d83-927d-1448c2b7bf1e 317f1c1fe439476dbfb991ec7768d23f 63a5b71d92f74e0ba553e6acca2c747c - default default] Filter AggregateInstanceExtraSpecsFilter returned 0 hosts
2020-07-06 09:39:38.917 50 INFO nova.filters [req-b271c72f-99f6-4d83-927d-1448c2b7bf1e 317f1c1fe439476dbfb991ec7768d23f 63a5b71d92f74e0ba553e6acca2c747c - default default] Filtering removed all hosts for the request with instance ID '474ca5db-73f5-436f-9e78-0df943edfc4c'. Filter results: ['AvailabilityZoneFilter: (start: 4, end: 1)', 'ComputeFilter: (start: 1, end: 1)', 'ComputeCapabilitiesFilter: (start: 1, end: 1)', 'ImagePropertiesFilter: (start: 1, end: 1)', 'ServerGroupAntiAffinityFilter: (start: 1, end: 1)', 'ServerGroupAffinityFilter: (start: 1, end: 1)', 'NUMATopologyFilter: (start: 1, end: 1)', 'AggregateInstanceExtraSpecsFilter: (start: 1, end: 0)']

More details are in the attached log files. Thank you!

Comment 3 chhu 2020-07-06 09:46:21 UTC
Created attachment 1700006 [details]
nova-conductor.log

Comment 4 chhu 2020-07-06 09:47:34 UTC
Created attachment 1700007 [details]
nova-scheduler.log

Comment 5 chhu 2020-07-06 10:05:07 UTC
More information:

$ openstack aggregate list
+----+------+-------------------+
| ID | Name | Availability Zone |
+----+------+-------------------+
|  1 | numa | numa              |
|  2 | vgpu | vgpu              |
+----+------+-------------------+

In both step 1 and step 2, the VM is started from the image with Availability Zone: nova selected in the web console.
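
(To capture the aggregate metadata that comment 1 asked for, the aggregate properties can be dumped as follows; a hedged sketch using the aggregate names listed above:)

$ openstack aggregate show numa
$ openstack aggregate show vgpu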

Comment 6 smooney 2020-07-06 11:23:47 UTC
hw_watchdog_action is the way it would be set in an image, not the valid form for a flavor.

Flavors have namespaced extra specs, in this case hw:watchdog_action. Images do not support namespaces,
so we flatten them, converting hw:watchdog_action to hw_watchdog_action.


So, as specified, the hw_watchdog_action in the flavor should be ignored with regard to setting the values in the XML.

However, as hw_watchdog_action is an un-namespaced extra spec, it will be checked by the AggregateInstanceExtraSpecsFilter,
and if it is not listed on a host aggregate it will reject the host.

So I'm closing this as not a bug, as the flavor is invalid and the AggregateInstanceExtraSpecsFilter is correctly filtering the
host. The AggregateInstanceExtraSpecsFilter checks all un-namespaced extra specs, or those that start with aggregate_instance_extra_specs:.

If you change hw_watchdog_action to hw:watchdog_action it should work.
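
(A hedged CLI sketch of that change, using the flavor from this report:)

$ # remove the image-style key and set the namespaced flavor extra spec instead
$ openstack flavor unset --property hw_watchdog_action m2
$ openstack flavor set --property hw:watchdog_action=pause m2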

Comment 7 chhu 2020-07-07 01:37:39 UTC
(In reply to smooney from comment #6)
> hw_watchdog_action is the way it would be set in an image, not the valid
> form for a flavor.
> 
> Flavors have namespaced extra specs, in this case hw:watchdog_action.
> Images do not support namespaces, so we flatten them, converting
> hw:watchdog_action to hw_watchdog_action.
> 
> So, as specified, the hw_watchdog_action in the flavor should be ignored
> with regard to setting the values in the XML.
> 
> However, as hw_watchdog_action is an un-namespaced extra spec, it will be
> checked by the AggregateInstanceExtraSpecsFilter, and if it is not listed
> on a host aggregate it will reject the host.
> 

Hi, Sean

This is the problem: I updated the flavor metadata in the OSP dashboard,
"Update Flavor Metadata -> Watchdog Behavior -> Watchdog Action -> Click + ",
and it added "hw_watchdog_action" to the flavor.

I think at least the dashboard needs a fix, to add hw:watchdog_action rather than hw_watchdog_action;
we can't provide a way to customers and then tell them they can't do things like that.

Regards,
Chenli Hu

Comment 8 chhu 2020-07-07 01:45:23 UTC
Created attachment 1700092 [details]
flavor_watchdog.png

Comment 9 chhu 2020-07-07 01:50:41 UTC
Created attachment 1700093 [details]
flavor-watchdog-update.png

Comment 10 chhu 2020-07-07 01:52:05 UTC
Created attachment 1700094 [details]
flavor-watchdog-update-step2.png

Comment 11 chhu 2020-07-07 06:05:55 UTC
I am moving this to openstack-dashboard-theme; please correct it if that's not suitable, thank you!

Comment 12 smooney 2020-07-07 12:49:12 UTC
This is a glance bug:
https://github.com/openstack/glance/blob/master/etc/metadefs/compute-watchdog.json

    "resource_type_associations": [
        {
            "name": "OS::Glance::Image"
        },
        {
            "name": "OS::Cinder::Volume",
            "properties_target": "image"
        },
        {
            "name": "OS::Nova::Flavor"
        }
    ],

should be

   "resource_type_associations": [
        {
            "name": "OS::Glance::Image",
            "prefix": "hw_"
        },
        {
            "name": "OS::Nova::Flavor",
            "prefix": "hw:"
        }
    ],

and the hw_ prefix should be dropped from the property.

https://github.com/openstack/glance/blob/master/etc/metadefs/compute-watchdog.json#L20

Horizon generates the UI from the glance metadefs.
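
(The current, incorrect definition can be inspected straight from the deployment; a hedged sketch, where the namespace name OS::Compute::Watchdog is assumed from the compute-watchdog.json file:)

$ glance md-namespace-show OS::Compute::Watchdog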

Comment 13 Cyril Roelandt 2020-07-09 17:48:46 UTC
Are we supposed to remove OS::Cinder::Volume as well? Or is the patch below OK?

To be honest, I'm not sure I understand the issue very well. Can you open a bug upstream? If so, I'll make sure the relevant patch gets backported.

diff --git a/etc/metadefs/compute-watchdog.json b/etc/metadefs/compute-watchdog.json
index a8e9e43a..cbe39696 100644
--- a/etc/metadefs/compute-watchdog.json
+++ b/etc/metadefs/compute-watchdog.json
@@ -6,18 +6,20 @@
     "protected": true,
     "resource_type_associations": [
         {
-            "name": "OS::Glance::Image"
+            "name": "OS::Glance::Image",
+            "prefix": "hw_"
         },
         {
             "name": "OS::Cinder::Volume",
             "properties_target": "image"
         },
         {
-            "name": "OS::Nova::Flavor"
+            "name": "OS::Nova::Flavor",
+            "prefix": "hw:"
         }
     ],
     "properties": {
-        "hw_watchdog_action": {
+        "watchdog_action": {
             "title": "Watchdog Action",
             "description": "For the libvirt driver, you can enable and set the behavior of a virtual hardware watchdog device for each flavor. Watchdog devices keep an eye on the guest server, and carry out the configured action, if the server hangs. The watchdog uses the i6300esb device (emulating a PCI Intel 6300ESB). If hw_watchdog_action is not specified, the watchdog is disabled. Watchdog behavior set using a specific image's properties will override behavior set using flavors.",
             "type": "string",

Comment 14 smooney 2020-07-09 20:34:41 UTC
Image metadata stored in the volume is valid, but it would have the same prefix as an image, e.g. "hw_".
I'm not sure if you need to specify that or not.

Basically, Horizon uses the glance metadata API to retrieve a catalog of metadata keys (and in some cases validations), which it then uses to dynamically generate UIs.

So if you ever wondered where the UI for setting image metadata or flavor extra specs comes from, it's glance's metadef API.
Glance was selected to maintain the catalog of metadata definitions just after I started working on OpenStack, about 6 years ago.
https://github.com/openstack/glance/commit/1c242032fbb26fed3a82691abb030583b4f8940b
This file was added in that initial commit and it looks like it has always been wrong.

The watchdog is a very rarely used feature, and it is even rarer for it to be configured via Horizon.

So basically, since the metadef was incorrect, the UI Horizon generated for flavors used the wrong (un-namespaced) key.

Based on the example response in the API docs, I believe the volume definition should look like this:

{
    "name": "OS::Cinder::Volume",
    "prefix": "hw_",
    "properties_target": "image"
}

See https://docs.openstack.org/glance/latest/user/glancemetadefcatalogapi.html#retrieve-namespace

And yes, feel free to file an upstream bug to track this.

This is a glance feature that many people don't know is there.
The original intent was that clients could hit that endpoint and automatically figure out what metadata was valid, and generate validation, command lines, and documentation/UI elements.
Which is why incorrect behavior in Horizon needs to be fixed in glance.
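
(The raw namespace, including the resource_type_associations, can also be fetched directly from the metadefs API described in the linked docs; a hedged sketch, where the endpoint and port are glance defaults and the namespace name is assumed:)

$ TOKEN=$(openstack token issue -f value -c id)
$ curl -s -H "X-Auth-Token: $TOKEN" http://<glance-host>:9292/v2/metadefs/namespaces/OS::Compute::Watchdog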

Comment 15 Cyril Roelandt 2020-07-10 00:58:05 UTC
OK I sent a patch upstream (see https://review.opendev.org/#/c/740384) and added you to the reviewer list.

Comment 22 Cyril Roelandt 2020-11-10 20:11:33 UTC
@Giulio: Yes, that is not going to be fixed upstream, so it's a downstream-only backport.

Comment 26 Mike Abrams 2021-05-03 08:15:57 UTC
Fixed-in version is present:

(undercloud) [stack@undercloud-0 ~]$ openstack server list
+--------------------------------------+--------------+--------+------------------------+----------------+------------+
| ID                                   | Name         | Status | Networks               | Image          | Flavor     |
+--------------------------------------+--------------+--------+------------------------+----------------+------------+
| 1ad66836-99b1-4e5d-ab78-e521a39780e9 | controller-2 | ACTIVE | ctlplane=192.168.24.44 | overcloud-full | controller |
| 6749fcfb-ceff-4e3c-a618-b0d019dbb425 | controller-1 | ACTIVE | ctlplane=192.168.24.35 | overcloud-full | controller |
| 92b0213c-ed0f-4afd-9b80-d72a52741920 | controller-0 | ACTIVE | ctlplane=192.168.24.8  | overcloud-full | controller |
| 195a1d33-d0aa-477a-8b7c-c4c5c83b9cd3 | ceph-1       | ACTIVE | ctlplane=192.168.24.24 | overcloud-full | ceph       |
| 7470264a-7530-46ce-a5c4-9fd8102a360e | compute-1    | ACTIVE | ctlplane=192.168.24.53 | overcloud-full | compute    |
| 020f924d-4a11-4ab7-a119-b84dd0beac1b | ceph-0       | ACTIVE | ctlplane=192.168.24.32 | overcloud-full | ceph       |
| 3cd4d8fe-fcfe-4278-af6b-a50e58618796 | compute-0    | ACTIVE | ctlplane=192.168.24.22 | overcloud-full | compute    |
| 77b006d0-7565-4c83-abfe-5aa4a19028cc | ceph-2       | ACTIVE | ctlplane=192.168.24.30 | overcloud-full | ceph       |
+--------------------------------------+--------------+--------+------------------------+----------------+------------+
(undercloud) [stack@undercloud-0 ~]$ ssh -t heat-admin@192.168.24.8 "sudo podman exec -it -u root glance_api sh -c 'rpm -qa openstack-glance '"
Warning: Permanently added '192.168.24.8' (ECDSA) to the list of known hosts.
openstack-glance-19.0.4-2.20210216215005.5bbd356.el8ost.1.noarch
Connection to 192.168.24.8 closed.
(undercloud) [stack@undercloud-0 ~]$ rhos-release -L
Installed repositories (rhel-8.4):
  16.2
  ceph-4
  ceph-osd-4
  rhel-8.4
(undercloud) [stack@undercloud-0 ~]$ cat /var/lib/rhos-release/latest-installed 
16.2  -p RHOS-16.2-RHEL-8-20210409.n.0
(undercloud) [stack@undercloud-0 ~]$

Comment 27 Mike Abrams 2021-05-03 09:00:16 UTC
### update the image metadata property hw_watchdog_action to pause:

(overcloud) [stack@undercloud-0 ~]$ glance image-update 0a713897-9ca0-4d5d-bd3e-15fd58c53ab1 --property hw_watchdog_action=pause
+--------------------+----------------------------------------------------------------------------------+
| Property           | Value                                                                            |
+--------------------+----------------------------------------------------------------------------------+
| checksum           | f8ab98ff5e73ebab884d80c9dc9c7290                                                 |
| container_format   | bare                                                                             |
| created_at         | 2021-05-03T08:23:25Z                                                             |
| direct_url         | rbd://55d6fcc0-1b38-4104-b4e3-6b2d51684abf/images/0a713897-9ca0-4d5d-bd3e-15fd58 |
|                    | c53ab1/snap                                                                      |
| disk_format        | qcow2                                                                            |
| hw_watchdog_action | pause                                                                            |
| id                 | 0a713897-9ca0-4d5d-bd3e-15fd58c53ab1                                             |
| locations          | [{"url": "rbd://55d6fcc0-1b38-4104-b4e3-6b2d51684abf/images/0a713897-9ca0-4d5d-b |
|                    | d3e-15fd58c53ab1/snap", "metadata": {"store": "default_backend"}}]               |
| min_disk           | 0                                                                                |
| min_ram            | 0                                                                                |
| name               | cirros-test                                                                      |
| os_hash_algo       | sha512                                                                           |
| os_hash_value      | f0fd1b50420dce4ca382ccfbb528eef3a38bbeff00b54e95e3876b9bafe7ed2d6f919ca35d9046d4 |
|                    | 37c6d2d8698b1174a335fbd66035bb3edc525d2cdb187232                                 |
| os_hidden          | False                                                                            |
| owner              | 37a276001bec47da8a5737a873060314                                                 |
| protected          | False                                                                            |
| size               | 13267968                                                                         |
| status             | active                                                                           |
| stores             | default_backend                                                                  |
| tags               | []                                                                               |
| updated_at         | 2021-05-03T08:28:50Z                                                             |
| virtual_size       | Not available                                                                    |
| visibility         | shared                                                                           |
+--------------------+----------------------------------------------------------------------------------+

### boot a VM from the hw_watchdog_action=pause image:

(overcloud) [stack@undercloud-0 ~]$ nova boot --nic net-id=1e330cdd-70d6-4a17-8a82-0deb4bf2b712 --flavor m2 --image 0a713897-9ca0-4d5d-bd3e-15fd58c53ab1 test-004
+--------------------------------------+----------------------------------------------------+
| Property                             | Value                                              |
+--------------------------------------+----------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                             |
| OS-EXT-AZ:availability_zone          |                                                    |
| OS-EXT-SRV-ATTR:host                 | -                                                  |
| OS-EXT-SRV-ATTR:hostname             | test-004                                           |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                  |
| OS-EXT-SRV-ATTR:instance_name        |                                                    |
| OS-EXT-SRV-ATTR:kernel_id            |                                                    |
| OS-EXT-SRV-ATTR:launch_index         | 0                                                  |
| OS-EXT-SRV-ATTR:ramdisk_id           |                                                    |
| OS-EXT-SRV-ATTR:reservation_id       | r-gi6b1c6n                                         |
| OS-EXT-SRV-ATTR:root_device_name     | -                                                  |
| OS-EXT-SRV-ATTR:user_data            | -                                                  |
| OS-EXT-STS:power_state               | 0                                                  |
| OS-EXT-STS:task_state                | scheduling                                         |
| OS-EXT-STS:vm_state                  | building                                           |
| OS-SRV-USG:launched_at               | -                                                  |
| OS-SRV-USG:terminated_at             | -                                                  |
| accessIPv4                           |                                                    |
| accessIPv6                           |                                                    |
| adminPass                            | GhWe6kCR4nML                                       |
| config_drive                         |                                                    |
| created                              | 2021-05-03T08:47:49Z                               |
| description                          | -                                                  |
| flavor:disk                          | 10                                                 |
| flavor:ephemeral                     | 0                                                  |
| flavor:extra_specs                   | {}                                                 |
| flavor:original_name                 | m2                                                 |
| flavor:ram                           | 2048                                               |
| flavor:swap                          | 2048                                               |
| flavor:vcpus                         | 2                                                  |
| hostId                               |                                                    |
| host_status                          |                                                    |
| id                                   | 16c7156b-4c3b-40a7-94ac-fa5b134d9426               |
| image                                | cirros-test (0a713897-9ca0-4d5d-bd3e-15fd58c53ab1) |
| key_name                             | -                                                  |
| locked                               | False                                              |
| locked_reason                        | -                                                  |
| metadata                             | {}                                                 |
| name                                 | test-004                                           |
| os-extended-volumes:volumes_attached | []                                                 |
| progress                             | 0                                                  |
| security_groups                      | default                                            |
| server_groups                        | []                                                 |
| status                               | BUILD                                              |
| tags                                 | []                                                 |
| tenant_id                            | 37a276001bec47da8a5737a873060314                   |
| trusted_image_certificates           | -                                                  |
| updated                              | 2021-05-03T08:47:49Z                               |
| user_id                              | 4519a8c03ded4cf2b9e80e4c177c74dc                   |
+--------------------------------------+----------------------------------------------------+
### VM is active:

(overcloud) [stack@undercloud-0 ~]$ nova list
+--------------------------------------+----------+--------+------------+-------------+------------------------------------------+
| ID                                   | Name     | Status | Task State | Power State | Networks                                 |
+--------------------------------------+----------+--------+------------+-------------+------------------------------------------+
| 0d251d46-e5db-45d0-962f-b3f3a1c2323d | test-002 | ACTIVE | -          | Running     | storage=172.17.3.172                     |
| 6e09dee8-cb33-4538-b2b2-967c34fbdc89 | test-003 | ACTIVE | -          | Running     | storage=172.17.3.180                     |
| 16c7156b-4c3b-40a7-94ac-fa5b134d9426 | test-004 | ACTIVE | -          | Running     | storage=172.17.3.226                     |
| 7479c6a1-0200-4a8d-a5cb-8d73b38f5177 | test1    | ACTIVE | -          | Running     | nova=10.0.0.164, 2620:52:0:13b8::1000:6d |
+--------------------------------------+----------+--------+------------+-------------+------------------------------------------+
(overcloud) [stack@undercloud-0 ~]$ openstack flavor show m2
+----------------------------+--------------------------------------+
| Field                      | Value                                |
+----------------------------+--------------------------------------+
| OS-FLV-DISABLED:disabled   | False                                |
| OS-FLV-EXT-DATA:ephemeral  | 0                                    |
| access_project_ids         | None                                 |
| description                | None                                 |
| disk                       | 10                                   |
| extra_specs                | {}                                   |
| id                         | e5300b6a-40d3-4b89-94d9-575f51de0289 |
| name                       | m2                                   |
| os-flavor-access:is_public | True                                 |
| properties                 |                                      |
| ram                        | 2048                                 |
| rxtx_factor                | 1.0                                  |
| swap                       | 2048                                 |
| vcpus                      | 2                                    |
+----------------------------+--------------------------------------+
(overcloud) [stack@undercloud-0 ~]$ 
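
(With the corrected metadefs, the flavor-based path can be exercised with the namespaced key as well; a hedged sketch, not part of the verification output above:)

$ openstack flavor set --property hw:watchdog_action=pause m2
$ nova boot --nic net-id=1e330cdd-70d6-4a17-8a82-0deb4bf2b712 --flavor m2 --image 0a713897-9ca0-4d5d-bd3e-15fd58c53ab1 test-flavor-watchdog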

=== no issues in nova-scheduler.log:

(overcloud) [stack@undercloud-0 ~]$ sudo tail -30 /var/log/containers/nova/nova-scheduler.log
2021-05-03 08:55:49.736 14 DEBUG oslo_concurrency.lockutils [req-f7e3fce5-f414-412f-ae78-1213615ace8f - - - - -] Lock "host_instance" released by "nova.scheduler.host_manager.HostManager.sync_instance_info" :: held 0.001s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:339
2021-05-03 08:55:49.736 16 DEBUG nova.scheduler.host_manager [req-f7e3fce5-f414-412f-ae78-1213615ace8f - - - - -] Successfully synced instances from host 'undercloud-0.redhat.local'. sync_instance_info /usr/lib/python3.6/site-packages/nova/scheduler/host_manager.py:960
2021-05-03 08:55:49.737 16 DEBUG oslo_concurrency.lockutils [req-f7e3fce5-f414-412f-ae78-1213615ace8f - - - - -] Lock "host_instance" released by "nova.scheduler.host_manager.HostManager.sync_instance_info" :: held 0.001s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:339
2021-05-03 08:55:49.739 17 DEBUG oslo_concurrency.lockutils [req-f7e3fce5-f414-412f-ae78-1213615ace8f - - - - -] Lock "host_instance" acquired by "nova.scheduler.host_manager.HostManager.sync_instance_info" :: waited 0.000s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:327
2021-05-03 08:55:49.738 15 DEBUG oslo_concurrency.lockutils [req-f7e3fce5-f414-412f-ae78-1213615ace8f - - - - -] Lock "host_instance" acquired by "nova.scheduler.host_manager.HostManager.sync_instance_info" :: waited 0.000s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:327
2021-05-03 08:55:49.739 17 DEBUG nova.scheduler.host_manager [req-f7e3fce5-f414-412f-ae78-1213615ace8f - - - - -] Successfully synced instances from host 'undercloud-0.redhat.local'. sync_instance_info /usr/lib/python3.6/site-packages/nova/scheduler/host_manager.py:960
2021-05-03 08:55:49.740 17 DEBUG oslo_concurrency.lockutils [req-f7e3fce5-f414-412f-ae78-1213615ace8f - - - - -] Lock "host_instance" released by "nova.scheduler.host_manager.HostManager.sync_instance_info" :: held 0.001s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:339
2021-05-03 08:55:49.740 15 DEBUG nova.scheduler.host_manager [req-f7e3fce5-f414-412f-ae78-1213615ace8f - - - - -] Successfully synced instances from host 'undercloud-0.redhat.local'. sync_instance_info /usr/lib/python3.6/site-packages/nova/scheduler/host_manager.py:960
2021-05-03 08:55:49.741 15 DEBUG oslo_concurrency.lockutils [req-f7e3fce5-f414-412f-ae78-1213615ace8f - - - - -] Lock "host_instance" released by "nova.scheduler.host_manager.HostManager.sync_instance_info" :: held 0.003s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:339
2021-05-03 08:55:50.262 15 DEBUG oslo_service.periodic_task [req-73267992-910c-48f4-8f88-fedec0f81f65 - - - - -] Running periodic task SchedulerManager._run_periodic_tasks run_periodic_tasks /usr/lib/python3.6/site-packages/oslo_service/periodic_task.py:217
2021-05-03 08:56:00.263 16 DEBUG oslo_service.periodic_task [req-da9546e6-f165-4377-8b68-1d94b8242fc3 - - - - -] Running periodic task SchedulerManager._run_periodic_tasks run_periodic_tasks /usr/lib/python3.6/site-packages/oslo_service/periodic_task.py:217
2021-05-03 08:56:12.264 14 DEBUG oslo_service.periodic_task [req-bc0780c2-3e61-4b84-b04c-5a775ba19884 - - - - -] Running periodic task SchedulerManager._run_periodic_tasks run_periodic_tasks /usr/lib/python3.6/site-packages/oslo_service/periodic_task.py:217
2021-05-03 08:56:34.263 17 DEBUG oslo_service.periodic_task [req-42bd6b32-7a4b-46c5-b76f-e944d109d496 - - - - -] Running periodic task SchedulerManager._run_periodic_tasks run_periodic_tasks /usr/lib/python3.6/site-packages/oslo_service/periodic_task.py:217
2021-05-03 08:56:51.262 15 DEBUG oslo_service.periodic_task [req-73267992-910c-48f4-8f88-fedec0f81f65 - - - - -] Running periodic task SchedulerManager._run_periodic_tasks run_periodic_tasks /usr/lib/python3.6/site-packages/oslo_service/periodic_task.py:217
2021-05-03 08:57:02.260 16 DEBUG oslo_service.periodic_task [req-da9546e6-f165-4377-8b68-1d94b8242fc3 - - - - -] Running periodic task SchedulerManager._run_periodic_tasks run_periodic_tasks /usr/lib/python3.6/site-packages/oslo_service/periodic_task.py:217
2021-05-03 08:57:14.266 14 DEBUG oslo_service.periodic_task [req-bc0780c2-3e61-4b84-b04c-5a775ba19884 - - - - -] Running periodic task SchedulerManager._run_periodic_tasks run_periodic_tasks /usr/lib/python3.6/site-packages/oslo_service/periodic_task.py:217
2021-05-03 08:57:35.263 17 DEBUG oslo_service.periodic_task [req-42bd6b32-7a4b-46c5-b76f-e944d109d496 - - - - -] Running periodic task SchedulerManager._run_periodic_tasks run_periodic_tasks /usr/lib/python3.6/site-packages/oslo_service/periodic_task.py:217
2021-05-03 08:57:52.215 16 DEBUG oslo_concurrency.lockutils [req-3aa26953-9954-45ed-8006-fc7f23e6b15d - - - - -] Lock "host_instance" acquired by "nova.scheduler.host_manager.HostManager.sync_instance_info" :: waited 0.000s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:327
2021-05-03 08:57:52.215 17 DEBUG oslo_concurrency.lockutils [req-3aa26953-9954-45ed-8006-fc7f23e6b15d - - - - -] Lock "host_instance" acquired by "nova.scheduler.host_manager.HostManager.sync_instance_info" :: waited 0.000s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:327
2021-05-03 08:57:52.217 16 DEBUG nova.scheduler.host_manager [req-3aa26953-9954-45ed-8006-fc7f23e6b15d - - - - -] Successfully synced instances from host 'undercloud-0.redhat.local'. sync_instance_info /usr/lib/python3.6/site-packages/nova/scheduler/host_manager.py:960
2021-05-03 08:57:52.217 17 DEBUG nova.scheduler.host_manager [req-3aa26953-9954-45ed-8006-fc7f23e6b15d - - - - -] Successfully synced instances from host 'undercloud-0.redhat.local'. sync_instance_info /usr/lib/python3.6/site-packages/nova/scheduler/host_manager.py:960
2021-05-03 08:57:52.218 16 DEBUG oslo_concurrency.lockutils [req-3aa26953-9954-45ed-8006-fc7f23e6b15d - - - - -] Lock "host_instance" released by "nova.scheduler.host_manager.HostManager.sync_instance_info" :: held 0.003s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:339
2021-05-03 08:57:52.218 17 DEBUG oslo_concurrency.lockutils [req-3aa26953-9954-45ed-8006-fc7f23e6b15d - - - - -] Lock "host_instance" released by "nova.scheduler.host_manager.HostManager.sync_instance_info" :: held 0.003s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:339
2021-05-03 08:57:52.219 14 DEBUG oslo_concurrency.lockutils [req-3aa26953-9954-45ed-8006-fc7f23e6b15d - - - - -] Lock "host_instance" acquired by "nova.scheduler.host_manager.HostManager.sync_instance_info" :: waited 0.000s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:327
2021-05-03 08:57:52.219 14 DEBUG nova.scheduler.host_manager [req-3aa26953-9954-45ed-8006-fc7f23e6b15d - - - - -] Successfully synced instances from host 'undercloud-0.redhat.local'. sync_instance_info /usr/lib/python3.6/site-packages/nova/scheduler/host_manager.py:960
2021-05-03 08:57:52.220 14 DEBUG oslo_concurrency.lockutils [req-3aa26953-9954-45ed-8006-fc7f23e6b15d - - - - -] Lock "host_instance" released by "nova.scheduler.host_manager.HostManager.sync_instance_info" :: held 0.001s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:339
2021-05-03 08:57:52.221 15 DEBUG oslo_concurrency.lockutils [req-3aa26953-9954-45ed-8006-fc7f23e6b15d - - - - -] Lock "host_instance" acquired by "nova.scheduler.host_manager.HostManager.sync_instance_info" :: waited 0.000s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:327
2021-05-03 08:57:52.222 15 DEBUG nova.scheduler.host_manager [req-3aa26953-9954-45ed-8006-fc7f23e6b15d - - - - -] Successfully synced instances from host 'undercloud-0.redhat.local'. sync_instance_info /usr/lib/python3.6/site-packages/nova/scheduler/host_manager.py:960
2021-05-03 08:57:52.223 15 DEBUG oslo_concurrency.lockutils [req-3aa26953-9954-45ed-8006-fc7f23e6b15d - - - - -] Lock "host_instance" released by "nova.scheduler.host_manager.HostManager.sync_instance_info" :: held 0.001s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:339
2021-05-03 08:57:52.262 15 DEBUG oslo_service.periodic_task [req-73267992-910c-48f4-8f88-fedec0f81f65 - - - - -] Running periodic task SchedulerManager._run_periodic_tasks run_periodic_tasks /usr/lib/python3.6/site-packages/oslo_service/periodic_task.py:217
(overcloud) [stack@undercloud-0 ~]$ 

VERIFIED

Comment 30 errata-xmlrpc 2021-09-15 07:08:44 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat OpenStack Platform (RHOSP) 16.2 enhancement advisory), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2021:3483

