Bug 1929840 - Failed to create VM with watchdog by setting flavor metadata
Summary: Failed to create VM with watchdog by setting flavor metadata
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-glance
Version: 13.0 (Queens)
Hardware: x86_64
OS: Linux
Priority: high
Severity: medium
Target Milestone: z16
Target Release: 13.0 (Queens)
Assignee: Cyril Roelandt
QA Contact: Mike Abrams
Docs Contact: Andy Stillman
URL:
Whiteboard:
Depends On: 1851797
Blocks:
 
Reported: 2021-02-17 18:44 UTC by Cyril Roelandt
Modified: 2022-08-30 12:14 UTC (History)
15 users

Fixed In Version: openstack-glance-16.0.1-12.el7ost
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1851797
Environment:
Last Closed: 2021-06-16 10:58:58 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker OSP-1642 0 None None None 2022-08-30 12:14:31 UTC
Red Hat Product Errata RHBA-2021:2385 0 None Closed [bug] tempest test [telemetry_tempest_plugin.scenario.test_gnocchi.GnocchiGabbiTest.test_live] FAILED 2022-06-03 09:25:59 UTC

Description Cyril Roelandt 2021-02-17 18:44:45 UTC
+++ This bug was initially created as a clone of Bug #1851797 +++

Description of problem:
Failed to create VM with watchdog by setting flavor metadata

Version-Release number of selected component (if applicable):
openstack-nova-compute-20.2.1-0.20200528080027.1e95025.el8ost.noarch
libvirt-daemon-kvm-6.0.0-23.module+el8.2.1+6955+1e1fca42.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Create an image/volume in OSP16.1, set the image/volume metadata hw_watchdog_action: pause, create flavor m2, and start the VM from the image/volume; the watchdog is created successfully, with XML as below:
    <watchdog model='i6300esb' action='pause'>
      <alias name='watchdog0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </watchdog>

2. Set the flavor m2 with hw_watchdog_action: pause, then try to start the VM from the same image/volume with m2 on the same compute node; it fails with the message:
   "No valid host was found. There are not enough hosts available"
-----------------------------------------------------------------------
[heat-admin@overcloud-controller-0 ~]$ sudo tail -f /var/log/containers/nova/nova-scheduler.log
2020-06-29 02:24:10.513 47 INFO nova.filters [req-a8e4515d-44ef-4020-83d5-9888a5776b8b 317f1c1fe439476dbfb991ec7768d23f 63a5b71d92f74e0ba553e6acca2c747c - default default] Filter AggregateInstanceExtraSpecsFilter returned 0 hosts
2020-06-29 02:24:10.514 47 INFO nova.filters [req-a8e4515d-44ef-4020-83d5-9888a5776b8b 317f1c1fe439476dbfb991ec7768d23f 63a5b71d92f74e0ba553e6acca2c747c - default default] Filtering removed all hosts for the request with instance ID '554daf2f-88f1-4171-ad01-fc236958219d'. Filter results: ['AvailabilityZoneFilter: (start: 4, end: 1)', 'ComputeFilter: (start: 1, end: 1)', 'ComputeCapabilitiesFilter: (start: 1, end: 1)', 'ImagePropertiesFilter: (start: 1, end: 1)', 'ServerGroupAntiAffinityFilter: (start: 1, end: 1)', 'ServerGroupAffinityFilter: (start: 1, end: 1)', 'NUMATopologyFilter: (start: 1, end: 1)', 'AggregateInstanceExtraSpecsFilter: (start: 1, end: 0)']  
------------------------------------------------------------------------
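As a side note on reading such traces: each entry in the "Filter results" list shows the host count before and after a filter, so the filter whose count ends at 0 is the one that eliminated the request. A small hypothetical helper (not part of Nova) to pull that out of a trace:

```python
import re

def eliminating_filters(filter_results):
    """Given the scheduler's 'Filter results' entries (strings like
    'AggregateInstanceExtraSpecsFilter: (start: 1, end: 0)'), return
    the names of the filters that ended with zero hosts."""
    culprits = []
    for entry in filter_results:
        m = re.match(r"(\w+): \(start: (\d+), end: (\d+)\)", entry)
        if m and int(m.group(3)) == 0:
            culprits.append(m.group(1))
    return culprits

trace = [
    "AvailabilityZoneFilter: (start: 4, end: 1)",
    "ComputeFilter: (start: 1, end: 1)",
    "AggregateInstanceExtraSpecsFilter: (start: 1, end: 0)",
]
print(eliminating_filters(trace))  # -> ['AggregateInstanceExtraSpecsFilter']
```

Applied to the log above, this points straight at AggregateInstanceExtraSpecsFilter, which is where the triage below starts.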

Actual results:
In step 2, starting the VM with flavor metadata hw_watchdog_action: pause fails.

Expected results:
In step 2, the VM should start successfully with flavor metadata hw_watchdog_action: pause.

--- Additional comment from  on 2020-07-03 14:35:08 UTC ---

There is not enough info in this bug to triage it.
When filing a bug you should always include sosreports if available.
The current failure is due to the AggregateInstanceExtraSpecsFilter, which is likely related to how it is configured.

At a minimum we need the metadata for the aggregate and the flavor info, but really we need the controller logs.

--- Additional comment from  on 2020-07-06 09:45:48 UTC ---

Please see the information as below:

1.  Create an image in OSP16.1, set the image metadata hw_watchdog_action: pause, create flavor m2, 
    start the VM from the image/volume with watchdog successfully; the XML is as below:
    <watchdog model='i6300esb' action='pause'>
      <alias name='watchdog0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </watchdog>

- image: r8 metadata
| properties       | direct_url='file:///var/lib/glance/images/181ea5e5-c345-4200-b205-8a6dd8d66d91', hw_watchdog_action='pause', os_hash_algo='sha512', os_hash_value='527d0a141dc1968b39d5528cf10223bb7849c77d0bc3cc67ed8024789b4c61aec6feaa444cd3ad0a305d80fd4ed1ebdc8f164527dc3d1437d3c04321fd76a78d', os_hidden='False', stores='default_backend' |

- flavor: m2 metadata (no properties)
$ openstack flavor show m2
+----------------------------+-------+
| Field                      | Value |
+----------------------------+-------+
| OS-FLV-DISABLED:disabled   | False |
......
| properties                 |       |
......
  
2. Set the flavor m2 with hw_watchdog_action: pause, 
   then try to start the VM from the same image with m2 on the same compute node; it fails with this message in nova-conductor.log:
   "No valid host was found. There are not enough hosts available"

- flavor: m2 metadata
$ openstack flavor show m2
+----------------------------+---------------------------------+
| Field                      | Value                           |
+----------------------------+---------------------------------+
| OS-FLV-DISABLED:disabled   | False                           |
| OS-FLV-EXT-DATA:ephemeral  | 0                               |
| access_project_ids         | None                            |
| description                | None                            |
| disk                       | 10                              |
| extra_specs                | {'hw_watchdog_action': 'pause'} |
| id                         | 7                               |
| name                       | m2                              |
| os-flavor-access:is_public | True                            |
| properties                 | hw_watchdog_action='pause'      |
| ram                        | 2048                            |
| rxtx_factor                | 1.0                             |
| swap                       | 0                               |
| vcpus                      | 2                               |
+----------------------------+---------------------------------+

$ cat nova-scheduler.log 
2020-07-06 09:39:38.916 50 INFO nova.filters [req-b271c72f-99f6-4d83-927d-1448c2b7bf1e 317f1c1fe439476dbfb991ec7768d23f 63a5b71d92f74e0ba553e6acca2c747c - default default] Filter AggregateInstanceExtraSpecsFilter returned 0 hosts
2020-07-06 09:39:38.917 50 INFO nova.filters [req-b271c72f-99f6-4d83-927d-1448c2b7bf1e 317f1c1fe439476dbfb991ec7768d23f 63a5b71d92f74e0ba553e6acca2c747c - default default] Filtering removed all hosts for the request with instance ID '474ca5db-73f5-436f-9e78-0df943edfc4c'. Filter results: ['AvailabilityZoneFilter: (start: 4, end: 1)', 'ComputeFilter: (start: 1, end: 1)', 'ComputeCapabilitiesFilter: (start: 1, end: 1)', 'ImagePropertiesFilter: (start: 1, end: 1)', 'ServerGroupAntiAffinityFilter: (start: 1, end: 1)', 'ServerGroupAffinityFilter: (start: 1, end: 1)', 'NUMATopologyFilter: (start: 1, end: 1)', 'AggregateInstanceExtraSpecsFilter: (start: 1, end: 0)']

More details in the attached log file. Thank you!

--- Additional comment from  on 2020-07-06 10:05:07 UTC ---

More information:

$ openstack aggregate list
+----+------+-------------------+
| ID | Name | Availability Zone |
+----+------+-------------------+
|  1 | numa | numa              |
|  2 | vgpu | vgpu              |
+----+------+-------------------+

In steps 1 and 2, we start the VM from the image and select Availability Zone: nova in the web console.

--- Additional comment from  on 2020-07-06 11:23:47 UTC ---

hw_watchdog_action is the way it would be set in an image, not the valid form for a flavor.

Flavors have namespaced extra specs, in this case hw:watchdog_action; images do not support namespaces, so we flatten them, converting hw:watchdog_action to hw_watchdog_action.

So as specified, the hw_watchdog_action in the flavor should be ignored with regard to setting the values in the XML.

However, as hw_watchdog_action is an un-namespaced extra spec, it will be checked by the AggregateInstanceExtraSpecsFilter, and if it is not listed on a host aggregate the filter will reject the host.

So I'm closing this as not a bug, as the flavor is invalid and the AggregateInstanceExtraSpecsFilter is correctly filtering the host. The AggregateInstanceExtraSpecsFilter checks all un-namespaced extra specs and those that start with aggregate_instance_extra_specs:.

If you change hw_watchdog_action to hw:watchdog_action it should work.
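The two behaviors described in this comment can be sketched in a few lines (a rough illustration, not Nova's actual code; both function names are made up):

```python
def flatten_for_image(flavor_key):
    """Images do not support namespaces, so the namespaced flavor
    extra spec hw:watchdog_action becomes hw_watchdog_action."""
    return flavor_key.replace(":", "_", 1)

def checked_by_aggregate_filter(extra_spec_key):
    """Sketch of the rule above: AggregateInstanceExtraSpecsFilter
    checks un-namespaced keys and keys scoped with
    aggregate_instance_extra_specs:, and ignores the rest."""
    if extra_spec_key.startswith("aggregate_instance_extra_specs:"):
        return True
    return ":" not in extra_spec_key

print(flatten_for_image("hw:watchdog_action"))            # hw_watchdog_action
print(checked_by_aggregate_filter("hw_watchdog_action"))  # True: host rejected unless the aggregate lists it
print(checked_by_aggregate_filter("hw:watchdog_action"))  # False: ignored by the filter
```

This is why the un-namespaced key on the flavor causes "No valid host was found": it is treated as an aggregate requirement rather than a watchdog setting.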

--- Additional comment from  on 2020-07-07 01:37:39 UTC ---

(In reply to smooney from comment #6)
> hw_watchdog_action is the way it would be set in an image, not the valid
> form for a flavor.
> 
> Flavors have namespaced extra specs, in this case hw:watchdog_action;
> images do not support namespaces, so we flatten them, converting
> hw:watchdog_action to hw_watchdog_action.
> 
> So as specified, the hw_watchdog_action in the flavor should be ignored
> with regard to setting the values in the XML.
> 
> However, as hw_watchdog_action is an un-namespaced extra spec, it will be
> checked by the AggregateInstanceExtraSpecsFilter, and if it is not listed
> on a host aggregate the filter will reject the host.
> 

Hi, Sean

This is the problem: I updated the flavor metadata on the OSP dashboard
("Update Flavor Metadata -> Watchdog Behavior -> Watchdog Action -> click +"),
and it added "hw_watchdog_action" to the flavor.

I think at least the dashboard needs a fix, to add hw:watchdog_action rather than hw_watchdog_action;
we can't provide a way to customers and then tell them they can't do things like that.

Regards,
Chenli Hu

--- Additional comment from  on 2020-07-07 06:05:55 UTC ---

I am moving this to openstack-dashboard-theme; please correct it if that is not the right component. Thank you!

--- Additional comment from  on 2020-07-07 12:49:12 UTC ---

this is a glance bug
https://github.com/openstack/glance/blob/master/etc/metadefs/compute-watchdog.json

    "resource_type_associations": [
        {
            "name": "OS::Glance::Image"
        },
        {
            "name": "OS::Cinder::Volume",
            "properties_target": "image"
        },
        {
            "name": "OS::Nova::Flavor"
        }
    ],

should be

   "resource_type_associations": [
        {
            "name": "OS::Glance::Image",
            "prefix": "hw_"
        },
        {
            "name": "OS::Nova::Flavor",
            "prefix": "hw:"
        }
    ],

and the hw_ prefix should be dropped from the property.

https://github.com/openstack/glance/blob/master/etc/metadefs/compute-watchdog.json#L20

Horizon generates the UI from the Glance metadefs.
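As a rough sketch of what that means for a metadef consumer such as Horizon (hypothetical helper, assuming the corrected resource_type_associations above): the client combines the per-resource-type prefix with the bare property name to produce the final key.

```python
def key_for(resource_associations, resource_type, prop):
    """Apply the metadef prefix for a resource type to a bare
    property name, the way a metadef-consuming client would."""
    for assoc in resource_associations:
        if assoc["name"] == resource_type:
            return assoc.get("prefix", "") + prop
    raise KeyError(resource_type)

associations = [
    {"name": "OS::Glance::Image", "prefix": "hw_"},
    {"name": "OS::Nova::Flavor", "prefix": "hw:"},
]
print(key_for(associations, "OS::Glance::Image", "watchdog_action"))  # hw_watchdog_action
print(key_for(associations, "OS::Nova::Flavor", "watchdog_action"))   # hw:watchdog_action
```

With the original (prefix-less) metadef and the hw_ baked into the property name, both resource types got hw_watchdog_action, which is valid for images but not for flavors.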

--- Additional comment from Cyril Roelandt on 2020-07-09 17:48:46 UTC ---

Are we supposed to remove OS::Cinder::Volume as well? Or is the patch below OK?

To be honest, I'm not sure I understand the issue very well. Can you open a bug upstream? If so, I'll make sure the relevant patch gets backported.

diff --git a/etc/metadefs/compute-watchdog.json b/etc/metadefs/compute-watchdog.json
index a8e9e43a..cbe39696 100644
--- a/etc/metadefs/compute-watchdog.json
+++ b/etc/metadefs/compute-watchdog.json
@@ -6,18 +6,20 @@
     "protected": true,
     "resource_type_associations": [
         {
-            "name": "OS::Glance::Image"
+            "name": "OS::Glance::Image",
+            "prefix": "hw_"
         },
         {
             "name": "OS::Cinder::Volume",
             "properties_target": "image"
         },
         {
-            "name": "OS::Nova::Flavor"
+            "name": "OS::Nova::Flavor",
+            "prefix": "hw:"
         }
     ],
     "properties": {
-        "hw_watchdog_action": {
+        "watchdog_action": {
             "title": "Watchdog Action",
             "description": "For the libvirt driver, you can enable and set the behavior of a virtual hardware watchdog device for each flavor. Watchdog devices keep an eye on the guest server, and carry out the configured action, if the server hangs. The watchdog uses the i6300esb device (emulating a PCI Intel 6300ESB). If hw_watchdog_action is not specified, the watchdog is disabled. Watchdog behavior set using a specific image's properties will override behavior set using flavors.",
             "type": "string",

--- Additional comment from  on 2020-07-09 20:34:41 UTC ---

Image metadata stored in the volume is valid, but it would have the same prefix as the image, e.g. "hw_";
I'm not sure if you need to specify that or not.

Basically, Horizon uses the Glance metadata API to retrieve a catalog of metadata keys, and in some cases validations, which it then uses to dynamically generate UIs.

So if you ever wondered where the UI you get for setting image metadata or flavor extra specs comes from, it's Glance's metadef API.
Glance was selected to maintain the catalog of metadata definitions just after I started working on OpenStack, about 6 years ago.
https://github.com/openstack/glance/commit/1c242032fbb26fed3a82691abb030583b4f8940b
This file was added in that initial commit, and it looks like it has always been wrong.

The watchdog is a very rarely used feature, and it is even rarer for it to be configured via Horizon.

So basically, since the metadef was incorrect, Horizon generated the invalid un-namespaced key for flavor extra specs.

Based on the example response in the API docs, I believe the volume definition should look like this:

{
    "name": "OS::Cinder::Volume",
    "prefix": "hw_",
    "properties_target": "image"
}

see https://docs.openstack.org/glance/latest/user/glancemetadefcatalogapi.html#retrieve-namespace

and yes feel free to file an upstream bug to track this.

This is a Glance feature that many people don't know is there.
The original intent was that clients could hit that endpoint and automatically figure out what metadata was valid, and generate validation, command lines, and documentation/UI elements.
That is why incorrect behavior in Horizon needs to be fixed in Glance.

--- Additional comment from Cyril Roelandt on 2020-07-10 00:58:05 UTC ---

OK I sent a patch upstream (see https://review.opendev.org/#/c/740384) and added you to the reviewer list.

--- Additional comment from RHEL Program Management on 2020-09-24 20:39:29 UTC ---

This bugzilla has been removed from the release since it does not have an acked release flag. For details, see https://mojo.redhat.com/docs/DOC-1144661#jive_content_id_OSP_Release_Planning.

--- Additional comment from RHEL Program Management on 2020-10-01 19:35:46 UTC ---

This bugzilla has been removed from the release since it does not have an acked release flag. For details, see https://mojo.redhat.com/docs/DOC-1144661#jive_content_id_OSP_Release_Planning.

--- Additional comment from RHEL Program Management on 2020-10-01 19:43:42 UTC ---

This bugzilla has been removed from the release since it does not have an acked release flag. For details, see https://mojo.redhat.com/docs/DOC-1144661#jive_content_id_OSP_Release_Planning.

--- Additional comment from RHEL Program Management on 2020-10-05 15:24:12 UTC ---

This item has been properly Triaged and planned for the release, and Target Release is now set to match the release flag. For details, see https://mojo.redhat.com/docs/DOC-1195410

--- Additional comment from Giulio Fidente on 2020-11-03 09:55:58 UTC ---

Cyril, I don't see the stable/train backport; is that on purpose? If so, I think we need the downstream cherry-pick.

--- Additional comment from RHEL Program Management on 2020-11-09 17:02:09 UTC ---

This item has been properly Triaged and planned for the release, and Target Release is now set to match the release flag. For details, see https://mojo.redhat.com/docs/DOC-1195410

--- Additional comment from Cyril Roelandt on 2020-11-10 20:11:33 UTC ---

@Giulio: Yes, that is not going to be fixed upstream, so it's a downstream-only backport.

Comment 15 Mike Abrams 2021-06-06 07:01:23 UTC
=== fixed in version present
(undercloud) [stack@undercloud-0 ~]$ rpm -qa openstack-glance
openstack-glance-16.0.1-12.el7ost.noarch
(undercloud) [stack@undercloud-0 ~]$
(undercloud) [stack@undercloud-0 ~]$ rhos-release -L
Installed repositories (rhel-7.9):
  13
  ceph-3
  ceph-osd-3
  rhel-7.9
(undercloud) [stack@undercloud-0 ~]$ cat /var/lib/rhos-release/latest-installed 
13  -p 2021-05-13.1
(undercloud) [stack@undercloud-0 ~]$

=== import image
(overcloud) [stack@undercloud-0 ~]$ glance image-create-via-import --container-format ami --disk-format ami --name cirros-test --import-method web-download --uri http://download.cirros-cloud.net/0.5.1/cirros-0.5.1-x86_64-disk.img
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | None                                 |
| container_format | ami                                  |
| created_at       | 2021-06-06T06:45:32Z                 |
| disk_format      | ami                                  |
| id               | d95222ba-48de-4692-ac54-fb91e5462170 |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | cirros-test                          |
| owner            | eed49df33d8a479aa93ee001cf928cdc     |
| protected        | False                                |
| size             | None                                 |
| status           | importing                            |
| tags             | []                                   |
| updated_at       | 2021-06-06T06:45:34Z                 |
| virtual_size     | None                                 |
| visibility       | shared                               |
+------------------+--------------------------------------+
(overcloud) [stack@undercloud-0 ~]$ glance image-show d95222ba-48de-4692-ac54-fb91e5462170 | grep status
| status           | active                                                          |
(overcloud) [stack@undercloud-0 ~]$

=== set watchdog action
(overcloud) [stack@undercloud-0 ~]$ glance image-update d95222ba-48de-4692-ac54-fb91e5462170 --property hw_watchdog_action=pause
+--------------------+-----------------------------------------------------------------+
| Property           | Value                                                           |
+--------------------+-----------------------------------------------------------------+
| checksum           | 1d3062cd89af34e419f7100277f38b2b                                |
| container_format   | ami                                                             |
| created_at         | 2021-06-06T06:45:32Z                                            |
| direct_url         | swift+config://ref1/glance/d95222ba-48de-4692-ac54-fb91e5462170 |
| disk_format        | ami                                                             |
| hw_watchdog_action | pause                                                           |
| id                 | d95222ba-48de-4692-ac54-fb91e5462170                            |
| min_disk           | 0                                                               |
| min_ram            | 0                                                               |
| name               | cirros-test                                                     |
| owner              | eed49df33d8a479aa93ee001cf928cdc                                |
| protected          | False                                                           |
| size               | 16338944                                                        |
| status             | active                                                          |
| tags               | []                                                              |
| updated_at         | 2021-06-06T06:47:08Z                                            |
| virtual_size       | None                                                            |
| visibility         | shared                                                          |
+--------------------+-----------------------------------------------------------------+
(overcloud) [stack@undercloud-0 ~]$

=== boot VM from the hw_watchdog_action=pause image
(overcloud) [stack@undercloud-0 ~]$ glance image-list
+--------------------------------------+-------------+
| ID                                   | Name        |
+--------------------------------------+-------------+
| d95222ba-48de-4692-ac54-fb91e5462170 | cirros-test |
+--------------------------------------+-------------+
(overcloud) [stack@undercloud-0 ~]$ neutron net-list
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
+--------------------------------------+--------+----------------------------------+--------------------------------------------------+
| id                                   | name   | tenant_id                        | subnets                                          |
+--------------------------------------+--------+----------------------------------+--------------------------------------------------+
| 6c60e560-d88e-4587-85e1-7d0517257a5f | public | eed49df33d8a479aa93ee001cf928cdc | bef7ab90-bf45-4730-add7-eef531721c9f 10.0.0.0/24 |
+--------------------------------------+--------+----------------------------------+--------------------------------------------------+
(overcloud) [stack@undercloud-0 ~]$ nova flavor-list
+--------------------------------------+---------+-----------+------+-----------+------+-------+-------------+-----------+-------------+
| ID                                   | Name    | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public | Description |
+--------------------------------------+---------+-----------+------+-----------+------+-------+-------------+-----------+-------------+
| b97f88d8-9d11-4193-8580-d18069920b3a | m2.tiny | 1024      | 10   | 0         | 2048 | 1     | 1.0         | True      | -           |
+--------------------------------------+---------+-----------+------+-----------+------+-------+-------------+-----------+-------------+
(overcloud) [stack@undercloud-0 ~]$ nova boot --nic net-id=6c60e560-d88e-4587-85e1-7d0517257a5f --flavor m2.tiny --image d95222ba-48de-4692-ac54-fb91e5462170 cirros-test-001
+--------------------------------------+----------------------------------------------------+
| Property                             | Value                                              |
+--------------------------------------+----------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                             |
| OS-EXT-AZ:availability_zone          |                                                    |
| OS-EXT-SRV-ATTR:host                 | -                                                  |
| OS-EXT-SRV-ATTR:hostname             | cirros-test-001                                    |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                  |
| OS-EXT-SRV-ATTR:instance_name        |                                                    |
| OS-EXT-SRV-ATTR:kernel_id            |                                                    |
| OS-EXT-SRV-ATTR:launch_index         | 0                                                  |
| OS-EXT-SRV-ATTR:ramdisk_id           |                                                    |
| OS-EXT-SRV-ATTR:reservation_id       | r-93eov2ld                                         |
| OS-EXT-SRV-ATTR:root_device_name     | -                                                  |
| OS-EXT-SRV-ATTR:user_data            | -                                                  |
| OS-EXT-STS:power_state               | 0                                                  |
| OS-EXT-STS:task_state                | scheduling                                         |
| OS-EXT-STS:vm_state                  | building                                           |
| OS-SRV-USG:launched_at               | -                                                  |
| OS-SRV-USG:terminated_at             | -                                                  |
| accessIPv4                           |                                                    |
| accessIPv6                           |                                                    |
| adminPass                            | oAdeXj8krscF                                       |
| config_drive                         |                                                    |
| created                              | 2021-06-06T06:53:02Z                               |
| description                          | -                                                  |
| flavor:disk                          | 10                                                 |
| flavor:ephemeral                     | 0                                                  |
| flavor:extra_specs                   | {}                                                 |
| flavor:original_name                 | m2.tiny                                            |
| flavor:ram                           | 1024                                               |
| flavor:swap                          | 2048                                               |
| flavor:vcpus                         | 1                                                  |
| hostId                               |                                                    |
| host_status                          |                                                    |
| id                                   | 10f2259a-52a1-4e02-a47b-89ec625c6561               |
| image                                | cirros-test (d95222ba-48de-4692-ac54-fb91e5462170) |
| key_name                             | -                                                  |
| locked                               | False                                              |
| metadata                             | {}                                                 |
| name                                 | cirros-test-001                                    |
| os-extended-volumes:volumes_attached | []                                                 |
| progress                             | 0                                                  |
| security_groups                      | default                                            |
| status                               | BUILD                                              |
| tags                                 | []                                                 |
| tenant_id                            | eed49df33d8a479aa93ee001cf928cdc                   |
| updated                              | 2021-06-06T06:53:02Z                               |
| user_id                              | abb669b50d674eadaf4874e87723cdbe                   |
+--------------------------------------+----------------------------------------------------+
(overcloud) [stack@undercloud-0 ~]$ nova list
+--------------------------------------+-----------------+--------+------------+-------------+----------+
| ID                                   | Name            | Status | Task State | Power State | Networks |
+--------------------------------------+-----------------+--------+------------+-------------+----------+
| 10f2259a-52a1-4e02-a47b-89ec625c6561 | cirros-test-001 | BUILD  | spawning   | NOSTATE     |          |
+--------------------------------------+-----------------+--------+------------+-------------+----------+
(overcloud) [stack@undercloud-0 ~]$ nova list
+--------------------------------------+-----------------+--------+------------+-------------+-------------------+
| ID                                   | Name            | Status | Task State | Power State | Networks          |
+--------------------------------------+-----------------+--------+------------+-------------+-------------------+
| 10f2259a-52a1-4e02-a47b-89ec625c6561 | cirros-test-001 | ACTIVE | -          | Running     | public=10.0.0.217 |
+--------------------------------------+-----------------+--------+------------+-------------+-------------------+
(overcloud) [stack@undercloud-0 ~]$ 

=== no errors in nova-scheduler.log
(undercloud) [stack@undercloud-0 ~]$ openstack server list
+--------------------------------------+----------------+--------+------------------------+----------------+------------+
| ID                                   | Name           | Status | Networks               | Image          | Flavor     |
+--------------------------------------+----------------+--------+------------------------+----------------+------------+
| 03628e93-c82a-4545-8689-a947f0a3daf5 | controller13-1 | ACTIVE | ctlplane=192.168.24.10 | overcloud-full | controller |
| 7deb5f5e-bba4-4b5d-8023-3a6edbd87d98 | compute13-1    | ACTIVE | ctlplane=192.168.24.38 | overcloud-full | compute    |
| 225e84a6-c0fa-4c25-8f01-b1b6a1e7aaa8 | controller13-2 | ACTIVE | ctlplane=192.168.24.14 | overcloud-full | controller |
| 702ad596-c84a-4176-a1c2-5e74bc7c2b0d | controller13-0 | ACTIVE | ctlplane=192.168.24.7  | overcloud-full | controller |
| 9eaeccee-ee38-4b41-a2b5-84cbe54dde8c | compute13-0    | ACTIVE | ctlplane=192.168.24.16 | overcloud-full | compute    |
+--------------------------------------+----------------+--------+------------------------+----------------+------------+
(undercloud) [stack@undercloud-0 ~]$ for i in 7 10 14; do echo "=== 192.168.24.$i"; ssh -t heat-admin@192.168.24.$i "sudo egrep -i 'warn|err' /var/log/containers/nova/nova-scheduler.log"; done
=== 192.168.24.7
Warning: Permanently added '192.168.24.7' (ECDSA) to the list of known hosts.
Connection to 192.168.24.7 closed.
=== 192.168.24.10
Warning: Permanently added '192.168.24.10' (ECDSA) to the list of known hosts.
Connection to 192.168.24.10 closed.
=== 192.168.24.14
Warning: Permanently added '192.168.24.14' (ECDSA) to the list of known hosts.
Connection to 192.168.24.14 closed.
(undercloud) [stack@undercloud-0 ~]$

Comment 19 errata-xmlrpc 2021-06-16 10:58:58 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat OpenStack Platform 13.0 bug fix and enhancement advisory), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:2385

