Bug 2182371 - [17.1][OVN HWOFFLOAD][CX-5, CX-6 & BlueField-2] VMs with offload VFs take a long time to boot up [10 min+]
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openvswitch
Version: 17.1 (Wallaby)
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: high
Target Milestone: ga
Target Release: 17.1
Assignee: Haresh Khandelwal
QA Contact: Miguel Angel Nieto
URL:
Whiteboard:
Depends On: 2172622 2217881
Blocks:
 
Reported: 2023-03-28 12:56 UTC by Miguel Angel Nieto
Modified: 2023-08-21 14:36 UTC
CC: 18 users

Fixed In Version:
Doc Type: Known Issue
Doc Text:
There is currently a known issue with guest instances that use Mellanox ConnectX-5, ConnectX-6, and BlueField-2 NICs with offload (switchdev) ports. Initialization takes a long time when you reboot the operating system from the guest directly, for example, by using the command `sudo systemctl reboot --reboot-arg=now`. If the instance is configured with two Virtual Functions (VFs) from the same Physical Function (PF), the initialization of one of the VFs might fail and cause a longer initialization time.
Workaround: Reboot the guest instance by using the OpenStack API instead of rebooting it directly from the guest.
Clone Of:
Environment:
Last Closed: 2023-08-16 01:14:34 UTC
Target Upstream Version:
Embargoed:
gurpsing: needinfo-




Links
System ID Last Updated
Red Hat Issue Tracker NFV-2793 2023-03-28 13:30:50 UTC
Red Hat Issue Tracker OSP-23787 2023-03-28 12:57:45 UTC
Red Hat Product Errata RHEA-2023:4577 2023-08-16 01:15:11 UTC

Description Miguel Angel Nieto 2023-03-28 12:56:47 UTC
Description of problem:

When I spawn a VM with OVS hardware-offload VFs, the first boot is very fast (10-20 seconds). But when I reboot it, it takes a long time, up to 10 minutes. From the logs, this is unrelated to DHCP; it is related to the mlx5 drivers. The delay is proportional to the number of offload VFs: with 2 VFs it takes much longer. I have tested several guest RHEL versions (7.6, 8.7, 9.2), and it happens with all of them.
This is the dmesg output. The gap in the timeline occurs around the mlx5 driver messages.

GUEST logs (rhel 9.2)
[    3.874337] sr 0:0:0:0: Attached scsi CD-ROM sr0
[    3.966794] mlx5_core 0000:04:00.0: Rate limit: 127 rates are supported, range: 0Mbps to 97656Mbps
[   27.919765] mlx5_core 0000:04:00.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0)
[  123.899080] mlx5_core 0000:04:00.0: Supported tc offload range - chains: 1, prios: 1
[  123.928305] SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
[  123.932072] XFS (vda4): Mounting V5 Filesystem
..
..
..
[  125.168237] [drm] Initialized virtio_gpu 0.1.0 0 for virtio0 on minor 0
[  125.173757] virtio_gpu virtio0: [drm] drm_plane_enable_fb_damage_clips() not called
[  125.173785] Console: switching to colour frame buffer device 160x50
[  125.183133] virtio_gpu virtio0: [drm] fb0: virtio_gpudrmfb frame buffer device
[  143.735873] mlx5_core 0000:04:00.0 eth0: Link up
[  144.094779] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[  190.800001] ISO 9660 Extensions: Microsoft Joliet Level 3
[  190.804790] ISO 9660 Extensions: RRIP_1991A


This is the nic configured:
[cloud-user@tempest-testnfvoffload-server-1473061677 ~]$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8942 qdisc mq state UP group default qlen 1000
    link/ether fa:16:3e:6c:11:c2 brd ff:ff:ff:ff:ff:ff
    altname enp4s0
    inet 20.20.220.155/24 brd 20.20.220.255 scope global dynamic noprefixroute eth0
       valid_lft 42593sec preferred_lft 42593sec
    inet6 fe80::f816:3eff:fe6c:11c2/64 scope link 
       valid_lft forever preferred_lft forever

[cloud-user@tempest-testnfvoffload-server-1473061677 ~]$ time ethtool -i eth0
driver: mlx5_core
version: 5.14.0-283.el9.x86_64
firmware-version: 16.35.2000 (MT_0000000008)
expansion-rom-version: 
bus-info: 0000:04:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: yes


OSP logs
**********************
(overcloud) [stack@undercloud-0 ~]$ openstack server list --all-projects
+--------------------------------------+------------------------------------------+--------+------------------------------------------------------+----------------------------------------------------+--------------------+
| ID                                   | Name                                     | Status | Networks                                             | Image                                              | Flavor             |
+--------------------------------------+------------------------------------------+--------+------------------------------------------------------+----------------------------------------------------+--------------------+
| 0708333a-05e8-42e6-af96-d9085d1b7044 | tempest-TestNfvOffload-server-1473061677 | ACTIVE | mellanox-geneve-provider=10.46.228.41, 20.20.220.155 | rhel-guest-image-nfv-3-9.2-20230306.4.x86_64.qcow2 | nfv_qe_base_flavor |
| 9c7c1158-f272-452b-b54d-37644987ac90 | tempest-TestNfvOffload-server-1582254805 | ACTIVE | mellanox-geneve-provider=10.46.228.33, 20.20.220.152 | rhel-guest-image-nfv-3-9.2-20230306.4.x86_64.qcow2 | nfv_qe_base_flavor |
+--------------------------------------+------------------------------------------+--------+------------------------------------------------------+----------------------------------------------------+--------------------+
(overcloud) [stack@undercloud-0 ~]$ openstack port list
+--------------------------------------+-------------------------------+-------------------+------------------------------------------------------------------------------+--------+
| ID                                   | Name                          | MAC Address       | Fixed IP Addresses                                                           | Status |
+--------------------------------------+-------------------------------+-------------------+------------------------------------------------------------------------------+--------+
| 192698fe-c80e-4549-bbae-be7aef9257d1 |                               | fa:16:3e:f6:34:71 | ip_address='10.46.228.33', subnet_id='59afd61a-54af-4f9d-a5cc-c8275ed09206'  | N/A    |
| 361a7279-8f49-4f09-9d9d-2fd6a860f9e0 | tempest-port-smoke-1919032438 | fa:16:3e:79:2f:c5 | ip_address='20.20.220.152', subnet_id='7518ee7f-886b-4c44-ad06-171691d29089' | ACTIVE |
| 88c34cdf-39cb-4e67-8c6c-8fcef2e18f72 |                               | fa:16:3e:c8:ff:49 | ip_address='10.46.228.35', subnet_id='59afd61a-54af-4f9d-a5cc-c8275ed09206'  | ACTIVE |
| 914603d5-a441-43bd-bf5c-23a9b5eb9be9 |                               | fa:16:3e:22:ae:67 | ip_address='20.20.220.100', subnet_id='7518ee7f-886b-4c44-ad06-171691d29089' | DOWN   |
| c27e914d-0334-444b-8534-b14d95b95af4 | tempest-port-smoke-400397490  | fa:16:3e:6c:11:c2 | ip_address='20.20.220.155', subnet_id='7518ee7f-886b-4c44-ad06-171691d29089' | ACTIVE |
| e98594b0-f817-485f-8731-726c4c551815 |                               | fa:16:3e:67:4d:c9 | ip_address='20.20.220.254', subnet_id='7518ee7f-886b-4c44-ad06-171691d29089' | ACTIVE |
| ff1c06d4-bc4b-4a4a-9dcb-3acb52b7b7c9 |                               | fa:16:3e:c1:10:62 | ip_address='10.46.228.41', subnet_id='59afd61a-54af-4f9d-a5cc-c8275ed09206'  | N/A    |
+--------------------------------------+-------------------------------+-------------------+------------------------------------------------------------------------------+--------+

(overcloud) [stack@undercloud-0 ~]$ openstack port show c27e914d-0334-444b-8534-b14d95b95af4
+-------------------------+-------------------------------------------------------------------------------------------------------+
| Field                   | Value                                                                                                 |
+-------------------------+-------------------------------------------------------------------------------------------------------+
| admin_state_up          | UP                                                                                                    |
| allowed_address_pairs   |                                                                                                       |
| binding_host_id         | computehwoffload-r730.localdomain                                                                     |
| binding_profile         | capabilities='['switchdev']', pci_slot='0000:04:00.7', pci_vendor_info='15b3:1018', physical_network= |
| binding_vif_details     | connectivity='l2', port_filter='True'                                                                 |
| binding_vif_type        | ovs                                                                                                   |
| binding_vnic_type       | direct                                                                                                |
| created_at              | 2023-03-28T12:21:12Z                                                                                  |
| data_plane_status       | None                                                                                                  |
| description             |                                                                                                       |
| device_id               | 0708333a-05e8-42e6-af96-d9085d1b7044                                                                  |
| device_owner            | compute:nova                                                                                          |
| device_profile          | None                                                                                                  |
| dns_assignment          | None                                                                                                  |
| dns_domain              | None                                                                                                  |
| dns_name                | None                                                                                                  |
| extra_dhcp_opts         |                                                                                                       |
| fixed_ips               | ip_address='20.20.220.155', subnet_id='7518ee7f-886b-4c44-ad06-171691d29089'                          |
| id                      | c27e914d-0334-444b-8534-b14d95b95af4                                                                  |
| ip_allocation           | None                                                                                                  |
| mac_address             | fa:16:3e:6c:11:c2                                                                                     |
| name                    | tempest-port-smoke-400397490                                                                          |
| network_id              | 2eac687d-ef51-48bf-821c-e67a4065f915                                                                  |
| numa_affinity_policy    | None                                                                                                  |
| port_security_enabled   | False                                                                                                 |
| project_id              | 75a9e2807bc5491e9388d6751b997163                                                                      |
| propagate_uplink_status | None                                                                                                  |
| qos_network_policy_id   | None                                                                                                  |
| qos_policy_id           | None                                                                                                  |
| resource_request        | None                                                                                                  |
| revision_number         | 5                                                                                                     |
| security_group_ids      |                                                                                                       |
| status                  | ACTIVE                                                                                                |
| tags                    |                                                                                                       |
| trunk_details           | None                                                                                                  |
| updated_at              | 2023-03-28T12:21:25Z                                                                                  |
+-------------------------+-------------------------------------------------------------------------------------------------------+
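Note the capabilities='['switchdev']' entry in binding_profile above: it marks this as a hardware-offload (switchdev) VF port. For reference, a hedged sketch of how such a port and server are created (network, image, and flavor names are taken from the listings above; the port and server names are hypothetical):

(overcloud) [stack@undercloud-0 ~]$ openstack port create --network mellanox-geneve-provider \
      --vnic-type direct --binding-profile '{"capabilities": ["switchdev"]}' offload-port-1
(overcloud) [stack@undercloud-0 ~]$ openstack server create --flavor nfv_qe_base_flavor \
      --image rhel-guest-image-nfv-3-9.2-20230306.4.x86_64.qcow2 --port offload-port-1 offload-vm-1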


Compute config
****************************
14: enp4s0f0np0: <BROADCAST,MULTICAST,PROMISC,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master mx-bond state UP mode DEFAULT group default qlen 1000
    link/ether ec:0d:9a:7d:7d:2a brd ff:ff:ff:ff:ff:ff
    vf 0     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off
    vf 1     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off
    vf 2     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off
    vf 3     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off
    vf 4     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off
    vf 5     link/ether fa:16:3e:6c:11:c2 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off
    vf 6     link/ether d6:3b:1b:17:15:c2 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off
    vf 7     link/ether 92:e2:16:fc:ad:1b brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off
    vf 8     link/ether 16:e6:56:5a:97:af brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off
    vf 9     link/ether 46:96:b4:4f:e4:d0 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off
15: enp4s0f1np1: <BROADCAST,MULTICAST,PROMISC,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master mx-bond state UP mode DEFAULT group default qlen 1000
    link/ether ec:0d:9a:7d:7d:2a brd ff:ff:ff:ff:ff:ff permaddr ec:0d:9a:7d:7d:2b
    vf 0     link/ether a2:3e:a2:a0:3b:96 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off
    vf 1     link/ether de:70:ff:d3:16:28 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off
    vf 2     link/ether 0e:ec:5e:cb:6c:da brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off
    vf 3     link/ether c6:6a:61:02:7e:05 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off
    vf 4     link/ether 02:6a:69:c3:af:29 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off
    vf 5     link/ether 62:d7:72:1d:bb:6e brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off
    vf 6     link/ether e2:9e:48:60:8d:5a brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off
    vf 7     link/ether 76:82:6a:81:8f:2f brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off
    vf 8     link/ether ae:fc:14:90:1d:a9 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off
    vf 9     link/ether a6:77:08:db:a0:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off

[root@computehwoffload-r730 tripleo-admin]# ethtool -i enp4s0f0np0
driver: mlx5_core
version: 5.14.0-284.el9.x86_64
firmware-version: 16.35.2000 (MT_0000000008)
expansion-rom-version: 
bus-info: 0000:04:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: yes





Version-Release number of selected component (if applicable):
RHOS-17.1-RHEL-9-20230315.n.1

How reproducible:
1. Deploy the OVN hardware-offload templates
2. Spawn a VM with 1 or more offload VFs
3. Reboot the VM from inside the guest
4. Check dmesg to see how long the boot took (sketched below)
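A hedged sketch of steps 3 and 4, using the guest address from the listing above (boot time is read from the bracketed dmesg timestamps):

$ ssh cloud-user@20.20.220.155 'sudo reboot'              # step 3: in-guest reboot (the slow path)
$ ssh cloud-user@20.20.220.155 'dmesg | grep mlx5_core'   # step 4: look for gaps between timestamps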



Actual results:
Reboots take a very long and unpredictable time


Expected results:
It should reboot faster


Additional info:
I will add more logs.

Comment 13 Pradipta Kumar Sahoo 2023-05-17 12:26:32 UTC
In the NFV Perf lab, this issue is also reproduced using a Mellanox CX-6 100G card. Please find the console log below, and let me know if there is any workaround available to fix the issue.

$ openstack console log show VM-SRIOV-PERF202 |grep mlx5_core
/usr/lib/python3.9/site-packages/openstack/config/cloud_region.py:452: UserWarning: You have a configured API_VERSION with 'latest' in it. In the context of openstacksdk this doesn't make any sense.
  warnings.warn(
[    3.044942] mlx5_core 0000:05:00.0: firmware version: 16.31.1014
[    3.205229] mlx5_core 0000:05:00.0: Rate limit: 127 rates are supported, range: 0Mbps to 97656Mbps
[   28.400464] mlx5_core 0000:05:00.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0)
[  153.443461] mlx5_core 0000:05:00.0: Supported tc offload range - chains: 1, prios: 1
[  153.453334] mlx5_core 0000:06:00.0: firmware version: 16.31.1014
[  153.645515] mlx5_core 0000:06:00.0: Rate limit: 127 rates are supported, range: 0Mbps to 97656Mbps
[  182.879464] mlx5_core 0000:06:00.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0)
[  333.147479] mlx5_core 0000:06:00.0: mlx5e_create_flow_steering:1280:(pid 491): Failed to create arfs tables, err=-12
[  334.163455] mlx5_core 0000:06:00.0: Supported tc offload range - chains: 1, prios: 1
[  352.904469] mlx5_core 0000:05:00.0 eth1: Link up
[  365.831471] mlx5_core 0000:06:00.0 eth2: Link up

$ openstack console log show VM-SRIOV-PERF202 |grep Network
[  OK  ] Reached target Network.
[  OK  ] Stopped target Network.
[  OK  ] Reached target Preparation for Network.
         Starting Network Manager...
[  OK  ] Started Network Manager.
[  OK  ] Reached target Network.
         Starting Network Manager Wait Online...
         Starting Network Manager Script Dispatcher Service...
[  OK  ] Started Network Manager Script Dispatcher Service.
[ ***  ] A start job is running for Network Manager Wait Online (4s / no limit)
         (spinner line redrawn repeatedly, counting up to 26s / no limit)
[  OK  ] Finished Network Manager Wait Online.
         Starting Network Manager Script Dispatcher Service...
[  OK  ] Started Network Manager Script Dispatcher Service.
[  OK  ] Reached target Network is Online.

Comment 14 Miguel Angel Nieto 2023-05-17 14:46:24 UTC
Instead of rebooting from inside the VM, you can execute "openstack server reboot". Done this way, it boots up very fast.
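For example (a sketch; the server name is taken from the server listing in the description; the default is a soft reboot, and --hard power-cycles the instance):

(overcloud) [stack@undercloud-0 ~]$ openstack server reboot tempest-TestNfvOffload-server-1473061677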

(In reply to Pradipta Kumar Sahoo from comment #13)
> [console log quoted in full from comment #13; trimmed here — see comment #13 above]

Comment 17 Amir Tzin (Mellanox) 2023-05-30 13:55:26 UTC
Related NVIDIA networking ticket: #3486582

Comment 19 Anthony Harivel 2023-06-07 09:44:56 UTC
The NFV QE team (Miguel) and I ran a test yesterday in the NFV QE lab on a machine with hardware offloading and 2 VFs from the same Mellanox NIC.

When powered on for the first time, the VM boots fast with the 2 interfaces available.
dmesg shows no problems and the timing is OK:
[    3.768456] mlx5_core 0000:05:00.0: Rate limit: 127 rates are supported, range: 0Mbps to 97656Mbps
[    3.779766] mlx5_core 0000:05:00.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0)
[    3.916796] mlx5_core 0000:05:00.0: Supported tc offload range - chains: 1, prios: 1
[    3.925747] mlx5_core 0000:06:00.0: firmware version: 16.35.2000
[    4.155743] mlx5_core 0000:06:00.0: Rate limit: 127 rates are supported, range: 0Mbps to 97656Mbps
[    4.184669] mlx5_core 0000:06:00.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0)
[    4.320879] mlx5_core 0000:06:00.0: Supported tc offload range - chains: 1, prios: 1

But when we reboot the VM from inside the VM (i.e. sudo reboot):
[  206.937833] mlx5_core 0000:05:00.0: Supported tc offload range - chains: 1, prios: 1
[  222.443961] mlx5_core 0000:06:00.0: firmware version: 16.35.2000
[  222.672245] mlx5_core 0000:06:00.0: Rate limit: 127 rates are supported, range: 0Mbps to 97656Mbps
[  252.943801] mlx5_core 0000:06:00.0: mlx5e_create_netdev:5484:(pid 498): mlx5e_priv_init failed, err=-12
[  252.945817] mlx5_core 0000:06:00.0: mlx5e_probe:5739:(pid 498): mlx5e_create_netdev failed
[  252.947586] mlx5_core.eth: probe of mlx5_core.eth.1 failed with error -12

Note: this run shows more output because I gave systemd-udevd a longer timeout; otherwise systemd-udevd kills the kworker process because it takes too long.
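For reference, a hedged sketch of raising that timeout (assuming the standard udev knobs; 600 is an illustrative value):

# /etc/udev/udev.conf -- the per-event timeout defaults to 180 s
event_timeout=600
# or on the kernel command line: udev.event_timeout=600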

Once finally booted, the VM shows only one interface. 

The error (-12) is -ENOMEM (out of memory). It potentially comes from one of the memory allocations in mlx5e_priv_init() in the mlx driver.
Both host and guest have plenty of free memory (checked with free -h).

As a final test on the rebooted VM, I ran:
$ modprobe -r mlx5_ib
$ modprobe -r mlx5_core
and then: 
$ modprobe mlx5_core

Both interfaces then show up and are functional (IP addresses are set on both interfaces).
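A minimal sketch of that recovery as one sequence (assuming the in-box mlx5 modules; sudo added for a non-root guest user):

$ sudo modprobe -r mlx5_ib      # mlx5_ib depends on mlx5_core, so it goes first
$ sudo modprobe -r mlx5_core
$ sudo modprobe mlx5_core       # reload re-probes the VF that failed with -ENOMEM
$ ip -br addr                   # verify both VF interfaces came back with addresses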

Comment 25 Miguel Angel Nieto 2023-06-15 09:01:18 UTC
The issue also happens with BlueField-2 NICs.


        "businfo": "pci@0000:17:00.0",
        "vendor": "Mellanox Technologies",
        "product": "MT42822 BlueField-2 integrated ConnectX-6 Dx network controller",
        "firmware": "24.35.2000 (MT_0000000765)",

[    2.426770] sr 0:0:0:0: Attached scsi CD-ROM sr0
[    2.427095] mlx5_core 0000:05:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(2048) RxCqeCmprss(0)
[  122.426012] SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
[  122.428219] XFS (vda4): Mounting V5 Filesystem
[  122.438033] XFS (vda4): Ending clean mount
[  152.539807] mlx5_core 0000:05:00.0: Supported tc offload range - chains: 1, prios: 1
[  152.552222] mlx5_core 0000:06:00.0: firmware version: 24.35.2000
[  152.791784] mlx5_core 0000:06:00.0: Rate limit: 127 rates are supported, range: 0Mbps to 24414Mbps
[  180.762476] mlx5_core 0000:06:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(2048) RxCqeCmprss(0)
[  247.522425] mlx5_core 0000:06:00.0: mlx5e_probe:7467:(pid 516): mlx5e_resume failed, -12
[  247.529399] mlx5_core.eth: probe of mlx5_core.eth.1 failed with error -12
[  247.568577] systemd-journald[357]: Received SIGTERM from PID 1 (systemd).
[  247.600277] systemd[1]: systemd 252-13.el9_2 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIB
CRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
[  247.603877] systemd[1]: Detected virtualization kvm.
[  247.604504] systemd[1]: Detected architecture x86-64.
-----
[  248.086587] virtio_gpu virtio0: [drm] drm_plane_enable_fb_damage_clips() not called
[  248.088355] Console: switching to colour frame buffer device 160x50
[  248.092136] virtio_gpu virtio0: [drm] fb0: virtio_gpudrmfb frame buffer device
[  279.266203] mlx5_core 0000:05:00.0 eth1: Link up
[  286.762163] IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
[  492.768727] INFO: task kworker/u32:13:107 blocked for more than 122 seconds.
[  492.769363]       Tainted: G           OE    --------  ---  5.14.0-284.11.1.el9_2.x86_64 #1
[  492.770158] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  492.770880] task:kworker/u32:13  state:D stack:    0 pid:  107 ppid:     2 flags:0x00004000


[cloud-user@trex ~]$ lspci | grep Mell
05:00.0 Ethernet controller: Mellanox Technologies ConnectX Family mlx5Gen Virtual Function (rev 01)
06:00.0 Ethernet controller: Mellanox Technologies ConnectX Family mlx5Gen Virtual Function (rev 01)

Jun 15 04:27:43 trex kernel: mlxfw: loading out-of-tree module taints kernel.
Jun 15 04:27:43 trex kernel: mlxfw: loading out-of-tree module taints kernel.
Jun 15 04:27:43 trex kernel: mlxfw: module verification failed: signature and/or required key missing - tainting kernel
Jun 15 04:27:43 trex kernel: mlx5_core 0000:05:00.0: firmware version: 24.35.2000
Jun 15 04:27:43 trex kernel: mlx5_core 0000:05:00.0: Rate limit: 127 rates are supported, range: 0Mbps to 24414Mbps
Jun 15 04:27:43 trex kernel: mlx5_core 0000:05:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(2048) RxCqeCmprss(0)
Jun 15 04:27:43 trex kernel: mlx5_core 0000:05:00.0: Supported tc offload range - chains: 1, prios: 1
Jun 15 04:27:43 trex kernel: mlx5_core 0000:06:00.0: firmware version: 24.35.2000
Jun 15 04:27:44 trex kernel: mlx5_core 0000:06:00.0: Rate limit: 127 rates are supported, range: 0Mbps to 24414Mbps
Jun 15 04:27:44 trex kernel: mlx5_core 0000:06:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(2048) RxCqeCmprss(0)
Jun 15 04:27:44 trex kernel: mlx5_core 0000:06:00.0: Supported tc offload range - chains: 1, prios: 1
Jun 15 04:27:46 trex kernel: mlx5_core 0000:05:00.0 eth1: Link up
Jun 15 04:27:46 trex kernel: mlx5_core 0000:06:00.0 eth2: Link up
Jun 15 04:28:05 trex root[1990]: openibd: Set node_desc for mlx5_0: trex HCA-1
Jun 15 04:28:05 trex root[1992]: openibd: Set node_desc for mlx5_1: trex HCA-2
Jun 15 04:35:48 trex kernel: mlxfw: loading out-of-tree module taints kernel.
Jun 15 04:35:48 trex kernel: mlxfw: loading out-of-tree module taints kernel.
Jun 15 04:35:48 trex kernel: mlxfw: module verification failed: signature and/or required key missing - tainting kernel
Jun 15 04:35:48 trex kernel: mlx5_core 0000:05:00.0: firmware version: 24.35.2000
Jun 15 04:35:48 trex kernel: mlx5_core 0000:05:00.0: Rate limit: 127 rates are supported, range: 0Mbps to 24414Mbps
Jun 15 04:35:48 trex kernel: mlx5_core 0000:05:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(2048) RxCqeCmprss(0)
Jun 15 04:38:18 trex kernel: mlx5_core 0000:05:00.0: Supported tc offload range - chains: 1, prios: 1
Jun 15 04:38:18 trex kernel: mlx5_core 0000:06:00.0: firmware version: 24.35.2000
Jun 15 04:38:19 trex kernel: mlx5_core 0000:06:00.0: Rate limit: 127 rates are supported, range: 0Mbps to 24414Mbps
Jun 15 04:38:47 trex kernel: mlx5_core 0000:06:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(2048) RxCqeCmprss(0)
Jun 15 04:39:53 trex kernel: mlx5_core 0000:06:00.0: mlx5e_probe:7467:(pid 516): mlx5e_resume failed, -12
Jun 15 04:39:53 trex kernel: mlx5_core.eth: probe of mlx5_core.eth.1 failed with error -12
Jun 15 04:40:14 trex root[1360]: openibd: Set node_desc for mlx5_0: trex HCA-1
Jun 15 04:40:14 trex root[1362]: openibd: Set node_desc for mlx5_1: trex HCA-2
Jun 15 04:40:25 trex kernel: mlx5_core 0000:05:00.0 eth1: Link up

Comment 40 Miguel Angel Nieto 2023-07-14 08:25:05 UTC
Tested RHOS-17.1-RHEL-9-20230712.n.1

[tripleo-admin@computehwoffload-r730 ~]$ uname -r
5.14.0-284.23.1.el9_2.x86_64

ConnectX-5 testing
******************
Spawned a VM with 2 ConnectX-5 VFs (OVS hwoffload setup)

        "vendor": "Mellanox Technologies",
        "product": "MT27800 Family [ConnectX-5]",
        "firmware": "16.27.6120 (DEL0000000015)",

[root@trex cloud-user]# lspci | grep Mellanox
05:00.0 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5 Virtual Function]
06:00.0 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5 Virtual Function]


Rebooting the VM with the OpenStack API takes 20 seconds.
I do not see errors related to CX-5 in the VM, only one error that seems unrelated:
[root@trex cloud-user]# dmesg  | grep error
[    2.146001] shpchp 0000:01:00.0: pci_hp_register failed with error -16

Rebooting the VM from inside the guest (over ssh) takes 14 seconds.
I do not see errors related to CX-5 in the VM, only one error that seems unrelated:
[root@trex cloud-user]# dmesg | grep error
[    2.144956] shpchp 0000:01:00.0: pci_hp_register failed with error -16

No errors on the compute node:
[root@computehwoffload-r730 tripleo-admin]# dmesg | grep error
[root@computehwoffload-r730 tripleo-admin]# 


ConnectX-6 testing
******************
Spawned a VM with 2 ConnectX-6 VFs (OVS hwoffload setup)
        "vendor": "Mellanox Technologies",
        "product": "MT42822 BlueField-2 integrated ConnectX-6 Dx network controller",
        "firmware": "24.35.2000 (MT_0000000765)",

[cloud-user@trex ~]$ lspci | grep Mellanox
05:00.0 Ethernet controller: Mellanox Technologies ConnectX Family mlx5Gen Virtual Function (rev 01)
06:00.0 Ethernet controller: Mellanox Technologies ConnectX Family mlx5Gen Virtual Function (rev 01)


Rebooting the VM with the OpenStack API takes 8 seconds.
I do not see errors related to CX-6 in the VM, only one error that seems unrelated:
[root@trex cloud-user]# dmesg | grep error
[    1.472844] shpchp 0000:01:00.0: pci_hp_register failed with error -16


Rebooting the VM from inside the guest (over ssh) takes 8 seconds.
I do not see errors related to CX-6 in the VM, only one error that seems unrelated:
[cloud-user@trex ~]$ dmesg | grep error
[    1.465660] shpchp 0000:01:00.0: pci_hp_register failed with error -16

Comment 41 Miguel Angel Nieto 2023-07-14 08:50:25 UTC
I forgot to mention the ping latency.

CX5
***
Rebooting VMs with the OpenStack API:
[cloud-user@testpmd-hwoffload-dut ~]$ ping 10.10.179.163
PING 10.10.179.163 (10.10.179.163) 56(84) bytes of data.
64 bytes from 10.10.179.163: icmp_seq=1 ttl=64 time=6.06 ms
64 bytes from 10.10.179.163: icmp_seq=2 ttl=64 time=0.533 ms
64 bytes from 10.10.179.163: icmp_seq=3 ttl=64 time=0.075 ms
64 bytes from 10.10.179.163: icmp_seq=4 ttl=64 time=0.056 ms


Rebooting VMs from inside the guest (over ssh):
[cloud-user@testpmd-hwoffload-dut ~]$ ping 10.10.179.163
PING 10.10.179.163 (10.10.179.163) 56(84) bytes of data.
64 bytes from 10.10.179.163: icmp_seq=1 ttl=64 time=5.65 ms
64 bytes from 10.10.179.163: icmp_seq=2 ttl=64 time=0.098 ms
64 bytes from 10.10.179.163: icmp_seq=3 ttl=64 time=0.057 ms
64 bytes from 10.10.179.163: icmp_seq=4 ttl=64 time=0.068 ms


CX6
***
Rebooting VMs with the OpenStack API:
[cloud-user@trex ~]$ ping 10.10.38.164
PING 10.10.38.164 (10.10.38.164) 56(84) bytes of data.
64 bytes from 10.10.38.164: icmp_seq=1 ttl=64 time=7.25 ms
64 bytes from 10.10.38.164: icmp_seq=2 ttl=64 time=0.087 ms
64 bytes from 10.10.38.164: icmp_seq=3 ttl=64 time=0.051 ms
64 bytes from 10.10.38.164: icmp_seq=4 ttl=64 time=0.048 ms


Rebooting VMs from inside the guest (over ssh):
[cloud-user@trex ~]$ ping 10.10.38.164
PING 10.10.38.164 (10.10.38.164) 56(84) bytes of data.
64 bytes from 10.10.38.164: icmp_seq=1 ttl=64 time=5.36 ms
64 bytes from 10.10.38.164: icmp_seq=2 ttl=64 time=0.085 ms
64 bytes from 10.10.38.164: icmp_seq=3 ttl=64 time=0.045 ms
64 bytes from 10.10.38.164: icmp_seq=4 ttl=64 time=0.047 ms
^C

Comment 50 errata-xmlrpc 2023-08-16 01:14:34 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Release of components for Red Hat OpenStack Platform 17.1 (Wallaby)), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2023:4577

