Bug 1813889 - osp16.1, rhel8.2: ipmitool commands via vbmc (virtualbmc) take too long and cause overcloud introspection to fail
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: ipmitool
Version: 8.2
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: rc
Target Release: 8.0
Assignee: Vaclav Dolezal
QA Contact: Kernel-QE - Hardware
URL:
Whiteboard:
Duplicates: 1813468 (view as bug list)
Depends On:
Blocks: 1813468 1814398
 
Reported: 2020-03-16 12:21 UTC by Waldemar Znoinski
Modified: 2020-03-29 20:28 UTC
CC: 9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-03-19 10:23:53 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments
messages (have some vbmc logs) (328.71 KB, application/zip), 2020-03-16 13:20 UTC, Waldemar Znoinski
ipmitool vvverbose (16.37 KB, text/plain), 2020-03-16 13:24 UTC, Waldemar Znoinski

Description Waldemar Znoinski 2020-03-16 12:21:26 UTC
Description of problem:
- introspection of osp16.1 overcloud (rhel8.2) fails when using virtualbmc
the error during introspection:
2020-03-16 08:54:48.313 | Error contacting Ironic server: Node 8f4a4c72-967d-4f2b-b330-6c8513f69202 is locked by host undercloud-0.redhat.local, please retry after the current operation is completed. (HTTP 409). Attempt 6 of 6
2020-03-16 08:54:48.315 | Node 8f4a4c72-967d-4f2b-b330-6c8513f69202 is locked by host undercloud-0.redhat.local, please retry after the current operation is completed. (HTTP 409)

the 'lock' is caused by the previous ironic/ipmitool command still running;
when tested manually, any ipmitool command (status, power on/off, or anything else) takes over 2 minutes to finish:
[stack@undercloud-0 ~]$ time ipmitool -I lanplus -H 172.16.0.95 -L ADMINISTRATOR -p 6232 -U admin -v -R 12 -N 5 -P password chassis status                                                                                                    
+ ipmitool -I lanplus -H 172.16.0.95 -L ADMINISTRATOR -p 6232 -U admin -v -R 12 -N 5 -P password chassis status
Unable to Get Channel Cipher Suites
Running Get PICMG Properties my_addr 0x20, transit 0, target 0x20
Error response 0xc1 from Get PICMG Properities
Running Get VSO Capabilities my_addr 0x20, transit 0, target 0x20
Invalid completion code received: Invalid command
Discovered IPMB address 0x0
System Power         : on
Power Overload       : false
Power Interlock      : inactive
Main Power Fault     : false
Power Control Fault  : false
Power Restore Policy : always-off
Last Power Event     : 
Chassis Intrusion    : inactive
Front-Panel Lockout  : inactive
Drive Fault          : false
Cooling/Fan Fault    : false

real    2m6.684s
user    0m0.001s
sys     0m0.009s



- CI job showing the issue:
https://rhos-qe-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/view/QE/view/OSP16/job/phase1-16.1_director-rhel-8.2-virthost-1cont_1comp_1ceph-ipv4-geneve-ceph/13/artifact/.sh/05-ooo-overcloud.log/*view*/

Version-Release number of selected component (if applicable):
- rhel & osp:
(vbmc) [stack@undercloud-0 ~]$ cat /etc/*release
NAME="Red Hat Enterprise Linux"
VERSION="8.2 (Ootpa)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="8.2"
PLATFORM_ID="platform:el8"
PRETTY_NAME="Red Hat Enterprise Linux 8.2 Beta (Ootpa)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:8.2:beta"
HOME_URL="https://www.redhat.com/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"

REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 8"
REDHAT_BUGZILLA_PRODUCT_VERSION=8.2
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="8.2 Beta"
Red Hat Enterprise Linux release 8.2 Beta (Ootpa)
Red Hat OpenStack Platform release 16.0.1 (Train)
Red Hat Enterprise Linux release 8.2 Beta (Ootpa)

- director images:
(vbmc) [stack@undercloud-0 ~]$ yum list installed | grep -i rhosp-director
rhosp-director-images.noarch                      16.1-20200311.1.el8ost                          @rhelosp-16.1-trunk
rhosp-director-images-ipa-x86_64.noarch           16.1-20200311.1.el8ost                          @rhelosp-16.1-trunk
rhosp-director-images-x86_64.noarch               16.1-20200311.1.el8ost                          @rhelosp-16.1-trunk

- ipmitool:
(vbmc) [stack@undercloud-0 ~]$ yum list installed | grep -i ipmitool
ipmitool.x86_64                                   1.8.18-14.el8                                   @rhosp-rhel-8.2-appstream

- virtualbmc:
(vbmc) [stack@undercloud-0 ~]$ pip freeze | grep -i bmc
virtualbmc==2.0.0

originally 1.6.0 was installed and showed this issue; after upgrading to 2.0.0 the issue is still there

- libvirt (on the hypervisor, rhel7.6):
[root@sealusa22 libvirt]# yum list installed | grep -i libvirt
fence-virtd-libvirt.x86_64      0.3.2-13.el7            @rhelosp-rhel-7.6-server
libvirt.x86_64                  4.5.0-23.el7_7.5        @rhelosp-rhel-7.6-server
libvirt-bash-completion.x86_64  4.5.0-23.el7_7.5        @rhelosp-rhel-7.6-server
libvirt-client.x86_64           4.5.0-23.el7_7.5        @rhelosp-rhel-7.6-server
libvirt-daemon.x86_64           4.5.0-23.el7_7.5        @rhelosp-rhel-7.6-server
libvirt-daemon-config-network.x86_64
libvirt-daemon-config-nwfilter.x86_64
libvirt-daemon-driver-interface.x86_64
libvirt-daemon-driver-lxc.x86_64
libvirt-daemon-driver-network.x86_64
libvirt-daemon-driver-nodedev.x86_64
libvirt-daemon-driver-nwfilter.x86_64
libvirt-daemon-driver-qemu.x86_64
libvirt-daemon-driver-secret.x86_64
libvirt-daemon-driver-storage.x86_64
libvirt-daemon-driver-storage-core.x86_64
libvirt-daemon-driver-storage-disk.x86_64
libvirt-daemon-driver-storage-gluster.x86_64
libvirt-daemon-driver-storage-iscsi.x86_64
libvirt-daemon-driver-storage-logical.x86_64
libvirt-daemon-driver-storage-mpath.x86_64
libvirt-daemon-driver-storage-rbd.x86_64
libvirt-daemon-driver-storage-scsi.x86_64
libvirt-daemon-kvm.x86_64       4.5.0-23.el7_7.5        @rhelosp-rhel-7.6-server
libvirt-devel.x86_64            4.5.0-23.el7_7.5        @rhelosp-rhel-7.6-server
libvirt-glib.x86_64             1.0.0-1.el7             @rhelosp-rhel-7.6-server
libvirt-libs.x86_64             4.5.0-23.el7_7.5        @rhelosp-rhel-7.6-server
libvirt-python.x86_64           4.5.0-1.el7             @rhelosp-rhel-7.6-server


- sample  vbmc config for overcloud node:
(vbmc) [stack@undercloud-0 ~]$ cat ~/.vbmc/controller-0/config
[VirtualBMC]
username = admin
password = password
address = ::ffff:172.16.0.95
port = 6230
domain_name = controller-0
libvirt_uri = qemu+ssh://root.0.1/system?no_verify=1&no_tty=1
active = True




How reproducible:
100%

Steps to Reproduce:
1. introspect overcloud nodes provided by vbmc
2.
3.

Actual results:
introspection fails

Expected results:
introspection to finish

Additional info:
I have a machine stacked showing the issue which may be used for troubleshooting/fixing

Comment 1 Bob Fournier 2020-03-16 13:02:35 UTC
Hi, can you provide access to the undercloud so we can look at logs?

Comment 2 Dmitry Tantsur 2020-03-16 13:05:22 UTC
Any time I see unexpected delays, I suspect DNS. Could you check that the reverse DNS lookup / DNS lookup for unknown hosts doesn't hang for long?

Comment 3 Dmitry Tantsur 2020-03-16 13:08:27 UTC
Additionally, I could not find vbmc logs in the undercloud tarball, could you provide them?

Comment 4 Bob Fournier 2020-03-16 13:11:01 UTC
Looks like same issue as https://bugzilla.redhat.com/show_bug.cgi?id=1813468

Comment 5 Ilya Etingof 2020-03-16 13:11:02 UTC
Additionally, pasting the result of `ipmitool -v -v -v ...` may be helpful as well.

Comment 6 Waldemar Znoinski 2020-03-16 13:20:13 UTC
Created attachment 1670542 [details]
messages (have some vbmc logs)

Comment 7 Harald Jensås 2020-03-16 13:23:02 UTC
(In reply to Bob Fournier from comment #4)
> Looks like same issue as https://bugzilla.redhat.com/show_bug.cgi?id=1813468

They are two different BMCs. RHBZ#1813468 ^^ uses https://opendev.org/openstack/openstack-virtual-baremetal/src/branch/master/openstack_virtual_baremetal , while this bug is using https://opendev.org/openstack/virtualbmc.

Comment 8 Waldemar Znoinski 2020-03-16 13:24:23 UTC
Created attachment 1670543 [details]
ipmitool vvverbose

Comment 9 Waldemar Znoinski 2020-03-16 13:26:41 UTC
@Bob - I'm working with Dmitry on the affected undercloud now
@Ilya - I've attached the output of ipmitool -v -v -v...

Comment 10 Dmitry Tantsur 2020-03-16 13:30:37 UTC
More specifically, ipmitool seems to loop on repeated instances of


>> Sending IPMI command payload
>>    netfn   : 0x06
>>    command : 0x54
>>    data    : 0x0e 0x00 0x80 

BUILDING A v2 COMMAND
Local RqAddr 0x20 transit 0:0 target 0x20:0 bridgePossible 0


Nothing interesting happens in virtualbmc logs at the same time.

Comment 11 Dmitry Tantsur 2020-03-16 13:31:55 UTC
A curious fact: if I remove the retries, it returns MUCH faster:


$ time ipmitool -I lanplus -H 172.16.0.95 -L ADMINISTRATOR -p 6232 -U admin -P password chassis status                                                                                                    
Unable to Get Channel Cipher Suites
System Power         : on
Power Overload       : false
Power Interlock      : inactive
Main Power Fault     : false
Power Control Fault  : false
Power Restore Policy : always-off
Last Power Event     : 
Chassis Intrusion    : inactive
Front-Panel Lockout  : inactive
Drive Fault          : false
Cooling/Fan Fault    : false

real	0m10.506s
user	0m0.003s
sys	0m0.004s

Comment 12 Bob Fournier 2020-03-16 13:34:45 UTC
Thanks Waldemar, seeing lots of these log messages in the attached file.

Mar 16 08:33:38 undercloud-0 podman[44739]: 2020-03-16 08:33:38.158446199 +0000 UTC m=+0.080653190 container create b4d6dceaa17242e222a571ef752a90b195592f95c6fd6e7de8aadee414ce2522 (image=undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhosp16-openstack-heat-api:16.1_20200311.2, name=heat_api)
Mar 16 08:33:38 undercloud-0 systemd[1]: Reloading.
Mar 16 08:33:38 undercloud-0 systemd[1]: machine.slice: Failed to set cpuset.cpus: Device or resource busy
Mar 16 08:33:38 undercloud-0 systemd[1]: machine.slice: Failed to set cpuset.mems: Device or resource busy
Mar 16 08:33:38 undercloud-0 systemd[1]: libpod-00e589f7d1e6aa3e2061c1daed2901b1bf05edb21f6907285e2f75f2f19cd911.scope: Failed to set cpuset.cpus: No space left on device
Mar 16 08:33:38 undercloud-0 systemd[1]: libpod-00e589f7d1e6aa3e2061c1daed2901b1bf05edb21f6907285e2f75f2f19cd911.scope: Failed to set cpuset.mems: No space left on device
Mar 16 08:33:38 undercloud-0 systemd[1]: libpod-3671ea66cdbe673e84cf2cf7214541ef0406936aacbad7aeed15885de271e43a.scope: Failed to set cpuset.cpus: No space left on device
Mar 16 08:33:38 undercloud-0 systemd[1]: libpod-3671ea66cdbe673e84cf2cf7214541ef0406936aacbad7aeed15885de271e43a.scope: Failed to set cpuset.mems: No space left on device
Mar 16 08:33:38 undercloud-0 systemd[1]: libpod-119a6ecd7cd5f97d6c1490a4d1f92045482821f9322ca062476a0e5dc00f3e1b.scope: Failed to set cpuset.cpus: No space left on device
Mar 16 08:33:38 undercloud-0 systemd[1]: libpod-119a6ecd7cd5f97d6c1490a4d1f92045482821f9322ca062476a0e5dc00f3e1b.scope: Failed to set cpuset.mems: No space left on device
Mar 16 08:33:38 undercloud-0 systemd[1]: libpod-0bba30f62117d2265b790eae3119f2ddff4921ebe18607525c82d1180f52b426.scope: Failed to set cpuset.cpus: No space left on device
Mar 16 08:33:38 undercloud-0 systemd[1]: libpod-0bba30f62117d2265b790eae3119f2ddff4921ebe18607525c82d1180f52b426.scope: Failed to set cpuset.mems: No space left on device
Mar 16 08:33:38 undercloud-0 systemd[1]: libpod-253ed5fc2e548f6395e88471f98b0374a250f7906e85edfabe88558a4b1373a4.scope: Failed to set cpuset.cpus: No space left on device
Mar 16 08:33:38 undercloud-0 systemd[1]: libpod-253ed5fc2e548f6395e88471f98b0374a250f7906e85edfabe88558a4b1373a4.scope: Failed to set cpuset.mems: No space left on device
Mar 16 08:33:38 undercloud-0 systemd[1]: libpod-479032d577dc634024288771b4365394f90da501a3cee983b15081e2b5bd477d.scope: Failed to set cpuset.cpus: No space left on device
Mar 16 08:33:38 undercloud-0 systemd[1]: libpod-479032d577dc634024288771b4365394f90da501a3cee983b15081e2b5bd477d.scope: Failed to set cpuset.mems: No space left on device
Mar 16 08:33:38 undercloud-0 systemd[1]: libpod-dc9a8618e1d18cedbb522f53e0f03fa402a5043b7ec32e0767a8d780b1601ad2.scope: Failed to set cpuset.cpus: No space left on device
Mar 16 08:33:38 undercloud-0 systemd[1]: libpod-dc9a8618e1d18cedbb522f53e0f03fa402a5043b7ec32e0767a8d780b1601ad2.scope: Failed to set cpuset.mems: No space left on device

Comment 13 Alex Schultz 2020-03-16 14:43:56 UTC
This is a regression starting with 1.8.18-12. It now appears to take about <the value of -R> * <the value of -N> seconds to complete. With 1.8.18-12 it scales such that -R 1 = ~5s, -R 2 = ~11s, -R 3 = ~18s for -N 5. It's worse with 1.8.18-14.

()[root@undercloud /]# time ipmitool -I lanplus -H 192.168.1.15 -L ADMINISTRATOR -U admin -v -R 3 -N 5 -P password chassis bootdev pxe
Running Get PICMG Properties my_addr 0x20, transit 0, target 0x20
Error response 0xc1 from Get PICMG Properities
Running Get VSO Capabilities my_addr 0x20, transit 0, target 0x20
Invalid completion code received: Invalid command
Discovered IPMB address 0x0
Set Boot Device to pxe

real	0m0.519s
user	0m0.000s
sys	0m0.005s
()[root@undercloud /]# rpm -qa | grep ipmitool
ipmitool-1.8.18-10.el8.x86_64


()[root@undercloud /]# time ipmitool -I lanplus -H 192.168.1.15 -L ADMINISTRATOR -U admin -v -R 3 -N 5 -P password chassis bootdev pxe
Unable to Get Channel Cipher Suites
Running Get PICMG Properties my_addr 0x20, transit 0, target 0x20
Error response 0xc1 from Get PICMG Properities
Running Get VSO Capabilities my_addr 0x20, transit 0, target 0x20
Invalid completion code received: Invalid command
Discovered IPMB address 0x0
Set Boot Device to pxe

real	0m18.889s
user	0m0.001s
sys	0m0.006s
()[root@undercloud /]# rpm -qa | grep ipmitool
ipmitool-1.8.18-12.el8.x86_64


()[root@undercloud /]# time ipmitool -I lanplus -H 192.168.1.15 -L ADMINISTRATOR -U admin -v -R 3 -N 5 -P password chassis bootdev pxe
Unable to Get Channel Cipher Suites
Running Get PICMG Properties my_addr 0x20, transit 0, target 0x20
Error response 0xc1 from Get PICMG Properities
Running Get VSO Capabilities my_addr 0x20, transit 0, target 0x20
Invalid completion code received: Invalid command
Discovered IPMB address 0x0
Set Boot Device to pxe

real	0m22.575s
user	0m0.001s
sys	0m0.006s
()[root@undercloud /]# rpm -qa | grep ipmitool
ipmitool-1.8.18-14.el8.x86_64
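The scaling Alex describes can be sketched with a little arithmetic. This is a rough model, not ipmitool's actual timer logic: assume each IPMI command that never gets a reply is retried -R times with a -N second timeout per attempt, so a single unanswered "Get Channel Cipher Suites" probe alone accounts for roughly R * N seconds.

```python
def worst_case_stall(retries, timeout_s, unanswered_cmds=1):
    """Rough worst-case wait when a BMC never answers a command:
    each unanswered command burns `retries` attempts of `timeout_s`
    seconds each before ipmitool gives up and moves on."""
    return retries * timeout_s * unanswered_cmds

# -R 3 -N 5 from the reproducer above: ~15s of pure retry stall,
# in the same ballpark as the observed 18-22s totals.
print(worst_case_stall(3, 5))
```

With the reporter's original -R 12 -N 5 flags the same model gives ~60 seconds per unanswered command, which is consistent with the minutes-long `chassis status` in the description.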

Comment 14 Dmitry Tantsur 2020-03-16 14:50:10 UTC
Note that "Unable to Get Channel Cipher Suites" is also new in the affected versions. I wonder if it's somehow related.

Comment 15 Bob Fournier 2020-03-16 14:58:36 UTC
> Note that "Unable to Get Channel Cipher Suites" is also new in the affected versions. I wonder if it's somehow related.

This was added in the version that is failing, as 1.8.18-10 works fine.

* Tue Oct 15 2019 Václav Doležal <vdolezal> - 1.8.18-11
- Choose the best cipher suite available when connecting over LAN (#1749360)

See https://bugzilla.redhat.com/show_bug.cgi?id=1749360

Comment 16 Bob Fournier 2020-03-17 14:30:05 UTC
I've tried this with baremetal nodes and IPMI works fine using ipmitool-1.8.18-14.el8.x86_64.

$ sudo podman exec -it ironic_conductor /bin/bash
()[ironic@hardprov-dl360-g9-01 /]$ rpm -qa | grep ipmi
ipmitool-1.8.18-14.el8.x86_64


Harald has suggested this may be a pyghmi issue as pyghmi is the common python IPMI implementation used by both OVB and VBMC

Comment 17 Dmitry Tantsur 2020-03-17 14:35:06 UTC
Even if pyghmi is doing something fishy, it's still a regression. In any case, we need some guidance from people familiar with ipmitool.

Comment 18 Harald Jensås 2020-03-17 15:00:54 UTC
I was thinking it might be python3-pyghmi-1.0.22-2.el8ost.noarch being quite dated, and that some fix could be present in a later version. But for OVB (at least in my case) pyghmi-1.5.12.tar.gz is installed from pip. Assuming other people's OVB environments also get this version of pyghmi, it's unlikely that there is a fix.

Comment 19 Vaclav Dolezal 2020-03-17 16:39:10 UTC
Yeah, it looks like it stalls on the "Get Channel Cipher Suites" command.

I don't think this is a bug in ipmitool. I see the "Get Channel Cipher Suites" command listed as "Mandatory if IPMI v2.0/RMCP+ session-based channels are implemented.". And I suppose the BMC should return some error code (e.g. 0xC1 Invalid command) instead of ignoring the request.

I think you can work around this timeout by forcing a cipher suite, e.g. "-C 3".
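A hedged sketch of that workaround as it might be wired into tooling (the helper name and the hard-coded credentials are illustrative, not from this report's automation): passing -C with an explicit cipher suite makes ipmitool skip the "Get Channel Cipher Suites" probe entirely.

```python
def ipmitool_argv(host, port, cipher_suite=3):
    """Build an ipmitool command line that forces a cipher suite (-C),
    avoiding the retry stall on the unanswered
    'Get Channel Cipher Suites' request."""
    return ["ipmitool", "-I", "lanplus", "-C", str(cipher_suite),
            "-H", host, "-p", str(port),
            "-U", "admin", "-P", "password",
            "chassis", "status"]

# e.g. subprocess.run(ipmitool_argv("172.16.0.95", 6232))
```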

Comment 20 Dmitry Tantsur 2020-03-17 17:08:04 UTC
It could be an explanation if ipmitool outright failed or just always waited. But what we observe is that ipmitool receives a correct response and then retries until timeout. The lower the number of retries (-R and -N), the faster ipmitool (successfully!) returns. This behavior looks illogical to me.

Comment 21 Vaclav Dolezal 2020-03-18 09:49:38 UTC
I don't see in the log where the "Get Channel Cipher Suites" command (NetFn, cmd = 0x06, 0x54) receives a correct response. The -R and -N parameters apply to *each* IPMI command.

So ipmitool, in order to use the best cipher suite, sends the "Get Channel Cipher Suites" command, times out, retries, finally gives up, uses a fallback value, and continues with the rest of the commands.

Comment 22 Dmitry Tantsur 2020-03-18 10:14:04 UTC
So, it looks like pyghmi sends code 0xC1 for not-implemented features: https://opendev.org/x/pyghmi/src/branch/master/pyghmi/ipmi/bmc.py#L182. Is it the right thing to do (assuming we cannot simply implement "Get Channel Cipher Suites")?
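As a minimal illustration (not pyghmi's actual class layout or API), a software BMC that answers unknown commands with completion code 0xC1 rather than staying silent might dispatch like this:

```python
IPMI_CC_INVALID_COMMAND = 0xC1  # completion code for unsupported commands

def handle_command(netfn, cmd, handlers):
    """Dispatch an incoming IPMI request; reply with 0xC1 for anything
    unimplemented instead of dropping it, so clients fail fast rather
    than retrying until their -R/-N budget is exhausted."""
    handler = handlers.get((netfn, cmd))
    if handler is None:
        return {"code": IPMI_CC_INVALID_COMMAND, "data": []}
    return handler()

# Get Channel Cipher Suites is NetFn 0x06, cmd 0x54; with no handler
# registered, this sketch answers 0xC1 immediately.
print(hex(handle_command(0x06, 0x54, {})["code"]))  # 0xc1
```

The subtlety Vaclav raises below is that this request arrives before a session is established, so a real server-side implementation has to apply the same "answer, don't ignore" rule in its sessionless path too.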

Comment 23 Vaclav Dolezal 2020-03-18 10:27:28 UTC
Yes, it is the right thing to do. Note that ipmitool sends "Get Channel Cipher Suites" before authentication, so it may be handled in a different code path.

Comment 24 Dmitry Tantsur 2020-03-18 10:33:22 UTC
Okay, then shouldn't ipmitool give up and fall back to the default on receiving that?

The practical implication of my question is that we may be able to fix pyghmi, but we may not be able to fix BMCs out there that will get broken with RHEL 8.2.

Comment 25 Vaclav Dolezal 2020-03-18 11:24:28 UTC
> Okay, then shouldn't ipmitool give up and fallback to the default on receiving that?
Well, I don't see ipmitool receiving any reply in the log.
On a quick look into the pyghmi sources, it looks like it is handled (or ignored in this case) in pyghmi/ipmi/private/serversession.py:IpmiServer:sessionless_data() (line 301).

> The practical implication of my question is that we may be able to fix
> pyghmi, but we may not be able to fix BMCs out there that will get broken
> with RHEL 8.2.
Well, I hope that most BMCs will behave correctly. Without this patch, users of some newer BMCs will need to manually specify "-C 17" because those BMCs reject cipher suite 3 as obsolete. And older ones don't support cipher suite 17…

I have confirmed in IPMI spec that "Get Channel Cipher Suites" "works at any privilege level, can be sent prior to a session being established".

Comment 26 Dmitry Tantsur 2020-03-18 13:26:34 UTC
> it looks that it is handled (or ignored in this case) in pyghmi/ipmi/private/serversession.py:IpmiServer:sessionless_data()

Thanks for the pointer! I'll see if I can just implement it.

> Well, I hope that most BMCs will behave correctly.

I no longer hope that. But I guess you're right, at least some response is expected (although I've seen BMCs that just ignore you when they don't like something...).

Comment 27 Dmitry Tantsur 2020-03-18 14:00:24 UTC
I've reached out to the pyghmi maintainer and he promised to help. I'm personally going to go insane from trying to understand the IPMI spec (I haven't been seriously exposed to it before).

Comment 28 Dmitry Tantsur 2020-03-19 10:23:53 UTC
pyghmi 1.5.13 has been released with a fix. Since virtualbmc is installed from pip, I'm closing this bug as fixed.

Comment 29 Bob Fournier 2020-03-29 20:28:05 UTC
*** Bug 1813468 has been marked as a duplicate of this bug. ***

