Bug 2026987 - [RHEL 8.6][virt-manager] Error when accessing disk details
Summary: [RHEL 8.6][virt-manager] Error when accessing disk details
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: virt-manager
Version: 8.6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Jonathon Jongsma
QA Contact: smitterl
URL:
Whiteboard:
Depends On:
Blocks: 1995125 2009080 2062656
 
Reported: 2021-11-26 18:25 UTC by smitterl
Modified: 2022-05-10 14:27 UTC
CC List: 11 users

Fixed In Version: virt-manager-3.2.0-4.el8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Cloned to: 2027462 2062656
Environment:
Last Closed: 2022-05-10 13:54:55 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments
error screenshot (57.07 KB, image/png), 2021-11-26 18:34 UTC, smitterl
domain xml (5.35 KB, text/plain), 2021-12-07 14:10 UTC, smitterl
error window (164.77 KB, image/png), 2022-03-10 14:15 UTC, smitterl


Links
IBM Linux Technology Center 196636 (last updated 2022-03-10 07:38:29 UTC)
Red Hat Issue Tracker RHELPLAN-104061 (last updated 2021-11-26 18:26:50 UTC)
Red Hat Product Errata RHBA-2022:1862 (last updated 2022-05-10 13:55:41 UTC)

Description smitterl 2021-11-26 18:25:53 UTC
UPDATE: The same error message is reproduced with a different scenario. Updating description and title to reflect the reproducible issue. See also comment#7.

Description:
Error dialog opens on any action on virtual disks.


Compose/Versions:
RHEL-8.6.0-20211206.1
kernel-4.18.0-353.el8.x86_64
libvirt-daemon-7.9.0-1.module+el8.6.0+13150+28339563.x86_64
virt-manager-3.2.0-1.el8.noarch

Reproduces:
100%

Steps to reproduce:
1. Import vm from image file with default settings
OR
1'. virsh define the attached domain.xml
2. Open VM details
3. Click VirtIO Disk 1

Actual result:
Error dialog opens
Error refreshing hardware page: Argument 1 does not allow None as a value

Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/details/details.py", line 1714, in _refresh_page
    self._refresh_disk_page(dev)
  File "/usr/share/virt-manager/virtManager/details/details.py", line 2007, in _refresh_disk_page
    self._addstorage.set_dev(disk)
  File "/usr/share/virt-manager/virtManager/device/addstorage.py", line 327, in set_dev
    self.widget("disk-removable").set_active(removable)
TypeError: Argument 1 does not allow None as a value

Expected result:
No error.

Notes:
Doesn't reproduce with
virt-manager-2.2.1-4.el8.noarch
libvirt-daemon-6.0.0-37.module+el8.5.0+12162+40884dd2.x86_64
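
For reference, the TypeError above is the generic PyGObject error raised when None is passed for an argument that does not accept it (here, the gboolean taken by set_active()). The following minimal sketch is illustrative only, not virt-manager code or its actual fix: it assumes a graphical session so GTK can initialize, and the check button merely stands in for the "disk-removable" widget.

# Minimal sketch, assuming a graphical session; illustrative only,
# not the actual virt-manager fix.
import gi
gi.require_version("Gtk", "3.0")
from gi.repository import Gtk

check = Gtk.CheckButton()    # stands in for the "disk-removable" widget
removable = None             # e.g. no removable flag was derived from the disk XML

try:
    check.set_active(removable)        # gboolean argument: PyGObject rejects None
except TypeError as exc:
    print(exc)                         # same class of error as in the traceback above

check.set_active(bool(removable))      # coercing to bool avoids the error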



========================================
original description
--------------------


Description of problem:
If a VM has an mdev attached, the details can't be reviewed or edited.


Version-Release number of selected component (if applicable):
virt-manager-3.2.0-1.el8.noarch

How reproducible:
100%


Steps to Reproduce:
0. Have VM with attached MDEV on $remote_server
1. Connect to $remote_server
2. Open VM details

Actual results:
Error launching details: Argument 1 does not allow None as a value

Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/vmwindow.py", line 40, in get_instance
    cls._instances[key] = vmmVMWindow(vm)
  File "/usr/share/virt-manager/virtManager/vmwindow.py", line 158, in __init__
    self.activate_default_page()
  File "/usr/share/virt-manager/virtManager/vmwindow.py", line 459, in activate_default_page
    self.activate_default_console_page()
  File "/usr/share/virt-manager/virtManager/vmwindow.py", line 451, in activate_default_console_page
    self._console.vmwindow_activate_default_console_page()
  File "/usr/share/virt-manager/virtManager/details/console.py", line 976, in vmwindow_activate_default_console_page
    return self._activate_default_console_page()
  File "/usr/share/virt-manager/virtManager/details/console.py", line 931, in _activate_default_console_page
    self._toggle_first_console_menu_item()
  File "/usr/share/virt-manager/virtManager/details/console.py", line 909, in _toggle_first_console_menu_item
    self._populate_console_list_menu()
  File "/usr/share/virt-manager/virtManager/details/console.py", line 904, in _populate_console_list_menu
    self._console_list_menu_toggled)
  File "/usr/share/virt-manager/virtManager/details/console.py", line 281, in rebuild_menu
    item.set_sensitive(sensitive)
TypeError: Argument 1 does not allow None as a value

Expected results:
Details can be opened and edited.


Additional info:
1. Any installation with an additional MDEV host device ends in the same error, although the machine is created correctly.

Comment 1 smitterl 2021-11-26 18:34:12 UTC
Created attachment 1843760 [details]
error screenshot

Comment 2 Jonathon Jongsma 2021-12-02 20:47:39 UTC
I've tried several times to reproduce this and cannot. It seems to work just fine for me with a nightly build of rhel-8.6. Can you provide any more information to help reproduce? Is it architecture-specific? Can you share the domain xml?

Comment 6 Hongzhou Liu 2021-12-07 11:29:26 UTC
This problem also happens when I try to edit storage hardware for a VM.

packages:
virt-manager-3.2.0-1.el8.noarch
libvirt-7.10.0-1.module+el8.6.0+13150+28339563.x86_64

Steps to Reproduce:
1. Prepare a VM 
2. Connect to $remote_server
3. Open VM details and check the virtual device detail 

>>
Error refreshing hardware page: Argument 1 does not allow None as a value

Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/details/details.py", line 1714, in _refresh_page
    self._refresh_disk_page(dev)
  File "/usr/share/virt-manager/virtManager/details/details.py", line 2007, in _refresh_disk_page
    self._addstorage.set_dev(disk)
  File "/usr/share/virt-manager/virtManager/device/addstorage.py", line 327, in set_dev
    self.widget("disk-removable").set_active(removable)
TypeError: Argument 1 does not allow None as a value

This error also happens when creating a VM and customizing a virtual disk.

Please check the attachment to get more info.

After the error occurs, I am still able to change the bus type, and the XML file is updated successfully. However, every single click in this window triggers the error.

This problem can only be reproduced on RHEL 8.6 (on x86_64 in my case). I don't think it is architecture-specific, because I can reproduce it on x86_64 as well.

Comment 7 smitterl 2021-12-07 14:06:55 UTC
(In reply to Jonathon Jongsma from comment #2)
> I've tried several times to reproduce this and cannot. It seems to work just
> fine for me with a nightly build of rhel-8.6. Can you provide any more
> information to help reproduce? Is it architecture-specific? Can you share
> the domain xml?

Looking at the traces, IMO the same error message "Argument 1 does not allow None as a value" has more than one root cause, triggered by two different scenarios:

1. On MDEV devices (originally reported)
2. On virtual disk devices (hit by Hongzhou, comment#6)

With the latest nightly compose I can't reproduce 1. but I can reproduce 2. reliably. So I'm changing the title and updating BZ links.

As pointed out by Hongzhou

> After an error occurs, I am still able to change the bus type and the xml file changed successfully. Besides, every single click in this window will trigger this error.

so I'm lowering severity to High.

To reproduce 2., I really just had to import a guest image locally; I did not have to do this remotely. I'm attaching the domain XML in any case.

Steps to reproduce:
1. Import vm from image file with default settings
OR
1'. virsh define the attached domain.xml
2. Open VM details
3. Click VirtIO Disk 1

Compose/Versions this reproduces with:
RHEL-8.6.0-20211206.1
kernel-4.18.0-353.el8.x86_64
libvirt-daemon-7.9.0-1.module+el8.6.0+13150+28339563.x86_64
virt-manager-3.2.0-1.el8.noarch

I'm setting the Regression keyword because this doesn't reproduce with
virt-manager-2.2.1-4.el8.noarch
libvirt-daemon-6.0.0-37.module+el8.5.0+12162+40884dd2.x86_64

Comment 8 smitterl 2021-12-07 14:10:39 UTC
Created attachment 1845075 [details]
domain xml

Comment 10 Jonathon Jongsma 2021-12-07 17:12:16 UTC
Aha. It looks like this is an unrelated bug that was already fixed upstream in commit e7222b5058c8874b15fbfd998e5eeb233f571075

Comment 19 Hongzhou Liu 2021-12-13 01:52:39 UTC
Verified this bug with the following packages:
virt-manager-3.2.0-2.el8.noarch
virt-install-3.2.0-2.el8.noarch

step1. Prepare a VM 
step2. Connect to VM
step3. Open VM details and check the virtual device detail by clicking VirtIO

Test result:
VM details show correctly and no error messages appear, as expected. So I'm changing the status to VERIFIED.

Comment 20 smitterl 2021-12-13 10:43:27 UTC
Thank you Hongzhou!

Comment 21 Hongzhou Liu 2021-12-23 05:17:45 UTC
Hi Sebastian,

Because this bug was first reported on s390x, could you please double-check it?

Thanks

Comment 22 smitterl 2021-12-24 09:35:44 UTC
(In reply to Hongzhou Liu from comment #21)
> Hi Sebastian,
> 
> Because this bug was first reported on s390x, could you please double-check it?
> 
> Thanks

I don't think it's necessary to double-check on s390x, as it's an arch-independent fix and our base scenario starts virt-manager on x86_64 and connects to a headless s390x server.

Comment 23 smitterl 2022-03-02 08:47:22 UTC
Verified successfully on s390x too, because IBM reported they reproduced it on s390x; for details see https://bugzilla.redhat.com/show_bug.cgi?id=1995125#c31

Comment 24 Jonathon Jongsma 2022-03-08 21:14:21 UTC
Re-opening this bug, because the proposed fix was not sufficient as described at https://bugzilla.redhat.com/show_bug.cgi?id=1995125#c35

We also need to backport commit cf93e2dbff28fe05d6d45364c579f923b157beb1 from upstream to fully fix the issue.

Comment 30 smitterl 2022-03-09 13:06:36 UTC
(In reply to Jonathon Jongsma from comment #24)
> Re-opening this bug, because the proposed fix was not sufficient as
> described at https://bugzilla.redhat.com/show_bug.cgi?id=1995125#c35
> 
> We also need to backport commit cf93e2dbff28fe05d6d45364c579f923b157beb1
> from upstream to fully fix the issue.

Out of curiosity can you point to the commit? I searched the virt-manager repo on github for this to no avail.

Comment 32 Jonathon Jongsma 2022-03-09 14:33:14 UTC
(In reply to smitterl from comment #30)
> (In reply to Jonathon Jongsma from comment #24)
> > Re-opening this bug, because the proposed fix was not sufficient as
> > described at https://bugzilla.redhat.com/show_bug.cgi?id=1995125#c35
> > 
> > We also need to backport commit cf93e2dbff28fe05d6d45364c579f923b157beb1
> > from upstream to fully fix the issue.
> 
> Out of curiosity can you point to the commit? I searched the virt-manager
> repo on github for this to no avail.

https://github.com/virt-manager/virt-manager/commit/cf93e2dbff28fe05d6d45364c579f923b157beb1

Here is the original upstream bug report that prompted the upstream fix: https://github.com/virt-manager/virt-manager/issues/226. The guest apparently needs to be running in order to trigger this second scenario.

Comment 33 smitterl 2022-03-09 16:55:15 UTC
(In reply to Jonathon Jongsma from comment #32)
> (In reply to smitterl from comment #30)
> > (In reply to Jonathon Jongsma from comment #24)
> > > Re-opening this bug, because the proposed fix was not sufficient as
> > > described at https://bugzilla.redhat.com/show_bug.cgi?id=1995125#c35
> > > 
> > > We also need to backport commit cf93e2dbff28fe05d6d45364c579f923b157beb1
> > > from upstream to fully fix the issue.
> > 
> > Out of curiosity can you point to the commit? I searched the virt-manager
> > repo on github for this to no avail.
> 
> https://github.com/virt-manager/virt-manager/commit/
> cf93e2dbff28fe05d6d45364c579f923b157beb1
> 
> Here is the original upstream bug report that prompted the upstream fix:
> https://github.com/virt-manager/virt-manager/issues/226. The guest
> apparently needs to be running in order to trigger this second scenario.

Thanks Jonathon. I still fail to reproduce this issue on a running machine. With this and the scenarios that I covered in 

https://bugzilla.redhat.com/show_bug.cgi?id=1995125#c31
https://bugzilla.redhat.com/show_bug.cgi?id=1995125#c32

I think we need info from IBM to clarify which scenario exactly and versions led to their reporting this https://bugzilla.redhat.com/show_bug.cgi?id=1995125#c26

Until we've come to a conclusion, I'm setting this back to VERIFIED. I hope that's okay.

Until then, IMO we should not risk our beta compose stability with an additional backport.

IBM, please can you help clarify which exact scenario you ran to reproduce the trace you reported in https://bugzilla.redhat.com/show_bug.cgi?id=1995125#c26?
We can also sync on slack or schedule a meeting to accelerate this. Thanks!

Comment 35 smitterl 2022-03-09 17:38:31 UTC
(Note: on the 8.6 compose we deliver/depend on python3-gobject-3.28.3-2.el8.s390x)

Comment 36 smitterl 2022-03-09 19:13:16 UTC
Finally, with Boris's help, I've been able to reproduce this with a specific (valid) XML.



<domain type='kvm'>
  <name>vser</name>
  <uuid>f6e3dc7d-05a5-4410-ba4c-a9fa24acc68b</uuid>
  <memory unit='KiB'>1048576</memory>
  <currentMemory unit='KiB'>1048576</currentMemory>
  <vcpu placement='static'>2</vcpu>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='s390x' machine='s390-ccw-virtio-rhel8.6.0'>hvm</type>
    <boot dev='hd'/>
  </os>
  <cpu mode='host-model' check='partial'/>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/avocado/data/avocado-vt/images/jeos-27-s390x-clone.qcow2'/>
      <backingStore/>
      <target dev='vda' bus='virtio'/>
      <address type='ccw' cssid='0xfe' ssid='0x0' devno='0x0000'/>
    </disk>
    <controller type='pci' index='0' model='pci-root'/>
    <controller type='virtio-serial' index='0'>
      <address type='ccw' cssid='0xfe' ssid='0x0' devno='0x0004'/>
    </controller>
    <interface type='network'>
      <mac address='52:54:00:93:0c:c7'/>
      <source network='default'/>
      <model type='virtio'/>
      <address type='ccw' cssid='0xfe' ssid='0x0' devno='0x0001'/>
    </interface>
    <console type='pty'>
      <target type='sclp' port='0'/>
    </console>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <audio id='1' type='none'/>
    <hostdev mode='subsystem' type='mdev' managed='no' model='vfio-ccw'>
      <source>
        <address uuid='ca9a7109-469c-4790-bc4d-94ba19718217'/>
      </source>
      <address type='ccw' cssid='0xfe' ssid='0x0' devno='0x0002'/>
    </hostdev>
    <memballoon model='virtio'>
      <address type='ccw' cssid='0xfe' ssid='0x0' devno='0x0007'/>
    </memballoon>
    <panic model='s390'/>
  </devices>
  <seclabel type='dynamic' model='selinux' relabel='yes'/>
  <seclabel type='dynamic' model='dac' relabel='yes'/>
</domain>
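
For convenience, here is a rough sketch of defining and starting a guest from this XML with the libvirt Python bindings instead of virsh. Assumptions: libvirt-python is installed, the XML above is saved as vser.xml, and the local qemu:///system URI is used.

# Rough sketch; roughly equivalent to "virsh define vser.xml" followed by "virsh start vser".
# Assumptions: libvirt-python is installed and the XML above is saved as vser.xml.
import libvirt

with open("vser.xml") as f:
    xml = f.read()

conn = libvirt.open("qemu:///system")   # connect to the local libvirt daemon
try:
    dom = conn.defineXML(xml)           # persistently define the guest
    dom.create()                        # start it; the guest must be running to hit the error
finally:
    conn.close()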

Comment 37 smitterl 2022-03-09 19:32:42 UTC
Reproduced with:
virt-manager-3.2.0-3.el8.noarch

Steps:
1. Define (via virsh or virt-install) a VM with XML as in comment#36
2. Start the VM (via virsh or virt-install)
3. Start virt-manager, connect to host where VM from 1. was defined
4. Double click on VM
==> Error is shown but details window never displayed.

Pre-verified with:
virt-manager-3.2.0-4.el8_rc.2792a24f20.noarch

Confirmed that the error above no longer shows after step 4.

Junqin, do you think we should do more regression testing for the commit? ref. https://github.com/virt-manager/virt-manager/commit/cf93e2dbff28fe05d6d45364c579f923b157beb1

Thanks, Jonathon for the scratch-build. I'll set this back to POST.

Comment 42 zhoujunqin 2022-03-10 12:00:40 UTC
(In reply to smitterl from comment #37)
> Reproduced with:
> virt-manager-3.2.0-3.el8.noarch
> 
> Steps:
> 1. Define (via virsh or virt-install) a VM with XML as in comment#36
> 2. Start the VM (via virsh or virt-install)
> 3. Start virt-manager, connect to host where VM from 1. was defined
> 4. Double click on VM
> ==> Error is shown but details window never displayed.
> 
> Pre-verified with:
> virt-manager-3.2.0-4.el8_rc.2792a24f20.noarch
> 
> Confirmed that the error above no longer shows after step 4.
> 
> Junqin, do you think we should do more regression testing for the commit?
> ref.
> https://github.com/virt-manager/virt-manager/commit/
> cf93e2dbff28fe05d6d45364c579f923b157beb1
> 
> Thanks, Jonathon for the scratch-build. I'll set this back to POST.

Hi smitterl,
To be honest, I can't reproduce the https://bugzilla.redhat.com/show_bug.cgi?id=1995125#c26 on the x86_64 platform with virt-manager-3.2.0-3.el8.noarch.
Steps followed comment#6.

And what's the special part of comment#36, so that I can define such a VM on the x86_64 platform? Thanks.

BR,
juzhou.

Comment 43 smitterl 2022-03-10 14:12:58 UTC
I am not 100% sure, but I can "fix" the error by making sure that the VM has both graphical and text consoles. I assume this is related to the menu item "View -> Consoles": if that list is empty, the error triggers.

I came to this conclusion after adding a print() for (label, dev, tooltip) around where the error occurs[1] and starting virt-manager with the --debug flag.

Initially I used a minimal XML, removing all but the <disk> below the <devices> node, and it reproduced; the triplet had the values
(label, dev, tooltip) = (No graphical console available, None, None)

After adding a graphics element I hit it again but with
(label, dev, tooltip) = (No text console available, None, None)

So I also added a text console and then could see the details.

IIRC, there might also be some caching; I assume the safest way to try this is:

1. Define a VM that doesn't have any text or graphical consoles and start the VM
2. Make sure to close all virt-manager instances first
3. Open virt-manager and double click on the machine
==> the error pops up (screenshot attached).


[1]  File "/usr/share/virt-manager/virtManager/details/console.py", line 281, in rebuild_menu
    item.set_sensitive(sensitive)
TypeError: Argument 1 does not allow None as a value
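
For illustration, a hypothetical sketch of the instrumentation and guard described above; the loop, names, and menu items are illustrative stand-ins, not the actual virtManager/details/console.py rebuild_menu code.

# Hypothetical sketch: a debug print to expose the (label, dev, tooltip) triplet
# and a bool() guard so an empty console list no longer passes None to GTK.
import gi
gi.require_version("Gtk", "3.0")
from gi.repository import Gtk

def rebuild_menu(menu, consoles):
    # consoles: list of (label, dev, tooltip) tuples; with no graphical or text
    # console defined it degenerates to [("No graphical console available", None, None)]
    for label, dev, tooltip in consoles:
        print(label, dev, tooltip)            # debug print used to spot the None triplet
        item = Gtk.MenuItem(label=label)
        sensitive = dev                       # None when no console is available
        item.set_sensitive(bool(sensitive))   # a bare None here raises the TypeError
        menu.append(item)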

Comment 44 smitterl 2022-03-10 14:15:21 UTC
Created attachment 1865175 [details]
error window

Comment 46 Hongzhou Liu 2022-03-11 05:43:00 UTC
Hi Sebastian, thanks for your comment. I am able to reproduce this bug:
packages:
virt-manager-3.2.0-3.el8.noarch
virt-install-3.2.0-3.el8.noarch

Step 1: Run this command to install a VM with --graphics=none or --console=none
virt-install \
--disk /home/vm04.img,size=20 \
--location http://download.eng.pek2.redhat.com/rhel-8/composes/RHEL-8/RHEL-8.7.0-20220305.2/compose/BaseOS/x86_64/os/ \
--graphics=none   # or use --console=none instead

Step 2: Start virt-manager and click vm to get details.

Result: an error pops up, the same as in smitterl's attachment 1865175 [details]

Now pre-verifying this bug with
Packages:
virt-manager-3.2.0-4.el8.noarch
virt-install-3.2.0-4.el8.noarch

Step 1: Run this command to install a VM with --graphics=none or --console=none
virt-install \
--disk /home/vm04.img,size=20 \
--location http://download.eng.pek2.redhat.com/rhel-8/composes/RHEL-8/RHEL-8.7.0-20220305.2/compose/BaseOS/x86_64/os/ \
--graphics=none   # or use --console=none instead

Step 2: Start virt-manager and click vm to get details.

Virtual hardware details show correctly and no errors pop up while clicking each option.

Based on this result, I'm adding Verified:Tested to this BZ. Thanks.

Comment 50 Hongzhou Liu 2022-03-16 04:51:48 UTC
Verified this bug with
Packages:
virt-manager-3.2.0-4.el8.noarch
virt-install-3.2.0-4.el8.noarch

Step 1: Run this command to install a vm with --graphics=none, --console=none
virt-install \
--disk /home/vm.img,size=20 \
--location http://download.eng.pek2.redhat.com/rhel-8/composes/RHEL-8/RHEL-8.7.0-20220305.2/compose/BaseOS/x86_64/os/ \
--console=none

Step 2: Start virt-manager GUI, click the vm just created, check the display and vm details

Result: The graphical console displays normally and all options in the VM details can be checked correctly.


Based on this result, I'm changing the status of this bug to VERIFIED.


Comment 52 errata-xmlrpc 2022-05-10 13:54:55 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (virt-manager bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:1862

