
Bug 2007640

Summary: Opening VM consoles does not work and shuts down VMs
Product: Red Hat Enterprise Linux 9
Reporter: Martin Krajnak <mkrajnak>
Component: qemu-kvm
Assignee: Virtualization Maintenance <virt-maint>
qemu-kvm sub component: General
QA Contact: CongLi <coli>
Status: CLOSED DUPLICATE
Docs Contact:
Severity: high
Priority: unspecified
CC: coli, jinzhao, juzhang, juzhou, virt-maint, zhguo
Version: 9.0
Flags: pm-rhel: mirror+
Target Milestone: rc
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2021-09-25 06:37:50 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Martin Krajnak 2021-09-24 12:54:36 UTC
Description of problem:
The people on the #virt channel who helped me debug this asked me to mention:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=993145
also:
<jtomko> mprivozn: fixed by 118d527f2e4baec5fe8060b22a6212468b8e4d3f

Version-Release number of selected component (if applicable):
qemu-kvm-6.1.0-3.el9.x86_64
virt-manager-3.2.0-9.el9.noarch

How reproducible:
always

Steps to Reproduce:
1. Open virt-manager
2. Start the VM
3. Open the VM console

Alternatively: virt-viewer --attach rhel9.0

Actual results:
A blank window opens and the VM shuts down; the window then only shows a message that the guest is not running.

Expected results:
The VM console should load and show the guest display.

Additional info:
The VM is not shut down when I use a different VNC client to connect to it.


Contents of /var/log/libvirt/qemu/rhel9.0.log:
2021-09-24 12:17:17.647+0000: starting up libvirt version: 7.6.0, package: 2.el9 (Red Hat, Inc. <http://bugzilla.redhat.com/bugzilla>, 2021-08-10-04:33:30, ), qemu version: 6.1.0qemu-kvm-6.1.0-2.el9, kernel: 5.14.0-2.el9.x86_64, hostname: localhost.localdomain
LC_ALL=C \
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin \
HOME=/var/lib/libvirt/qemu/domain-1-rhel9.0 \
XDG_DATA_HOME=/var/lib/libvirt/qemu/domain-1-rhel9.0/.local/share \
XDG_CACHE_HOME=/var/lib/libvirt/qemu/domain-1-rhel9.0/.cache \
XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain-1-rhel9.0/.config \
/usr/libexec/qemu-kvm \
-name guest=rhel9.0,debug-threads=on \
-S \
-object '{"qom-type":"secret","id":"masterKey0","format":"raw","file":"/var/lib/libvirt/qemu/domain-1-rhel9.0/master-key.aes"}' \
-machine pc-q35-rhel8.5.0,accel=kvm,usb=off,dump-guest-core=off,memory-backend=pc.ram \
-cpu Skylake-Client-IBRS,ss=on,vmx=on,pdcm=on,hypervisor=on,tsc-adjust=on,clflushopt=on,umip=on,md-clear=on,stibp=on,arch-capabilities=on,ssbd=on,xsaves=on,pdpe1gb=on,ibpb=on,ibrs=on,amd-stibp=on,amd-ssbd=on,rdctl-no=on,ibrs-all=on,skip-l1dfl-vmentry=on,mds-no=on,pschange-mc-no=on,tsx-ctrl=on,hle=off,rtm=off \
-m 4096 \
-object '{"qom-type":"memory-backend-ram","id":"pc.ram","size":4294967296}' \
-overcommit mem-lock=off \
-smp 2,sockets=2,cores=1,threads=1 \
-uuid 3e8fbf23-5245-4cd2-8bfe-52e36a9ebe35 \
-no-user-config \
-nodefaults \
-chardev socket,id=charmonitor,fd=32,server=on,wait=off \
-mon chardev=charmonitor,id=monitor,mode=control \
-rtc base=utc,driftfix=slew \
-global kvm-pit.lost_tick_policy=delay \
-no-hpet \
-no-shutdown \
-global ICH9-LPC.disable_s3=1 \
-global ICH9-LPC.disable_s4=1 \
-boot strict=on \
-device pcie-root-port,port=0x10,chassis=1,id=pci.1,bus=pcie.0,multifunction=on,addr=0x2 \
-device pcie-root-port,port=0x11,chassis=2,id=pci.2,bus=pcie.0,addr=0x2.0x1 \
-device pcie-root-port,port=0x12,chassis=3,id=pci.3,bus=pcie.0,addr=0x2.0x2 \
-device pcie-root-port,port=0x13,chassis=4,id=pci.4,bus=pcie.0,addr=0x2.0x3 \
-device pcie-root-port,port=0x14,chassis=5,id=pci.5,bus=pcie.0,addr=0x2.0x4 \
-device pcie-root-port,port=0x15,chassis=6,id=pci.6,bus=pcie.0,addr=0x2.0x5 \
-device pcie-root-port,port=0x16,chassis=7,id=pci.7,bus=pcie.0,addr=0x2.0x6 \
-device qemu-xhci,p2=15,p3=15,id=usb,bus=pci.2,addr=0x0 \
-device virtio-serial-pci,id=virtio-serial0,bus=pci.3,addr=0x0 \
-blockdev '{"driver":"file","filename":"/home/mkrajnak/storage/RHEL-9.0.0-20210107.0-BaseOS-x86_64.qcow2","node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-1-format","read-only":false,"driver":"qcow2","file":"libvirt-1-storage","backing":null}' \
-device virtio-blk-pci,bus=pci.4,addr=0x0,drive=libvirt-1-format,id=virtio-disk0,bootindex=1 \
-netdev tap,fd=34,id=hostnet0,vhost=on,vhostfd=35 \
-device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:8e:59:a4,bus=pci.1,addr=0x0 \
-chardev pty,id=charserial0 \
-device isa-serial,chardev=charserial0,id=serial0 \
-chardev socket,id=charchannel0,fd=36,server=on,wait=off \
-device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 \
-device usb-tablet,id=input0,bus=usb.0,port=1 \
-audiodev id=audio1,driver=none \
-vnc 127.0.0.1:0,audiodev=audio1 \
-device VGA,id=video0,vgamem_mb=16,bus=pcie.0,addr=0x1 \
-device virtio-balloon-pci,id=balloon0,bus=pci.5,addr=0x0 \
-object '{"qom-type":"rng-random","id":"objrng0","filename":"/dev/urandom"}' \
-device virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.6,addr=0x0 \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
-msg timestamp=on
char device redirected to /dev/pts/3 (label charserial0)
qemu-kvm: ../util/qemu-sockets.c:1349: SocketAddress *socket_sockaddr_to_address_unix(struct sockaddr_storage *, socklen_t, Error **): Assertion `salen >= sizeof(su->sun_family) + 1 && salen <= sizeof(struct sockaddr_un)' failed.
2021-09-24 12:17:22.758+0000: shutting down, reason=crashed



Output of virt-manager --debug:
[Fri, 24 Sep 2021 14:04:25 virt-manager 3982] DEBUG (vmmenu:230) Starting vm 'rhel9.0'
[Fri, 24 Sep 2021 14:04:26 virt-manager 3982] DEBUG (connection:706) node device lifecycle event: nodedev=net_vnet1_fe_54_00_8e_59_a4 state=VIR_NODE_DEVICE_EVENT_CREATED reason=0
[Fri, 24 Sep 2021 14:04:26 virt-manager 3982] DEBUG (connection:646) domain agent lifecycle event: domain=rhel9.0 state=VIR_CONNECT_DOMAIN_EVENT_AGENT_LIFECYCLE_STATE_DISCONNECTED reason=1
[Fri, 24 Sep 2021 14:04:26 virt-manager 3982] DEBUG (connection:631) domain lifecycle event: domain=rhel9.0 state=VIR_DOMAIN_EVENT_RESUMED reason=VIR_DOMAIN_EVENT_RESUMED_UNPAUSED
[Fri, 24 Sep 2021 14:04:26 virt-manager 3982] DEBUG (connection:631) domain lifecycle event: domain=rhel9.0 state=VIR_DOMAIN_EVENT_STARTED reason=VIR_DOMAIN_EVENT_STARTED_BOOTED
[Fri, 24 Sep 2021 14:04:31 virt-manager 3982] DEBUG (connection:646) domain agent lifecycle event: domain=rhel9.0 state=VIR_CONNECT_DOMAIN_EVENT_AGENT_LIFECYCLE_STATE_CONNECTED reason=2
[Fri, 24 Sep 2021 14:04:34 virt-manager 3982] DEBUG (serialcon:17) Using VTE API 2.91
[Fri, 24 Sep 2021 14:04:34 virt-manager 3982] DEBUG (xmleditor:12) Using GtkSource 4
[Fri, 24 Sep 2021 14:04:35 virt-manager 3982] DEBUG (vmwindow:184) Showing VM details: <vmmDomain name=rhel9.0 id=0x7efcec2734c0>
[Fri, 24 Sep 2021 14:04:35 virt-manager 3982] DEBUG (engine:316) window counter incremented to 2
[Fri, 24 Sep 2021 14:04:35 virt-manager 3982] DEBUG (console:721) Starting connect process for proto=vnc trans= connhost=127.0.0.1 connuser= connport= gaddr=127.0.0.1 gport=5900 gtlsport=None gsocket=None
[Fri, 24 Sep 2021 14:04:35 virt-manager 3982] ERROR (console:736) Error connecting to graphical console
Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/details/console.py", line 734, in _init_viewer
    self._viewer.console_open()
  File "/usr/share/virt-manager/virtManager/details/viewers.py", line 215, in console_open
    return self._open()
  File "/usr/share/virt-manager/virtManager/details/viewers.py", line 429, in _open
    return Viewer._open(self)
  File "/usr/share/virt-manager/virtManager/details/viewers.py", line 134, in _open
    fd = self._get_fd_for_open()
  File "/usr/share/virt-manager/virtManager/details/viewers.py", line 128, in _get_fd_for_open
    return self._vm.open_graphics_fd()
  File "/usr/share/virt-manager/virtManager/object/domain.py", line 1095, in open_graphics_fd
    return self._backend.openGraphicsFD(0, flags)
  File "/usr/lib64/python3.9/site-packages/libvirt.py", line 2240, in openGraphicsFD
    raise libvirtError('virDomainOpenGraphicsFD() failed')
libvirt.libvirtError: Unable to read from monitor: Connection reset by peer
[Fri, 24 Sep 2021 14:04:35 virt-manager 3982] DEBUG (console:592) Viewer disconnected
[Fri, 24 Sep 2021 14:04:35 virt-manager 3982] DEBUG (connection:706) node device lifecycle event: nodedev=net_vnet1_fe_54_00_8e_59_a4 state=VIR_NODE_DEVICE_EVENT_DELETED reason=0
[Fri, 24 Sep 2021 14:04:35 virt-manager 3982] DEBUG (connection:1050) nodedev=net_vnet1_fe_54_00_8e_59_a4 removed
[Fri, 24 Sep 2021 14:04:35 virt-manager 3982] DEBUG (connection:631) domain lifecycle event: domain=rhel9.0 state=VIR_DOMAIN_EVENT_STOPPED reason=VIR_DOMAIN_EVENT_STOPPED_FAILED

Comment 1 CongLi 2021-09-25 06:37:50 UTC

*** This bug has been marked as a duplicate of bug 2000814 ***