Bug 2023627 - virt-viewer cannot connect to RHEL 8 KVM if firewalld is running
Summary: virt-viewer cannot connect to RHEL 8 KVM if firewalld is running
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: firewalld
Version: 8.4
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Eric Garver
QA Contact: qe-baseos-daemons
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-11-16 08:40 UTC by Peter Tselios
Modified: 2023-08-16 07:28 UTC (History)
9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-08-16 07:28:33 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments
Virtual Console when firewalld runs on the hypervisor (16.16 KB, image/png)
2021-11-16 08:40 UTC, Peter Tselios
no flags Details
iptable output (4.27 KB, text/plain)
2022-01-27 18:19 UTC, Peter Tselios
no flags Details
nft list (21.97 KB, text/plain)
2022-01-27 18:19 UTC, Peter Tselios
no flags Details
Combined tcpdump from the bridge's interface and lo (3.36 MB, application/x-xz)
2022-02-04 21:31 UTC, Peter Tselios
no flags Details
Combined tcpdump from the bridge's interface and lo (3.37 MB, application/x-xz)
2022-02-04 21:32 UTC, Peter Tselios
no flags Details
tcpdump from the lo interface (9.38 MB, application/x-xz)
2022-02-04 21:33 UTC, Peter Tselios
no flags Details
tcpdump from lo (firewalld stopped) (22.61 KB, application/x-xz)
2022-02-04 21:34 UTC, Peter Tselios
no flags Details


Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHELPLAN-102900 0 None None None 2021-11-16 09:25:47 UTC

Description Peter Tselios 2021-11-16 08:40:20 UTC
Created attachment 1841982 [details]
Virtual Console when firewalld runs on the hypervisor

Description of problem:

I have a RHEL 8.4 server with libvirt as my hypervisor.
My issue is that I cannot access the virtual console of a VM via virt-manager or virt-viewer while firewalld is running on the RHEL 8 hypervisor.

When I stop firewalld, everything works as expected.
I don't see this issue on other hypervisors (RHEL, Fedora, SUSE); only the RHEL 8 machine is affected.

Version-Release number of selected component (if applicable):

libvirt-daemon-kvm-6.0.0-35.1.module+el8.4.0+11273+64eb94ef.x86_64
firewalld-0.8.2-7.el8_4.noarch

How reproducible:


Steps to Reproduce:
1. Create a VM on a RHEL 8.3/8.4 server.
2. From another machine, open virt-manager or virt-viewer and try to access the virtual console of this VM. The virtual console stays black (see the with-firewall attachment).
3. Close the console.
4. Stop firewalld on the RHEL 8 server.
5. Open the virtual console again. Now you will see its contents.
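The reproduction steps above can be sketched as concrete commands. HOST and VMNAME are placeholders (the report's actual host URI is sanitized); the commands are printed rather than executed so the sketch stays side-effect free.

```shell
# Reproduction sketch: HOST and VMNAME are placeholders for the RHEL 8
# hypervisor and the guest; the commands are echoed, not run.
cat <<'EOF'
virt-viewer --debug -c qemu+ssh://root@HOST/system VMNAME   # step 2: console stays black
ssh root@HOST systemctl stop firewalld                      # step 4
virt-viewer --debug -c qemu+ssh://root@HOST/system VMNAME   # step 5: console renders
EOF
```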

Actual results:
The virtual console is not visible.

Expected results:
The virtual console is visible.

Additional info:

I have set up SSH keys to access libvirt on the remote server, and the SSH keys do work. I have tested with Fedora 32/33/34 and openSUSE Leap 15.1/15.2/15.3.

On the client side, virt-manager is one of the following versions:

* virt-manager-3.2.0-7.4.1.noarch
* virt-manager-3.2.0-3.fc34.noarch

The firewalld configuration is the following: 

=========================================================================
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: enp8s0 server_virt
  sources: 
  services: dns http https libvirt-tls nfs nfs-v3 rpc-bind samba ssh
  ports: 
  protocols: 
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules:
=========================================================================


An interesting point: if you stop firewalld, open the console, and then start firewalld again, the session keeps working until you close virt-viewer or the "view" in virt-manager.


Using virt-viewer with debug: 

============================
virt-viewer --debug -c qemu+ssh://root.0.1/system Windows_10_Pro
(virt-viewer:140910): virt-viewer-DEBUG: 10:24:57.921: connecting ...
(virt-viewer:140910): virt-viewer-DEBUG: 10:24:57.921: Opening connection to libvirt with URI qemu+ssh://root.0.1/system
ssh: connect to host 192.168.0.1 port 22: Connection timed out
(virt-viewer:140910): virt-viewer-DEBUG: 10:26:02.102: initial connect
(virt-viewer:140910): virt-viewer-DEBUG: 10:26:02.102: notebook show status 0x55a4275d62a0
(virt-viewer:140910): virt-viewer-DEBUG: 10:26:02.102: virt_viewer_app_set_uuid_string: UUID changed to 4a776095-d660-41e9-ab3c-744734bf3e4e
(virt-viewer:140910): virt-viewer-DEBUG: 10:26:02.103: notebook show status 0x55a4275d62a0
(virt-viewer:140910): virt-viewer-DEBUG: 10:26:02.104: Guest Windows_10_Pro is running, determining display
(virt-viewer:140910): virt-viewer-DEBUG: 10:26:02.104: Set connect info: (null),(null),-1,-1,(null),(null),(null),0
(virt-viewer:140910): virt-viewer-DEBUG: 10:26:02.105: Guest Windows_10_Pro has a spice display
(virt-viewer:140910): virt-viewer-DEBUG: 10:26:02.114: Guest graphics address is 127.0.0.1:5902
(virt-viewer:140910): virt-viewer-DEBUG: 10:26:02.114: Set connect info: 192.168.0.1,127.0.0.1,5902,-1,ssh,(null),root,0
(virt-viewer:140910): virt-viewer-DEBUG: 10:26:02.114: Error operation forbidden: read only access prevents virDomainOpenGraphicsFD
(virt-viewer:140910): virt-viewer-DEBUG: 10:26:02.114: After open connection callback fd=-1
(virt-viewer:140910): virt-viewer-DEBUG: 10:26:02.114: Opening indirect TCP connection to display at 127.0.0.1:5902
(virt-viewer:140910): virt-viewer-DEBUG: 10:26:02.114: Setting up SSH tunnel via root.0.1
(virt-viewer:140910): virt-viewer-DEBUG: 10:26:02.116: New spice channel 0x55a4274665e0 SpiceMainChannel 0
(virt-viewer:140910): virt-viewer-DEBUG: 10:26:02.116: notebook show status 0x55a4275d62a0
(virt-viewer:140910): virt-viewer-DEBUG: 10:26:02.117: reconnect_poll: 0

(virt-viewer:140910): virt-viewer-DEBUG: 10:26:02.117: reconnect_poll: 0
(virt-viewer:140910): virt-viewer-DEBUG: 10:27:06.660: main channel: opened
(virt-viewer:140910): virt-viewer-DEBUG: 10:27:06.660: notebook show status 0x55a4275d62a0
(virt-viewer:140910): virt-viewer-DEBUG: 10:27:06.662: app is not in full screen
(virt-viewer:140910): virt-viewer-DEBUG: 10:27:06.664: app is not in full screen
(virt-viewer:140910): virt-viewer-DEBUG: 10:27:06.664: New spice channel 0x55a427450360 SpiceUsbredirChannel 1
(virt-viewer:140910): virt-viewer-DEBUG: 10:27:06.664: new usbredir channel
(virt-viewer:140910): virt-viewer-DEBUG: 10:27:06.664: Error operation forbidden: read only access prevents virDomainOpenGraphicsFD
(virt-viewer:140910): virt-viewer-DEBUG: 10:27:06.664: After open connection callback fd=-1
(virt-viewer:140910): virt-viewer-DEBUG: 10:27:06.669: New spice channel 0x55a427450670 SpiceUsbredirChannel 0
(virt-viewer:140910): virt-viewer-DEBUG: 10:27:06.669: new usbredir channel
(virt-viewer:140910): virt-viewer-DEBUG: 10:27:06.670: Error operation forbidden: read only access prevents virDomainOpenGraphicsFD
(virt-viewer:140910): virt-viewer-DEBUG: 10:27:06.670: After open connection callback fd=-1
(virt-viewer:140910): virt-viewer-DEBUG: 10:27:06.675: New spice channel 0x55a427450980 SpiceRecordChannel 0
(virt-viewer:140910): virt-viewer-DEBUG: 10:27:06.675: New spice channel 0x55a427450c90 SpicePlaybackChannel 0
(virt-viewer:140910): virt-viewer-DEBUG: 10:27:06.675: new audio channel
(virt-viewer:140910): virt-viewer-DEBUG: 10:27:06.708: Error operation forbidden: read only access prevents virDomainOpenGraphicsFD
(virt-viewer:140910): virt-viewer-DEBUG: 10:27:06.708: After open connection callback fd=-1
(virt-viewer:140910): virt-viewer-DEBUG: 10:27:06.710: Error operation forbidden: read only access prevents virDomainOpenGraphicsFD
(virt-viewer:140910): virt-viewer-DEBUG: 10:27:06.710: After open connection callback fd=-1
(virt-viewer:140910): virt-viewer-DEBUG: 10:27:06.715: New spice channel 0x55a42742e460 SpiceDisplayChannel 0
(virt-viewer:140910): virt-viewer-DEBUG: 10:27:06.715: Error operation forbidden: read only access prevents virDomainOpenGraphicsFD
(virt-viewer:140910): virt-viewer-DEBUG: 10:27:06.715: After open connection callback fd=-1
(virt-viewer:140910): virt-viewer-DEBUG: 10:27:06.717: New spice channel 0x55a427520460 SpiceCursorChannel 0
(virt-viewer:140910): virt-viewer-DEBUG: 10:27:06.717: New spice channel 0x55a427520760 SpiceInputsChannel 0
(virt-viewer:140910): virt-viewer-DEBUG: 10:27:06.717: new inputs channel
(virt-viewer:140910): virt-viewer-DEBUG: 10:27:06.715: After open connection callback fd=-1
(virt-viewer:140910): virt-viewer-DEBUG: 10:27:06.717: New spice channel 0x55a427520460 SpiceCursorChannel 0
(virt-viewer:140910): virt-viewer-DEBUG: 10:27:06.717: New spice channel 0x55a427520760 SpiceInputsChannel 0
(virt-viewer:140910): virt-viewer-DEBUG: 10:27:06.717: new inputs channel
ssh: connect to host 192.168.0.1 port 22: Connection timed out
ssh: connect to host 192.168.0.1 port 22: Connection timed out
ssh: connect to host 192.168.0.1 port 22: Connection timed out
ssh: connect to host 192.168.0.1 port 22: Connection timed out

(virt-viewer:140910): GSpice-WARNING **: 10:29:17.983: incomplete link header (-104/16)

(virt-viewer:140910): GSpice-WARNING **: 10:29:17.983: incomplete link header (-104/16)

(virt-viewer:140910): GSpice-WARNING **: 10:29:17.983: incomplete link header (-104/16)

(virt-viewer:140910): GSpice-WARNING **: 10:29:17.985: incomplete link header (-104/16)
============================

The XML of the VM is the following:

============================
virsh -c qemu+ssh://root.0.1/system dumpxml Windows_10_Pro
<domain type='kvm' id='22'>
  <name>Windows_10_Pro</name>
  <uuid>4a776095-d660-41e9-ab3c-744734bf3e4e</uuid>
  <title>Windows 10 Professional</title>
  <description>Windows 10 Professional - Fully registered</description>
  <metadata>
    <libosinfo:libosinfo xmlns:libosinfo="http://libosinfo.org/xmlns/libvirt/domain/1.0">
      <libosinfo:os id="http://microsoft.com/win/10"/>
    </libosinfo:libosinfo>
  </metadata>
  <memory unit='KiB'>6303744</memory>
  <currentMemory unit='KiB'>6303744</currentMemory>
  <vcpu placement='static'>2</vcpu>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-q35-rhel8.2.0'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
    </hyperv>
    <vmport state='off'/>
  </features>
  <cpu mode='custom' match='exact' check='full'>
    <model fallback='forbid'>EPYC-Rome</model>
    <vendor>AMD</vendor>
    <feature policy='require' name='x2apic'/>
    <feature policy='require' name='tsc-deadline'/>
    <feature policy='require' name='hypervisor'/>
    <feature policy='require' name='tsc_adjust'/>
    <feature policy='require' name='arch-capabilities'/>
    <feature policy='require' name='xsaves'/>
    <feature policy='require' name='cmp_legacy'/>
    <feature policy='require' name='virt-ssbd'/>
    <feature policy='disable' name='svme-addr-chk'/>
    <feature policy='require' name='rdctl-no'/>
    <feature policy='require' name='skip-l1dfl-vmentry'/>
    <feature policy='require' name='mds-no'/>
    <feature policy='require' name='pschange-mc-no'/>
    <feature policy='disable' name='clwb'/>
    <feature policy='disable' name='umip'/>
    <feature policy='disable' name='rdpid'/>
    <feature policy='disable' name='wbnoinvd'/>
    <feature policy='disable' name='amd-stibp'/>
    <feature policy='disable' name='svm'/>
    <feature policy='require' name='topoext'/>
    <feature policy='disable' name='npt'/>
    <feature policy='disable' name='nrip-save'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
    <timer name='hypervclock' present='yes'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <pm>
    <suspend-to-mem enabled='no'/>
    <suspend-to-disk enabled='no'/>
  </pm>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/extradisks/win10-pro.qcow2' index='1'/>
      <backingStore/>
      <target dev='sda' bus='sata'/>
      <alias name='sata0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <controller type='usb' index='0' model='qemu-xhci' ports='15'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </controller>
    <controller type='sata' index='0'>
      <alias name='ide'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'>
      <alias name='pcie.0'/>
    </controller>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='1' port='0x10'/>
      <alias name='pci.1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='2' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='2' port='0x11'/>
      <alias name='pci.2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0x12'/>
      <alias name='pci.3'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='4' port='0x13'/>
      <alias name='pci.4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
    </controller>
    <controller type='pci' index='5' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='5' port='0x14'/>
      <alias name='pci.5'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:da:c2:ac'/>
      <source network='default' portid='f4c8c2df-85b2-40e4-b70f-eef1ec5962de' bridge='server_virt'/>
      <target dev='vnet28'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/2'/>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/2'>
      <source path='/dev/pts/2'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <channel type='spicevmc'>
      <target type='virtio' name='com.redhat.spice.0' state='connected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='mouse' bus='ps2'>
      <alias name='input0'/>
    </input>
    <input type='keyboard' bus='ps2'>
      <alias name='input1'/>
    </input>
    <graphics type='spice' port='5902' autoport='yes' listen='127.0.0.1'>
      <listen type='address' address='127.0.0.1'/>
    </graphics>
    <sound model='ich9'>
      <alias name='sound0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1b' function='0x0'/>
    </sound>
    <video>
      <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
    </video>
    <redirdev bus='usb' type='spicevmc'>
      <alias name='redir0'/>
      <address type='usb' bus='0' port='1'/>
    </redirdev>
    <redirdev bus='usb' type='spicevmc'>
      <alias name='redir1'/>
      <address type='usb' bus='0' port='2'/>
    </redirdev>
    <memballoon model='virtio'>
      <stats period='5'/>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </memballoon>
  </devices>
  <seclabel type='dynamic' model='selinux' relabel='yes'>
    <label>system_u:system_r:svirt_t:s0:c585,c806</label>
    <imagelabel>system_u:object_r:svirt_image_t:s0:c585,c806</imagelabel>
  </seclabel>
  <seclabel type='dynamic' model='dac' relabel='yes'>
    <label>+107:+107</label>
    <imagelabel>+107:+107</imagelabel>
  </seclabel>
</domain>

============================


When I stop firewalld the virt-viewer output is the following: 

============================

virt-viewer --debug -c qemu+ssh://root.0.1/system Windows_10_Pro
(virt-viewer:141586): virt-viewer-DEBUG: 10:35:58.450: connecting ...
(virt-viewer:141586): virt-viewer-DEBUG: 10:35:58.450: Opening connection to libvirt with URI qemu+ssh://root.0.1/system
(virt-viewer:141586): virt-viewer-DEBUG: 10:35:58.620: initial connect
(virt-viewer:141586): virt-viewer-DEBUG: 10:35:58.620: notebook show status 0x5605b03c22b0
(virt-viewer:141586): virt-viewer-DEBUG: 10:35:58.621: virt_viewer_app_set_uuid_string: UUID changed to 4a776095-d660-41e9-ab3c-744734bf3e4e
(virt-viewer:141586): virt-viewer-DEBUG: 10:35:58.621: notebook show status 0x5605b03c22b0
(virt-viewer:141586): virt-viewer-DEBUG: 10:35:58.622: Guest Windows_10_Pro is running, determining display
(virt-viewer:141586): virt-viewer-DEBUG: 10:35:58.623: Set connect info: (null),(null),-1,-1,(null),(null),(null),0
(virt-viewer:141586): virt-viewer-DEBUG: 10:35:58.623: Guest Windows_10_Pro has a spice display
(virt-viewer:141586): virt-viewer-DEBUG: 10:35:58.631: Guest graphics address is 127.0.0.1:5902
(virt-viewer:141586): virt-viewer-DEBUG: 10:35:58.631: Set connect info: 192.168.0.1,127.0.0.1,5902,-1,ssh,(null),root,0
(virt-viewer:141586): virt-viewer-DEBUG: 10:35:58.631: Error operation forbidden: read only access prevents virDomainOpenGraphicsFD
(virt-viewer:141586): virt-viewer-DEBUG: 10:35:58.631: After open connection callback fd=-1
(virt-viewer:141586): virt-viewer-DEBUG: 10:35:58.631: Opening indirect TCP connection to display at 127.0.0.1:5902
(virt-viewer:141586): virt-viewer-DEBUG: 10:35:58.631: Setting up SSH tunnel via root.0.1
(virt-viewer:141586): virt-viewer-DEBUG: 10:35:58.632: New spice channel 0x5605b15370f0 SpiceMainChannel 0
(virt-viewer:141586): virt-viewer-DEBUG: 10:35:58.633: notebook show status 0x5605b03c22b0
(virt-viewer:141586): virt-viewer-DEBUG: 10:35:58.633: reconnect_poll: 0
(virt-viewer:141586): virt-viewer-DEBUG: 10:35:58.842: main channel: opened
(virt-viewer:141586): virt-viewer-DEBUG: 10:35:58.842: notebook show status 0x5605b03c22b0
(virt-viewer:141586): virt-viewer-DEBUG: 10:35:58.843: app is not in full screen
(virt-viewer:141586): virt-viewer-DEBUG: 10:35:58.856: app is not in full screen
(virt-viewer:141586): virt-viewer-DEBUG: 10:35:58.856: New spice channel 0x5605b044a300 SpiceUsbredirChannel 1
(virt-viewer:141586): virt-viewer-DEBUG: 10:35:58.856: new usbredir channel
(virt-viewer:141586): virt-viewer-DEBUG: 10:35:58.856: Error operation forbidden: read only access prevents virDomainOpenGraphicsFD
(virt-viewer:141586): virt-viewer-DEBUG: 10:35:58.856: After open connection callback fd=-1
(virt-viewer:141586): virt-viewer-DEBUG: 10:35:58.859: New spice channel 0x5605b044a610 SpiceUsbredirChannel 0
(virt-viewer:141586): virt-viewer-DEBUG: 10:35:58.859: new usbredir channel
(virt-viewer:141586): virt-viewer-DEBUG: 10:35:58.859: Error operation forbidden: read only access prevents virDomainOpenGraphicsFD
(virt-viewer:141586): virt-viewer-DEBUG: 10:35:58.859: After open connection callback fd=-1
(virt-viewer:141586): virt-viewer-DEBUG: 10:35:58.862: New spice channel 0x5605b044a920 SpiceRecordChannel 0
(virt-viewer:141586): virt-viewer-DEBUG: 10:35:58.862: Error operation forbidden: read only access prevents virDomainOpenGraphicsFD
(virt-viewer:141586): virt-viewer-DEBUG: 10:35:58.862: After open connection callback fd=-1
(virt-viewer:141586): virt-viewer-DEBUG: 10:35:58.865: New spice channel 0x5605b044ac30 SpicePlaybackChannel 0
(virt-viewer:141586): virt-viewer-DEBUG: 10:35:58.865: new audio channel
(virt-viewer:141586): virt-viewer-DEBUG: 10:35:58.865: Error operation forbidden: read only access prevents virDomainOpenGraphicsFD
(virt-viewer:141586): virt-viewer-DEBUG: 10:35:58.866: After open connection callback fd=-1
(virt-viewer:141586): virt-viewer-DEBUG: 10:35:58.872: New spice channel 0x5605b03f2440 SpiceDisplayChannel 0
(virt-viewer:141586): virt-viewer-DEBUG: 10:35:58.872: Error operation forbidden: read only access prevents virDomainOpenGraphicsFD
(virt-viewer:141586): virt-viewer-DEBUG: 10:35:58.872: After open connection callback fd=-1
(virt-viewer:141586): virt-viewer-DEBUG: 10:35:58.875: New spice channel 0x5605b0256440 SpiceCursorChannel 0
(virt-viewer:141586): virt-viewer-DEBUG: 10:35:58.876: New spice channel 0x5605b0256740 SpiceInputsChannel 0
(virt-viewer:141586): virt-viewer-DEBUG: 10:35:58.876: new inputs channel
(virt-viewer:141586): virt-viewer-DEBUG: 10:35:59.108: Error operation forbidden: read only access prevents virDomainOpenGraphicsFD
(virt-viewer:141586): virt-viewer-DEBUG: 10:35:59.108: After open connection callback fd=-1
(virt-viewer:141586): virt-viewer-DEBUG: 10:35:59.112: Error operation forbidden: read only access prevents virDomainOpenGraphicsFD
(virt-viewer:141586): virt-viewer-DEBUG: 10:35:59.112: After open connection callback fd=-1
(virt-viewer:141586): virt-viewer-DEBUG: 10:35:59.116: creating spice display (#:0)
(virt-viewer:141586): virt-viewer-DEBUG: 10:35:59.117: Insert display 0 0x5605b02896a0
(virt-viewer:141586): virt-viewer-DEBUG: 10:35:59.140: Found a window without a display, reusing for display #0
(virt-viewer:141586): virt-viewer-DEBUG: 10:35:59.142: notebook show display 0x5605b03c22b0

(virt-viewer:141586): GSpice-WARNING **: 10:35:59.150: Warning no automount-inhibiting implementation available
(virt-viewer:141586): virt-viewer-DEBUG: 10:35:59.178: Allocated 1024x768
(virt-viewer:141586): virt-viewer-DEBUG: 10:35:59.178: Child allocate 1024x768
============================

Comment 1 zhoujunqin 2021-11-17 13:10:05 UTC
For most test scenarios, we keep the firewalld service in its default state: running.

Test host_1:
Package version: 
libvirt-7.9.0-1.module+el8.6.0+13150+28339563.x86_64
virt-viewer-9.0-12.el8.x86_64
firewalld-0.9.3-7.el8.noarch

1. Check that the firewalld service is running.
# systemctl  status  firewalld 
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2021-11-17 07:30:56 EST; 23min ago
     Docs: man:firewalld(1)

2. Prepare a running SPICE (or VNC) VM.

# virsh dumpxml win11
...
    <graphics type='vnc' port='5900' autoport='yes' listen='127.0.0.1'>
      <listen type='address' address='127.0.0.1'/>
    </graphics>

...

For the VM's detailed XML file, please see the attachment.


Host_2: RHEL-8.5.0
# virt-viewer  -c qemu+ssh://$Host_1/system win11 --debug
(virt-viewer:139548): virt-viewer-DEBUG: 08:05:31.044: connecting ...
(virt-viewer:139548): virt-viewer-DEBUG: 08:05:31.044: Opening connection to libvirt with URI qemu+ssh://$Host_1/system
(virt-viewer:139548): virt-viewer-DEBUG: 08:05:31.395: initial connect
(virt-viewer:139548): virt-viewer-DEBUG: 08:05:31.395: notebook show status 0x55d6725f82e0
(virt-viewer:139548): virt-viewer-DEBUG: 08:05:31.396: virt_viewer_app_set_uuid_string: UUID changed to 8a084aff-592a-41bd-932e-aa78006efa4b
(virt-viewer:139548): virt-viewer-DEBUG: 08:05:31.396: notebook show status 0x55d6725f82e0
(virt-viewer:139548): virt-viewer-DEBUG: 08:05:31.397: Guest win11 is running, determining display
(virt-viewer:139548): virt-viewer-DEBUG: 08:05:31.398: Set connect info: (null),(null),-1,-1,(null),(null),(null),0
(virt-viewer:139548): virt-viewer-DEBUG: 08:05:31.398: Guest win11 has a vnc display
(virt-viewer:139548): virt-viewer-DEBUG: 08:05:31.479: Guest graphics address is 127.0.0.1:5900
(virt-viewer:139548): virt-viewer-DEBUG: 08:05:31.479: Set connect info: $Host_1,127.0.0.1,5900,-1,ssh,(null),(null),0
(virt-viewer:139548): virt-viewer-DEBUG: 08:05:31.479: Error operation forbidden: read only access prevents virDomainOpenGraphicsFD
(virt-viewer:139548): virt-viewer-DEBUG: 08:05:31.479: After open connection callback fd=-1
(virt-viewer:139548): virt-viewer-DEBUG: 08:05:31.479: Opening indirect TCP connection to display at 127.0.0.1:5900
(virt-viewer:139548): virt-viewer-DEBUG: 08:05:31.479: Setting up SSH tunnel via $Host_1
(virt-viewer:139548): virt-viewer-DEBUG: 08:05:31.480: notebook show status 0x55d6725f82e0
(virt-viewer:139548): virt-viewer-DEBUG: 08:05:31.482: reconnect_poll: 0
Warning: Permanently added '$Host_1' (ECDSA) to the list of known hosts.
(virt-viewer:139548): virt-viewer-DEBUG: 08:05:32.334: notebook show status 0x55d6725f82e0
(virt-viewer:139548): virt-viewer-DEBUG: 08:05:32.334: Insert display 0 0x55d67235b680
(virt-viewer:139548): virt-viewer-DEBUG: 08:05:32.334: notebook show status 0x55d6725f82e0
(virt-viewer:139548): virt-viewer-DEBUG: 08:05:32.337: desktop resize 800x600
(virt-viewer:139548): virt-viewer-DEBUG: 08:05:32.337: notebook show status 0x55d6725f82e0
(virt-viewer:139548): virt-viewer-DEBUG: 08:05:32.337: notebook show display 0x55d6725f82e0
(virt-viewer:139548): virt-viewer-DEBUG: 08:05:32.574: Allocated 800x600
(virt-viewer:139548): virt-viewer-DEBUG: 08:05:32.574: Child allocate 800x600
(virt-viewer:139548): virt-viewer-DEBUG: 08:05:36.878: Window closed
(virt-viewer:139548): virt-viewer-DEBUG: 08:05:36.878: close vnc=0x55d672634250
(virt-viewer:139548): virt-viewer-DEBUG: 08:05:36.878: Not removing main window 0 0x55d67233a0f0
(virt-viewer:139548): virt-viewer-DEBUG: 08:05:36.912: Disconnected
(virt-viewer:139548): virt-viewer-DEBUG: 08:05:36.912: close vnc=0x55d672634490
(virt-viewer:139548): virt-viewer-DEBUG: 08:05:36.956: notebook show status 0x55d6725f82e0
(virt-viewer:139548): virt-viewer-DEBUG: 08:05:36.956: Guest win11 display has disconnected, shutting down


Test result: the VM's console is shown as expected.
Additional info: I can also connect to this VM's console from remote RHEL-9.0.0 and Fedora-34 clients, thanks.

@uril  Please help investigate this bug, thanks for your help.

Comment 3 Jaroslav Suchanek 2021-11-18 14:47:03 UTC
Can you post firewalld logs from the hypervisor host?
1) Set LogDenied=unicast in /etc/firewalld/firewalld.conf.
2) Reload the config via the systemctl reload firewalld.service command.

Try to connect with the virt-viewer command again. Then grab the logs from journald via journalctl -u firewalld.

Thanks.
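A sketch of the LogDenied change suggested above. The config path is the RHEL 8 default; the sed one-liner is one assumed way to flip the setting, and a scratch copy is used when the real file isn't writable so the sketch is safe to run anywhere.

```shell
# Sketch: enable logging of denied unicast packets in firewalld.
# Falls back to a scratch copy when /etc/firewalld/firewalld.conf
# is absent or not writable (i.e. when not run as root on the host).
CONF=/etc/firewalld/firewalld.conf
if [ ! -w "$CONF" ]; then
    CONF=/tmp/firewalld.conf
    printf 'LogDenied=off\n' > "$CONF"
fi

# Switch LogDenied to unicast and confirm the change.
sed -i 's/^LogDenied=.*/LogDenied=unicast/' "$CONF"
grep '^LogDenied=' "$CONF"

# Pick up the new setting on a real host.
command -v systemctl >/dev/null 2>&1 && systemctl reload firewalld.service 2>/dev/null || true
# Note: the denied-packet entries land in the kernel log (journalctl -k),
# not in firewalld's own unit log (journalctl -u firewalld).
```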

Comment 4 Peter Tselios 2021-11-19 08:34:39 UTC
I already have LogDenied enabled.

grep -i logd /etc/firewalld/firewalld.conf
# LogDenied
LogDenied=unicast


However, firewalld doesn't record any denied packets!

============
Nov 19 10:19:02 server.home systemd[1]: Stopping firewalld - dynamic firewall daemon...
Nov 19 10:19:03 server.home systemd[1]: firewalld.service: Succeeded.
Nov 19 10:19:03 server.home systemd[1]: Stopped firewalld - dynamic firewall daemon.
Nov 19 10:19:03 server.home systemd[1]: Starting firewalld - dynamic firewall daemon...
Nov 19 10:19:03 server.home systemd[1]: Started firewalld - dynamic firewall daemon.
Nov 19 10:19:03 server.home firewalld[1921867]: WARNING: AllowZoneDrifting is enabled. This is considered an insecure configuration option. It will >
============

I tried with LogDenied=all too, but it made no difference. 

The (sanitized) configuration of libvirtd is the following:

===========================================
egrep -v '^($|#)' /etc/libvirt/libvirtd.conf
listen_tls = 1
key_file = "/etc/pki/tls/private/192.168.0.1_libvirt_key.pem"
cert_file = "/etc/pki/tls/certs/192.168.0.1_crt.pem"
ca_file = "/etc/pki/tls/certs/example.com-CA_chain.pem"
log_filters="1:qemu 1:libvirt 4:object 4:json 4:event 1:util"
tls_allowed_dn_list = ['C=GR,ST=Attica,L=Athens,O=My Organization,OU=IT,CN=wstation.example.com,EMAIL=hostmaster', 'C=GR,ST=Attica,L=Athens,O=My Organization,OU=IT,CN=wstation2.example.com,EMAIL=hostmaster']
===========================================

BTW, the running VM has the following XML for spice:

============================
    <channel type="spicevmc">
      <target type="virtio" name="com.redhat.spice.0"/>
      <address type="virtio-serial" controller="0" bus="0" port="1"/>
    </channel>
============================



I will update the server to 8.5 later today and report back in case I find something new.

Comment 5 Peter Tselios 2021-11-26 07:38:53 UTC
I tried again after the upgrade to RHEL 8.5.
It is still not working; same behavior.

Comment 6 Martin Kletzander 2022-01-21 08:29:21 UTC
What looks weird to me here is that the virt-viewer debug logs show a couple of connection timeouts, but not a complete failure to connect. That would suggest to me some weird traffic shaping or similar. Your LogDenied setup is correct; you just need to look at the kernel messages rather than the firewalld logs.

I would suggest the following:

 1) on the host start `journalctl -kfn0` on a terminal, it will follow the kernel logs

 2) then try to connect via virt-viewer remotely

 3) check the terminal from step 1 to see any denied packets

Please also attach the output of `nft list ruleset` and `iptables -vnL --line-numbers`.
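The steps above can be sketched as commands. The FINAL_REJECT prefix comes from the kernel log lines pasted later in this bug; the ruleset snapshots are guarded so hosts without the tools skip them.

```shell
# Diagnostics sketch for the suggestion above.
# 1) On the host, follow kernel messages in one terminal:
#      journalctl -kfn0
# 2) From the client, reproduce (HOST/VMNAME are placeholders):
#      virt-viewer --debug -c qemu+ssh://HOST/system VMNAME
# 3) Denied packets show up in the kernel log with a prefix such as
#    FINAL_REJECT; a quick filter on a sample line:
sample='Jan 27 20:18:58 server.home kernel: FINAL_REJECT: IN=server_virt OUT= PROTO=UDP DPT=7437'
echo "$sample" | grep -o 'FINAL_REJECT'

# Snapshot the rulesets for attachment to the bug (skipped where the
# tools are unavailable).
command -v nft >/dev/null 2>&1 && nft list ruleset > nft-ruleset.txt || true
command -v iptables >/dev/null 2>&1 && iptables -vnL --line-numbers > iptables-rules.txt || true
```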

Comment 7 Peter Tselios 2022-01-27 18:19:12 UTC
Created attachment 1857175 [details]
iptable output

Comment 8 Peter Tselios 2022-01-27 18:19:58 UTC
Created attachment 1857176 [details]
nft list

Comment 9 Peter Tselios 2022-01-27 18:41:58 UTC
It gets even stranger.
Attached are the two outputs requested.
Regarding the dropped packets, what I see is a huge number of the following:

FINAL_REJECT: IN=server_virt OUT= MAC=ff:ff:ff:ff:ff:ff:XXXXXXX SRC=192.168.0.253 DST=255.255.255.255 LEN=201 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=UDP SPT=49309 DPT=7437 LEN=181 
Jan 27 20:18:58 server.home kernel: FINAL_REJECT: IN=server_virt OUT= MAC=01:00:5e:00:ff:ff:ff:ff:ff:ff:XXXXXXXff:ff:ff:ff:ff:ff:XXXXXXXff:ff:ff:ff:ff:ff:XXXXXXX SRC=192.168.0.254 DST=224.0.0.1 LEN=36 TOS=0x00 PREC=0x00 TTL=1 ID=0 DF PROTO=2


The .253 and .254 addresses are a WiFi AP and the home router.
Apart from those, there are no other dropped packets.

Now, eventually, after 6 minutes on the first client and 9 minutes on the second, I got remote-viewer working. Every subsequent request for other VMs resulted in (more or less) the same delay, but it wasn't always possible to use the VM.

Comment 10 Peter Tselios 2022-01-29 13:31:26 UTC
Some more information. 
I have access to 3 different KVM installations: RHEL 8, Fedora 34 and openSUSE Leap 15.3.

Versions are the following: 

 * Leap 15.3: libvirt-daemon-7.1.0-6.11.1.x86_64
 * F34: libvirt-daemon-7.0.0-8.fc34.x86_64
 * RHEL 8: libvirt-6.0.0-37.module+el8.5.0+12162+40884dd2.x86_64

Now, Leap and RHEL 8 have identical configurations, and both are set up with bridged networking, while F34 uses NAT.
I connect to the graphical console of either the Fedora or the Leap host instantly. So the only system with this behavior is RHEL. Is there any possibility that the nft tables are wrongly configured?
(I still don't know how to manage them; everything is managed via firewalld.)

Comment 11 Jaroslav Suchanek 2022-02-04 14:24:25 UTC
Laine, do you see anything suspicious?

Comment 12 Laine Stump 2022-02-04 14:52:06 UTC
I've been watching this BZ, but haven't seen anything that makes sense to me, and I can't think of anything that would cause this behavior.

A couple of questions:

1) does virt-viewer/remote-viewer also fail if it's run directly on the host? (this would ascertain whether the problem is with the incoming ssh connection to 192.168.0.1, or with the (multiple) spice connections from 127.0.0.1 to 127.0.0.1)

2) Can you maybe do a wireshark grab of all traffic to 127.0.0.1 and 192.168.0.1 during the connection attempt both with and without firewalld - possibly we can learn something by looking at the traffic side-by-side.

3) when you say "eventually after 6/9 minutes I got remote-viewer (I guess you mean virt-viewer) working" do you mean that you enter the command, nothing happens (and you do nothing) for 6-9 minutes, and then the connection "magically" starts working? Or were you making attempts to do things during this 6-9 minutes? A 1 minute (or was it 2) delay is a common symptom, and usually has to do with DNS requests timing out, but this is the first I've heard of a 6-9 minute delay...

Comment 13 Peter Tselios 2022-02-04 21:31:47 UTC
Created attachment 1859156 [details]
Combined tcpdump from the bridge's interface and lo

Comment 14 Peter Tselios 2022-02-04 21:32:18 UTC
Created attachment 1859157 [details]
Combined tcpdump from the bridge's interface and lo

Comment 15 Peter Tselios 2022-02-04 21:33:43 UTC
Created attachment 1859158 [details]
tcpdump from the lo interface

Comment 16 Peter Tselios 2022-02-04 21:34:32 UTC
Created attachment 1859159 [details]
tcpdump from lo (firewalld stopped)

Comment 17 Peter Tselios 2022-02-04 21:48:14 UTC
Personally, I'm starting to believe that the issue may lie in the firewalld/bridge interaction and is not strictly related to libvirtd. 

Anyway. 
Some timings: 
On average, it takes about 4 minutes to open a virtual console from a remote machine (using qemu+ssh).

Running virt-viewer on the server (having connected by ssh -X), opens the virtual console instantly, faster than I have seen in my life!

Attached are the tcpdumps requested. I hope that you can find a trend! 

Regarding the tests: 
I have 2 clients. I tried to connect to 5 VMs. The tests performed first on F34, I closed all the consoles and then I moved to the Leap. 
When I closed all consoles on the Leap, I used the ssh -X on it first and then I moved to F34 to repeat the test. 
Then I stopped the firewall and started the tests on F34 again. 

tcpdump was run with the following flags: 

tcpdump -i lo -W 1 -C 30 -S 0 -vvv -w 
tcpdump -i enp8s0 host 192.168.0.222 and port 22 -s0 -nn -w 


Regarding the "eventually":
Either I run virt-viewer, or I open the console via virt-manager. 
Then, after the "connecting..." message disappears, I click on the black screen, BUT neither keyboard nor mouse is grabbed until I can see the console. As I said, this takes around 4 minutes per machine. 

If I try to connect to more consoles at once, it takes much more time. 
If I connect and then stop the firewall on the server, it still needs time. 
If I connect to the console without firewalld and then start firewalld, I have no problems.

Comment 18 Laine Stump 2022-02-05 21:08:02 UTC
So from attachment 1859156 [details] (combined pcap from the host's bridge and lo with firewalld active) I see one incoming ssh from 192.168.0.222 immediately succeed, followed by another that repeatedly sends SYNs with no response for 64 seconds, then finally a SYN/ACK from 192.168.0.1, after which a spice connection is opened on lo (from 127.0.0.1 to 127.0.0.1). Immediately, 3 more ssh sessions are started from 192.168.0.1, with no responses to any of their SYN packets until second 128 (i.e. another 64 seconds), when *just one* of those sessions gets a SYN/ACK, then a 2nd spice channel is opened on lo, etc.

On attachment 1859157 [details] (same thing, but with firewalld *not* active) the 2nd ssh immediately gets a SYN/ACK, the first spice connection is immediately opened, followed by the subsequent ssh sessions from 192.168.0.222 starting up and corresponding spice sessions on lo starting up.

So it really does look like Martin's guess may be correct - some sort of rate limiting that is holding incoming ssh sessions to 1 per 64 seconds. I didn't even realize anything like that was supported by firewalld, but it is only happening when firewalld is active (perhaps it's some config knob somewhere else that is ineffective until firewalld puts in its basic set of rules?)
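A few host-side knobs that could plausibly produce this kind of per-64-second throttling can be checked directly. This is only a diagnostic sketch; none of these is confirmed as the culprit, and every step is guarded so it is harmless on any host:

```shell
# Hypothetical culprits for "one SYN/ACK per ~64s": SYN-flood protection,
# conntrack table pressure, or an explicit rate limit in the active ruleset.
if command -v sysctl >/dev/null 2>&1; then
    sysctl net.ipv4.tcp_syncookies 2>/dev/null            # SYN-flood protection state
    sysctl net.netfilter.nf_conntrack_count net.netfilter.nf_conntrack_max 2>/dev/null
fi
if command -v nft >/dev/null 2>&1; then
    nft list ruleset 2>/dev/null | grep -iE 'limit rate|ratelimit' \
        || echo "no explicit rate limits found in the ruleset"
fi
true    # keep exit status clean for scripting
```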

Eric - do you have any idea what knob (in firewalld or elsewhere) might be causing this kind of behavior?

Comment 19 Laine Stump 2022-02-05 21:10:45 UTC
Another question for Peter - what happens if you try to open several plain ssh sessions in immediate succession? Do they all start immediately, or are sessions 2-n delayed?

Comment 20 Peter Tselios 2022-02-06 04:47:47 UTC
Yes, it takes forever: "If I try to connect to more consoles at once, it takes much more time." Sorry it wasn't clear, but this test was about opening all 5 VMs' consoles one after the other.

Comment 21 Laine Stump 2022-02-06 15:55:22 UTC
Just to be sure I'm understanding - it isn't just remote virt-viewer connections to a VM on the host that open very slowly, it is also multiple simple ssh connections from the remote machine to the virt host? If that's the case, then anything about virtualization can be removed from the troubleshooting - it's purely a host networking problem.

Comment 22 Peter Tselios 2022-02-07 07:07:36 UTC
I haven't noticed any other issue (so far!). 
I have an ansible playbook that creates VMs, and yesterday I redeployed my whole IdM lab (16 machines) without anything noticeable.

Comment 23 Eric Garver 2022-02-07 13:52:31 UTC
(In reply to Peter Tselios from comment #0)
> Steps to Reproduce:
> 1. Create a VM in a RHEL 8.3/8.4 server
> 2. Open from another machine virt-manager or virt-viewer and try to access
> the virtual console of this machine. The virtual console stays black.
> (with-firewall attachment)
> 2. Close the console
> 3. Stop firewalld on the RHEL 8 server
> 4. Open the virtual console again. Now you will see the contents. 

Instead of stopping firewalld, can you try flushing conntrack entries?

  # conntrack -F

Comment 24 Peter Tselios 2022-02-07 17:07:21 UTC
It didn't change anything; actually, now it's even worse. 
Even if I stop firewalld, I don't see the VMs' consoles. 

Just an update on the software: 

* Red Hat Enterprise Linux release 8.5 (Ootpa)
* libvirt-6.0.0-37.1.module+el8.5.0+13858+39fdc467.x86_64
* libvirt-daemon-driver-network-6.0.0-37.1.module+el8.5.0+13858+39fdc467.x86_64
* libvirt-daemon-kvm-6.0.0-37.1.module+el8.5.0+13858+39fdc467.x86_64
* firewalld-0.9.3-7.el8.noarch
* kernel-4.18.0-348.12.2.el8_5.x86_64

I wonder if the changes in RHEL 8 affect the sysctl changes we should make, as indicated here: https://wiki.libvirt.org/page/Net.bridge.bridge-nf-call_and_sysctl.conf

Comment 25 Laine Stump 2022-02-07 17:40:06 UTC
(In reply to Peter Tselios from comment #24)
> I wonder if the changes in RHEL 8, affect the changes that we should do in
> sysctl as indicated here:
> https://wiki.libvirt.org/page/Net.bridge.bridge-nf-call_and_sysctl.conf


No, there is no relationship. That page is talking about an issue that is "very old" (1), and anyway only affects *direct* connections between external hosts and guests (2).


1) The page you link was written, by me, in 2014 and hasn't been modified since, as the libvirt wiki has been deprecated and people running anything newer than RHEL6 (or maybe RHEL7) in general don't need to be concerned about that topic anyway as the situation has changed drastically.

In the far distant past (the days when that page was written), all traffic traversing a Linux host bridge that wasn't either originating or terminating on the host's own port of the bridge (i.e. traffic that didn't originate or terminate in the host OS itself) was "reject by default" - the settings outlined on the aforementioned page were part of the bridge driver itself, and defaulted to 1 (i.e. reject by default). Sometime later the reject/pass action associated with those settings was moved to a separate module, called br_netfilter (which was still loaded by default, this was later RHEL6, or maybe RHEL7 days), and then even later (I'm pretty sure sometime during RHEL7 or at most early RHEL8) the br_netfilter module was changed to be *not* loaded by default (but still might be autoloaded by some actions). Unless the output of "lsmod | grep br_netfilter" shows that the br_netfilter module is loaded, you can 100% ignore that page.
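The "lsmod" check above can be sketched as a one-shot script (a sketch assuming a stock RHEL 8 host; the sysctl names are the ones from the linked wiki page):

```shell
# If br_netfilter is not loaded, the bridge-nf-call sysctls do not even
# exist, and the wiki page above can be ignored entirely.
if lsmod 2>/dev/null | grep -q br_netfilter; then
    sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
else
    echo "br_netfilter not loaded - bridge-nf-call sysctls are irrelevant here"
fi
```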

2) In any case, the TCP sessions associated with a virt-viewer session are *not* going directly from an external host to the network interface of the guest. Instead, there is A) an SSH session from the external host to the libvirtd process on the virt host (which might traverse a bridge device if your virt host's ethernet is attached to a bridge, but again - since this connection is terminated in the host OS, the bridge-nf-call setting is irrelevant), and B) a separate TCP session on port 590x from localhost (libvirtd) to localhost (spice server running in the qemu-kvm process) - again, the bridge-nf-call setting is irrelevant, because the traffic doesn't traverse the bridge at all.

TL;DR - this is a red herring for you (and almost anyone else in 2022 who might come across that page).

Comment 26 Peter Tselios 2022-02-07 18:42:57 UTC
Many thanks. I was looking for more information, and it looks like most of the pages are awfully outdated. 
So, should we consider this a symptom of a network problem in RHEL 8?

Comment 27 Eric Garver 2022-02-07 19:35:44 UTC
(In reply to Peter Tselios from comment #26)
> Many thanks, I was looking for more information and it looks like most of
> the pages are awfully outdated. 
> So, should we consider this a symptom of a network problem in RHEL 8?

Possibly. Can you check the kernel log (dmesg)? There may be clues.

Comment 28 Peter Tselios 2022-02-07 21:14:46 UTC
Nothing...
I have a lot of messages like this:

FINAL_REJECT: IN=server_virt OUT= MAC=ff:ff:ff:ff:ff:ff:14:cc:cc:cc:cc:cc:cc:cc SRC=.... DST=255.255.255.255 LEN=201 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=UDP SPT=47485 DPT=7437 LEN=181

but they are coming from the second WiFi router at home; I had seen those messages on RHEL 7 as well. 
After the second reboot of the day, I still cannot see the VM's console even after a long wait (like one hour!).

Eventually I rebooted both my server and the laptop. 
I can connect to the server with ssh from the laptop's terminal, or I can use virsh -c qemu+tls, but I cannot connect via virt-manager.

A second reboot of the laptop fixed those issues, and now I can see the VM's console after the usual delay of a few minutes. 

If you don't have any other ideas, my next step will be to find the proper way to use virt-viewer with qemu+tls. I already have qemu+tls in my setup and use it for my simple "virsh" commands.

Comment 29 Eric Garver 2022-02-08 14:05:01 UTC
Is there a router/switch between the client and server? If so, have you power cycled and/or checked it for issues?

If you can reproduce (virt-manager doesn't connect from client), can you show conntrack entries on the client and server?

  server # conntrack -L -s <client_ip>
  client # conntrack -L -d <server_ip>

Comment 30 Peter Tselios 2022-02-08 16:34:10 UTC
The VM console is not presented immediately even if I connect the client directly to the server, or via WiFi. 
I will check the conntrack command, though, and report back.

Comment 31 Jaroslav Suchanek 2023-02-02 16:53:33 UTC
Hi, it's been a while since the last response. Is there anything else we can investigate here? Is it possible to catch the output of the conntrack commands requested in comment 29? Thanks.

Comment 32 Peter Tselios 2023-02-02 23:29:18 UTC
This week I am attending RHTE, so definitely later next week.
I had run the commands, but I don't know why the results were not posted; I will update soon. 

Please note that in the meantime I have upgraded to 8.7 and replaced all the network equipment, but the problem is still there.

Comment 33 Peter Tselios 2023-02-02 23:42:11 UTC
(In reply to Laine Stump from comment #21)
> Just to be sure I'm understanding - it isn't just remote virt-viewer
> connections to a VM on the host that open very slowly, it is also multiple
> simple ssh connections from the remote machine to the virt host? If that's
> the case, then anything about virtualization can be removed from the
> troubleshooting - it's purely a host networking problem.

I have an update on this. 
I have created a set of Ansible roles that create/destroy VMs. Usually it is 1 or 2, but lately I had to create about 16 VMs for a project. And I noticed that, most of the time, I can only use 2 ssh connections in order not to have the playbooks fail.

So, I strongly believe that it's not related to the virtualization layer; this is more on the networking side of things. 

So I will be able to provide more information if you request it next week.

Comment 34 Peter Tselios 2023-02-06 18:34:31 UTC
(In reply to Eric Garver from comment #29)
> Is there a router/switch between the client and server? If so, have you
> power cycled and/or checked it for issues?
> 
> If you can reproduce (virt-manager doesn't connect from client), can you
> show conntrack entries on the client and server?
> 
>   server # conntrack -L -s <client_ip>
>   client # conntrack -L -d <server_ip>

conntrack -L -d 192.168.0.1

udp      17 19 src=192.168.0.222 dst=192.168.0.1 sport=57020 dport=53 src=192.168.0.1 dst=192.168.0.222 sport=53 dport=57020 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 107 SYN_SENT src=192.168.0.222 dst=192.168.0.1 sport=57118 dport=22 [UNREPLIED] src=192.168.0.1 dst=192.168.0.222 sport=22 dport=57118 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 431994 ESTABLISHED src=192.168.0.222 dst=192.168.0.1 sport=40046 dport=22 src=192.168.0.1 dst=192.168.0.222 sport=22 dport=40046 [ASSURED] secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 431999 ESTABLISHED src=192.168.0.222 dst=192.168.0.1 sport=36700 dport=22 src=192.168.0.1 dst=192.168.0.222 sport=22 dport=36700 [ASSURED] secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 425454 ESTABLISHED src=192.168.0.222 dst=192.168.0.1 sport=35074 dport=22 src=192.168.0.1 dst=192.168.0.222 sport=22 dport=35074 [ASSURED] secctx=system_u:object_r:unlabeled_t:s0 use=1
udp      17 2 src=192.168.0.222 dst=192.168.0.1 sport=48051 dport=53 src=192.168.0.1 dst=192.168.0.222 sport=53 dport=48051 secctx=system_u:object_r:unlabeled_t:s0 use=1
udp      17 19 src=192.168.0.222 dst=192.168.0.1 sport=37045 dport=53 src=192.168.0.1 dst=192.168.0.222 sport=53 dport=37045 secctx=system_u:object_r:unlabeled_t:s0 use=1
conntrack v1.4.6 (conntrack-tools): 7 flow entries have been shown.



===========================
conntrack -L -s 192.168.0.222
udp      17 8 src=192.168.0.222 dst=192.168.0.1 sport=48051 dport=53 src=192.168.0.1 dst=192.168.0.222 sport=53 dport=48051 mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 425459 ESTABLISHED src=192.168.0.222 dst=192.168.0.1 sport=35074 dport=22 src=192.168.0.1 dst=192.168.0.222 sport=22 dport=35074 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 431999 ESTABLISHED src=192.168.0.222 dst=192.168.0.1 sport=36700 dport=22 src=192.168.0.1 dst=192.168.0.222 sport=22 dport=36700 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
udp      17 3 src=192.168.0.222 dst=192.168.0.1 sport=52667 dport=53 src=192.168.0.1 dst=192.168.0.222 sport=53 dport=52667 mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
udp      17 25 src=192.168.0.222 dst=192.168.0.1 sport=37045 dport=53 src=192.168.0.1 dst=192.168.0.222 sport=53 dport=37045 mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
udp      17 25 src=192.168.0.222 dst=192.168.0.1 sport=57020 dport=53 src=192.168.0.1 dst=192.168.0.222 sport=53 dport=57020 mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 300 ESTABLISHED src=192.168.0.222 dst=192.168.0.1 sport=40046 dport=22 src=192.168.0.1 dst=192.168.0.222 sport=22 dport=40046 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
udp      17 3 src=192.168.0.222 dst=192.168.0.1 sport=51457 dport=53 src=192.168.0.1 dst=192.168.0.222 sport=53 dport=51457 mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
conntrack v1.4.4 (conntrack-tools): 8 flow entries have been shown.


=====================

Also, I used the "conntrack -F" on both client and server, instead of stopping the firewall. There was no change. 
Only when the firewall is stopped I can connect to the remote server. 

Also, here is what I see when I try to deploy 5 VMs simultaneously: 


===================================================================
TASK [vm_purge : Remove the VM directory] *******************************************************************************************************
changed: [idmc72.example.com -> server.example.com(192.168.0.1)]
fatal: [idmc73.example.com -> server.example.com]: UNREACHABLE! => changed=false 
  msg: 'Failed to connect to the host via ssh: ssh: connect to host 192.168.0.1 port 22: Connection timed out'
  unreachable: true
fatal: [idmc74.example.com -> server.example.com]: UNREACHABLE! => changed=false 
  msg: 'Failed to connect to the host via ssh: ssh: connect to host 192.168.0.1 port 22: Connection timed out'
  unreachable: true
===================================================================

Every VM I create is under its own directory under the /var/lib/libvirt/images. So, it would be /var/lib/libvirt/images/idmc72.example.com
The specific task connects to the remote machine via SSH (obviously since this is Ansible) and tries to remove the directories. But as you can see it fails with timeout. 

When that thing happen, I run the conntrack on the server and here are the results: 

============================================================
conntrack -L -s 192.168.0.222
tcp      6 91 TIME_WAIT src=192.168.0.222 dst=192.168.0.1 sport=45836 dport=16514 src=192.168.0.1 dst=192.168.0.222 sport=16514 dport=45836 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 91 TIME_WAIT src=192.168.0.222 dst=192.168.0.1 sport=59918 dport=16514 src=192.168.0.1 dst=192.168.0.222 sport=16514 dport=59918 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 93 TIME_WAIT src=192.168.0.222 dst=192.168.0.1 sport=50690 dport=53 src=192.168.0.1 dst=192.168.0.222 sport=53 dport=50690 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 93 TIME_WAIT src=192.168.0.222 dst=192.168.0.1 sport=50712 dport=53 src=192.168.0.1 dst=192.168.0.222 sport=53 dport=50712 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 92 TIME_WAIT src=192.168.0.222 dst=192.168.0.1 sport=59946 dport=16514 src=192.168.0.1 dst=192.168.0.222 sport=16514 dport=59946 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 94 TIME_WAIT src=192.168.0.222 dst=192.168.0.1 sport=50772 dport=53 src=192.168.0.1 dst=192.168.0.222 sport=53 dport=50772 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 93 TIME_WAIT src=192.168.0.222 dst=192.168.0.1 sport=50762 dport=53 src=192.168.0.1 dst=192.168.0.222 sport=53 dport=50762 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 92 TIME_WAIT src=192.168.0.222 dst=192.168.0.1 sport=59954 dport=16514 src=192.168.0.1 dst=192.168.0.222 sport=16514 dport=59954 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 91 TIME_WAIT src=192.168.0.222 dst=192.168.0.1 sport=59930 dport=16514 src=192.168.0.1 dst=192.168.0.222 sport=16514 dport=59930 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 93 TIME_WAIT src=192.168.0.222 dst=192.168.0.1 sport=50716 dport=53 src=192.168.0.1 dst=192.168.0.222 sport=53 dport=50716 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 91 TIME_WAIT src=192.168.0.222 dst=192.168.0.1 sport=59932 dport=16514 src=192.168.0.1 dst=192.168.0.222 sport=16514 dport=59932 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 93 TIME_WAIT src=192.168.0.222 dst=192.168.0.1 sport=50700 dport=53 src=192.168.0.1 dst=192.168.0.222 sport=53 dport=50700 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 91 TIME_WAIT src=192.168.0.222 dst=192.168.0.1 sport=45822 dport=16514 src=192.168.0.1 dst=192.168.0.222 sport=16514 dport=45822 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 89 TIME_WAIT src=192.168.0.222 dst=192.168.0.1 sport=45752 dport=16514 src=192.168.0.1 dst=192.168.0.222 sport=16514 dport=45752 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 94 TIME_WAIT src=192.168.0.222 dst=192.168.0.1 sport=50770 dport=53 src=192.168.0.1 dst=192.168.0.222 sport=53 dport=50770 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 89 TIME_WAIT src=192.168.0.222 dst=192.168.0.1 sport=45722 dport=16514 src=192.168.0.1 dst=192.168.0.222 sport=16514 dport=45722 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=2
tcp      6 90 TIME_WAIT src=192.168.0.222 dst=192.168.0.1 sport=45794 dport=16514 src=192.168.0.1 dst=192.168.0.222 sport=16514 dport=45794 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 93 TIME_WAIT src=192.168.0.222 dst=192.168.0.1 sport=50710 dport=53 src=192.168.0.1 dst=192.168.0.222 sport=53 dport=50710 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 89 TIME_WAIT src=192.168.0.222 dst=192.168.0.1 sport=45736 dport=16514 src=192.168.0.1 dst=192.168.0.222 sport=16514 dport=45736 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 90 TIME_WAIT src=192.168.0.222 dst=192.168.0.1 sport=45780 dport=16514 src=192.168.0.1 dst=192.168.0.222 sport=16514 dport=45780 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 92 TIME_WAIT src=192.168.0.222 dst=192.168.0.1 sport=59958 dport=16514 src=192.168.0.1 dst=192.168.0.222 sport=16514 dport=59958 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 91 TIME_WAIT src=192.168.0.222 dst=192.168.0.1 sport=45842 dport=16514 src=192.168.0.1 dst=192.168.0.222 sport=16514 dport=45842 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 90 TIME_WAIT src=192.168.0.222 dst=192.168.0.1 sport=45776 dport=16514 src=192.168.0.1 dst=192.168.0.222 sport=16514 dport=45776 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 431974 ESTABLISHED src=192.168.0.222 dst=192.168.0.1 sport=41484 dport=22 src=192.168.0.1 dst=192.168.0.222 sport=22 dport=41484 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 91 TIME_WAIT src=192.168.0.222 dst=192.168.0.1 sport=45808 dport=16514 src=192.168.0.1 dst=192.168.0.222 sport=16514 dport=45808 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
udp      17 19 src=192.168.0.222 dst=192.168.0.1 sport=54785 dport=53 src=192.168.0.1 dst=192.168.0.222 sport=53 dport=54785 mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 93 TIME_WAIT src=192.168.0.222 dst=192.168.0.1 sport=50732 dport=53 src=192.168.0.1 dst=192.168.0.222 sport=53 dport=50732 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 93 TIME_WAIT src=192.168.0.222 dst=192.168.0.1 sport=50768 dport=53 src=192.168.0.1 dst=192.168.0.222 sport=53 dport=50768 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 90 TIME_WAIT src=192.168.0.222 dst=192.168.0.1 sport=45764 dport=16514 src=192.168.0.1 dst=192.168.0.222 sport=16514 dport=45764 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 93 TIME_WAIT src=192.168.0.222 dst=192.168.0.1 sport=50746 dport=53 src=192.168.0.1 dst=192.168.0.222 sport=53 dport=50746 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
udp      17 19 src=192.168.0.222 dst=192.168.0.1 sport=47924 dport=53 src=192.168.0.1 dst=192.168.0.222 sport=53 dport=47924 mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 93 TIME_WAIT src=192.168.0.222 dst=192.168.0.1 sport=50682 dport=53 src=192.168.0.1 dst=192.168.0.222 sport=53 dport=50682 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 89 TIME_WAIT src=192.168.0.222 dst=192.168.0.1 sport=45728 dport=16514 src=192.168.0.1 dst=192.168.0.222 sport=16514 dport=45728 [ASSURED] mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
conntrack v1.4.4 (conntrack-tools): 33 flow entries have been shown.
==============================================================

I re-run it and here is the conntrack on the client: 

======================
cat /proc/sys/net/nf_conntrack_max
262144
[root@lenovo ~]# conntrack -L -d 192.168.0.1
udp      17 19 src=192.168.0.222 dst=192.168.0.1 sport=57020 dport=53 src=192.168.0.1 dst=192.168.0.222 sport=53 dport=57020 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 107 SYN_SENT src=192.168.0.222 dst=192.168.0.1 sport=57118 dport=22 [UNREPLIED] src=192.168.0.1 dst=192.168.0.222 sport=22 dport=57118 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 431994 ESTABLISHED src=192.168.0.222 dst=192.168.0.1 sport=40046 dport=22 src=192.168.0.1 dst=192.168.0.222 sport=22 dport=40046 [ASSURED] secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 431999 ESTABLISHED src=192.168.0.222 dst=192.168.0.1 sport=36700 dport=22 src=192.168.0.1 dst=192.168.0.222 sport=22 dport=36700 [ASSURED] secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 425454 ESTABLISHED src=192.168.0.222 dst=192.168.0.1 sport=35074 dport=22 src=192.168.0.1 dst=192.168.0.222 sport=22 dport=35074 [ASSURED] secctx=system_u:object_r:unlabeled_t:s0 use=1
udp      17 2 src=192.168.0.222 dst=192.168.0.1 sport=48051 dport=53 src=192.168.0.1 dst=192.168.0.222 sport=53 dport=48051 secctx=system_u:object_r:unlabeled_t:s0 use=1
udp      17 19 src=192.168.0.222 dst=192.168.0.1 sport=37045 dport=53 src=192.168.0.1 dst=192.168.0.222 sport=53 dport=37045 secctx=system_u:object_r:unlabeled_t:s0 use=1
conntrack v1.4.6 (conntrack-tools): 7 flow entries have been shown.
[root@lenovo ~]# conntrack -L -d 192.168.0.1
udp      17 29 src=192.168.0.222 dst=192.168.0.1 sport=38535 dport=53 src=192.168.0.1 dst=192.168.0.222 sport=53 dport=38535 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 92 TIME_WAIT src=192.168.0.222 dst=192.168.0.1 sport=50756 dport=16514 src=192.168.0.1 dst=192.168.0.222 sport=16514 dport=50756 [ASSURED] secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 92 TIME_WAIT src=192.168.0.222 dst=192.168.0.1 sport=50774 dport=16514 src=192.168.0.1 dst=192.168.0.222 sport=16514 dport=50774 [ASSURED] secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 96 TIME_WAIT src=192.168.0.222 dst=192.168.0.1 sport=50336 dport=53 src=192.168.0.1 dst=192.168.0.222 sport=53 dport=50336 [ASSURED] secctx=system_u:object_r:unlabeled_t:s0 use=1
udp      17 29 src=192.168.0.222 dst=192.168.0.1 sport=37583 dport=53 src=192.168.0.1 dst=192.168.0.222 sport=53 dport=37583 secctx=system_u:object_r:unlabeled_t:s0 use=1
udp      17 12 src=192.168.0.222 dst=192.168.0.1 sport=44358 dport=53 src=192.168.0.1 dst=192.168.0.222 sport=53 dport=44358 secctx=system_u:object_r:unlabeled_t:s0 use=1
udp      17 27 src=192.168.0.222 dst=192.168.0.1 sport=49603 dport=53 src=192.168.0.1 dst=192.168.0.222 sport=53 dport=49603 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 93 TIME_WAIT src=192.168.0.222 dst=192.168.0.1 sport=60938 dport=16514 src=192.168.0.1 dst=192.168.0.222 sport=16514 dport=60938 [ASSURED] secctx=system_u:object_r:unlabeled_t:s0 use=1
udp      17 21 src=192.168.0.222 dst=192.168.0.1 sport=53159 dport=53 src=192.168.0.1 dst=192.168.0.222 sport=53 dport=53159 secctx=system_u:object_r:unlabeled_t:s0 use=1
udp      17 21 src=192.168.0.222 dst=192.168.0.1 sport=50928 dport=53 src=192.168.0.1 dst=192.168.0.222 sport=53 dport=50928 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 93 TIME_WAIT src=192.168.0.222 dst=192.168.0.1 sport=60952 dport=16514 src=192.168.0.1 dst=192.168.0.222 sport=16514 dport=60952 [ASSURED] secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 94 TIME_WAIT src=192.168.0.222 dst=192.168.0.1 sport=60972 dport=16514 src=192.168.0.1 dst=192.168.0.222 sport=16514 dport=60972 [ASSURED] secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 431750 ESTABLISHED src=192.168.0.222 dst=192.168.0.1 sport=40046 dport=22 src=192.168.0.1 dst=192.168.0.222 sport=22 dport=40046 [ASSURED] secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 95 TIME_WAIT src=192.168.0.222 dst=192.168.0.1 sport=50276 dport=53 src=192.168.0.1 dst=192.168.0.222 sport=53 dport=50276 [ASSURED] secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 94 TIME_WAIT src=192.168.0.222 dst=192.168.0.1 sport=32772 dport=16514 src=192.168.0.1 dst=192.168.0.222 sport=16514 dport=32772 [ASSURED] secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 431998 ESTABLISHED src=192.168.0.222 dst=192.168.0.1 sport=36700 dport=22 src=192.168.0.1 dst=192.168.0.222 sport=22 dport=36700 [ASSURED] secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 96 SYN_SENT src=192.168.0.222 dst=192.168.0.1 sport=48572 dport=22 [UNREPLIED] src=192.168.0.1 dst=192.168.0.222 sport=22 dport=48572 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 431977 ESTABLISHED src=192.168.0.222 dst=192.168.0.1 sport=48568 dport=22 src=192.168.0.1 dst=192.168.0.222 sport=22 dport=48568 [ASSURED] secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 96 TIME_WAIT src=192.168.0.222 dst=192.168.0.1 sport=50358 dport=53 src=192.168.0.1 dst=192.168.0.222 sport=53 dport=50358 [ASSURED] secctx=system_u:object_r:unlabeled_t:s0 use=1
udp      17 1 src=192.168.0.222 dst=192.168.0.1 sport=44305 dport=53 src=192.168.0.1 dst=192.168.0.222 sport=53 dport=44305 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 95 TIME_WAIT src=192.168.0.222 dst=192.168.0.1 sport=50272 dport=53 src=192.168.0.1 dst=192.168.0.222 sport=53 dport=50272 [ASSURED] secctx=system_u:object_r:unlabeled_t:s0 use=1
udp      17 12 src=192.168.0.222 dst=192.168.0.1 sport=38560 dport=53 src=192.168.0.1 dst=192.168.0.222 sport=53 dport=38560 secctx=system_u:object_r:unlabeled_t:s0 use=2
tcp      6 95 TIME_WAIT src=192.168.0.222 dst=192.168.0.1 sport=50304 dport=53 src=192.168.0.1 dst=192.168.0.222 sport=53 dport=50304 [ASSURED] secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 96 TIME_WAIT src=192.168.0.222 dst=192.168.0.1 sport=50326 dport=53 src=192.168.0.1 dst=192.168.0.222 sport=53 dport=50326 [ASSURED] secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 96 TIME_WAIT src=192.168.0.222 dst=192.168.0.1 sport=50350 dport=53 src=192.168.0.1 dst=192.168.0.222 sport=53 dport=50350 [ASSURED] secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 95 TIME_WAIT src=192.168.0.222 dst=192.168.0.1 sport=50288 dport=53 src=192.168.0.1 dst=192.168.0.222 sport=53 dport=50288 [ASSURED] secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 96 TIME_WAIT src=192.168.0.222 dst=192.168.0.1 sport=50318 dport=53 src=192.168.0.1 dst=192.168.0.222 sport=53 dport=50318 [ASSURED] secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 94 TIME_WAIT src=192.168.0.222 dst=192.168.0.1 sport=60986 dport=16514 src=192.168.0.1 dst=192.168.0.222 sport=16514 dport=60986 [ASSURED] secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 94 TIME_WAIT src=192.168.0.222 dst=192.168.0.1 sport=60984 dport=16514 src=192.168.0.1 dst=192.168.0.222 sport=16514 dport=60984 [ASSURED] secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 94 TIME_WAIT src=192.168.0.222 dst=192.168.0.1 sport=60978 dport=16514 src=192.168.0.1 dst=192.168.0.222 sport=16514 dport=60978 [ASSURED] secctx=system_u:object_r:unlabeled_t:s0 use=1
udp      17 1 src=192.168.0.222 dst=192.168.0.1 sport=50307 dport=53 src=192.168.0.1 dst=192.168.0.222 sport=53 dport=50307 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 92 TIME_WAIT src=192.168.0.222 dst=192.168.0.1 sport=60934 dport=16514 src=192.168.0.1 dst=192.168.0.222 sport=16514 dport=60934 [ASSURED] secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 93 TIME_WAIT src=192.168.0.222 dst=192.168.0.1 sport=60968 dport=16514 src=192.168.0.1 dst=192.168.0.222 sport=16514 dport=60968 [ASSURED] secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 424747 ESTABLISHED src=192.168.0.222 dst=192.168.0.1 sport=35074 dport=22 src=192.168.0.1 dst=192.168.0.222 sport=22 dport=35074 [ASSURED] secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 92 TIME_WAIT src=192.168.0.222 dst=192.168.0.1 sport=50784 dport=16514 src=192.168.0.1 dst=192.168.0.222 sport=16514 dport=50784 [ASSURED] secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 94 TIME_WAIT src=192.168.0.222 dst=192.168.0.1 sport=32770 dport=16514 src=192.168.0.1 dst=192.168.0.222 sport=16514 dport=32770 [ASSURED] secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 92 TIME_WAIT src=192.168.0.222 dst=192.168.0.1 sport=50760 dport=16514 src=192.168.0.1 dst=192.168.0.222 sport=16514 dport=50760 [ASSURED] secctx=system_u:object_r:unlabeled_t:s0 use=1
udp      17 24 src=192.168.0.222 dst=192.168.0.1 sport=40588 dport=53 src=192.168.0.1 dst=192.168.0.222 sport=53 dport=40588 secctx=system_u:object_r:unlabeled_t:s0 use=1
tcp      6 95 TIME_WAIT src=192.168.0.222 dst=192.168.0.1 sport=50308 dport=53 src=192.168.0.1 dst=192.168.0.222 sport=53 dport=50308 [ASSURED] secctx=system_u:object_r:unlabeled_t:s0 use=1
conntrack v1.4.6 (conntrack-tools): 39 flow entries have been shown.
==========================

Comment 35 Eric Garver 2023-02-07 14:21:37 UTC
Most of the conntrack dumps look fine. There is one entry:

tcp      6 107 SYN_SENT src=192.168.0.222 dst=192.168.0.1 sport=57118 dport=22 [UNREPLIED] src=192.168.0.1 dst=192.168.0.222 sport=22 dport=57118 secctx=system_u:object_r:unlabeled_t:s0 use=1

This looks like the connection stalled during the TCP handshake; the SYN/ACK is never returned. That jibes with the symptoms and the tcpdumps.

I have no idea why though. If you reproduce in the lab then I can debug your setup.
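The stalled entry above can be spotted mechanically in a conntrack dump. A minimal sketch (the helper name is mine, not part of conntrack-tools) that flags TCP entries stuck in SYN_SENT with [UNREPLIED], i.e. handshakes where the SYN/ACK never came back:

```shell
# Hypothetical helper: scan conntrack output for half-open TCP connections.
# On a live host you would feed it `conntrack -L` (requires root); it reads
# files or stdin, so a saved dump works too.
find_stalled_handshakes() {
    grep 'SYN_SENT' "$@" | grep '\[UNREPLIED\]'
}
```

Usage on a live hypervisor would be something like `conntrack -L 2>/dev/null | find_stalled_handshakes`; a growing number of hits while virt-viewer hangs would match the symptom described here.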

Comment 36 Peter Tselios 2023-02-07 20:51:08 UTC
I'm not sure whether you mean MY lab or yours.
But in MY lab it's 100% reproducible. Actually, nothing has changed since I opened the report, unfortunately.

Comment 38 Laine Stump 2023-05-10 14:58:58 UTC
This probably should have been moved to a component lower in the stack long ago. It's pretty clear that the only virtualization-related part of the problem is that it crops up when several (ssh?) sessions are opened in rapid succession from a remote host (as opposed to localhost), and that is exactly what happens when a virt-viewer connection is made to a guest that uses SPICE.

I'm moving it to firewalld and moving out the stale date (but will still follow it in case something related to libvirt pops up).

Eric - if you think it more properly belongs lower, please move it there.

Comment 39 Peter Tselios 2023-05-10 17:39:27 UTC
Many thanks for this!
I was ready to move to RHEL 9 this weekend, but I will wait.

Comment 40 Laine Stump 2023-05-10 19:33:54 UTC
Please don't let anything I do give false hope or delay you from upgrading! :-) Just reassigning doesn't mean the bug will be solved immediately (and there are other compelling reasons to upgrade to RHEL9; for example, almost all developers are more focused on the package versions in RHEL9 than those in RHEL8, so looking at a bug on RHEL8 requires a mental shift, possibly digging out systems that haven't been kept up to date, etc.).

It could be interesting/useful to someone to know if the same problem showed up on an identically configured RHEL9 host. (There must be *something* unique about your system that we haven't yet determined, otherwise I would have expected many more reports of this; running virt-viewer remotely isn't all that common, but it does happen...)

Comment 41 Peter Tselios 2023-05-11 12:26:47 UTC
Sounds like a good reason to upgrade then. 
Although I will go for a clean install and then configure it with my Ansible roles. 

Let's see.

Comment 42 RHEL Program Management 2023-08-16 07:28:33 UTC
After evaluating this issue, we have no plans to address it further or fix it in an upcoming release. Therefore, it is being closed. If plans change such that this issue will be fixed in an upcoming release, the bug can be reopened.

