Created attachment 1549055 [details]
Screenshot showing the issue. Note this is not related to a dark theme; it happens with the standard theme as well.

Description of problem:
virt-manager no longer displays CPU usage.

Version-Release number of selected component (if applicable):
virt-manager-2.1.0-1.fc29.src.rpm

How reproducible:
Just use it.

Steps to Reproduce:
1. Just use virt-manager with a running guest.

Actual results:
No CPU usage displayed.

Expected results:
CPU usage displayed.

Additional info:
I can confirm the same - I'm running virt-manager 2.1.0 on Fedora 29 and the CPU statistics for VMs running on hypervisors (both local and remote) are no longer being updated. In my case it looks like they go up to the first third of the graph window and then stop updating.
Thanks for the report. For anyone who can reproduce, please run 'virt-manager --debug', reproduce the problem, and then attach the full debug output here.
I don't see anything relevant to the issue :-(

[konrad@deathstar ~]$ virt-manager --debug
[Wed, 10 Apr 2019 09:46:14 virt-manager 5461] DEBUG (cli:203) Launched with command line: /usr/share/virt-manager/virt-manager --debug
[Wed, 10 Apr 2019 09:46:14 virt-manager 5461] DEBUG (virt-manager:176) virt-manager version: 2.1.0
[Wed, 10 Apr 2019 09:46:14 virt-manager 5461] DEBUG (virt-manager:177) virtManager import: <module 'virtManager' from '/usr/share/virt-manager/virtManager/__init__.py'>
[Wed, 10 Apr 2019 09:46:14 virt-manager 5461] DEBUG (virt-manager:214) PyGObject version: 3.30.4
[Wed, 10 Apr 2019 09:46:14 virt-manager 5461] DEBUG (virt-manager:218) GTK version: 3.24.1
[Wed, 10 Apr 2019 09:46:14 virt-manager 5461] DEBUG (engine:315) Connected to remote app instance.
(In reply to Davide Corrado from comment #3)
> I don't see anything relevant to the issue :-(
>
> [konrad@deathstar ~]$ virt-manager --debug
> [Wed, 10 Apr 2019 09:46:14 virt-manager 5461] DEBUG (engine:315) Connected
> to remote app instance.

This last line means you already had an instance of virt-manager running. You'll need to close all virt-manager instances first to get proper output from --debug.
sorry I was in a rush. here you go: [konrad@deathstar ~]$ virt-manager --debug [Thu, 11 Apr 2019 10:40:50 virt-manager 3980] DEBUG (cli:203) Launched with command line: /usr/share/virt-manager/virt-manager --debug [Thu, 11 Apr 2019 10:40:50 virt-manager 3980] DEBUG (virt-manager:176) virt-manager version: 2.1.0 [Thu, 11 Apr 2019 10:40:50 virt-manager 3980] DEBUG (virt-manager:177) virtManager import: <module 'virtManager' from '/usr/share/virt-manager/virtManager/__init__.py'> [Thu, 11 Apr 2019 10:40:51 virt-manager 3980] DEBUG (virt-manager:214) PyGObject version: 3.30.4 [Thu, 11 Apr 2019 10:40:51 virt-manager 3980] DEBUG (virt-manager:218) GTK version: 3.24.1 [Thu, 11 Apr 2019 10:40:51 virt-manager 3980] DEBUG (systray:156) AppIndicator3 is available, but didn't find any dbus watcher. [Thu, 11 Apr 2019 10:40:51 virt-manager 3980] DEBUG (systray:202) Showing systray: False [Thu, 11 Apr 2019 10:40:51 virt-manager 3980] DEBUG (inspection:41) python guestfs is not installed [Thu, 11 Apr 2019 10:40:51 virt-manager 3980] DEBUG (engine:114) Loading stored URIs: qemu:///session [Thu, 11 Apr 2019 10:40:51 virt-manager 3980] DEBUG (engine:528) processing cli command uri= show_window=manager domain= [Thu, 11 Apr 2019 10:40:51 virt-manager 3980] DEBUG (engine:530) No cli action requested, launching default window [Thu, 11 Apr 2019 10:40:51 virt-manager 3980] DEBUG (manager:187) Showing manager [Thu, 11 Apr 2019 10:40:51 virt-manager 3980] DEBUG (engine:388) window counter incremented to 1 [Thu, 11 Apr 2019 10:40:51 virt-manager 3980] DEBUG (engine:282) Initial gtkapplication activated [Thu, 11 Apr 2019 10:40:51 virt-manager 3980] DEBUG (connection:565) conn=qemu:///session changed to state=Connecting [Thu, 11 Apr 2019 10:40:51 virt-manager 3980] DEBUG (connection:1002) Scheduling background open thread for qemu:///session [Thu, 11 Apr 2019 10:40:51 virt-manager 3980] DEBUG (connection:1042) libvirt version=4007000 [Thu, 11 Apr 2019 10:40:51 virt-manager 3980] DEBUG (connection:1044) daemon version=4007000 [Thu, 11 Apr 2019 10:40:51 virt-manager 3980] DEBUG (connection:1045) conn version=3000000 [Thu, 11 Apr 2019 10:40:51 virt-manager 3980] DEBUG (connection:1047) qemu:///session capabilities: <capabilities> <host> <uuid>d4f8677e-3ed5-4dbc-954d-9d4107e455b0</uuid> <cpu> <arch>x86_64</arch> <model>Broadwell-noTSX-IBRS</model> <vendor>Intel</vendor> <microcode version="198"/> <topology sockets="1" cores="2" threads="2"/> <feature name="vme"/> <feature name="ds"/> <feature name="acpi"/> <feature name="ss"/> <feature name="ht"/> <feature name="tm"/> <feature name="pbe"/> <feature name="dtes64"/> <feature name="monitor"/> <feature name="ds_cpl"/> <feature name="vmx"/> <feature name="est"/> <feature name="tm2"/> <feature name="xtpr"/> <feature name="pdcm"/> <feature name="osxsave"/> <feature name="f16c"/> <feature name="rdrand"/> <feature name="arat"/> <feature name="tsc_adjust"/> <feature name="mpx"/> <feature name="clflushopt"/> <feature name="ssbd"/> <feature name="xsaveopt"/> <feature name="xsavec"/> <feature name="xgetbv1"/> <feature name="xsaves"/> <feature name="pdpe1gb"/> <feature name="abm"/> <feature name="invtsc"/> <pages unit="KiB" size="4"/> <pages unit="KiB" size="2048"/> <pages unit="KiB" size="1048576"/> </cpu> <power_management> <suspend_mem/> <suspend_disk/> <suspend_hybrid/> </power_management> <iommu support="no"/> <migration_features> <live/> <uri_transports> <uri_transport>tcp</uri_transport> <uri_transport>rdma</uri_transport> </uri_transports> </migration_features> <topology> <cells 
num="1"> <cell id="0"> <memory unit="KiB">8033188</memory> <pages unit="KiB" size="4">2008297</pages> <pages unit="KiB" size="2048">0</pages> <pages unit="KiB" size="1048576">0</pages> <distances> <sibling id="0" value="10"/> </distances> <cpus num="4"> <cpu id="0" socket_id="0" core_id="0" siblings="0,2"/> <cpu id="1" socket_id="0" core_id="1" siblings="1,3"/> <cpu id="2" socket_id="0" core_id="0" siblings="0,2"/> <cpu id="3" socket_id="0" core_id="1" siblings="1,3"/> </cpus> </cell> </cells> </topology> <cache> <bank id="0" level="3" type="both" size="4" unit="MiB" cpus="0-3"/> </cache> <secmodel> <model>selinux</model> <doi>0</doi> <baselabel type="kvm">system_u:system_r:svirt_t:s0</baselabel> <baselabel type="qemu">system_u:system_r:svirt_tcg_t:s0</baselabel> </secmodel> </host> <guest> <os_type>hvm</os_type> <arch name="i686"> <wordsize>32</wordsize> <emulator>/usr/bin/qemu-system-i386</emulator> <machine maxCpus="255">pc-i440fx-3.0</machine> <machine canonical="pc-i440fx-3.0" maxCpus="255">pc</machine> <machine maxCpus="1">isapc</machine> <machine maxCpus="255">pc-1.1</machine> <machine maxCpus="255">pc-1.2</machine> <machine maxCpus="255">pc-1.3</machine> <machine maxCpus="255">pc-i440fx-2.8</machine> <machine maxCpus="255">pc-1.0</machine> <machine maxCpus="255">pc-i440fx-2.9</machine> <machine maxCpus="255">pc-i440fx-2.6</machine> <machine maxCpus="255">pc-i440fx-2.7</machine> <machine maxCpus="128">xenfv</machine> <machine maxCpus="255">pc-i440fx-2.3</machine> <machine maxCpus="255">pc-i440fx-2.4</machine> <machine maxCpus="255">pc-i440fx-2.5</machine> <machine maxCpus="255">pc-i440fx-2.1</machine> <machine maxCpus="255">pc-i440fx-2.2</machine> <machine maxCpus="255">pc-i440fx-2.0</machine> <machine maxCpus="288">pc-q35-2.11</machine> <machine maxCpus="288">pc-q35-2.12</machine> <machine maxCpus="288">pc-q35-3.0</machine> <machine canonical="pc-q35-3.0" maxCpus="288">q35</machine> <machine maxCpus="1">xenpv</machine> <machine maxCpus="288">pc-q35-2.10</machine> <machine maxCpus="255">pc-i440fx-1.7</machine> <machine maxCpus="288">pc-q35-2.9</machine> <machine maxCpus="255">pc-0.15</machine> <machine maxCpus="255">pc-i440fx-1.5</machine> <machine maxCpus="255">pc-q35-2.7</machine> <machine maxCpus="255">pc-i440fx-1.6</machine> <machine maxCpus="255">pc-i440fx-2.11</machine> <machine maxCpus="288">pc-q35-2.8</machine> <machine maxCpus="255">pc-0.13</machine> <machine maxCpus="255">pc-i440fx-2.12</machine> <machine maxCpus="255">pc-0.14</machine> <machine maxCpus="255">pc-q35-2.4</machine> <machine maxCpus="255">pc-q35-2.5</machine> <machine maxCpus="255">pc-q35-2.6</machine> <machine maxCpus="255">pc-i440fx-1.4</machine> <machine maxCpus="255">pc-i440fx-2.10</machine> <machine maxCpus="255">pc-0.11</machine> <machine maxCpus="255">pc-0.12</machine> <machine maxCpus="255">pc-0.10</machine> <domain type="qemu"/> <domain type="kvm"> <emulator>/usr/bin/qemu-kvm</emulator> <machine maxCpus="255">pc-i440fx-3.0</machine> <machine canonical="pc-i440fx-3.0" maxCpus="255">pc</machine> <machine maxCpus="1">isapc</machine> <machine maxCpus="255">pc-1.1</machine> <machine maxCpus="255">pc-1.2</machine> <machine maxCpus="255">pc-1.3</machine> <machine maxCpus="255">pc-i440fx-2.8</machine> <machine maxCpus="255">pc-1.0</machine> <machine maxCpus="255">pc-i440fx-2.9</machine> <machine maxCpus="255">pc-i440fx-2.6</machine> <machine maxCpus="255">pc-i440fx-2.7</machine> <machine maxCpus="128">xenfv</machine> <machine maxCpus="255">pc-i440fx-2.3</machine> <machine 
maxCpus="255">pc-i440fx-2.4</machine> <machine maxCpus="255">pc-i440fx-2.5</machine> <machine maxCpus="255">pc-i440fx-2.1</machine> <machine maxCpus="255">pc-i440fx-2.2</machine> <machine maxCpus="255">pc-i440fx-2.0</machine> <machine maxCpus="288">pc-q35-2.11</machine> <machine maxCpus="288">pc-q35-2.12</machine> <machine maxCpus="288">pc-q35-3.0</machine> <machine canonical="pc-q35-3.0" maxCpus="288">q35</machine> <machine maxCpus="1">xenpv</machine> <machine maxCpus="288">pc-q35-2.10</machine> <machine maxCpus="255">pc-i440fx-1.7</machine> <machine maxCpus="288">pc-q35-2.9</machine> <machine maxCpus="255">pc-0.15</machine> <machine maxCpus="255">pc-i440fx-1.5</machine> <machine maxCpus="255">pc-q35-2.7</machine> <machine maxCpus="255">pc-i440fx-1.6</machine> <machine maxCpus="255">pc-i440fx-2.11</machine> <machine maxCpus="288">pc-q35-2.8</machine> <machine maxCpus="255">pc-0.13</machine> <machine maxCpus="255">pc-i440fx-2.12</machine> <machine maxCpus="255">pc-0.14</machine> <machine maxCpus="255">pc-q35-2.4</machine> <machine maxCpus="255">pc-q35-2.5</machine> <machine maxCpus="255">pc-q35-2.6</machine> <machine maxCpus="255">pc-i440fx-1.4</machine> <machine maxCpus="255">pc-i440fx-2.10</machine> <machine maxCpus="255">pc-0.11</machine> <machine maxCpus="255">pc-0.12</machine> <machine maxCpus="255">pc-0.10</machine> </domain> </arch> <features> <cpuselection/> <deviceboot/> <disksnapshot default="on" toggle="no"/> <acpi default="on" toggle="yes"/> <apic default="on" toggle="no"/> <pae/> <nonpae/> </features> </guest> <guest> <os_type>hvm</os_type> <arch name="x86_64"> <wordsize>64</wordsize> <emulator>/usr/bin/qemu-system-x86_64</emulator> <machine maxCpus="255">pc-i440fx-3.0</machine> <machine canonical="pc-i440fx-3.0" maxCpus="255">pc</machine> <machine maxCpus="1">isapc</machine> <machine maxCpus="255">pc-1.1</machine> <machine maxCpus="255">pc-1.2</machine> <machine maxCpus="255">pc-1.3</machine> <machine maxCpus="255">pc-i440fx-2.8</machine> <machine maxCpus="255">pc-1.0</machine> <machine maxCpus="255">pc-i440fx-2.9</machine> <machine maxCpus="255">pc-i440fx-2.6</machine> <machine maxCpus="255">pc-i440fx-2.7</machine> <machine maxCpus="128">xenfv</machine> <machine maxCpus="255">pc-i440fx-2.3</machine> <machine maxCpus="255">pc-i440fx-2.4</machine> <machine maxCpus="255">pc-i440fx-2.5</machine> <machine maxCpus="255">pc-i440fx-2.1</machine> <machine maxCpus="255">pc-i440fx-2.2</machine> <machine maxCpus="255">pc-i440fx-2.0</machine> <machine maxCpus="288">pc-q35-2.11</machine> <machine maxCpus="288">pc-q35-2.12</machine> <machine maxCpus="288">pc-q35-3.0</machine> <machine canonical="pc-q35-3.0" maxCpus="288">q35</machine> <machine maxCpus="1">xenpv</machine> <machine maxCpus="288">pc-q35-2.10</machine> <machine maxCpus="255">pc-i440fx-1.7</machine> <machine maxCpus="288">pc-q35-2.9</machine> <machine maxCpus="255">pc-0.15</machine> <machine maxCpus="255">pc-i440fx-1.5</machine> <machine maxCpus="255">pc-q35-2.7</machine> <machine maxCpus="255">pc-i440fx-1.6</machine> <machine maxCpus="255">pc-i440fx-2.11</machine> <machine maxCpus="288">pc-q35-2.8</machine> <machine maxCpus="255">pc-0.13</machine> <machine maxCpus="255">pc-i440fx-2.12</machine> <machine maxCpus="255">pc-0.14</machine> <machine maxCpus="255">pc-q35-2.4</machine> <machine maxCpus="255">pc-q35-2.5</machine> <machine maxCpus="255">pc-q35-2.6</machine> <machine maxCpus="255">pc-i440fx-1.4</machine> <machine maxCpus="255">pc-i440fx-2.10</machine> <machine maxCpus="255">pc-0.11</machine> <machine 
maxCpus="255">pc-0.12</machine> <machine maxCpus="255">pc-0.10</machine> <domain type="qemu"/> <domain type="kvm"> <emulator>/usr/bin/qemu-kvm</emulator> <machine maxCpus="255">pc-i440fx-3.0</machine> <machine canonical="pc-i440fx-3.0" maxCpus="255">pc</machine> <machine maxCpus="1">isapc</machine> <machine maxCpus="255">pc-1.1</machine> <machine maxCpus="255">pc-1.2</machine> <machine maxCpus="255">pc-1.3</machine> <machine maxCpus="255">pc-i440fx-2.8</machine> <machine maxCpus="255">pc-1.0</machine> <machine maxCpus="255">pc-i440fx-2.9</machine> <machine maxCpus="255">pc-i440fx-2.6</machine> <machine maxCpus="255">pc-i440fx-2.7</machine> <machine maxCpus="128">xenfv</machine> <machine maxCpus="255">pc-i440fx-2.3</machine> <machine maxCpus="255">pc-i440fx-2.4</machine> <machine maxCpus="255">pc-i440fx-2.5</machine> <machine maxCpus="255">pc-i440fx-2.1</machine> <machine maxCpus="255">pc-i440fx-2.2</machine> <machine maxCpus="255">pc-i440fx-2.0</machine> <machine maxCpus="288">pc-q35-2.11</machine> <machine maxCpus="288">pc-q35-2.12</machine> <machine maxCpus="288">pc-q35-3.0</machine> <machine canonical="pc-q35-3.0" maxCpus="288">q35</machine> <machine maxCpus="1">xenpv</machine> <machine maxCpus="288">pc-q35-2.10</machine> <machine maxCpus="255">pc-i440fx-1.7</machine> <machine maxCpus="288">pc-q35-2.9</machine> <machine maxCpus="255">pc-0.15</machine> <machine maxCpus="255">pc-i440fx-1.5</machine> <machine maxCpus="255">pc-q35-2.7</machine> <machine maxCpus="255">pc-i440fx-1.6</machine> <machine maxCpus="255">pc-i440fx-2.11</machine> <machine maxCpus="288">pc-q35-2.8</machine> <machine maxCpus="255">pc-0.13</machine> <machine maxCpus="255">pc-i440fx-2.12</machine> <machine maxCpus="255">pc-0.14</machine> <machine maxCpus="255">pc-q35-2.4</machine> <machine maxCpus="255">pc-q35-2.5</machine> <machine maxCpus="255">pc-q35-2.6</machine> <machine maxCpus="255">pc-i440fx-1.4</machine> <machine maxCpus="255">pc-i440fx-2.10</machine> <machine maxCpus="255">pc-0.11</machine> <machine maxCpus="255">pc-0.12</machine> <machine maxCpus="255">pc-0.10</machine> </domain> </arch> <features> <cpuselection/> <deviceboot/> <disksnapshot default="on" toggle="no"/> <acpi default="on" toggle="yes"/> <apic default="on" toggle="no"/> </features> </guest> </capabilities> [Thu, 11 Apr 2019 10:40:51 virt-manager 3980] DEBUG (connection:850) Using domain events [Thu, 11 Apr 2019 10:40:51 virt-manager 3980] DEBUG (connection:885) Using network events [Thu, 11 Apr 2019 10:40:51 virt-manager 3980] DEBUG (connection:905) Using storage pool events [Thu, 11 Apr 2019 10:40:51 virt-manager 3980] DEBUG (connection:924) Using node device events [Thu, 11 Apr 2019 10:40:51 virt-manager 3980] DEBUG (connection:1172) interface=enp0s31f6 status=Inactive added [Thu, 11 Apr 2019 10:40:51 virt-manager 3980] DEBUG (connection:1172) domain=BadStore status=Shutoff added [Thu, 11 Apr 2019 10:40:51 virt-manager 3980] DEBUG (connection:804) storage pool refresh event: pool=default [Thu, 11 Apr 2019 10:40:51 virt-manager 3980] DEBUG (connection:1172) interface=lo status=Active added [Thu, 11 Apr 2019 10:40:51 virt-manager 3980] DEBUG (connection:1172) domain=softsec1-3 status=Shutoff added [Thu, 11 Apr 2019 10:40:51 virt-manager 3980] DEBUG (connection:1172) domain=win10 status=Shutoff added [Thu, 11 Apr 2019 10:40:51 virt-manager 3980] DEBUG (connection:1172) pool=default status=Active added [Thu, 11 Apr 2019 10:40:51 virt-manager 3980] DEBUG (connection:1172) pool=mooc status=Inactive added [Thu, 11 Apr 2019 10:40:51 virt-manager 3980] 
DEBUG (connection:1172) pool=desktop-live status=Inactive added [Thu, 11 Apr 2019 10:40:51 virt-manager 3980] DEBUG (connection:804) storage pool refresh event: pool=Downloads [Thu, 11 Apr 2019 10:40:51 virt-manager 3980] DEBUG (connection:1172) pool=Downloads status=Active added [Thu, 11 Apr 2019 10:40:51 virt-manager 3980] DEBUG (connection:1172) pool=kali-linux-2016.2-amd64 status=Inactive added [Thu, 11 Apr 2019 10:40:51 virt-manager 3980] DEBUG (connection:804) storage pool refresh event: pool=Torrent_Downloads [Thu, 11 Apr 2019 10:40:51 virt-manager 3980] DEBUG (connection:1172) pool=Torrent_Downloads status=Active added [Thu, 11 Apr 2019 10:40:51 virt-manager 3980] DEBUG (connection:1172) pool=Office_2013_Pro_Plus_with_SP1_VL_Italian_incl._Project,_Visio_(x86-x64) status=Inactive added [Thu, 11 Apr 2019 10:40:51 virt-manager 3980] DEBUG (connection:804) storage pool refresh event: pool=gnome-boxes [Thu, 11 Apr 2019 10:40:51 virt-manager 3980] DEBUG (connection:1172) pool=gnome-boxes status=Active added [Thu, 11 Apr 2019 10:40:51 virt-manager 3980] DEBUG (connection:804) storage pool refresh event: pool=os2 [Thu, 11 Apr 2019 10:40:51 virt-manager 3980] DEBUG (connection:1172) pool=os2 status=Active added [Thu, 11 Apr 2019 10:40:51 virt-manager 3980] DEBUG (connection:1172) pool=OSX-KVM status=Inactive added [Thu, 11 Apr 2019 10:40:51 virt-manager 3980] DEBUG (connection:804) storage pool refresh event: pool=konrad [Thu, 11 Apr 2019 10:40:52 virt-manager 3980] DEBUG (connection:1172) pool=konrad status=Active added [Thu, 11 Apr 2019 10:40:52 virt-manager 3980] DEBUG (connection:1172) pool=Windows_XP_PROFESSIONAL_SP3_Jan_2015_+_SATA_Drivers_[TechTools.net] status=Inactive added [Thu, 11 Apr 2019 10:40:52 virt-manager 3980] DEBUG (connection:1172) pool=kali-linux-2017.1-amd64 status=Inactive added [Thu, 11 Apr 2019 10:40:52 virt-manager 3980] DEBUG (connection:1172) pool=Fedora-Live-Workstation-x86_64-23 status=Inactive added [Thu, 11 Apr 2019 10:40:52 virt-manager 3980] DEBUG (connection:565) conn=qemu:///session changed to state=Active [Thu, 11 Apr 2019 10:41:09 virt-manager 3980] DEBUG (vmmenu:237) Starting vm 'win10' [Thu, 11 Apr 2019 10:41:09 virt-manager 3980] DEBUG (connection:820) node device lifecycle event: nodedev=net_tap0_fe_9f_5c_68_40_01 state=VIR_NODE_DEVICE_EVENT_CREATED reason=0 [Thu, 11 Apr 2019 10:41:09 virt-manager 3980] DEBUG (connection:745) domain lifecycle event: domain=win10 state=VIR_DOMAIN_EVENT_RESUMED reason=VIR_DOMAIN_EVENT_RESUMED_UNPAUSED [Thu, 11 Apr 2019 10:41:09 virt-manager 3980] DEBUG (connection:745) domain lifecycle event: domain=win10 state=VIR_DOMAIN_EVENT_STARTED reason=VIR_DOMAIN_EVENT_STARTED_BOOTED [Thu, 11 Apr 2019 10:41:11 virt-manager 3980] DEBUG (serialcon:17) Using VTE API 2.91 [Thu, 11 Apr 2019 10:41:12 virt-manager 3980] DEBUG (guest:266) Setting Guest osinfo <_OsVariant name=generic> [Thu, 11 Apr 2019 10:41:12 virt-manager 3980] DEBUG (details:692) Showing VM details: <vmmDomain name=win10 id=0x7f0ee40c4c18> [Thu, 11 Apr 2019 10:41:12 virt-manager 3980] DEBUG (engine:388) window counter incremented to 2 [Thu, 11 Apr 2019 10:41:12 virt-manager 3980] DEBUG (console:740) Starting connect process for proto=spice trans= connhost=127.0.0.1 connuser= connport= gaddr=127.0.0.1 gport=5900 gtlsport=None gsocket=None [Thu, 11 Apr 2019 10:41:12 virt-manager 3980] DEBUG (viewers:593) Requesting fd for channel: <SpiceClientGLib.UsbredirChannel object at 0x7f0ecee9ad38 (SpiceUsbredirChannel at 0x55e2c91686d0)> [Thu, 11 Apr 2019 10:41:12 virt-manager 
3980] DEBUG (viewers:593) Requesting fd for channel: <SpiceClientGLib.UsbredirChannel object at 0x7f0ecee9ad38 (SpiceUsbredirChannel at 0x55e2c916ca50)> [Thu, 11 Apr 2019 10:41:12 virt-manager 3980] DEBUG (viewers:593) Requesting fd for channel: <SpiceClientGLib.RecordChannel object at 0x7f0ecee9ad38 (SpiceRecordChannel at 0x55e2c9131900)> [Thu, 11 Apr 2019 10:41:12 virt-manager 3980] DEBUG (viewers:593) Requesting fd for channel: <SpiceClientGLib.PlaybackChannel object at 0x7f0ecee9ad38 (SpicePlaybackChannel at 0x55e2c9170680)> [Thu, 11 Apr 2019 10:41:12 virt-manager 3980] DEBUG (viewers:593) Requesting fd for channel: <SpiceClientGLib.DisplayChannel object at 0x7f0ecee9ad38 (SpiceDisplayChannel at 0x55e2c9180810)> [Thu, 11 Apr 2019 10:41:12 virt-manager 3980] DEBUG (console:863) Viewer connected [Thu, 11 Apr 2019 10:41:12 virt-manager 3980] DEBUG (viewers:593) Requesting fd for channel: <SpiceClientGLib.CursorChannel object at 0x7f0eceea2dc8 (SpiceCursorChannel at 0x55e2c938fb20)> [Thu, 11 Apr 2019 10:41:12 virt-manager 3980] DEBUG (viewers:593) Requesting fd for channel: <SpiceClientGLib.InputsChannel object at 0x7f0eceea9558 (SpiceInputsChannel at 0x55e2c939c8e0)>
(In reply to Davide Corrado from comment #5)
> sorry I was in a rush. here you go:

Hmm, I don't see anything strange in your debug output. Are the stats graphs enabled in View->Graph? Is stats polling enabled in the Edit->Preferences polling section? After that, make sure a VM is running.
Yes, polling is enabled (every 3 seconds), and in the View menu Guest CPU is selected. The problem happens in fc30 too (I migrated hoping it would fix it).
What results do you see when running virt-top from the command line? Does it show CPU usage that isn't reported in virt-manager?
virt-top is not installed. Do I have to install it?
I installed it and I get stats on the command line:

virt-top 17:55:10 - x86_64 4/4CPU 2800MHz 7845MB
1 domains, 1 active, 1 running, 0 sleeping, 0 paused, 0 inactive D:0 O:0 X:0
CPU: 0.0%  Mem: 2048 MB (2048 MB by guests)

   ID S RDRQ WRRQ RXBY TXBY %CPU %MEM    TIME   NAME
    1 R   21    3  816 1970  0.0 26.0  0:00.00 win10

No stats shown in graph form in virt-manager, though.
I noticed that the CPU value does not change. It cannot be 0.0%; the guest is running.
Hmm, strange. If you leave virt-top running for a few minutes and use the win10 VM, does the win10 'TIME' field grow at all?
Nope. I have been running it since I first wrote you back, and TIME is still zero.

virt-top 17:55:10 - x86_64 4/4CPU 2800MHz 7845MB
1 domains, 1 active, 1 running, 0 sleeping, 0 paused, 0 inactive D:0 O:0 X:0
CPU: 0.0%  Mem: 2048 MB (2048 MB by guests)

   ID S RDRQ WRRQ RXBY TXBY %CPU %MEM    TIME   NAME
    1 R   21    3  816 1970  0.0 26.0  0:00.00 win10

I'm sure it worked as expected in FC28. Can't you replicate it?
Wrong cut and paste, please check out the correct one:

virt-top 18:21:15 - x86_64 4/4CPU 2800MHz 7845MB
0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%
1 domains, 1 active, 1 running, 0 sleeping, 0 paused, 0 inactive D:0 O:0 X:0
CPU: 0.0%  Mem: 2048 MB (2048 MB by guests)

   ID S RDRQ WRRQ RXBY TXBY %CPU %MEM    TIME   NAME
    1 R   11    4 2404 6460  0.0 26.0  0:00.00 win10
No, I can't replicate it, not with a win10 VM or any other VM. Can you attach the VM XML of win10?

sudo virsh dumpxml win10
here you go. please note that maybe I know why you can't replicate it. I'm running the vm as standard user not root (qemu:///session) this happens with every vm I create in this way (a la gnome-boxes) [konrad@deathstar qemu]$ cat win10.xml <!-- WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE OVERWRITTEN AND LOST. Changes to this xml configuration should be made using: virsh edit win10 or other application using the libvirt API. --> <domain type='kvm'> <name>win10</name> <uuid>86141da1-00c3-439b-99a7-d66a1eeae433</uuid> <memory unit='KiB'>2097152</memory> <currentMemory unit='KiB'>2097152</currentMemory> <vcpu placement='static'>2</vcpu> <os> <type arch='x86_64' machine='pc-i440fx-2.9'>hvm</type> </os> <features> <acpi/> <apic/> <hyperv> <relaxed state='on'/> <vapic state='on'/> <spinlocks state='on' retries='8191'/> </hyperv> <vmport state='off'/> </features> <cpu mode='host-model' check='partial'> <model fallback='allow'/> </cpu> <clock offset='localtime'> <timer name='rtc' tickpolicy='catchup'/> <timer name='pit' tickpolicy='delay'/> <timer name='hpet' present='no'/> <timer name='hypervclock' present='yes'/> </clock> <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>destroy</on_crash> <pm> <suspend-to-mem enabled='no'/> <suspend-to-disk enabled='no'/> </pm> <devices> <emulator>/usr/bin/qemu-kvm</emulator> <disk type='file' device='disk'> <driver name='qemu' type='qcow2'/> <source file='/home/konrad/.local/share/libvirt/images/win10.qcow2'/> <target dev='vda' bus='virtio'/> <boot order='2'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/> </disk> <disk type='file' device='cdrom'> <driver name='qemu' type='raw'/> <target dev='hda' bus='ide'/> <readonly/> <boot order='1'/> <address type='drive' controller='0' bus='0' target='0' unit='0'/> </disk> <controller type='pci' index='0' model='pci-root'/> <controller type='ide' index='0'> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/> </controller> <controller type='virtio-serial' index='0'> <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/> </controller> <controller type='usb' index='0' model='nec-xhci'> <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/> </controller> <controller type='sata' index='0'> <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/> </controller> <interface type='bridge'> <mac address='52:54:00:b2:1c:d0'/> <source bridge='virbr0'/> <model type='rtl8139'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> </interface> <serial type='pty'> <target type='isa-serial' port='0'> <model name='isa-serial'/> </target> </serial> <console type='pty'> <target type='serial' port='0'/> </console> <channel type='spicevmc'> <target type='virtio' name='com.redhat.spice.0'/> <address type='virtio-serial' controller='0' bus='0' port='1'/> </channel> <input type='tablet' bus='usb'> <address type='usb' bus='0' port='1'/> </input> <input type='mouse' bus='ps2'/> <input type='keyboard' bus='ps2'/> <graphics type='spice' autoport='yes'> <listen type='address'/> <image compression='off'/> </graphics> <sound model='ich6'> <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> </sound> <video> <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> </video> <redirdev bus='usb' type='spicevmc'> <address type='usb' bus='0' 
port='2'/> </redirdev> <redirdev bus='usb' type='spicevmc'> <address type='usb' bus='0' port='3'/> </redirdev> <memballoon model='virtio'> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/> </memballoon> </devices> </domain> [konrad@deathstar qemu]$
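(Editorial note for anyone trying to reproduce: the following is only an illustrative Python sketch, assuming the libvirt Python bindings are installed; it is not part of the original report. It checks whether a guest is defined under the unprivileged qemu:///session daemon or the root qemu:///system daemon, since the reporter only sees the problem for session guests.)

import libvirt

# Illustrative check: list the guests known to each daemon. Opening
# qemu:///system as an unprivileged user may fail or prompt for auth,
# which the except branch simply reports.
for uri in ("qemu:///session", "qemu:///system"):
    try:
        conn = libvirt.open(uri)
    except libvirt.libvirtError as e:
        print("%s: not reachable (%s)" % (uri, e))
        continue
    print(uri, "->", sorted(dom.name() for dom in conn.listAllDomains()))
    conn.close()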
Good catch, qemu:///session is the culprit and I can reproduce there.

The only cpu.X value listed in 'virsh --connect qemu:///session domstats $vm' is cpu.cache.monitor.count=0. virt-manager and virt-top are looking for cpu.time here, which in libvirt code is virCgroupGetCpuacctUsage. I guess that doesn't work for non-root daemon usage, but I didn't look into it more than that.

libvirt should probably emit some debug logging in this place, but it doesn't. And at the very least cpu.time could have a fallback implementation to match what DomainInfo does, which has worked forever: parsing /proc files in qemuGetProcessInfo.
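For reference, the same discrepancy can also be seen from Python. The sketch below is only illustrative (it assumes the libvirt Python bindings and a running session guest named 'win10', the name used in this report) and is not the actual virt-manager code path.

import libvirt

conn = libvirt.open("qemu:///session")
dom = conn.lookupByName("win10")

# Bulk-stats API that newer virt-manager prefers: under an unprivileged
# daemon the cpu.time / cpu.user / cpu.system keys can be absent, leaving
# only cpu.cache.monitor.count as described above.
for d, stats in conn.domainListGetStats([dom], libvirt.VIR_DOMAIN_STATS_CPU_TOTAL):
    print(d.name(), {k: v for k, v in stats.items() if k.startswith("cpu.")})

# Older virDomainGetInfo path: cumulative guest CPU time in nanoseconds,
# which per the comment above is backed by /proc parsing and still works.
state, maxmem, mem, ncpu, cputime = dom.info()
print("cpuTime (ns):", cputime)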
It worked like a charm with old versions (i.e. Fedora 28). Is there a way to fall back?
Not short of downgrading, or patching virt-manager to not prefer the domstats call; older virt-manager didn't use it.
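For anyone considering the patching route, the idea is roughly the following: compute a CPU percentage from deltas of the virDomainGetInfo cpuTime counter instead of the domstats cpu.time field. This is an illustrative sketch against the libvirt Python bindings, not the actual virt-manager code; the guest name 'win10' is taken from this report.

import time
import libvirt

def cpu_percent(dom, host_cpus, interval=3.0):
    # dom.info() returns (state, maxMem, memory, nrVirtCpu, cpuTime in ns)
    before = dom.info()[4]
    time.sleep(interval)
    after = dom.info()[4]
    # Guest CPU time consumed per wall-clock second, normalized by the
    # number of host CPUs, expressed as a percentage.
    return (after - before) / (interval * 1e9 * host_cpus) * 100.0

conn = libvirt.open("qemu:///session")
host_cpus = conn.getInfo()[2]      # number of active host CPUs
dom = conn.lookupByName("win10")
print("%.1f%%" % cpu_percent(dom, host_cpus))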
*** Bug 1786203 has been marked as a duplicate of this bug. ***
This message is a reminder that Fedora 30 is nearing its end of life. Fedora will stop maintaining and issuing updates for Fedora 30 on 2020-05-26. It is Fedora's policy to close all bug reports from releases that are no longer maintained. At that time this bug will be closed as EOL if it remains open with a Fedora 'version' of '30'.

Package Maintainer: If you wish for this bug to remain open because you plan to fix it in a currently maintained version, simply change the 'version' to a later Fedora version.

Thank you for reporting this issue and we are sorry that we were not able to fix it before Fedora 30 reaches end of life. If you would still like to see this bug fixed and are able to reproduce it against a later version of Fedora, you are encouraged to change the 'version' to a later Fedora version prior to this bug being closed as described in the policy above.

Although we aim to fix as many bugs as possible during every release's lifetime, sometimes those efforts are overtaken by events. Often a more recent Fedora release includes newer upstream software that fixes bugs or makes them obsolete.
Since the duplicate bug (https://bugzilla.redhat.com/show_bug.cgi?id=1786203) was for Fedora 31, can this bug remain open? I can confirm on CentOS 8.1.1911 with virt-manager 2.2.1 that this behavior is still exhibited.
Moving to f32, AFAIK this is still valid for upstream libvirt
I can still reproduce this on F32 with virt-manager-2.2.1-3.fc32.noarch
This message is a reminder that Fedora 32 is nearing its end of life. Fedora will stop maintaining and issuing updates for Fedora 32 on 2021-05-25. It is Fedora's policy to close all bug reports from releases that are no longer maintained. At that time this bug will be closed as EOL if it remains open with a Fedora 'version' of '32'.

Package Maintainer: If you wish for this bug to remain open because you plan to fix it in a currently maintained version, simply change the 'version' to a later Fedora version.

Thank you for reporting this issue and we are sorry that we were not able to fix it before Fedora 32 reaches end of life. If you would still like to see this bug fixed and are able to reproduce it against a later version of Fedora, you are encouraged to change the 'version' to a later Fedora version prior to this bug being closed as described in the policy above.

Although we aim to fix as many bugs as possible during every release's lifetime, sometimes those efforts are overtaken by events. Often a more recent Fedora release includes newer upstream software that fixes bugs or makes them obsolete.
I don't think anything changed. Looks the same in F33.
This message is a reminder that Fedora Linux 34 is nearing its end of life. Fedora will stop maintaining and issuing updates for Fedora Linux 34 on 2022-06-07. It is Fedora's policy to close all bug reports from releases that are no longer maintained. At that time this bug will be closed as EOL if it remains open with a 'version' of '34'.

Package Maintainer: If you wish for this bug to remain open because you plan to fix it in a currently maintained version, change the 'version' to a later Fedora Linux version.

Thank you for reporting this issue and we are sorry that we were not able to fix it before Fedora Linux 34 is end of life. If you would still like to see this bug fixed and are able to reproduce it against a later version of Fedora Linux, you are encouraged to change the 'version' to a later version prior to this bug being closed.
AFAICT, this is unchanged. Still no CPU data reported for any libvirt guests.
Can an affected user file a bug in libvirt's upstream tracker? This isn't really Fedora-specific. https://gitlab.com/libvirt/libvirt/-/issues/new
This bug appears to have been reported against 'rawhide' during the Fedora Linux 37 development cycle. Changing version to 37.
https://gitlab.com/MichalPrivoznik/libvirt/-/commit/f0a6528d6ae7dd9317de521c7938ed3016d785c8
Patches posted here: https://listman.redhat.com/archives/libvir-list/2022-September/234123.html
Merged upstream as:

044b8744d6 qemu: Implement qemuDomainGetStatsCpu fallback for qemu:///session
cdc22d9a21 util: Extend virProcessGetStatInfo() for sysTime and userTime

v8.7.0-45-g044b8744d6
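For context on what the fallback amounts to, here is a hedged Python sketch of the concept only, not the actual libvirt C code: derive the QEMU process's user/system CPU time from /proc/<pid>/stat instead of cgroups. qemu_pid below is a placeholder for the guest's QEMU process ID.

import os

def proc_cpu_times_ns(pid):
    # In /proc/<pid>/stat, field 14 is utime and field 15 is stime, both
    # in clock ticks; the process name in parentheses may contain spaces,
    # so split on the closing ')' first.
    with open("/proc/%d/stat" % pid) as f:
        fields = f.read().rsplit(")", 1)[1].split()
    utime_ticks, stime_ticks = int(fields[11]), int(fields[12])
    tick_ns = 1e9 / os.sysconf("SC_CLK_TCK")
    return utime_ticks * tick_ns, stime_ticks * tick_ns  # (user_ns, sys_ns)

# usage, with qemu_pid being the guest's QEMU process ID:
#   user_ns, sys_ns = proc_cpu_times_ns(qemu_pid)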