Red Hat Bugzilla – Attachment 1973949 Details for Bug 2219538: The guest crashed when only starting with one unbootable disk
the libvirtd log
libvirtd.log (text/plain), 139.20 KB, created by Meina Li on 2023-07-04 08:12:28 UTC

Description: the libvirtd log
Filename:    libvirtd.log
MIME Type:   text/plain
Creator:     Meina Li
Created:     2023-07-04 08:12:28 UTC
Size:        139.20 KB
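A debug log at this verbosity is typically produced by raising the daemon's log level in its configuration before reproducing the bug. A minimal sketch of the relevant settings (the filter string below is only an example; the exact filters used to capture this log are not recorded in the attachment):

```
# /etc/libvirt/virtqemud.conf -- virtqemud is the modular QEMU daemon
# that wrote this log. Level 1 = debug, 2 = info, 3 = warning, 4 = error.
log_filters="1:qemu 1:libvirt 4:object 4:event 3:util.json"
log_outputs="1:file:/var/log/libvirt/libvirtd.log"
```

After editing the file, restart the daemon (e.g. `systemctl restart virtqemud`) and reproduce the failure so the startup sequence is captured.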
2023-07-04 07:38:26.571+0000: 293933: info : libvirt version: 9.5.0, package: 0rc1.1.el9 (Red Hat, Inc. <http://bugzilla.redhat.com/bugzilla>, 2023-06-28-03:58:48, )
2023-07-04 07:38:26.571+0000: 293933: info : hostname: s390x-kvm-virtqez2.lab.eng.rdu2.redhat.com
2023-07-04 07:38:26.571+0000: 293933: debug : main:962 : Decided on pid file path '/run/virtqemud.pid'
2023-07-04 07:38:26.571+0000: 293933: debug : main:973 : Decided on socket paths '/run/libvirt/virtqemud-sock', '/run/libvirt/virtqemud-sock-ro' and '/run/libvirt/virtqemud-admin-sock'
2023-07-04 07:38:26.571+0000: 293933: debug : main:1008 : Ensuring run dir '/run/libvirt' exists
2023-07-04 07:38:26.571+0000: 293934: debug : virThreadJobSetWorker:75 : Thread 293934 is running worker rpc-virtqemud
2023-07-04 07:38:26.571+0000: 293936: debug : virThreadJobSetWorker:75 : Thread 293936 is running worker rpc-virtqemud
2023-07-04 07:38:26.571+0000: 293935: debug : virThreadJobSetWorker:75 : Thread 293935 is running worker rpc-virtqemud
2023-07-04 07:38:26.571+0000: 293937: debug : virThreadJobSetWorker:75 : Thread 293937 is running worker rpc-virtqemud
2023-07-04 07:38:26.571+0000: 293938: debug : virThreadJobSetWorker:75 : Thread 293938 is running worker rpc-virtqemud
2023-07-04 07:38:26.571+0000: 293939: debug : virThreadJobSetWorker:75 : Thread 293939 is running worker prio-rpc-virtqemud
2023-07-04 07:38:26.571+0000: 293941: debug : virThreadJobSetWorker:75 : Thread 293941 is running worker prio-rpc-virtqemud
2023-07-04 07:38:26.571+0000: 293942: debug : virThreadJobSetWorker:75 : Thread 293942 is running worker prio-rpc-virtqemud
2023-07-04 07:38:26.571+0000: 293933: debug : virDriverLoadModule:57 : Module load qemu
2023-07-04 07:38:26.571+0000: 293933: debug : virFileFindResourceFull:1848 : Resolved 'qemu' to '/usr/lib64/libvirt/connection-driver/libvirt_driver_qemu.so'
2023-07-04 07:38:26.571+0000: 293943: debug : virThreadJobSetWorker:75 : Thread 293943 is running worker prio-rpc-virtqemud
2023-07-04 07:38:26.571+0000: 293933: debug : virModuleLoadFile:48 : Load module file '/usr/lib64/libvirt/connection-driver/libvirt_driver_qemu.so'
2023-07-04 07:38:26.572+0000: 293940: debug : virThreadJobSetWorker:75 : Thread 293940 is running worker prio-rpc-virtqemud
2023-07-04 07:38:26.572+0000: 293933: debug : virModuleLoadFunc:69 : Lookup function 'qemuRegister'
2023-07-04 07:38:26.572+0000: 293933: debug : virRegisterConnectDriver:524 : driver=0x3ff807df988 name=QEMU
2023-07-04 07:38:26.572+0000: 293933: debug : virRegisterConnectDriver:535 : registering QEMU as driver 3
2023-07-04 07:38:26.573+0000: 293944: debug : virThreadJobSetWorker:75 : Thread 293944 is running worker rpc-admin
2023-07-04 07:38:26.573+0000: 293945: debug : virThreadJobSetWorker:75 : Thread 293945 is running worker rpc-admin
2023-07-04 07:38:26.573+0000: 293946: debug : virThreadJobSetWorker:75 : Thread 293946 is running worker rpc-admin
2023-07-04 07:38:26.573+0000: 293947: debug : virThreadJobSetWorker:75 : Thread 293947 is running worker rpc-admin
2023-07-04 07:38:26.573+0000: 293948: debug : virThreadJobSetWorker:75 : Thread 293948 is running worker rpc-admin
2023-07-04 07:38:26.573+0000: 293933: debug : main:1138 : Attempting to configure auditing subsystem
2023-07-04 07:38:26.573+0000: 293933: debug : virHookCheck:159 : No hook script /etc/libvirt/hooks/daemon
2023-07-04 07:38:26.573+0000: 293933: debug : virHookCheck:170 : Hook dir /etc/libvirt/hooks/daemon.d is not accessible
2023-07-04 07:38:26.573+0000: 293933: debug : virHookCheck:159 : No hook script /etc/libvirt/hooks/qemu
2023-07-04 07:38:26.573+0000: 293933: debug : virHookCheck:170 : Hook dir /etc/libvirt/hooks/qemu.d is not accessible
2023-07-04 07:38:26.573+0000: 293933: debug : virHookCheck:159 : No hook script /etc/libvirt/hooks/lxc
2023-07-04 07:38:26.573+0000: 293933: debug : virHookCheck:170 : Hook dir /etc/libvirt/hooks/lxc.d is not accessible
2023-07-04 07:38:26.573+0000: 293933: debug : virHookCheck:159 : No hook script /etc/libvirt/hooks/network
2023-07-04 07:38:26.573+0000: 293933: debug : virHookCheck:170 : Hook dir /etc/libvirt/hooks/network.d is not accessible
2023-07-04 07:38:26.573+0000: 293933: debug : virHookCheck:159 : No hook script /etc/libvirt/hooks/libxl
2023-07-04 07:38:26.573+0000: 293933: debug : virHookCheck:170 : Hook dir /etc/libvirt/hooks/libxl.d is not accessible
2023-07-04 07:38:26.573+0000: 293933: debug : virHookCheck:159 : No hook script /etc/libvirt/hooks/bhyve
2023-07-04 07:38:26.573+0000: 293933: debug : virHookCheck:170 : Hook dir /etc/libvirt/hooks/bhyve.d is not accessible
2023-07-04 07:38:26.573+0000: 293933: debug : virSystemdGetListenFDs:813 : Setting up networking from caller
2023-07-04 07:38:26.573+0000: 293933: debug : virSystemdGetListenFDs:844 : Got 3 file descriptors
2023-07-04 07:38:26.573+0000: 293933: debug : virSystemdGetListenFDs:849 : Disabling inheritance of passed FD 3
2023-07-04 07:38:26.573+0000: 293933: debug : virSystemdGetListenFDs:849 : Disabling inheritance of passed FD 4
2023-07-04 07:38:26.573+0000: 293933: debug : virSystemdGetListenFDs:849 : Disabling inheritance of passed FD 5
2023-07-04 07:38:26.573+0000: 293933: debug : virSystemdActivationNew:874 : Activated with 3 FDs
2023-07-04 07:38:26.573+0000: 293933: debug : virSystemdActivationInitFromNames:762 : FD names virtqemud.socket:virtqemud-admin.socket:virtqemud-ro.socket
2023-07-04 07:38:26.573+0000: 293933: debug : virSystemdActivationAddFD:734 : Record first FD 3 with name virtqemud.socket
2023-07-04 07:38:26.573+0000: 293933: debug : virSystemdActivationAddFD:734 : Record first FD 4 with name virtqemud-admin.socket
2023-07-04 07:38:26.573+0000: 293933: debug : virSystemdActivationAddFD:734 : Record first FD 5 with name virtqemud-ro.socket
2023-07-04 07:38:26.573+0000: 293933: debug : virSystemdActivationNew:888 : Created activation object for 3 FDs
2023-07-04 07:38:26.573+0000: 293933: debug : virSystemdActivationClaimFDs:997 : Found 1 FDs with name virtqemud.socket
2023-07-04 07:38:26.573+0000: 293933: debug : virSystemdActivationClaimFDs:997 : Found 1 FDs with name virtqemud-ro.socket
2023-07-04 07:38:26.573+0000: 293933: debug : virSystemdActivationClaimFDs:997 : Found 1 FDs with name virtqemud-admin.socket
2023-07-04 07:38:26.573+0000: 293949: debug : virThreadJobSet:96 : Thread 293949 is now running job daemon-init
2023-07-04 07:38:26.575+0000: 293949: debug : virStateInitialize:658 : Running global init for Remote state driver
2023-07-04 07:38:26.575+0000: 293949: debug : virStateInitialize:666 : State init result 1 (mandatory=1)
2023-07-04 07:38:26.575+0000: 293949: debug : virStateInitialize:658 : Running global init for QEMU state driver
2023-07-04 07:38:26.575+0000: 293949: debug : virArchFromHost:233 : Mapped s390x to 28 (s390x)
2023-07-04 07:38:26.577+0000: 293949: debug : virProcessNamespaceAvailable:1480 : All namespaces (2) are enabled
2023-07-04 07:38:26.577+0000: 293949: debug : virConfReadFile:723 : filename=/etc/libvirt/qemu.conf
2023-07-04 07:38:26.577+0000: 293949: debug : virConfAddEntry:214 : Add entry max_core 0x3ff40043ba0
2023-07-04 07:38:26.577+0000: 293949: debug : virConfGetValueString:865 : Get value string (nil) 0
2023-07-04 07:38:26.577+0000: 293949: debug : virConfGetValueBool:989 : Get value bool (nil) 0
2023-07-04 07:38:26.577+0000: 293949: debug : virConfGetValueString:865 : Get value string (nil) 0
2023-07-04 07:38:26.577+0000: 293949: debug : virConfGetValueBool:989 : Get value bool (nil) 0
2023-07-04 07:38:26.577+0000: 293949: debug : virConfGetValueBool:989 : Get value bool (nil) 0
2023-07-04 07:38:26.577+0000: 293949: debug : virConfGetValueBool:989 : Get value bool (nil) 0
2023-07-04 07:38:26.577+0000: 293949: debug : virConfGetValueString:865 : Get value string (nil) 0
2023-07-04 07:38:26.577+0000: 293949: debug : virConfGetValueString:865 : Get value string (nil) 0
2023-07-04 07:38:26.577+0000: 293949: debug : virConfGetValueString:865 : Get value string (nil) 0
2023-07-04 07:38:26.577+0000: 293949: debug : virConfGetValueString:865 : Get value string (nil) 0
2023-07-04 07:38:26.577+0000: 293949: debug : virConfGetValueBool:989 : Get value bool (nil) 0
2023-07-04 07:38:26.577+0000: 293949: debug : virConfGetValueString:865 : Get value string (nil) 0
2023-07-04 07:38:26.577+0000: 293949: debug : virConfGetValueBool:989 : Get value bool (nil) 0
2023-07-04 07:38:26.577+0000: 293949: debug : virConfGetValueBool:989 : Get value bool (nil) 0
2023-07-04 07:38:26.577+0000: 293949: debug : virConfGetValueBool:989 : Get value bool (nil) 0
2023-07-04 07:38:26.577+0000: 293949: debug : virConfGetValueString:865 : Get value string (nil) 0
2023-07-04 07:38:26.577+0000: 293949: debug : virConfGetValueBool:989 : Get value bool (nil) 0
2023-07-04 07:38:26.577+0000: 293949: debug : virConfGetValueString:865 : Get value string (nil) 0
2023-07-04 07:38:26.577+0000: 293949: debug : virConfGetValueString:865 : Get value string (nil) 0
2023-07-04 07:38:26.577+0000: 293949: debug : virConfGetValueString:865 : Get value string (nil) 0
2023-07-04 07:38:26.577+0000: 293949: debug : virConfGetValueBool:989 : Get value bool (nil) 0
2023-07-04 07:38:26.577+0000: 293949: debug : virConfGetValueBool:989 : Get value bool (nil) 0
2023-07-04 07:38:26.577+0000: 293949: debug : virConfGetValueBool:989 : Get value bool (nil) 0
2023-07-04 07:38:26.577+0000: 293949: debug : virConfGetValueBool:989 : Get value bool (nil) 0
2023-07-04 07:38:26.577+0000: 293949: debug : virConfGetValueBool:989 : Get value bool (nil) 0
2023-07-04 07:38:26.577+0000: 293949: debug : virConfGetValueString:865 : Get value string (nil) 0
2023-07-04 07:38:26.577+0000: 293949: debug : virConfGetValueString:865 : Get value string (nil) 0
2023-07-04 07:38:26.577+0000: 293949: debug : virConfGetValueBool:989 : Get value bool (nil) 0
2023-07-04 07:38:26.577+0000: 293949: debug : virConfGetValueString:865 : Get value string (nil) 0
2023-07-04 07:38:26.577+0000: 293949: debug : virConfGetValueString:865 : Get value string (nil) 0
2023-07-04 07:38:26.577+0000: 293949: debug : virConfGetValueBool:989 : Get value bool (nil) 0
2023-07-04 07:38:26.577+0000: 293949: debug : virConfGetValueString:865 : Get value string (nil) 0
2023-07-04 07:38:26.577+0000: 293949: debug : virConfGetValueString:865 : Get value string (nil) 0
2023-07-04 07:38:26.577+0000: 293949: debug : virConfGetValueBool:989 : Get value bool (nil) 0
2023-07-04 07:38:26.577+0000: 293949: debug : virConfGetValueString:865 : Get value string (nil) 0
2023-07-04 07:38:26.577+0000: 293949: debug : virConfGetValueString:865 : Get value string (nil) 0
2023-07-04 07:38:26.578+0000: 293949: debug : virConfGetValueString:865 : Get value string (nil) 0
2023-07-04 07:38:26.578+0000: 293949: debug : virConfGetValueString:865 : Get value string (nil) 0
2023-07-04 07:38:26.578+0000: 293949: debug : virConfGetValueUInt:1086 : Get value uint (nil) 0
2023-07-04 07:38:26.578+0000: 293949: debug : virConfGetValueUInt:1086 : Get value uint (nil) 0
2023-07-04 07:38:26.578+0000: 293949: debug : virConfGetValueUInt:1086 : Get value uint (nil) 0
2023-07-04 07:38:26.578+0000: 293949: debug : virConfGetValueUInt:1086 : Get value uint (nil) 0
2023-07-04 07:38:26.578+0000: 293949: debug : virConfGetValueString:865 : Get value string (nil) 0
2023-07-04 07:38:26.578+0000: 293949: debug : virConfGetValueString:865 : Get value string (nil) 0
2023-07-04 07:38:26.578+0000: 293949: debug : virConfGetValueString:865 : Get value string (nil) 0
2023-07-04 07:38:26.578+0000: 293949: debug : virConfGetValueString:865 : Get value string (nil) 0
2023-07-04 07:38:26.578+0000: 293949: debug : virConfGetValueBool:989 : Get value bool (nil) 0
2023-07-04 07:38:26.578+0000: 293949: debug : virConfGetValueBool:989 : Get value bool (nil) 0
2023-07-04 07:38:26.578+0000: 293949: debug : virConfGetValueStringList:913 : Get value string list (nil) 0
2023-07-04 07:38:26.578+0000: 293949: debug : virConfGetValueString:865 : Get value string (nil) 0
2023-07-04 07:38:26.578+0000: 293949: debug : virConfGetValueString:865 : Get value string (nil) 0
2023-07-04 07:38:26.578+0000: 293949: debug : virConfGetValueString:865 : Get value string (nil) 0
2023-07-04 07:38:26.578+0000: 293949: debug : virConfGetValueString:865 : Get value string (nil) 0
2023-07-04 07:38:26.578+0000: 293949: debug : virConfGetValueBool:989 : Get value bool (nil) 0
2023-07-04 07:38:26.578+0000: 293949: debug : virConfGetValueUInt:1086 : Get value uint (nil) 0
2023-07-04 07:38:26.578+0000: 293949: debug : virConfGetValueUInt:1086 : Get value uint (nil) 0
2023-07-04 07:38:26.578+0000: 293949: debug : virConfGetValueUInt:1086 : Get value uint (nil) 0
2023-07-04 07:38:26.578+0000: 293949: debug : virConfGetValueString:865 : Get value string 0x3ff40043ba0 3
2023-07-04 07:38:26.578+0000: 293949: debug : virConfGetValueBool:989 : Get value bool (nil) 0
2023-07-04 07:38:26.578+0000: 293949: debug : virConfGetValueString:865 : Get value string (nil) 0
2023-07-04 07:38:26.578+0000: 293949: debug : virConfGetValueString:865 : Get value string (nil) 0
2023-07-04 07:38:26.578+0000: 293949: debug : virConfGetValueBool:989 : Get value bool (nil) 0
2023-07-04 07:38:26.578+0000: 293949: debug : virConfGetValueBool:989 : Get value bool (nil) 0
2023-07-04 07:38:26.578+0000: 293949: debug : virConfGetValueString:865 : Get value string (nil) 0
2023-07-04 07:38:26.578+0000: 293949: debug : virConfGetValueBool:989 : Get value bool (nil) 0
2023-07-04 07:38:26.578+0000: 293949: debug : virConfGetValueUInt:1086 : Get value uint (nil) 0
2023-07-04 07:38:26.578+0000: 293949: debug : virConfGetValueInt:1037 : Get value int (nil) 0
2023-07-04 07:38:26.578+0000: 293949: debug : virConfGetValueUInt:1086 : Get value uint (nil) 0
2023-07-04 07:38:26.578+0000: 293949: debug : virConfGetValueUInt:1086 : Get value uint (nil) 0
2023-07-04 07:38:26.578+0000: 293949: debug : virConfGetValueUInt:1086 : Get value uint (nil) 0
2023-07-04 07:38:26.578+0000: 293949: debug : virConfGetValueString:865 : Get value string (nil) 0
2023-07-04 07:38:26.578+0000: 293949: debug : virConfGetValueString:865 : Get value string (nil) 0
2023-07-04 07:38:26.578+0000: 293949: debug : virConfGetValueBool:989 : Get value bool (nil) 0
2023-07-04 07:38:26.578+0000: 293949: debug : virConfGetValueStringList:913 : Get value string list (nil) 0
2023-07-04 07:38:26.578+0000: 293949: debug : virConfGetValueUInt:1086 : Get value uint (nil) 0
2023-07-04 07:38:26.578+0000: 293949: debug : virConfGetValueBool:989 : Get value bool (nil) 0
2023-07-04 07:38:26.578+0000: 293949: debug : virConfGetValueString:865 : Get value string (nil) 0
2023-07-04 07:38:26.578+0000: 293949: debug : virConfGetValueStringList:913 : Get value string list (nil) 0
2023-07-04 07:38:26.578+0000: 293949: debug : virConfGetValueBool:989 : Get value bool (nil) 0
2023-07-04 07:38:26.578+0000: 293949: debug : virConfGetValueBool:989 : Get value bool (nil) 0
2023-07-04 07:38:26.578+0000: 293949: debug : virConfGetValueString:865 : Get value string (nil) 0
2023-07-04 07:38:26.578+0000: 293949: debug : virConfGetValueString:865 : Get value string (nil) 0
2023-07-04 07:38:26.578+0000: 293949: debug : virConfGetValueBool:989 : Get value bool (nil) 0
2023-07-04 07:38:26.578+0000: 293949: debug : virConfGetValueBool:989 : Get value bool (nil) 0
2023-07-04 07:38:26.578+0000: 293949: debug : virConfGetValueStringList:913 : Get value string list (nil) 0
2023-07-04 07:38:26.578+0000: 293949: debug : virConfGetValueStringList:913 : Get value string list (nil) 0
2023-07-04 07:38:26.578+0000: 293949: debug : virConfGetValueInt:1037 : Get value int (nil) 0
2023-07-04 07:38:26.578+0000: 293949: debug : virConfGetValueStringList:913 : Get value string list (nil) 0
2023-07-04 07:38:26.578+0000: 293949: debug : virConfGetValueString:865 : Get value string (nil) 0
2023-07-04 07:38:26.578+0000: 293949: debug : virConfGetValueString:865 : Get value string (nil) 0
2023-07-04 07:38:26.578+0000: 293949: debug : virConfGetValueString:865 : Get value string (nil) 0
2023-07-04 07:38:26.578+0000: 293949: debug : virConfGetValueStringList:913 : Get value string list (nil) 0
2023-07-04 07:38:26.578+0000: 293949: debug : virLockManagerPluginNew:130 : name=nop driverName=qemu configDir=/etc/libvirt flags=0x0
2023-07-04 07:38:26.578+0000: 293949: debug : virLockManagerNopInit:34 : version=1000000 configFile=/etc/libvirt/qemu-nop.conf flags=0x0
2023-07-04 07:38:26.578+0000: 293949: debug : virSecurityDriverLookup:54 : name=<null>
2023-07-04 07:38:26.578+0000: 293949: debug : virSecurityDriverLookup:65 : Probed name=selinux
2023-07-04 07:38:26.578+0000: 293949: debug : virSecurityManagerNewDriver:85 : drv=0x3ff88662550 (selinux) virtDriver=QEMU flags=0xa
2023-07-04 07:38:26.578+0000: 293949: debug : virSecuritySELinuxInitialize:817 : SELinuxInitialize QEMU
2023-07-04 07:38:26.580+0000: 293949: debug : virSecuritySELinuxQEMUInitialize:776 : Loaded domain context 'system_u:system_r:svirt_t:s0', alt domain context 'system_u:system_r:svirt_tcg_t:s0'
2023-07-04 07:38:26.580+0000: 293949: debug : virSecuritySELinuxQEMUInitialize:796 : Loaded file context 'system_u:object_r:svirt_image_t:s0', content context 'system_u:object_r:virt_content_t:s0'
2023-07-04 07:38:26.580+0000: 293949: debug : virSecurityManagerNewDriver:85 : drv=0x3ff88662740 (stack) virtDriver=QEMU flags=0xa
2023-07-04 07:38:26.580+0000: 293949: debug : virSecurityManagerNewDriver:85 : drv=0x3ff886628b8 (dac) virtDriver=QEMU flags=0xa
2023-07-04 07:38:26.580+0000: 293949: debug : virArchFromHost:233 : Mapped s390x to 28 (s390x)
2023-07-04 07:38:26.580+0000: 293949: debug : virArchFromHost:233 : Mapped s390x to 28 (s390x)
2023-07-04 07:38:26.580+0000: 293949: debug : virArchFromHost:233 : Mapped s390x to 28 (s390x)
2023-07-04 07:38:26.580+0000: 293949: info : virDomainObjListLoadAllConfigs:575 : Scanning for configs in /run/libvirt/qemu
2023-07-04 07:38:26.580+0000: 293949: info : virDomainObjListLoadAllConfigs:575 : Scanning for configs in /etc/libvirt/qemu
2023-07-04 07:38:26.580+0000: 293949: info : virDomainObjListLoadAllConfigs:590 : Loading config file 'vm2.xml'
2023-07-04 07:38:26.581+0000: 293949: debug : virFileCacheValidate:268 : Creating data for '/usr/libexec/qemu-kvm'
2023-07-04 07:38:26.583+0000: 293949: debug : virQEMUCapsParseFlags:4474 : Got flags 98
2023-07-04 07:38:26.584+0000: 293949: debug : virCPUGetHostIsSupported:357 : arch=s390x
2023-07-04 07:38:26.584+0000: 293949: debug : virQEMUCapsInitHostCPUModel:3887 : CPU migratability not provided by QEMU
2023-07-04 07:38:26.584+0000: 293949: debug : virCPUCopyMigratable:1114 : arch=s390x, cpu=0x3ff402e58f0, model=gen15a-base
2023-07-04 07:38:26.584+0000: 293949: error : virHostCPUParsePhysAddrSize:598 : internal error: Missing or invalid CPU address size in /proc/cpuinfo
2023-07-04 07:38:26.584+0000: 293949: debug : virQEMUCapsInitHostCPUModel:3887 : CPU migratability not provided by QEMU
2023-07-04 07:38:26.584+0000: 293949: debug : virCPUCopyMigratable:1114 : arch=s390x, cpu=0x3ff40023800, model=gen15a-base
2023-07-04 07:38:26.585+0000: 293949: debug : virQEMUCapsKVMUsable:5248 : /dev/kvm has changed (1688455713 vs 0)
2023-07-04 07:38:26.587+0000: 293949: debug : virArchFromHost:233 : Mapped s390x to 28 (s390x)
2023-07-04 07:38:26.587+0000: 293949: debug : virFileCacheLoad:167 : Loaded cached data '/var/cache/libvirt/qemu/capabilities/3c76bc41d59c0c7314b1ae8e63f4f765d2cf16abaeea081b3ca1f5d8732f7bb1.xml' for '/usr/libexec/qemu-kvm'
2023-07-04 07:38:26.587+0000: 293949: debug : virFileCacheValidate:271 : Caching data '0x3ff400930a0' for '/usr/libexec/qemu-kvm'
2023-07-04 07:38:26.587+0000: 293949: debug : virQEMUCapsCacheLookup:5954 : Returning caps 0x3ff400930a0 for /usr/libexec/qemu-kvm
2023-07-04 07:38:26.587+0000: 293949: debug : virDomainVirtioSerialAddrSetAddController:1485 : Adding virtio serial controller index 0 with 31 ports to the address set
2023-07-04 07:38:26.587+0000: 293949: debug : virDomainVirtioSerialAddrReserve:1560 : Reserving virtio serial 0 1
2023-07-04 07:38:26.587+0000: 293949: debug : qemuDomainAssignVirtioSerialAddresses:138 : Finished reserving existing ports
2023-07-04 07:38:26.587+0000: 293949: debug : virDomainVirtioSerialAddrNext:1693 : Found free virtio serial controller 0 port 0
2023-07-04 07:38:26.587+0000: 293949: debug : virDomainPCIAddressGetNextAddr:1154 : Found free PCI slot 0000:00:01
2023-07-04 07:38:26.587+0000: 293949: debug : virDomainPCIAddressReserveAddrInternal:846 : PCI bus 0000:00 assigned isolation group 0 because of first device 0000:00:01.0
2023-07-04 07:38:26.587+0000: 293949: debug : virDomainPCIAddressReserveAddrInternal:864 : Reserving PCI address 0000:00:01.0 (aggregate='false')
2023-07-04 07:38:26.587+0000: 293949: debug : qemuDomainUSBAddressAddHubs:3065 : Found 0 USB devices and 0 provided USB ports; adding 0 hubs
2023-07-04 07:38:26.587+0000: 293949: debug : qemuDomainAssignUSBAddresses:3246 : Existing USB addresses have been reserved
2023-07-04 07:38:26.587+0000: 293949: debug : qemuDomainAssignUSBAddresses:3254 : Finished assigning USB ports
2023-07-04 07:38:26.587+0000: 293949: debug : virDomainObjNew:4089 : obj=0x3ff400ad010
2023-07-04 07:38:26.587+0000: 293949: info : virDomainObjListLoadAllConfigs:590 : Loading config file 'rhel.xml'
2023-07-04 07:38:26.588+0000: 293949: debug : virArchFromHost:233 : Mapped s390x to 28 (s390x)
2023-07-04 07:38:26.588+0000: 293949: debug : virQEMUCapsCacheLookup:5954 : Returning caps 0x3ff400930a0 for /usr/libexec/qemu-kvm
2023-07-04 07:38:26.588+0000: 293949: debug : virDomainVirtioSerialAddrSetAddController:1485 : Adding virtio serial controller index 0 with 31 ports to the address set
2023-07-04 07:38:26.588+0000: 293949: debug : virDomainVirtioSerialAddrReserve:1560 : Reserving virtio serial 0 1
2023-07-04 07:38:26.588+0000: 293949: debug : qemuDomainAssignVirtioSerialAddresses:138 : Finished reserving existing ports
2023-07-04 07:38:26.588+0000: 293949: debug : virDomainPCIAddressGetNextAddr:1154 : Found free PCI slot 0000:00:01
2023-07-04 07:38:26.588+0000: 293949: debug : virDomainPCIAddressReserveAddrInternal:846 : PCI bus 0000:00 assigned isolation group 0 because of first device 0000:00:01.0
2023-07-04 07:38:26.588+0000: 293949: debug : virDomainPCIAddressReserveAddrInternal:864 : Reserving PCI address 0000:00:01.0 (aggregate='false')
2023-07-04 07:38:26.588+0000: 293949: debug : qemuDomainUSBAddressAddHubs:3065 : Found 0 USB devices and 0 provided USB ports; adding 0 hubs
2023-07-04 07:38:26.588+0000: 293949: debug : qemuDomainAssignUSBAddresses:3246 : Existing USB addresses have been reserved
2023-07-04 07:38:26.588+0000: 293949: debug : qemuDomainAssignUSBAddresses:3254 : Finished assigning USB ports
2023-07-04 07:38:26.588+0000: 293949: debug : virDomainObjNew:4089 : obj=0x3ff400ad100
2023-07-04 07:38:26.588+0000: 293949: info : virDomainObjListLoadAllConfigs:590 : Loading config file 'avocado-vt-vm1.xml'
2023-07-04 07:38:26.588+0000: 293949: debug : virArchFromHost:233 : Mapped s390x to 28 (s390x)
2023-07-04 07:38:26.588+0000: 293949: debug : virQEMUCapsCacheLookup:5954 : Returning caps 0x3ff400930a0 for /usr/libexec/qemu-kvm
2023-07-04 07:38:26.588+0000: 293949: debug : virDomainVirtioSerialAddrSetAddController:1485 : Adding virtio serial controller index 0 with 31 ports to the address set
2023-07-04 07:38:26.588+0000: 293949: debug : virDomainVirtioSerialAddrReserve:1560 : Reserving virtio serial 0 1
2023-07-04 07:38:26.588+0000: 293949: debug : qemuDomainAssignVirtioSerialAddresses:138 : Finished reserving existing ports
2023-07-04 07:38:26.588+0000: 293949: debug : virDomainVirtioSerialAddrNext:1693 : Found free virtio serial controller 0 port 0
2023-07-04 07:38:26.588+0000: 293949: debug : virDomainPCIAddressGetNextAddr:1154 : Found free PCI slot 0000:00:01
2023-07-04 07:38:26.588+0000: 293949: debug : virDomainPCIAddressReserveAddrInternal:846 : PCI bus 0000:00 assigned isolation group 0 because of first device 0000:00:01.0
2023-07-04 07:38:26.588+0000: 293949: debug : virDomainPCIAddressReserveAddrInternal:864 : Reserving PCI address 0000:00:01.0 (aggregate='false')
2023-07-04 07:38:26.588+0000: 293949: debug : qemuDomainUSBAddressAddHubs:3065 : Found 0 USB devices and 0 provided USB ports; adding 0 hubs
2023-07-04 07:38:26.588+0000: 293949: debug : qemuDomainAssignUSBAddresses:3246 : Existing USB addresses have been reserved
2023-07-04 07:38:26.588+0000: 293949: debug : qemuDomainAssignUSBAddresses:3254 : Finished assigning USB ports
2023-07-04 07:38:26.588+0000: 293949: debug : virDomainObjNew:4089 : obj=0x3ff400ad1f0
2023-07-04 07:38:26.588+0000: 293949: info : qemuDomainSnapshotLoad:353 : Scanning for snapshots for domain rhel in /var/lib/libvirt/qemu/snapshot/rhel
2023-07-04 07:38:26.588+0000: 293949: info : qemuDomainSnapshotLoad:353 : Scanning for snapshots for domain vm2 in /var/lib/libvirt/qemu/snapshot/vm2
2023-07-04 07:38:26.588+0000: 293949: info : qemuDomainSnapshotLoad:353 : Scanning for snapshots for domain avocado-vt-vm1 in /var/lib/libvirt/qemu/snapshot/avocado-vt-vm1
2023-07-04 07:38:26.588+0000: 293949: info : qemuDomainCheckpointLoad:449 : Scanning for checkpoints for domain rhel in /var/lib/libvirt/qemu/checkpoint/rhel
2023-07-04 07:38:26.588+0000: 293949: info : qemuDomainCheckpointLoad:449 : Scanning for checkpoints for domain vm2 in /var/lib/libvirt/qemu/checkpoint/vm2
2023-07-04 07:38:26.588+0000: 293949: info : qemuDomainCheckpointLoad:449 : Scanning for checkpoints for domain avocado-vt-vm1 in /var/lib/libvirt/qemu/checkpoint/avocado-vt-vm1
2023-07-04 07:38:26.588+0000: 293949: debug : virDriverShouldAutostart:96 : Autostart file /run/libvirt/qemu/autostarted exists, skipping autostart
2023-07-04 07:38:26.588+0000: 293949: debug : virStateInitialize:666 : State init result 1 (mandatory=1)
2023-07-04 07:38:26.588+0000: 293949: debug : virThreadJobClear:121 : Thread 293949 finished job daemon-init with ret=0
2023-07-04 07:38:30.586+0000: 293934: debug : virThreadJobSet:93 : Thread 293934 (rpc-virtqemud) is now running job remoteDispatchAuthList
2023-07-04 07:38:30.587+0000: 293934: debug : virThreadJobClear:118 : Thread 293934 (rpc-virtqemud) finished job remoteDispatchAuthList with ret=0
2023-07-04 07:38:30.587+0000: 293936: debug : virThreadJobSet:93 : Thread 293936 (rpc-virtqemud) is now running job remoteDispatchConnectSupportsFeature
2023-07-04 07:38:30.587+0000: 293936: debug : virThreadJobClear:118 : Thread 293936 (rpc-virtqemud) finished job remoteDispatchConnectSupportsFeature with ret=0
2023-07-04 07:38:30.587+0000: 293935: debug : virThreadJobSet:93 : Thread 293935 (rpc-virtqemud) is now running job remoteDispatchConnectOpen
2023-07-04 07:38:30.587+0000: 293935: debug : virConnectOpenAuth:1277 : name=, auth=(nil), flags=0x0
2023-07-04 07:38:30.587+0000: 293935: debug : virConfLoadConfig:1515 : Loading config file '/etc/libvirt/libvirt.conf'
2023-07-04 07:38:30.587+0000: 293935: debug : virConfReadFile:723 : filename=/etc/libvirt/libvirt.conf
2023-07-04 07:38:30.587+0000: 293935: debug : virConfGetValueString:865 : Get value string (nil) 0
2023-07-04 07:38:30.587+0000: 293935: debug : virConnectOpenInternal:933 : Trying to probe for default URI
2023-07-04 07:38:30.587+0000: 293935: debug : virConnectOpenInternal:938 : QEMU driver URI probe returned 'qemu:///system'
2023-07-04 07:38:30.587+0000: 293935: debug : virConfGetValueStringList:913 : Get value string list (nil) 0
2023-07-04 07:38:30.587+0000: 293935: debug : virConnectOpenInternal:966 : Split "qemu:///system" to URI components:
  scheme qemu
  server <null>
  user <null>
  port 0
  path /system
2023-07-04 07:38:30.587+0000: 293935: debug : virConnectOpenInternal:1072 : trying driver 0 (Test) ...
2023-07-04 07:38:30.587+0000: 293935: debug : virConnectOpenInternal:1094 : Checking for supported URI schemes
2023-07-04 07:38:30.587+0000: 293935: debug : virConnectOpenInternal:1103 : No matching URI scheme
2023-07-04 07:38:30.587+0000: 293935: debug : virConnectOpenInternal:1072 : trying driver 1 (ESX) ...
2023-07-04 07:38:30.587+0000: 293935: debug : virConnectOpenInternal:1094 : Checking for supported URI schemes
2023-07-04 07:38:30.587+0000: 293935: debug : virConnectOpenInternal:1103 : No matching URI scheme
2023-07-04 07:38:30.587+0000: 293935: debug : virConnectOpenInternal:1072 : trying driver 2 (remote) ...
2023-07-04 07:38:30.587+0000: 293935: debug : virConnectOpenInternal:1111 : Matching any URI scheme for 'qemu'
2023-07-04 07:38:30.587+0000: 293935: debug : virConnectOpenInternal:1137 : driver 2 remote returned DECLINED
2023-07-04 07:38:30.587+0000: 293935: debug : virConnectOpenInternal:1072 : trying driver 3 (QEMU) ...
2023-07-04 07:38:30.587+0000: 293935: debug : virConnectOpenInternal:1094 : Checking for supported URI schemes
2023-07-04 07:38:30.587+0000: 293935: debug : virConnectOpenInternal:1097 : Matched URI scheme 'qemu'
2023-07-04 07:38:30.587+0000: 293935: debug : virConnectOpenInternal:1137 : driver 3 QEMU returned SUCCESS
2023-07-04 07:38:30.587+0000: 293935: debug : virConnectGetType:163 : conn=0x3ff40014190
2023-07-04 07:38:30.587+0000: 293935: debug : virThreadJobClear:118 : Thread 293935 (rpc-virtqemud) finished job remoteDispatchConnectOpen with ret=0
2023-07-04 07:38:30.587+0000: 293937: debug : virThreadJobSet:93 : Thread 293937 (rpc-virtqemud) is now running job remoteDispatchConnectGetURI
2023-07-04 07:38:30.587+0000: 293937: debug : virConnectGetURI:316 : conn=0x3ff40014190
2023-07-04 07:38:30.587+0000: 293937: debug : virThreadJobClear:118 : Thread 293937 (rpc-virtqemud) finished job remoteDispatchConnectGetURI with ret=0
2023-07-04 07:38:30.587+0000: 293938: debug : virThreadJobSet:93 : Thread 293938 (rpc-virtqemud) is now running job remoteDispatchConnectSupportsFeature
2023-07-04 07:38:30.587+0000: 293938: debug : virThreadJobClear:118 : Thread 293938 (rpc-virtqemud) finished job remoteDispatchConnectSupportsFeature with ret=0
2023-07-04 07:38:30.587+0000: 293934: debug : virThreadJobSet:93 : Thread 293934 (rpc-virtqemud) is now running job remoteDispatchConnectSupportsFeature
2023-07-04 07:38:30.587+0000: 293934: debug : virThreadJobClear:118 : Thread 293934 (rpc-virtqemud) finished job remoteDispatchConnectSupportsFeature with ret=0
2023-07-04 07:38:30.598+0000: 293936: debug : virThreadJobSet:93 : Thread 293936 (rpc-virtqemud) is now running job remoteDispatchConnectRegisterCloseCallback
2023-07-04 07:38:30.598+0000: 293936: debug : virConnectRegisterCloseCallback:1501 : conn=0x3ff40014190
2023-07-04 07:38:30.598+0000: 293936: debug : virThreadJobClear:118 : Thread 293936 (rpc-virtqemud) finished job remoteDispatchConnectRegisterCloseCallback with ret=0
2023-07-04 07:38:30.598+0000: 293942: debug : virThreadJobSet:93 : Thread 293942 (prio-rpc-virtqemud) is now running job remoteDispatchDomainLookupByName
2023-07-04 07:38:30.598+0000: 293942: debug : virDomainLookupByName:449 : conn=0x3ff40014190, name=rhel
2023-07-04 07:38:30.598+0000: 293942: debug : virDomainDispose:326 : release domain 0x3ff2c006010 rhel e2a90373-48e4-4bfc-8f57-07e84139bd67
2023-07-04 07:38:30.598+0000: 293942: debug : virThreadJobClear:118 : Thread 293942 (prio-rpc-virtqemud) finished job remoteDispatchDomainLookupByName with ret=0
2023-07-04 07:38:30.598+0000: 293937: debug : virThreadJobSet:93 : Thread 293937 (rpc-virtqemud) is now running job remoteDispatchDomainCreate
2023-07-04 07:38:30.598+0000: 293937: debug : virDomainCreate:7007 : dom=0x3ff4001fa40, (VM: name=rhel, uuid=e2a90373-48e4-4bfc-8f57-07e84139bd67)
2023-07-04 07:38:30.598+0000: 293937: debug : virDomainObjBeginJobInternal:318 : Starting job: API=remoteDispatchDomainCreate job=none agentJob=none asyncJob=start (vm=0x3ff400ad100 name=rhel, current job=none agentJob=none async=none)
2023-07-04 07:38:30.598+0000: 293937: debug : virDomainObjBeginJobInternal:391 : Started async job: start (vm=0x3ff400ad100 name=rhel)
2023-07-04 07:38:30.598+0000: 293937: debug : qemuProcessStart:8037 : conn=0x3ff40014190 driver=0x3ff40021560 vm=0x3ff400ad100 name=rhel id=-1 asyncJob=start migrateFrom=<null> migrateFd=-1 migratePath=<null> snapshot=(nil) vmop=0 flags=0x1
2023-07-04 07:38:30.598+0000: 293937: debug : qemuProcessInit:5700 : vm=0x3ff400ad100 name=rhel id=-1 migration=0
2023-07-04 07:38:30.598+0000: 293937: debug : qemuProcessInit:5703 : Beginning VM startup process
2023-07-04 07:38:30.598+0000: 293937: debug : qemuProcessInit:5721 : Determining emulator version
2023-07-04 07:38:30.598+0000: 293937: debug : virArchFromHost:233 : Mapped s390x to 28 (s390x)
2023-07-04 07:38:30.598+0000: 293937: debug : virQEMUCapsCacheLookup:5954 : Returning caps 0x3ff400930a0 for /usr/libexec/qemu-kvm
2023-07-04 07:38:30.598+0000: 293937: debug : virCPUDataNewCopy:306 : data=(nil)
2023-07-04 07:38:30.598+0000: 293937: debug : qemuProcessStartValidate:5524 : Checking for KVM availability
2023-07-04 07:38:30.598+0000: 293937: debug : qemuProcessStartValidate:5535 : Checking domain and device security labels
2023-07-04 07:38:30.598+0000: 293937: debug : virCPUValidateFeatures:1143 : arch=s390x, cpu=0x3ff400b8610, nfeatures=0
2023-07-04 07:38:30.598+0000: 293937: debug : qemuProcessStartValidate:5582 : Checking for any possible (non-fatal) issues
2023-07-04 07:38:30.598+0000: 293937: debug : qemuProcessInit:5735 : Setting current domain def as transient
2023-07-04 07:38:30.599+0000: 293937: debug : virDomainVirtioSerialAddrSetAddController:1485 : Adding virtio serial controller index 0 with 31 ports to the address set
2023-07-04 07:38:30.599+0000: 293937: debug : virDomainVirtioSerialAddrReserve:1560 : Reserving virtio serial 0 1
2023-07-04 07:38:30.599+0000: 293937: debug : qemuDomainAssignVirtioSerialAddresses:138 : Finished reserving existing ports
2023-07-04 07:38:30.599+0000: 293937: debug : virDomainPCIAddressGetNextAddr:1154 : Found free PCI slot 0000:00:01
2023-07-04 07:38:30.599+0000: 293937: debug : virDomainPCIAddressReserveAddrInternal:846 : PCI bus 0000:00 assigned isolation group 0 because of first device 0000:00:01.0
2023-07-04 07:38:30.599+0000: 293937: debug : virDomainPCIAddressReserveAddrInternal:864 : Reserving PCI address 0000:00:01.0 (aggregate='false')
2023-07-04 07:38:30.599+0000: 293937: debug : qemuDomainUSBAddressAddHubs:3065 : Found 0 USB devices and 0 provided USB ports; adding 0 hubs
2023-07-04 07:38:30.599+0000: 293937: debug : qemuDomainAssignUSBAddresses:3246 : Existing USB addresses have been reserved
2023-07-04 07:38:30.599+0000: 293937: debug : qemuDomainAssignUSBAddresses:3254 : Finished assigning USB ports
2023-07-04 07:38:30.600+0000: 293937: debug : qemuProcessPrepareDomain:6646 : Generating domain security label (if required)
2023-07-04 07:38:30.600+0000: 293937: debug : virSecuritySELinuxGenLabel:854 : label=QEMU
2023-07-04 07:38:30.600+0000: 293937: debug : virSecuritySELinuxGenLabel:876 : type=2
2023-07-04 07:38:30.600+0000: 293937: debug : virSecuritySELinuxMCSFind:369 : Using sensitivity level 's0' cat min 0 max 1023 range 1024
2023-07-04 07:38:30.600+0000: 293937: debug : virSecuritySELinuxMCSFind:376 : Try cat s0:c737,c993
2023-07-04 07:38:30.600+0000: 293937: debug : virSecuritySELinuxGenNewContext:620 : basecontext=system_u:system_r:svirt_t:s0 mcs=s0:c737,c993 isObjectContext=0
2023-07-04 07:38:30.600+0000: 293937: debug : virSecuritySELinuxGenNewContext:634 : process=system_u:system_r:virtd_t:s0-s0:c0.c1023
2023-07-04 07:38:30.600+0000: 293937: debug : virSecuritySELinuxGenNewContext:672 : Generated context 'system_u:system_r:svirt_t:s0:c737,c993'
2023-07-04 07:38:30.600+0000: 293937: debug : virSecuritySELinuxGenNewContext:620 : basecontext=system_u:object_r:svirt_image_t:s0 mcs=s0:c737,c993 isObjectContext=1
2023-07-04 07:38:30.600+0000: 293937: debug : virSecuritySELinuxGenNewContext:634 : process=system_u:system_r:virtd_t:s0-s0:c0.c1023
2023-07-04 07:38:30.600+0000: 293937: debug : virSecuritySELinuxGenNewContext:672 : Generated context 'system_u:object_r:svirt_image_t:s0:c737,c993'
2023-07-04 07:38:30.600+0000: 293937: debug : virSecuritySELinuxGenLabel:980 : model=selinux label=system_u:system_r:svirt_t:s0:c737,c993 imagelabel=system_u:object_r:svirt_image_t:s0:c737,c993 baselabel=<null>
2023-07-04 07:38:30.600+0000: 293937: debug : qemuProcessPrepareDomain:6673 : Assigning domain PCI addresses
2023-07-04 07:38:30.600+0000: 293937: debug : virDomainVirtioSerialAddrSetAddController:1485 : Adding virtio serial controller index 0 with 31 ports to the address set
2023-07-04 07:38:30.600+0000: 293937: debug : virDomainVirtioSerialAddrReserve:1560 :
Reserving virtio serial 0 1 >2023-07-04 07:38:30.600+0000: 293937: debug : qemuDomainAssignVirtioSerialAddresses:138 : Finished reserving existing ports >2023-07-04 07:38:30.600+0000: 293937: debug : virDomainPCIAddressGetNextAddr:1154 : Found free PCI slot 0000:00:01 >2023-07-04 07:38:30.600+0000: 293937: debug : virDomainPCIAddressReserveAddrInternal:846 : PCI bus 0000:00 assigned isolation group 0 because of first device 0000:00:01.0 >2023-07-04 07:38:30.600+0000: 293937: debug : virDomainPCIAddressReserveAddrInternal:864 : Reserving PCI address 0000:00:01.0 (aggregate='false') >2023-07-04 07:38:30.600+0000: 293937: debug : qemuDomainUSBAddressAddHubs:3065 : Found 0 USB devices and 0 provided USB ports; adding 0 hubs >2023-07-04 07:38:30.600+0000: 293937: debug : qemuDomainAssignUSBAddresses:3246 : Existing USB addresses have been reserved >2023-07-04 07:38:30.600+0000: 293937: debug : qemuDomainAssignUSBAddresses:3254 : Finished assigning USB ports >2023-07-04 07:38:30.600+0000: 293937: debug : virConnectOpen:1204 : name=network:///system >2023-07-04 07:38:30.600+0000: 293937: debug : virConfLoadConfig:1515 : Loading config file '/etc/libvirt/libvirt.conf' >2023-07-04 07:38:30.600+0000: 293937: debug : virConfReadFile:723 : filename=/etc/libvirt/libvirt.conf >2023-07-04 07:38:30.600+0000: 293937: debug : virConfGetValueStringList:913 : Get value string list (nil) 0 >2023-07-04 07:38:30.600+0000: 293937: debug : virConnectOpenInternal:966 : Split "network:///system" to URI components: > scheme network > server <null> > user <null> > port 0 > path /system >2023-07-04 07:38:30.600+0000: 293937: debug : virConnectOpenInternal:1072 : trying driver 0 (Test) ... 
>2023-07-04 07:38:30.600+0000: 293937: debug : virConnectOpenInternal:1094 : Checking for supported URI schemes >2023-07-04 07:38:30.600+0000: 293937: debug : virConnectOpenInternal:1103 : No matching URI scheme >2023-07-04 07:38:30.600+0000: 293937: debug : virConnectOpenInternal:1072 : trying driver 1 (ESX) ... >2023-07-04 07:38:30.600+0000: 293937: debug : virConnectOpenInternal:1094 : Checking for supported URI schemes >2023-07-04 07:38:30.600+0000: 293937: debug : virConnectOpenInternal:1103 : No matching URI scheme >2023-07-04 07:38:30.600+0000: 293937: debug : virConnectOpenInternal:1072 : trying driver 2 (remote) ... >2023-07-04 07:38:30.600+0000: 293937: debug : virConnectOpenInternal:1111 : Matching any URI scheme for 'network' >2023-07-04 07:38:30.600+0000: 293937: debug : virConfGetValueString:865 : Get value string (nil) 0 >2023-07-04 07:38:30.600+0000: 293937: debug : virConfGetValueString:865 : Get value string (nil) 0 >2023-07-04 07:38:31.531+0000: 293937: debug : virConnectOpenInternal:1137 : driver 2 remote returned SUCCESS >2023-07-04 07:38:31.531+0000: 293937: debug : virGetConnectGeneric:157 : Opened new network connection 0x3ff40014690 >2023-07-04 07:38:31.531+0000: 293937: debug : virGetConnectGeneric:164 : Attempting to delegate current identity >2023-07-04 07:38:31.531+0000: 293937: debug : virConnectSetIdentity:99 : conn=0x3ff40014690 params=0x3ff7002fba0 nparams=7 flags=0x0 >2023-07-04 07:38:31.531+0000: 293937: debug : virConnectSetIdentity:100 : params["user-name"]=(string)root >2023-07-04 07:38:31.531+0000: 293937: debug : virConnectSetIdentity:100 : params["unix-user-id"]=(ullong)0 >2023-07-04 07:38:31.531+0000: 293937: debug : virConnectSetIdentity:100 : params["group-name"]=(string)root >2023-07-04 07:38:31.531+0000: 293937: debug : virConnectSetIdentity:100 : params["unix-group-id"]=(ullong)0 >2023-07-04 07:38:31.531+0000: 293937: debug : virConnectSetIdentity:100 : params["process-id"]=(llong)293956 >2023-07-04 07:38:31.531+0000: 
293937: debug : virConnectSetIdentity:100 : params["process-time"]=(ullong)1507459 >2023-07-04 07:38:31.531+0000: 293937: debug : virConnectSetIdentity:100 : params["selinux-context"]=(string)unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 >2023-07-04 07:38:31.531+0000: 293937: debug : virNetworkLookupByName:306 : conn=0x3ff40014690, name=default >2023-07-04 07:38:31.531+0000: 293937: debug : virNetworkGetXMLDesc:943 : network=0x3ff40020010, flags=0x0 >2023-07-04 07:38:31.532+0000: 293937: debug : virNetworkDispose:388 : release network 0x3ff40020010 default 03ea3bcd-1a0d-40d7-a648-a9e7ce820d7a >2023-07-04 07:38:31.532+0000: 293937: debug : qemuProcessPrepareDomain:6684 : Setting graphics devices >2023-07-04 07:38:31.532+0000: 293937: debug : qemuProcessPrepareDomain:6688 : Create domain masterKey >2023-07-04 07:38:31.532+0000: 293937: debug : qemuProcessPrepareDomain:6692 : Setting up storage >2023-07-04 07:38:31.532+0000: 293937: debug : qemuProcessPrepareDomain:6696 : Setting up host devices >2023-07-04 07:38:31.532+0000: 293937: debug : qemuProcessPrepareDomain:6700 : Prepare chardev source backends >2023-07-04 07:38:31.532+0000: 293937: debug : qemuProcessPrepareDomain:6704 : Prepare device secrets >2023-07-04 07:38:31.532+0000: 293937: debug : qemuProcessPrepareDomain:6708 : Prepare bios/uefi paths >2023-07-04 07:38:31.532+0000: 293937: debug : qemuProcessPrepareDomain:6714 : Preparing external devices >2023-07-04 07:38:31.532+0000: 293937: debug : qemuProcessPrepareDomain:6719 : Aligning guest memory >2023-07-04 07:38:31.532+0000: 293937: debug : qemuProcessPrepareDomain:6733 : Preparing monitor state >2023-07-04 07:38:31.532+0000: 293937: debug : qemuProcessPrepareDomain:6742 : Updating guest CPU definition >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUConvertLegacy:1029 : arch=s390x, cpu=0x3ff400b8610, model=<null> >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUCompare:151 : arch=s390x, host=0x3ff70004320, cpu=0x3ff400b8610 >2023-07-04 
07:38:31.532+0000: 293937: debug : virCPUUpdate:577 : arch=s390x, guest=0x3ff400b8610 mode=host-model model=<null>, host=0x3ff70004c00 model=gen15a-base >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUUpdate:625 : model=gen15a-base >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=gen16a-base >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=z890.2-base >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=z800-base >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=gen16a >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=z9EC.2 >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=z13.2 >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=z990.5-base >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=z9BC-base >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=z890 >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=z890.2 >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=z9BC >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=z13 >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=z196 >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=z13s >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=host >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=gen16b-base >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=z990.3 >2023-07-04 07:38:31.532+0000: 
293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=z13s-base >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=z9EC >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=gen15a >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=z14ZR1-base >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=z14.2-base >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=z900.3-base >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=z13.2-base >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=z196.2-base >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=zBC12-base >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=z9BC.2-base >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=z900.2-base >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=z9EC.3 >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=zEC12 >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=z900 >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=z114-base >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=zEC12-base >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=z10EC.2 >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=z10EC-base >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=z900.3 >2023-07-04 07:38:31.532+0000: 293937: debug : 
virCPUGetVendorForModel:941 : arch=s390x model=z14ZR1 >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=z10BC >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=z10BC.2-base >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=z9BC.2 >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=z990.2 >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=z990 >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=z14 >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=gen15b-base >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=z990.4 >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=max >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=z10EC.2-base >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=z990.2-base >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=gen15a-base >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=z800 >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=gen16b >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=z10EC >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=zEC12.2 >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=z900-base >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=z10BC.2 >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=z9EC-base 
>2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=z9EC.3-base >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=z114 >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=z890.3 >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=z196-base >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=z9EC.2-base >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=z196.2 >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=z14.2 >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=z990-base >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=z900.2 >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=z890-base >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=z10EC.3 >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=z14-base >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=z990.4-base >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=z10EC.3-base >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=z10BC-base >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=z13-base >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=z990.3-base >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=zEC12.2-base >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=zBC12 >2023-07-04 07:38:31.532+0000: 
293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=z890.3-base >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=z990.5 >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=gen15b >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUGetVendorForModel:941 : arch=s390x model=qemu >2023-07-04 07:38:31.532+0000: 293937: debug : virCPUTranslate:975 : arch=s390x, cpu=0x3ff400b8610, model=gen15a-base, models=0x3ff2c00aa10 >2023-07-04 07:38:31.536+0000: 293937: debug : qemuProcessPrepareHost:7177 : Preparing network devices >2023-07-04 07:38:31.536+0000: 293937: debug : virConnectOpen:1204 : name=network:///system >2023-07-04 07:38:31.536+0000: 293937: debug : virConfLoadConfig:1515 : Loading config file '/etc/libvirt/libvirt.conf' >2023-07-04 07:38:31.536+0000: 293937: debug : virConfReadFile:723 : filename=/etc/libvirt/libvirt.conf >2023-07-04 07:38:31.536+0000: 293937: debug : virConfGetValueStringList:913 : Get value string list (nil) 0 >2023-07-04 07:38:31.536+0000: 293937: debug : virConnectOpenInternal:966 : Split "network:///system" to URI components: > scheme network > server <null> > user <null> > port 0 > path /system >2023-07-04 07:38:31.536+0000: 293937: debug : virConnectOpenInternal:1072 : trying driver 0 (Test) ... >2023-07-04 07:38:31.536+0000: 293937: debug : virConnectOpenInternal:1094 : Checking for supported URI schemes >2023-07-04 07:38:31.536+0000: 293937: debug : virConnectOpenInternal:1103 : No matching URI scheme >2023-07-04 07:38:31.536+0000: 293937: debug : virConnectOpenInternal:1072 : trying driver 1 (ESX) ... >2023-07-04 07:38:31.536+0000: 293937: debug : virConnectOpenInternal:1094 : Checking for supported URI schemes >2023-07-04 07:38:31.536+0000: 293937: debug : virConnectOpenInternal:1103 : No matching URI scheme >2023-07-04 07:38:31.536+0000: 293937: debug : virConnectOpenInternal:1072 : trying driver 2 (remote) ... 
>2023-07-04 07:38:31.536+0000: 293937: debug : virConnectOpenInternal:1111 : Matching any URI scheme for 'network' >2023-07-04 07:38:31.536+0000: 293937: debug : virConfGetValueString:865 : Get value string (nil) 0 >2023-07-04 07:38:31.536+0000: 293937: debug : virConfGetValueString:865 : Get value string (nil) 0 >2023-07-04 07:38:31.536+0000: 293937: debug : virConnectOpenInternal:1137 : driver 2 remote returned SUCCESS >2023-07-04 07:38:31.536+0000: 293937: debug : virGetConnectGeneric:157 : Opened new network connection 0x3ff40014790 >2023-07-04 07:38:31.536+0000: 293937: debug : virGetConnectGeneric:164 : Attempting to delegate current identity >2023-07-04 07:38:31.536+0000: 293937: debug : virConnectSetIdentity:99 : conn=0x3ff40014790 params=0x3ff70034660 nparams=7 flags=0x0 >2023-07-04 07:38:31.536+0000: 293937: debug : virConnectSetIdentity:100 : params["user-name"]=(string)root >2023-07-04 07:38:31.536+0000: 293937: debug : virConnectSetIdentity:100 : params["unix-user-id"]=(ullong)0 >2023-07-04 07:38:31.536+0000: 293937: debug : virConnectSetIdentity:100 : params["group-name"]=(string)root >2023-07-04 07:38:31.536+0000: 293937: debug : virConnectSetIdentity:100 : params["unix-group-id"]=(ullong)0 >2023-07-04 07:38:31.536+0000: 293937: debug : virConnectSetIdentity:100 : params["process-id"]=(llong)293956 >2023-07-04 07:38:31.536+0000: 293937: debug : virConnectSetIdentity:100 : params["process-time"]=(ullong)1507459 >2023-07-04 07:38:31.536+0000: 293937: debug : virConnectSetIdentity:100 : params["selinux-context"]=(string)unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 >2023-07-04 07:38:31.536+0000: 293937: debug : virNetworkLookupByName:306 : conn=0x3ff40014790, name=default >2023-07-04 07:38:31.536+0000: 293937: debug : virNetworkPortCreateXML:1617 : net=0x3ff40020150, xmldesc=<networkport> > <uuid>a0907b01-7e72-4679-bd47-fc483727b78c</uuid> > <owner> > <name>rhel</name> > <uuid>e2a90373-48e4-4bfc-8f57-07e84139bd67</uuid> > </owner> > <mac 
address='52:54:00:ac:19:b5'/> ></networkport> >, flags=0x0 >2023-07-04 07:38:31.537+0000: 293937: debug : virNetworkPortGetXMLDesc:1683 : port=0x3ff2c008b10, flags=0x0 >2023-07-04 07:38:31.538+0000: 293937: debug : virNetworkPortGetUUID:1721 : port=0x3ff2c008b10, uuid=0x3ff400b8c30 >2023-07-04 07:38:31.538+0000: 293937: debug : virNetworkPortDispose:446 : release network port 0x3ff2c008b10 a0907b01-7e72-4679-bd47-fc483727b78c >2023-07-04 07:38:31.538+0000: 293937: debug : virNetworkDispose:388 : release network 0x3ff400201a0 default 03ea3bcd-1a0d-40d7-a648-a9e7ce820d7a >2023-07-04 07:38:31.538+0000: 293937: debug : virNetworkDispose:388 : release network 0x3ff40020150 default 03ea3bcd-1a0d-40d7-a648-a9e7ce820d7a >2023-07-04 07:38:31.538+0000: 293937: debug : qemuProcessPrepareHost:7182 : Preparing host devices >2023-07-04 07:38:31.538+0000: 293937: debug : qemuProcessPrepareHost:7190 : Preparing chr device backends >2023-07-04 07:38:31.538+0000: 293937: debug : virSecuritySELinuxSetSocketLabel:3143 : Setting VM rhel socket context system_u:system_r:svirt_t:s0:c737,c993 >2023-07-04 07:38:31.538+0000: 293937: debug : virSecuritySELinuxSetSocketLabel:3143 : Setting VM rhel socket context system_u:system_r:svirt_t:s0:c737,c993 >2023-07-04 07:38:31.538+0000: 293937: debug : qemuProcessPrepareHost:7199 : Ensuring no historical cgroup is lying around >2023-07-04 07:38:31.538+0000: 293937: debug : qemuProcessPrepareHost:7224 : Write domain masterKey >2023-07-04 07:38:31.540+0000: 293937: debug : qemuProcessPrepareHost:7228 : Preparing disks (host) >2023-07-04 07:38:31.540+0000: 293937: debug : qemuProcessPrepareHost:7232 : Preparing external devices >2023-07-04 07:38:31.540+0000: 293937: debug : qemuProcessLaunch:7576 : conn=0x3ff40014190 driver=0x3ff40021560 vm=0x3ff400ad100 name=rhel id=1 asyncJob=6 incoming.uri=<null> incoming.fd=-1 incoming.path=<null> snapshot=(nil) vmop=0 flags=0x11 >2023-07-04 07:38:31.540+0000: 293937: debug : qemuProcessLaunch:7615 : Creating 
domain log file >2023-07-04 07:38:31.540+0000: 293937: debug : qemuDomainLogContextNew:7026 : Context new 0x3ff4000f080 stdioLogD=1 >2023-07-04 07:38:31.541+0000: 293937: debug : qemuBuildCommandLine:10285 : Building qemu commandline for def=rhel(0x3ff400b7d00) migrateURI=(null) snapshot=(nil) vmop=0 >2023-07-04 07:38:31.541+0000: 293937: debug : virArchFromHost:233 : Mapped s390x to 28 (s390x) >2023-07-04 07:38:31.544+0000: 293937: info : virNetDevTapCreate:258 : created device: 'vnet0' >2023-07-04 07:38:31.544+0000: 293937: debug : virNetDevSetMACInternal:282 : SIOCSIFHWADDR vnet0 MAC=fe:54:00:ac:19:b5 - Success >2023-07-04 07:38:31.545+0000: 293937: debug : virSecuritySELinuxSetTapFDLabel:3414 : fd=24 points to /dev/net/tun not setting SELinux label >2023-07-04 07:38:31.545+0000: 293937: debug : virCommandRunAsync:2632 : About to run LC_ALL=C tc qdisc add dev vnet0 root handle 0: noqueue >2023-07-04 07:38:31.545+0000: 293937: debug : virCommandRunAsync:2634 : Command result 0, with PID 294040 >2023-07-04 07:38:31.548+0000: 293937: debug : virCommandRun:2478 : Result exit status 0, stdout: '' stderr: '' >2023-07-04 07:38:31.548+0000: 293937: debug : qemuProcessEnableDomainNamespaces:7334 : Mount namespace for domain name=rhel is enabled >2023-07-04 07:38:31.548+0000: 293937: debug : qemuProcessLaunch:7662 : Setting up raw IO >2023-07-04 07:38:31.548+0000: 293937: debug : qemuProcessLaunch:7669 : Setting up process limits >2023-07-04 07:38:31.548+0000: 293937: debug : qemuProcessLaunch:7692 : Setting up security labelling >2023-07-04 07:38:31.548+0000: 293937: debug : virSecuritySELinuxSetChildProcessLabel:3041 : label=system_u:system_r:svirt_t:s0:c737,c993 >2023-07-04 07:38:31.548+0000: 293937: debug : virSecurityDACSetChildProcessLabel:2288 : Setting child to drop privileges to 107:107 >2023-07-04 07:38:31.548+0000: 293937: debug : virCommandRequireHandshake:2814 : Transfer handshake wait=28 notify=29, keep handshake wait=27 notify=30 >2023-07-04 
07:38:31.548+0000: 293937: debug : virCommandRunAsync:2632 : About to run LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin HOME=/var/lib/libvirt/qemu/domain-1-rhel XDG_DATA_HOME=/var/lib/libvirt/qemu/domain-1-rhel/.local/share XDG_CACHE_HOME=/var/lib/libvirt/qemu/domain-1-rhel/.cache XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain-1-rhel/.config /usr/libexec/qemu-kvm -name guest=rhel,debug-threads=on -S -object '{"qom-type":"secret","id":"masterKey0","format":"raw","file":"/var/lib/libvirt/qemu/domain-1-rhel/master-key.aes"}' -machine s390-ccw-virtio-rhel9.2.0,usb=off,dump-guest-core=off,memory-backend=s390.ram -accel kvm -cpu gen15a-base,aen=on,vxpdeh=on,aefsi=on,diag318=on,csske=on,mepoch=on,msa9=on,msa8=on,msa7=on,msa6=on,msa5=on,msa4=on,msa3=on,msa2=on,msa1=on,sthyi=on,edat=on,ri=on,deflate=on,edat2=on,etoken=on,vx=on,ipter=on,mepochptff=on,ap=on,vxeh=on,vxpd=on,esop=on,msa9_pckmo=on,vxeh2=on,esort=on,apft=on,els=on,iep=on,apqci=on,cte=on,ais=on,bpb=on,gs=on,ppa15=on,zpci=on,sea_esop2=on,te=on -m size=1572864k -object '{"qom-type":"memory-backend-ram","id":"s390.ram","size":1610612736}' -overcommit mem-lock=off -smp 2,sockets=2,cores=1,threads=1 -uuid e2a90373-48e4-4bfc-8f57-07e84139bd67 -display none -no-user-config -nodefaults -chardev socket,id=charmonitor,fd=23,server=on,wait=off -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot strict=on -device '{"driver":"virtio-serial-ccw","id":"virtio-serial0","devno":"fe.0.0002"}' -blockdev '{"driver":"host_device","filename":"/dev/sda","node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap"}' -blockdev '{"node-name":"libvirt-1-format","read-only":false,"driver":"raw","file":"libvirt-1-storage"}' -device '{"driver":"virtio-blk-ccw","devno":"fe.0.0005","drive":"libvirt-1-format","id":"virtio-disk1","bootindex":1}' -netdev '{"type":"tap","fd":"24","vhost":true,"vhostfd":"26","id":"hostnet0"}' -device 
'{"driver":"virtio-net-ccw","netdev":"hostnet0","id":"net0","mac":"52:54:00:ac:19:b5","devno":"fe.0.0001"}' -chardev socket,id=charchannel0,fd=22,server=on,wait=off -device '{"driver":"virtserialport","bus":"virtio-serial0.0","nr":1,"chardev":"charchannel0","id":"channel0","name":"org.qemu.guest_agent.0"}' -chardev pty,id=charconsole0 -device '{"driver":"sclpconsole","chardev":"charconsole0","id":"console0"}' -audiodev '{"id":"audio1","driver":"none"}' -device '{"driver":"virtio-balloon-ccw","id":"balloon0","devno":"fe.0.0003"}' -object '{"qom-type":"rng-random","id":"objrng0","filename":"/dev/urandom"}' -device '{"driver":"virtio-rng-ccw","rng":"objrng0","id":"rng0","devno":"fe.0.0004"}' -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny -msg timestamp=on >2023-07-04 07:38:31.549+0000: 293937: debug : virCommandRunAsync:2634 : Command result 0, with PID 294043 >2023-07-04 07:38:31.550+0000: 293937: debug : virCommandRun:2478 : Result status 0, stdout: '(null)' stderr: '(null)' >2023-07-04 07:38:31.550+0000: 293937: debug : qemuProcessLaunch:7717 : QEMU vm=0x3ff400ad100 name=rhel running with pid=294044 >2023-07-04 07:38:31.550+0000: 293937: debug : qemuProcessLaunch:7725 : Writing early domain status to disk >2023-07-04 07:38:31.550+0000: 293937: debug : qemuProcessLaunch:7729 : Waiting for handshake from child >2023-07-04 07:38:31.550+0000: 293937: debug : virCommandHandshakeWait:2854 : Wait for handshake on 27 >2023-07-04 07:38:31.550+0000: 293937: debug : qemuProcessLaunch:7737 : Building domain mount namespace (if required) >2023-07-04 07:38:31.550+0000: 293937: debug : qemuDomainSetupAllDisks:315 : Setting up disks >2023-07-04 07:38:31.550+0000: 293937: debug : qemuDomainSetupAllDisks:323 : Setup all disks >2023-07-04 07:38:31.550+0000: 293937: debug : qemuDomainSetupAllHostdevs:356 : Setting up hostdevs >2023-07-04 07:38:31.550+0000: 293937: debug : qemuDomainSetupAllHostdevs:364 : Setup all hostdevs >2023-07-04 
07:38:31.550+0000: 293937: debug : qemuDomainSetupAllMemories:401 : Setting up memories >2023-07-04 07:38:31.550+0000: 293937: debug : qemuDomainSetupAllMemories:407 : Setup all memories >2023-07-04 07:38:31.550+0000: 293937: debug : qemuDomainSetupAllChardevs:437 : Setting up chardevs >2023-07-04 07:38:31.550+0000: 293937: debug : qemuDomainSetupAllChardevs:445 : Setup all chardevs >2023-07-04 07:38:31.550+0000: 293937: debug : qemuDomainSetupAllTPMs:476 : Setting up TPMs >2023-07-04 07:38:31.550+0000: 293937: debug : qemuDomainSetupAllTPMs:483 : Setup all TPMs >2023-07-04 07:38:31.550+0000: 293937: debug : qemuDomainSetupAllGraphics:508 : Setting up graphics >2023-07-04 07:38:31.550+0000: 293937: debug : qemuDomainSetupAllGraphics:515 : Setup all graphics >2023-07-04 07:38:31.550+0000: 293937: debug : qemuDomainSetupAllVideos:526 : Setting up video devices >2023-07-04 07:38:31.550+0000: 293937: debug : qemuDomainSetupAllInputs:560 : Setting up inputs >2023-07-04 07:38:31.550+0000: 293937: debug : qemuDomainSetupAllInputs:566 : Setup all inputs >2023-07-04 07:38:31.550+0000: 293937: debug : qemuDomainSetupAllRNGs:597 : Setting up RNGs >2023-07-04 07:38:31.550+0000: 293937: debug : qemuDomainSetupAllRNGs:604 : Setup all RNGs >2023-07-04 07:38:31.550+0000: 293937: debug : qemuDomainSetupLoader:615 : Setting up loader >2023-07-04 07:38:31.550+0000: 293937: debug : qemuDomainSetupLoader:638 : Setup loader >2023-07-04 07:38:31.550+0000: 293937: debug : virFileGetMountSubtreeImpl:2089 : prefix=/dev >2023-07-04 07:38:31.552+0000: 293937: debug : qemuProcessLaunch:7741 : Setting up domain cgroup (if required) >2023-07-04 07:38:31.552+0000: 293937: debug : virCgroupNewMachineSystemd:1286 : Trying to setup machine 'qemu-1-rhel' via systemd >2023-07-04 07:38:31.553+0000: 293937: debug : virSystemdCreateMachine:434 : Attempting to create machine via systemd >2023-07-04 07:38:31.639+0000: 293937: debug : virCgroupNewMachineSystemd:1302 : Detecting systemd placement >2023-07-04 
07:38:31.639+0000: 293937: debug : virCgroupNewDetect:1156 : pid=294044 controllers=-1 group=0x3ff7b87abc0 >2023-07-04 07:38:31.639+0000: 293937: debug : virCgroupDetectPlacement:345 : Detecting placement for pid 294044 path >2023-07-04 07:38:31.639+0000: 293937: debug : virCgroupV2DetectPlacement:198 : group=0x3ff7003fe30 path= controllers= selfpath=/machine.slice/machine-qemu\x2d1\x2drhel.scope >2023-07-04 07:38:31.640+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'cpu' present=yes >2023-07-04 07:38:31.640+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'cpuacct' present=yes >2023-07-04 07:38:31.640+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'cpuset' present=yes >2023-07-04 07:38:31.640+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'memory' present=yes >2023-07-04 07:38:31.640+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'devices' present=yes >2023-07-04 07:38:31.640+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'freezer' present=no >2023-07-04 07:38:31.640+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'io' present=yes >2023-07-04 07:38:31.640+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'net_cls' present=no >2023-07-04 07:38:31.640+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'perf_event' present=no >2023-07-04 07:38:31.640+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'name=systemd' present=no >2023-07-04 07:38:31.640+0000: 293937: debug : virCgroupNew:743 : path=/ controllers=-1 group=0x3ff7b87abd0 >2023-07-04 07:38:31.640+0000: 293937: debug : virCgroupDetectPlacement:345 : Detecting placement for pid -1 path / >2023-07-04 07:38:31.640+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'cpu' present=yes >2023-07-04 07:38:31.640+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'cpuacct' present=yes 
>2023-07-04 07:38:31.640+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'cpuset' present=yes
>2023-07-04 07:38:31.640+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'memory' present=yes
>2023-07-04 07:38:31.640+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'devices' present=yes
>2023-07-04 07:38:31.640+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'freezer' present=no
>2023-07-04 07:38:31.640+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'io' present=yes
>2023-07-04 07:38:31.640+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'net_cls' present=no
>2023-07-04 07:38:31.640+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'perf_event' present=no
>2023-07-04 07:38:31.640+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'name=systemd' present=no
>2023-07-04 07:38:31.640+0000: 293937: debug : virCgroupNewFromParent:895 : parent=0x3ff70040e50 path=machine.slice controllers=-1 group=0x3ff7b87abd8
>2023-07-04 07:38:31.640+0000: 293937: debug : virCgroupV2CopyPlacement:154 : group=0x3ff70040ff0 path=machine.slice parent=0x3ff70040e50
>2023-07-04 07:38:31.640+0000: 293937: debug : virCgroupDetectPlacement:345 : Detecting placement for pid -1 path machine.slice
>2023-07-04 07:38:31.640+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'cpu' present=yes
>2023-07-04 07:38:31.640+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'cpuacct' present=yes
>2023-07-04 07:38:31.640+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'cpuset' present=yes
>2023-07-04 07:38:31.640+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'memory' present=yes
>2023-07-04 07:38:31.640+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'devices' present=yes
>2023-07-04 07:38:31.640+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'freezer' present=no
>2023-07-04 07:38:31.640+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'io' present=yes
>2023-07-04 07:38:31.640+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'net_cls' present=no
>2023-07-04 07:38:31.640+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'perf_event' present=no
>2023-07-04 07:38:31.640+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'name=systemd' present=no
>2023-07-04 07:38:31.640+0000: 293937: debug : virCgroupV2MakeGroup:430 : Running with systemd so we should not create cgroups ourselves.
>2023-07-04 07:38:31.640+0000: 293937: debug : virCgroupNewFromParent:895 : parent=0x3ff70040ff0 path=machine-qemu\x2d1\x2drhel.scope controllers=-1 group=0x3ff7b87abd8
>2023-07-04 07:38:31.640+0000: 293937: debug : virCgroupV2CopyPlacement:154 : group=0x3ff70041470 path=machine-qemu\x2d1\x2drhel.scope parent=0x3ff70040ff0
>2023-07-04 07:38:31.640+0000: 293937: debug : virCgroupDetectPlacement:345 : Detecting placement for pid -1 path machine-qemu\x2d1\x2drhel.scope
>2023-07-04 07:38:31.640+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'cpu' present=yes
>2023-07-04 07:38:31.640+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'cpuacct' present=yes
>2023-07-04 07:38:31.640+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'cpuset' present=yes
>2023-07-04 07:38:31.640+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'memory' present=yes
>2023-07-04 07:38:31.640+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'devices' present=yes
>2023-07-04 07:38:31.640+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'freezer' present=no
>2023-07-04 07:38:31.640+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'io' present=yes
>2023-07-04 07:38:31.640+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'net_cls' present=no
>2023-07-04 07:38:31.640+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'perf_event' present=no
>2023-07-04 07:38:31.640+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'name=systemd' present=no
>2023-07-04 07:38:31.640+0000: 293937: debug : virCgroupV2MakeGroup:430 : Running with systemd so we should not create cgroups ourselves.
>2023-07-04 07:38:31.640+0000: 293937: debug : virSystemdGetMachineByPID:226 : Domain with pid 294044 has object path '/org/freedesktop/machine1/machine/qemu_2d1_2drhel'
>2023-07-04 07:38:31.640+0000: 293937: debug : virSystemdGetMachineUnitByPID:321 : Domain with pid 294044 has unit name 'machine-qemu\x2d1\x2drhel.scope'
>2023-07-04 07:38:31.640+0000: 293937: debug : virCgroupNewFromParent:895 : parent=0x3ff70041470 path=libvirt controllers=-1 group=0x3ff7b87aab8
>2023-07-04 07:38:31.640+0000: 293937: debug : virCgroupV2CopyPlacement:154 : group=0x3ff7003e550 path=libvirt parent=0x3ff70041470
>2023-07-04 07:38:31.640+0000: 293937: debug : virCgroupDetectPlacement:345 : Detecting placement for pid -1 path libvirt
>2023-07-04 07:38:31.640+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'cpu' present=yes
>2023-07-04 07:38:31.640+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'cpuacct' present=yes
>2023-07-04 07:38:31.640+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'cpuset' present=yes
>2023-07-04 07:38:31.640+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'memory' present=yes
>2023-07-04 07:38:31.640+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'devices' present=yes
>2023-07-04 07:38:31.640+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'freezer' present=no
>2023-07-04 07:38:31.640+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'io' present=yes
>2023-07-04 07:38:31.640+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'net_cls' present=no
>2023-07-04 07:38:31.640+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'perf_event' present=no
>2023-07-04 07:38:31.640+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'name=systemd' present=no
>2023-07-04 07:38:31.640+0000: 293937: debug : virCgroupV2MakeGroup:438 : Make controller /sys/fs/cgroup/machine.slice/machine-qemu\x2d1\x2drhel.scope/libvirt/
>2023-07-04 07:38:31.641+0000: 293937: debug : virCgroupSetValueRaw:522 : Set path '/sys/fs/cgroup/machine.slice/machine-qemu\x2d1\x2drhel.scope/libvirt/cgroup.procs' to value '294044'
>2023-07-04 07:38:31.641+0000: 293937: debug : virCgroupSetValueRaw:522 : Set path '/sys/fs/cgroup/machine.slice/machine-qemu\x2d1\x2drhel.scope/libvirt/cgroup.procs' to value '294044'
>2023-07-04 07:38:31.643+0000: 293937: debug : qemuCgroupAllowDevicePath:59 : Allow path /dev/null, perms: rw
>2023-07-04 07:38:31.643+0000: 293937: debug : qemuCgroupAllowDevicePath:59 : Allow path /dev/full, perms: rw
>2023-07-04 07:38:31.643+0000: 293937: debug : qemuCgroupAllowDevicePath:59 : Allow path /dev/zero, perms: rw
>2023-07-04 07:38:31.643+0000: 293937: debug : qemuCgroupAllowDevicePath:59 : Allow path /dev/random, perms: rw
>2023-07-04 07:38:31.643+0000: 293937: debug : qemuCgroupAllowDevicePath:59 : Allow path /dev/urandom, perms: rw
>2023-07-04 07:38:31.643+0000: 293937: debug : qemuCgroupAllowDevicePath:59 : Allow path /dev/ptmx, perms: rw
>2023-07-04 07:38:31.643+0000: 293937: debug : qemuCgroupAllowDevicePath:59 : Allow path /dev/kvm, perms: rw
>2023-07-04 07:38:31.643+0000: 293937: debug : qemuCgroupAllowDevicePath:59 : Allow path /dev/sda, perms: rw
>2023-07-04 07:38:31.643+0000: 293937: debug : qemuCgroupAllowDevicePath:59 : Allow path /dev/urandom, perms: rw
>2023-07-04 07:38:31.643+0000: 293937: debug : qemuProcessLaunch:7745 : Setting up domain perf (if required)
>2023-07-04 07:38:31.643+0000: 293937: debug : qemuProcessLaunch:7754 : Setting emulator tuning/settings
>2023-07-04 07:38:31.643+0000: 293937: debug : virCgroupNewFromParent:895 : parent=0x3ff7003e550 path=emulator controllers=7 group=0x3ff7b87acb0
>2023-07-04 07:38:31.643+0000: 293937: debug : virCgroupV2CopyPlacement:154 : group=0x3ff70042b20 path=emulator parent=0x3ff7003e550
>2023-07-04 07:38:31.643+0000: 293937: debug : virCgroupDetectPlacement:345 : Detecting placement for pid -1 path emulator
>2023-07-04 07:38:31.643+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'cpu' present=yes
>2023-07-04 07:38:31.643+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'cpuacct' present=yes
>2023-07-04 07:38:31.643+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'cpuset' present=yes
>2023-07-04 07:38:31.643+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'memory' present=no
>2023-07-04 07:38:31.643+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'devices' present=no
>2023-07-04 07:38:31.643+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'freezer' present=no
>2023-07-04 07:38:31.643+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'io' present=no
>2023-07-04 07:38:31.643+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'net_cls' present=no
>2023-07-04 07:38:31.643+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'perf_event' present=no
>2023-07-04 07:38:31.643+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'name=systemd' present=no
>2023-07-04 07:38:31.643+0000: 293937: debug : virCgroupV2MakeGroup:438 : Make controller /sys/fs/cgroup/machine.slice/machine-qemu\x2d1\x2drhel.scope/libvirt/emulator/
>2023-07-04 07:38:31.643+0000: 293937: debug : virCgroupSetValueRaw:522 : Set path '/sys/fs/cgroup/machine.slice/machine-qemu\x2d1\x2drhel.scope/libvirt/emulator/cgroup.type' to value 'threaded'
>2023-07-04 07:38:31.643+0000: 293937: debug : virCgroupSetValueRaw:522 : Set path '/sys/fs/cgroup/machine.slice/machine-qemu\x2d1\x2drhel.scope/libvirt/emulator/cgroup.threads' to value '294044'
>2023-07-04 07:38:31.643+0000: 293937: debug : virProcessSetAffinity:454 : Set process affinity on 294044
>2023-07-04 07:38:31.643+0000: 293937: debug : qemuProcessLaunch:7758 : Setting cgroup for external devices (if required)
>2023-07-04 07:38:31.643+0000: 293937: debug : qemuProcessLaunch:7762 : Setting up resctrl
>2023-07-04 07:38:31.643+0000: 293937: debug : qemuProcessLaunch:7766 : Setting up managed PR daemon
>2023-07-04 07:38:31.643+0000: 293937: debug : qemuProcessLaunch:7771 : Setting domain security labels
>2023-07-04 07:38:31.646+0000: 293937: debug : qemuProcessLaunch:7803 : Labelling done, completing handshake to child
>2023-07-04 07:38:31.646+0000: 293937: debug : virCommandHandshakeNotify:2917 : Notify handshake on 30
>2023-07-04 07:38:31.646+0000: 293937: debug : qemuProcessLaunch:7806 : Handshake complete, child running
>2023-07-04 07:38:31.646+0000: 293937: debug : qemuProcessLaunch:7811 : Waiting for monitor to show up
>2023-07-04 07:38:31.646+0000: 293937: debug : qemuProcessWaitForMonitor:2310 : Connect monitor to vm=0x3ff400ad100 name='rhel'
>2023-07-04 07:38:31.646+0000: 293937: debug : virSecuritySELinuxSetDaemonSocketLabel:3108 : Setting VM rhel socket context system_u:system_r:virtd_t:s0:c737,c993
>2023-07-04 07:38:31.646+0000: 293937: info : qemuMonitorOpenInternal:650 : QEMU_MONITOR_NEW: mon=0x3ff70043010 fd=26
>2023-07-04 07:38:31.646+0000: 293937: debug : virDomainObjBeginJobInternal:318 : Starting job: API=remoteDispatchDomainCreate job=async nested agentJob=none asyncJob=none (vm=0x3ff400ad100 name=rhel, current job=none agentJob=none async=start)
>2023-07-04 07:38:31.646+0000: 293937: debug : virDomainObjBeginJobInternal:382 : Started job: async nested (async=start vm=0x3ff400ad100 name=rhel)
>2023-07-04 07:38:31.646+0000: 293937: debug : qemuDomainObjEnterMonitorInternal:6311 : Entering monitor (mon=0x3ff70043010 vm=0x3ff400ad100 name=rhel)
>2023-07-04 07:38:31.646+0000: 293937: debug : qemuMonitorSetCapabilities:1423 : mon:0x3ff70043010 vm:0x3ff400ad100 fd:26
>2023-07-04 07:38:31.646+0000: 293937: info : qemuMonitorSend:864 : QEMU_MONITOR_SEND_MSG: mon=0x3ff70043010 msg={"execute":"qmp_capabilities","id":"libvirt-1"}
> fd=-1
>2023-07-04 07:38:31.660+0000: 294055: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"QMP": {"version": {"qemu": {"micro": 0, "minor": 0, "major": 8}, "package": "qemu-kvm-8.0.0-5.el9"}, "capabilities": ["oob"]}}]
>2023-07-04 07:38:31.660+0000: 294055: info : qemuMonitorIOWrite:366 : QEMU_MONITOR_IO_WRITE: mon=0x3ff70043010 buf={"execute":"qmp_capabilities","id":"libvirt-1"}
> len=49 ret=49 errno=0
>2023-07-04 07:38:31.737+0000: 294055: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"return": {}, "id": "libvirt-1"}]
>2023-07-04 07:38:31.737+0000: 294055: info : qemuMonitorJSONIOProcessLine:210 : QEMU_MONITOR_RECV_REPLY: mon=0x3ff70043010 reply={"return": {}, "id": "libvirt-1"}
>2023-07-04 07:38:31.737+0000: 293937: debug : qemuDomainObjExitMonitor:6340 : Exited monitor (mon=0x3ff70043010 vm=0x3ff400ad100 name=rhel)
>2023-07-04 07:38:31.737+0000: 293937: debug : virDomainObjEndJob:614 : Stopping job: async nested (async=start vm=0x3ff400ad100 name=rhel)
>2023-07-04 07:38:31.737+0000: 293937: debug : virDomainObjBeginJobInternal:318 : Starting job: API=remoteDispatchDomainCreate job=async nested agentJob=none asyncJob=none (vm=0x3ff400ad100 name=rhel, current job=none agentJob=none async=start)
>2023-07-04 07:38:31.737+0000: 293937: debug : virDomainObjBeginJobInternal:382 : Started job: async nested (async=start vm=0x3ff400ad100 name=rhel)
>2023-07-04 07:38:31.737+0000: 293937: debug : qemuDomainObjEnterMonitorInternal:6311 : Entering monitor (mon=0x3ff70043010 vm=0x3ff400ad100 name=rhel)
>2023-07-04 07:38:31.737+0000: 293937: debug : qemuMonitorGetMigrationCapabilities:3395 : mon:0x3ff70043010 vm:0x3ff400ad100 fd:26
>2023-07-04 07:38:31.737+0000: 293937: info : qemuMonitorSend:864 : QEMU_MONITOR_SEND_MSG: mon=0x3ff70043010 msg={"execute":"query-migrate-capabilities","id":"libvirt-2"}
> fd=-1
>2023-07-04 07:38:31.737+0000: 294055: info : qemuMonitorIOWrite:366 : QEMU_MONITOR_IO_WRITE: mon=0x3ff70043010 buf={"execute":"query-migrate-capabilities","id":"libvirt-2"}
> len=59 ret=59 errno=0
>2023-07-04 07:38:31.737+0000: 294055: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"return": [{"state": false, "capability": "xbzrle"}, {"state": false, "capability": "rdma-pin-all"}, {"state": false, "capability": "auto-converge"}, {"state": false, "capability": "zero-blocks"}, {"state": false, "capability": "compress"}, {"state": false, "capability": "events"}, {"state": false, "capability": "postcopy-ram"}, {"state": false, "capability": "x-colo"}, {"state": false, "capability": "release-ram"}, {"state": false, "capability": "return-path"}, {"state": false, "capability": "pause-before-switchover"}, {"state": false, "capability": "multifd"}, {"state": false, "capability": "dirty-bitmaps"}, {"state": false, "capability": "postcopy-blocktime"}, {"state": false, "capability": "late-block-activate"}, {"state": false, "capability": "x-ignore-shared"}, {"state": false, "capability": "validate-uuid"}, {"state": false, "capability": "background-snapshot"}, {"state": false, "capability": "zero-copy-send"}, {"state": false, "capability": "postcopy-preempt"}], "id": "libvirt-2"}]
>2023-07-04 07:38:31.737+0000: 294055: info : qemuMonitorJSONIOProcessLine:210 : QEMU_MONITOR_RECV_REPLY: mon=0x3ff70043010 reply={"return": [{"state": false, "capability": "xbzrle"}, {"state": false, "capability": "rdma-pin-all"}, {"state": false, "capability": "auto-converge"}, {"state": false, "capability": "zero-blocks"}, {"state": false, "capability": "compress"}, {"state": false, "capability": "events"}, {"state": false, "capability": "postcopy-ram"}, {"state": false, "capability": "x-colo"}, {"state": false, "capability": "release-ram"}, {"state": false, "capability": "return-path"}, {"state": false, "capability": "pause-before-switchover"}, {"state": false, "capability": "multifd"}, {"state": false, "capability": "dirty-bitmaps"}, {"state": false, "capability": "postcopy-blocktime"}, {"state": false, "capability": "late-block-activate"}, {"state": false, "capability": "x-ignore-shared"}, {"state": false, "capability": "validate-uuid"}, {"state": false, "capability": "background-snapshot"}, {"state": false, "capability": "zero-copy-send"}, {"state": false, "capability": "postcopy-preempt"}], "id": "libvirt-2"}
>2023-07-04 07:38:31.737+0000: 293937: debug : qemuDomainObjExitMonitor:6340 : Exited monitor (mon=0x3ff70043010 vm=0x3ff400ad100 name=rhel)
>2023-07-04 07:38:31.737+0000: 293937: debug : virDomainObjEndJob:614 : Stopping job: async nested (async=start vm=0x3ff400ad100 name=rhel)
>2023-07-04 07:38:31.737+0000: 293937: debug : qemuMigrationCapsCheck:1552 : Found migration capability: 'xbzrle'
>2023-07-04 07:38:31.737+0000: 293937: debug : qemuMigrationCapsCheck:1552 : Found migration capability: 'rdma-pin-all'
>2023-07-04 07:38:31.737+0000: 293937: debug : qemuMigrationCapsCheck:1552 : Found migration capability: 'auto-converge'
>2023-07-04 07:38:31.737+0000: 293937: debug : qemuMigrationCapsCheck:1549 : Unknown migration capability: 'zero-blocks'
>2023-07-04 07:38:31.737+0000: 293937: debug : qemuMigrationCapsCheck:1552 : Found migration capability: 'compress'
>2023-07-04 07:38:31.737+0000: 293937: debug : qemuMigrationCapsCheck:1552 : Found migration capability: 'events'
>2023-07-04 07:38:31.737+0000: 293937: debug : qemuMigrationCapsCheck:1552 : Found migration capability: 'postcopy-ram'
>2023-07-04 07:38:31.737+0000: 293937: debug : qemuMigrationCapsCheck:1549 : Unknown migration capability: 'x-colo'
>2023-07-04 07:38:31.737+0000: 293937: debug : qemuMigrationCapsCheck:1549 : Unknown migration capability: 'release-ram'
>2023-07-04 07:38:31.737+0000: 293937: debug : qemuMigrationCapsCheck:1552 : Found migration capability: 'return-path'
>2023-07-04 07:38:31.737+0000: 293937: debug : qemuMigrationCapsCheck:1552 : Found migration capability: 'pause-before-switchover'
>2023-07-04 07:38:31.737+0000: 293937: debug : qemuMigrationCapsCheck:1552 : Found migration capability: 'multifd'
>2023-07-04 07:38:31.737+0000: 293937: debug : qemuMigrationCapsCheck:1552 : Found migration capability: 'dirty-bitmaps'
>2023-07-04 07:38:31.737+0000: 293937: debug : qemuMigrationCapsCheck:1549 : Unknown migration capability: 'postcopy-blocktime'
>2023-07-04 07:38:31.737+0000: 293937: debug : qemuMigrationCapsCheck:1552 : Found migration capability: 'late-block-activate'
>2023-07-04 07:38:31.737+0000: 293937: debug : qemuMigrationCapsCheck:1549 : Unknown migration capability: 'x-ignore-shared'
>2023-07-04 07:38:31.737+0000: 293937: debug : qemuMigrationCapsCheck:1549 : Unknown migration capability: 'validate-uuid'
>2023-07-04 07:38:31.737+0000: 293937: debug : qemuMigrationCapsCheck:1549 : Unknown migration capability: 'background-snapshot'
>2023-07-04 07:38:31.737+0000: 293937: debug : qemuMigrationCapsCheck:1552 : Found migration capability: 'zero-copy-send'
>2023-07-04 07:38:31.737+0000: 293937: debug : qemuMigrationCapsCheck:1549 : Unknown migration capability: 'postcopy-preempt'
>2023-07-04 07:38:31.737+0000: 293937: debug : virDomainObjBeginJobInternal:318 : Starting job: API=remoteDispatchDomainCreate job=async nested agentJob=none asyncJob=none (vm=0x3ff400ad100 name=rhel, current job=none agentJob=none async=start)
>2023-07-04 07:38:31.737+0000: 293937: debug : virDomainObjBeginJobInternal:382 : Started job: async nested (async=start vm=0x3ff400ad100 name=rhel)
>2023-07-04 07:38:31.737+0000: 293937: debug : qemuDomainObjEnterMonitorInternal:6311 : Entering monitor (mon=0x3ff70043010 vm=0x3ff400ad100 name=rhel)
>2023-07-04 07:38:31.737+0000: 293937: debug : qemuMonitorSetMigrationCapabilities:3414 : mon:0x3ff70043010 vm:0x3ff400ad100 fd:26
>2023-07-04 07:38:31.737+0000: 293937: info : qemuMonitorSend:864 : QEMU_MONITOR_SEND_MSG: mon=0x3ff70043010 msg={"execute":"migrate-set-capabilities","arguments":{"capabilities":[{"capability":"events","state":true}]},"id":"libvirt-3"}
> fd=-1
>2023-07-04 07:38:31.737+0000: 294055: info : qemuMonitorIOWrite:366 : QEMU_MONITOR_IO_WRITE: mon=0x3ff70043010 buf={"execute":"migrate-set-capabilities","arguments":{"capabilities":[{"capability":"events","state":true}]},"id":"libvirt-3"}
> len=125 ret=125 errno=0
>2023-07-04 07:38:31.738+0000: 294055: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"return": {}, "id": "libvirt-3"}]
>2023-07-04 07:38:31.738+0000: 294055: info : qemuMonitorJSONIOProcessLine:210 : QEMU_MONITOR_RECV_REPLY: mon=0x3ff70043010 reply={"return": {}, "id": "libvirt-3"}
>2023-07-04 07:38:31.738+0000: 293937: debug : qemuDomainObjExitMonitor:6340 : Exited monitor (mon=0x3ff70043010 vm=0x3ff400ad100 name=rhel)
>2023-07-04 07:38:31.738+0000: 293937: debug : virDomainObjEndJob:614 : Stopping job: async nested (async=start vm=0x3ff400ad100 name=rhel)
>2023-07-04 07:38:31.738+0000: 293937: debug : virDomainObjBeginJobInternal:318 : Starting job: API=remoteDispatchDomainCreate job=async nested agentJob=none asyncJob=none (vm=0x3ff400ad100 name=rhel, current job=none agentJob=none async=start)
>2023-07-04 07:38:31.738+0000: 293937: debug : virDomainObjBeginJobInternal:382 : Started job: async nested (async=start vm=0x3ff400ad100 name=rhel)
>2023-07-04 07:38:31.738+0000: 293937: debug : qemuDomainObjEnterMonitorInternal:6311 : Entering monitor (mon=0x3ff70043010 vm=0x3ff400ad100 name=rhel)
>2023-07-04 07:38:31.738+0000: 293937: debug : qemuMonitorGetChardevInfo:2560 : retinfo=0x3ff7b87b018
>2023-07-04 07:38:31.738+0000: 293937: debug : qemuMonitorGetChardevInfo:2562 : mon:0x3ff70043010 vm:0x3ff400ad100 fd:26
>2023-07-04 07:38:31.738+0000: 293937: info : qemuMonitorSend:864 : QEMU_MONITOR_SEND_MSG: mon=0x3ff70043010 msg={"execute":"query-chardev","id":"libvirt-4"}
> fd=-1
>2023-07-04 07:38:31.738+0000: 294055: info : qemuMonitorIOWrite:366 : QEMU_MONITOR_IO_WRITE: mon=0x3ff70043010 buf={"execute":"query-chardev","id":"libvirt-4"}
> len=46 ret=46 errno=0
>2023-07-04 07:38:31.738+0000: 294055: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"return": [{"frontend-open": false, "filename": "disconnected:unix:/var/lib/libvirt/qemu/channel/target/domain-1-rhel/org.qemu.guest_agent.0,server=on", "label": "charchannel0"}, {"frontend-open": true, "filename": "pty:/dev/pts/1", "label": "charconsole0"}, {"frontend-open": true, "filename": "unix:/var/lib/libvirt/qemu/domain-1-rhel/monitor.sock,server=on", "label": "charmonitor"}], "id": "libvirt-4"}]
>2023-07-04 07:38:31.738+0000: 294055: info : qemuMonitorJSONIOProcessLine:210 : QEMU_MONITOR_RECV_REPLY: mon=0x3ff70043010 reply={"return": [{"frontend-open": false, "filename": "disconnected:unix:/var/lib/libvirt/qemu/channel/target/domain-1-rhel/org.qemu.guest_agent.0,server=on", "label": "charchannel0"}, {"frontend-open": true, "filename": "pty:/dev/pts/1", "label": "charconsole0"}, {"frontend-open": true, "filename": "unix:/var/lib/libvirt/qemu/domain-1-rhel/monitor.sock,server=on", "label": "charmonitor"}], "id": "libvirt-4"}
>2023-07-04 07:38:31.738+0000: 293937: debug : qemuProcessWaitForMonitor:2322 : qemuMonitorGetChardevInfo returned 0
>2023-07-04 07:38:31.738+0000: 293937: debug : qemuDomainObjExitMonitor:6340 : Exited monitor (mon=0x3ff70043010 vm=0x3ff400ad100 name=rhel)
>2023-07-04 07:38:31.738+0000: 293937: debug : virDomainObjEndJob:614 : Stopping job: async nested (async=start vm=0x3ff400ad100 name=rhel)
>2023-07-04 07:38:31.738+0000: 293937: debug : qemuConnectAgent:218 : Deferring connecting to guest agent
>2023-07-04 07:38:31.738+0000: 293937: debug : qemuProcessLaunch:7818 : setting up hotpluggable cpus
>2023-07-04 07:38:31.738+0000: 293937: debug : qemuProcessLaunch:7830 : Refreshing VCPU info
>2023-07-04 07:38:31.738+0000: 293937: debug : qemuDomainRefreshVcpuInfo:9946 : Maxvcpus 2 hotplug 1
>2023-07-04 07:38:31.738+0000: 293937: debug : virDomainObjBeginJobInternal:318 : Starting job: API=remoteDispatchDomainCreate job=async nested agentJob=none asyncJob=none (vm=0x3ff400ad100 name=rhel, current job=none agentJob=none async=start)
>2023-07-04 07:38:31.738+0000: 293937: debug : virDomainObjBeginJobInternal:382 : Started job: async nested (async=start vm=0x3ff400ad100 name=rhel)
>2023-07-04 07:38:31.738+0000: 293937: debug : qemuDomainObjEnterMonitorInternal:6311 : Entering monitor (mon=0x3ff70043010 vm=0x3ff400ad100 name=rhel)
>2023-07-04 07:38:31.738+0000: 293937: debug : qemuMonitorGetCPUInfo:1735 : mon:0x3ff70043010 vm:0x3ff400ad100 fd:26
>2023-07-04 07:38:31.738+0000: 293937: info : qemuMonitorSend:864 : QEMU_MONITOR_SEND_MSG: mon=0x3ff70043010 msg={"execute":"query-hotpluggable-cpus","id":"libvirt-5"}
> fd=-1
>2023-07-04 07:38:31.738+0000: 294055: info : qemuMonitorIOWrite:366 : QEMU_MONITOR_IO_WRITE: mon=0x3ff70043010 buf={"execute":"query-hotpluggable-cpus","id":"libvirt-5"}
> len=56 ret=56 errno=0
>2023-07-04 07:38:31.738+0000: 294055: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"return": [{"props": {"core-id": 1}, "vcpus-count": 1, "qom-path": "/machine/unattached/device[1]", "type": "gen15a-base-s390x-cpu"}, {"props": {"core-id": 0}, "vcpus-count": 1, "qom-path": "/machine/unattached/device[0]", "type": "gen15a-base-s390x-cpu"}], "id": "libvirt-5"}]
>2023-07-04 07:38:31.738+0000: 294055: info : qemuMonitorJSONIOProcessLine:210 : QEMU_MONITOR_RECV_REPLY: mon=0x3ff70043010 reply={"return": [{"props": {"core-id": 1}, "vcpus-count": 1, "qom-path": "/machine/unattached/device[1]", "type": "gen15a-base-s390x-cpu"}, {"props": {"core-id": 0}, "vcpus-count": 1, "qom-path": "/machine/unattached/device[0]", "type": "gen15a-base-s390x-cpu"}], "id": "libvirt-5"}
>2023-07-04 07:38:31.738+0000: 293937: info : qemuMonitorSend:864 : QEMU_MONITOR_SEND_MSG: mon=0x3ff70043010 msg={"execute":"query-cpus-fast","id":"libvirt-6"}
> fd=-1
>2023-07-04 07:38:31.738+0000: 294055: info : qemuMonitorIOWrite:366 : QEMU_MONITOR_IO_WRITE: mon=0x3ff70043010 buf={"execute":"query-cpus-fast","id":"libvirt-6"}
> len=48 ret=48 errno=0
>2023-07-04 07:38:31.739+0000: 294055: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"return": [{"thread-id": 294067, "props": {"core-id": 0}, "cpu-state": "operating", "qom-path": "/machine/unattached/device[0]", "cpu-index": 0, "target": "s390x"}, {"thread-id": 294068, "props": {"core-id": 1}, "cpu-state": "stopped", "qom-path": "/machine/unattached/device[1]", "cpu-index": 1, "target": "s390x"}], "id": "libvirt-6"}]
>2023-07-04 07:38:31.739+0000: 294055: info : qemuMonitorJSONIOProcessLine:210 : QEMU_MONITOR_RECV_REPLY: mon=0x3ff70043010 reply={"return": [{"thread-id": 294067, "props": {"core-id": 0}, "cpu-state": "operating", "qom-path": "/machine/unattached/device[0]", "cpu-index": 0, "target": "s390x"}, {"thread-id": 294068, "props": {"core-id": 1}, "cpu-state": "stopped", "qom-path": "/machine/unattached/device[1]", "cpu-index": 1, "target": "s390x"}], "id": "libvirt-6"}
>2023-07-04 07:38:31.739+0000: 293937: debug : qemuDomainObjExitMonitor:6340 : Exited monitor (mon=0x3ff70043010 vm=0x3ff400ad100 name=rhel)
>2023-07-04 07:38:31.739+0000: 293937: debug : virDomainObjEndJob:614 : Stopping job: async nested (async=start vm=0x3ff400ad100 name=rhel)
>2023-07-04 07:38:31.739+0000: 293937: debug : qemuDomainRefreshVcpuInfo:10000 : vCPU[0] PID 294067 is valid (node=-1 socket=-1 die=-1 core=0 thread=-1)
>2023-07-04 07:38:31.739+0000: 293937: debug : qemuDomainRefreshVcpuInfo:10000 : vCPU[1] PID 294068 is valid (node=-1 socket=-1 die=-1 core=1 thread=-1)
>2023-07-04 07:38:31.739+0000: 293937: debug : qemuDomainRefreshVcpuInfo:10010 : Extracting vCPU information validTIDs=1
>2023-07-04 07:38:31.739+0000: 293937: debug : qemuProcessLaunch:7839 : Verifying and updating provided guest CPU
>2023-07-04 07:38:31.739+0000: 293937: debug : qemuProcessLaunch:7843 : Detecting IOThread PIDs
>2023-07-04 07:38:31.739+0000: 293937: debug : virDomainObjBeginJobInternal:318 : Starting job: API=remoteDispatchDomainCreate job=async nested agentJob=none asyncJob=none (vm=0x3ff400ad100 name=rhel, current job=none agentJob=none async=start)
>2023-07-04 07:38:31.739+0000: 293937: debug : virDomainObjBeginJobInternal:382 : Started job: async nested (async=start vm=0x3ff400ad100 name=rhel)
>2023-07-04 07:38:31.739+0000: 293937: debug : qemuDomainObjEnterMonitorInternal:6311 : Entering monitor (mon=0x3ff70043010 vm=0x3ff400ad100 name=rhel)
>2023-07-04 07:38:31.739+0000: 293937: debug : qemuMonitorGetIOThreads:3727 : iothreads=0x3ff7b87ae60
>2023-07-04 07:38:31.739+0000: 293937: debug : qemuMonitorGetIOThreads:3729 : mon:0x3ff70043010 vm:0x3ff400ad100 fd:26
>2023-07-04 07:38:31.739+0000: 293937: info : qemuMonitorSend:864 : QEMU_MONITOR_SEND_MSG: mon=0x3ff70043010 msg={"execute":"query-iothreads","id":"libvirt-7"}
> fd=-1
>2023-07-04 07:38:31.739+0000: 294055: info : qemuMonitorIOWrite:366 : QEMU_MONITOR_IO_WRITE: mon=0x3ff70043010 buf={"execute":"query-iothreads","id":"libvirt-7"}
> len=48 ret=48 errno=0
>2023-07-04 07:38:31.739+0000: 294055: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"return": [], "id": "libvirt-7"}]
>2023-07-04 07:38:31.739+0000: 294055: info : qemuMonitorJSONIOProcessLine:210 : QEMU_MONITOR_RECV_REPLY: mon=0x3ff70043010 reply={"return": [], "id": "libvirt-7"}
>2023-07-04 07:38:31.739+0000: 293937: debug : qemuDomainObjExitMonitor:6340 : Exited monitor (mon=0x3ff70043010 vm=0x3ff400ad100 name=rhel)
>2023-07-04 07:38:31.739+0000: 293937: debug : virDomainObjEndJob:614 : Stopping job: async nested (async=start vm=0x3ff400ad100 name=rhel)
>2023-07-04 07:38:31.739+0000: 293937: debug : qemuProcessLaunch:7847 : Setting global CPU cgroup (if required)
>2023-07-04 07:38:31.739+0000: 293937: debug : qemuProcessLaunch:7851 : Setting vCPU tuning/settings
>2023-07-04 07:38:31.739+0000: 293937: debug : virCgroupNewFromParent:895 : parent=0x3ff7003e550 path=vcpu0 controllers=7 group=0x3ff7b87aca0
>2023-07-04 07:38:31.739+0000: 293937: debug : virCgroupV2CopyPlacement:154 : group=0x3ff700453b0 path=vcpu0 parent=0x3ff7003e550
>2023-07-04 07:38:31.739+0000: 293937: debug : virCgroupDetectPlacement:345 : Detecting placement for pid -1 path vcpu0
>2023-07-04 07:38:31.739+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'cpu' present=yes
>2023-07-04 07:38:31.739+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'cpuacct' present=yes
>2023-07-04 07:38:31.739+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'cpuset' present=yes
>2023-07-04 07:38:31.739+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'memory' present=no
>2023-07-04 07:38:31.739+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'devices' present=no
>2023-07-04 07:38:31.739+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'freezer' present=no
>2023-07-04 07:38:31.739+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'io' present=no
>2023-07-04 07:38:31.739+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'net_cls' present=no
>2023-07-04 07:38:31.739+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'perf_event' present=no
>2023-07-04 07:38:31.739+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'name=systemd' present=no
>2023-07-04 07:38:31.739+0000: 293937: debug : virCgroupV2MakeGroup:438 : Make controller /sys/fs/cgroup/machine.slice/machine-qemu\x2d1\x2drhel.scope/libvirt/vcpu0/
>2023-07-04 07:38:31.739+0000: 293937: debug : virCgroupSetValueRaw:522 : Set path '/sys/fs/cgroup/machine.slice/machine-qemu\x2d1\x2drhel.scope/libvirt/vcpu0/cgroup.type' to value 'threaded'
>2023-07-04 07:38:31.739+0000: 293937: debug : virCgroupSetValueRaw:522 : Set path '/sys/fs/cgroup/machine.slice/machine-qemu\x2d1\x2drhel.scope/libvirt/vcpu0/cgroup.threads' to value '294067'
>2023-07-04 07:38:31.739+0000: 293937: debug : virProcessSetAffinity:454 : Set process affinity on 294067
>2023-07-04 07:38:31.739+0000: 293937: debug : virProcessSetScheduler:1590 : pid=294067, policy=0, priority=0
>2023-07-04 07:38:31.739+0000: 293937: debug : virCgroupNewFromParent:895 : parent=0x3ff7003e550 path=vcpu1 controllers=7 group=0x3ff7b87aca0
>2023-07-04 07:38:31.739+0000: 293937: debug : virCgroupV2CopyPlacement:154 : group=0x3ff70045880 path=vcpu1 parent=0x3ff7003e550
>2023-07-04 07:38:31.739+0000: 293937: debug : virCgroupDetectPlacement:345 : Detecting placement for pid -1 path vcpu1
>2023-07-04 07:38:31.739+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'cpu' present=yes
>2023-07-04 07:38:31.739+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'cpuacct' present=yes
>2023-07-04 07:38:31.739+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'cpuset' present=yes
>2023-07-04 07:38:31.739+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'memory' present=no
>2023-07-04 07:38:31.739+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'devices' present=no
>2023-07-04 07:38:31.739+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'freezer' present=no
>2023-07-04 07:38:31.739+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'io' present=no
>2023-07-04 07:38:31.739+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'net_cls' present=no
>2023-07-04 07:38:31.739+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'perf_event' present=no
>2023-07-04 07:38:31.739+0000: 293937: debug : virCgroupV2DetectControllers:331 : Controller 'name=systemd' present=no
>2023-07-04 07:38:31.739+0000: 293937: debug : virCgroupV2MakeGroup:438 : Make controller /sys/fs/cgroup/machine.slice/machine-qemu\x2d1\x2drhel.scope/libvirt/vcpu1/
>2023-07-04 07:38:31.739+0000: 293937: debug : virCgroupSetValueRaw:522 : Set path '/sys/fs/cgroup/machine.slice/machine-qemu\x2d1\x2drhel.scope/libvirt/vcpu1/cgroup.type' to value 'threaded'
>2023-07-04 07:38:31.739+0000: 293937: debug : virCgroupSetValueRaw:522 : Set path '/sys/fs/cgroup/machine.slice/machine-qemu\x2d1\x2drhel.scope/libvirt/vcpu1/cgroup.threads' to value '294068'
>2023-07-04 07:38:31.739+0000: 293937: debug : virProcessSetAffinity:454 : Set process affinity on 294068
>2023-07-04 07:38:31.739+0000: 293937: debug : virProcessSetScheduler:1590 : pid=294068, policy=0, priority=0
>2023-07-04 07:38:31.739+0000: 293937: debug : qemuProcessLaunch:7855 : Setting IOThread tuning/settings
>2023-07-04 07:38:31.739+0000: 293937: debug : qemuProcessLaunch:7859 : Setting emulator scheduler
>2023-07-04 07:38:31.739+0000: 293937: debug : qemuProcessLaunch:7866 : Setting any required VM passwords
>2023-07-04 07:38:31.739+0000: 293937: debug : qemuProcessLaunch:7873 : Setting network link states
>2023-07-04 07:38:31.739+0000: 293937: debug : virDomainObjBeginJobInternal:318 : Starting job: API=remoteDispatchDomainCreate job=async nested agentJob=none asyncJob=none (vm=0x3ff400ad100 name=rhel, current job=none agentJob=none async=start)
>2023-07-04 07:38:31.739+0000: 293937: debug : virDomainObjBeginJobInternal:382 : Started job: async nested (async=start vm=0x3ff400ad100 name=rhel)
>2023-07-04 07:38:31.739+0000: 293937: debug : qemuDomainObjEnterMonitorInternal:6311 : Entering monitor (mon=0x3ff70043010 vm=0x3ff400ad100 name=rhel)
>2023-07-04 07:38:31.739+0000: 293937: debug : qemuDomainObjExitMonitor:6340 : Exited monitor (mon=0x3ff70043010 vm=0x3ff400ad100 name=rhel)
>2023-07-04 07:38:31.739+0000: 293937: debug : virDomainObjEndJob:614 : Stopping job: async nested (async=start vm=0x3ff400ad100 name=rhel)
>2023-07-04 07:38:31.739+0000: 293937: debug : qemuProcessLaunch:7877 : Setting initial memory amount
>2023-07-04 07:38:31.739+0000: 293937: debug : virDomainObjBeginJobInternal:318 : Starting job: API=remoteDispatchDomainCreate job=async nested agentJob=none asyncJob=none (vm=0x3ff400ad100 name=rhel, current job=none agentJob=none async=start)
>2023-07-04 07:38:31.739+0000: 293937: debug : virDomainObjBeginJobInternal:382 : Started job: async nested (async=start vm=0x3ff400ad100 name=rhel)
>2023-07-04 07:38:31.739+0000: 293937: debug : qemuDomainObjEnterMonitorInternal:6311 : Entering monitor (mon=0x3ff70043010 vm=0x3ff400ad100 name=rhel)
>2023-07-04 07:38:31.739+0000: 293937: debug : qemuMonitorSetBalloon:2114 : newmem=1572864
>2023-07-04 07:38:31.740+0000: 293937: debug : qemuMonitorSetBalloon:2116 : mon:0x3ff70043010 vm:0x3ff400ad100 fd:26
>2023-07-04 07:38:31.740+0000: 293937: info : qemuMonitorSend:864 : QEMU_MONITOR_SEND_MSG: mon=0x3ff70043010 msg={"execute":"balloon","arguments":{"value":1610612736},"id":"libvirt-8"}
> fd=-1
>2023-07-04 07:38:31.740+0000: 294055: info : qemuMonitorIOWrite:366 : QEMU_MONITOR_IO_WRITE: mon=0x3ff70043010 buf={"execute":"balloon","arguments":{"value":1610612736},"id":"libvirt-8"}
> len=73 ret=73 errno=0
>2023-07-04 07:38:31.740+0000: 294055: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"return": {}, "id": "libvirt-8"}]
>2023-07-04 07:38:31.740+0000: 294055: info : qemuMonitorJSONIOProcessLine:210 : QEMU_MONITOR_RECV_REPLY: mon=0x3ff70043010 reply={"return": {}, "id": "libvirt-8"}
>2023-07-04 07:38:31.740+0000: 293937: debug : qemuDomainObjExitMonitor:6340 : Exited monitor (mon=0x3ff70043010 vm=0x3ff400ad100 name=rhel)
>2023-07-04 07:38:31.740+0000: 293937: debug : virDomainObjEndJob:614 : Stopping job: async nested (async=start vm=0x3ff400ad100 name=rhel)
>2023-07-04 07:38:31.740+0000: 293937: debug : qemuProcessSetupDiskThrottling:7291 : Setting up disk throttling for -blockdev via block_set_io_throttle
>2023-07-04 07:38:31.740+0000: 293937: debug : virDomainObjBeginJobInternal:318 : Starting job: 
API=remoteDispatchDomainCreate job=async nested agentJob=none asyncJob=none (vm=0x3ff400ad100 name=rhel, current job=none agentJob=none async=start) >2023-07-04 07:38:31.740+0000: 293937: debug : virDomainObjBeginJobInternal:382 : Started job: async nested (async=start vm=0x3ff400ad100 name=rhel) >2023-07-04 07:38:31.740+0000: 293937: debug : qemuDomainObjEnterMonitorInternal:6311 : Entering monitor (mon=0x3ff70043010 vm=0x3ff400ad100 name=rhel) >2023-07-04 07:38:31.740+0000: 293937: debug : qemuDomainObjExitMonitor:6340 : Exited monitor (mon=0x3ff70043010 vm=0x3ff400ad100 name=rhel) >2023-07-04 07:38:31.740+0000: 293937: debug : virDomainObjEndJob:614 : Stopping job: async nested (async=start vm=0x3ff400ad100 name=rhel) >2023-07-04 07:38:31.740+0000: 293937: debug : virDomainObjBeginJobInternal:318 : Starting job: API=remoteDispatchDomainCreate job=async nested agentJob=none asyncJob=none (vm=0x3ff400ad100 name=rhel, current job=none agentJob=none async=start) >2023-07-04 07:38:31.740+0000: 293937: debug : virDomainObjBeginJobInternal:382 : Started job: async nested (async=start vm=0x3ff400ad100 name=rhel) >2023-07-04 07:38:31.740+0000: 293937: debug : qemuDomainObjEnterMonitorInternal:6311 : Entering monitor (mon=0x3ff70043010 vm=0x3ff400ad100 name=rhel) >2023-07-04 07:38:31.740+0000: 293937: debug : qemuMonitorGetBalloonInfo:1835 : mon:0x3ff70043010 vm:0x3ff400ad100 fd:26 >2023-07-04 07:38:31.740+0000: 293937: info : qemuMonitorSend:864 : QEMU_MONITOR_SEND_MSG: mon=0x3ff70043010 msg={"execute":"query-balloon","id":"libvirt-9"} > fd=-1 >2023-07-04 07:38:31.740+0000: 294055: info : qemuMonitorIOWrite:366 : QEMU_MONITOR_IO_WRITE: mon=0x3ff70043010 buf={"execute":"query-balloon","id":"libvirt-9"} > len=46 ret=46 errno=0 >2023-07-04 07:38:31.740+0000: 294055: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"return": {"actual": 1610612736}, "id": "libvirt-9"}] >2023-07-04 07:38:31.740+0000: 294055: info : qemuMonitorJSONIOProcessLine:210 : QEMU_MONITOR_RECV_REPLY: 
mon=0x3ff70043010 reply={"return": {"actual": 1610612736}, "id": "libvirt-9"} >2023-07-04 07:38:31.740+0000: 293937: debug : qemuDomainObjExitMonitor:6340 : Exited monitor (mon=0x3ff70043010 vm=0x3ff400ad100 name=rhel) >2023-07-04 07:38:31.740+0000: 293937: debug : virDomainObjEndJob:614 : Stopping job: async nested (async=start vm=0x3ff400ad100 name=rhel) >2023-07-04 07:38:31.740+0000: 293937: debug : qemuProcessRefreshBalloonState:2287 : balloon size before fix is 1572864 >2023-07-04 07:38:31.740+0000: 293937: debug : qemuProcessRefreshBalloonState:2292 : Updating balloon from 1572864 to 1572864 kb >2023-07-04 07:38:31.740+0000: 293937: debug : qemuProcessLaunch:7895 : Setting up transient disk >2023-07-04 07:38:31.740+0000: 293937: debug : virDomainObjBeginJobInternal:318 : Starting job: API=remoteDispatchDomainCreate job=async nested agentJob=none asyncJob=none (vm=0x3ff400ad100 name=rhel, current job=none agentJob=none async=start) >2023-07-04 07:38:31.740+0000: 293937: debug : virDomainObjBeginJobInternal:382 : Started job: async nested (async=start vm=0x3ff400ad100 name=rhel) >2023-07-04 07:38:31.740+0000: 293937: debug : qemuDomainObjEnterMonitorInternal:6311 : Entering monitor (mon=0x3ff70043010 vm=0x3ff400ad100 name=rhel) >2023-07-04 07:38:31.740+0000: 293937: debug : qemuMonitorBlockGetNamedNodeData:2015 : mon:0x3ff70043010 vm:0x3ff400ad100 fd:26 >2023-07-04 07:38:31.740+0000: 293937: info : qemuMonitorSend:864 : QEMU_MONITOR_SEND_MSG: mon=0x3ff70043010 msg={"execute":"query-named-block-nodes","arguments":{"flat":true},"id":"libvirt-10"} > fd=-1 >2023-07-04 07:38:31.740+0000: 294055: info : qemuMonitorIOWrite:366 : QEMU_MONITOR_IO_WRITE: mon=0x3ff70043010 buf={"execute":"query-named-block-nodes","arguments":{"flat":true},"id":"libvirt-10"} > len=83 ret=83 errno=0 >2023-07-04 07:38:31.740+0000: 294055: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"return": [{"iops_rd": 0, "detect_zeroes": "off", "image": {"virtual-size": 524288000, "filename": 
"/dev/sda", "format": "raw", "actual-size": 0, "dirty-flag": false}, "iops_wr": 0, "ro": false, "node-name": "libvirt-1-format", "backing_file_depth": 0, "drv": "raw", "iops": 0, "bps_wr": 0, "write_threshold": 0, "encrypted": false, "bps": 0, "bps_rd": 0, "cache": {"no-flush": false, "direct": false, "writeback": true}, "file": "/dev/sda"}, {"iops_rd": 0, "detect_zeroes": "off", "image": {"virtual-size": 524288000, "filename": "/dev/sda", "format": "host_device", "actual-size": 0, "format-specific": {"type": "file", "data": {}}, "dirty-flag": false}, "iops_wr": 0, "ro": false, "node-name": "libvirt-1-storage", "backing_file_depth": 0, "drv": "host_device", "iops": 0, "bps_wr": 0, "write_threshold": 0, "encrypted": false, "bps": 0, "bps_rd": 0, "cache": {"no-flush": false, "direct": false, "writeback": true}, "file": "/dev/sda"}], "id": "libvirt-10"}] >2023-07-04 07:38:31.741+0000: 294055: info : qemuMonitorJSONIOProcessLine:210 : QEMU_MONITOR_RECV_REPLY: mon=0x3ff70043010 reply={"return": [{"iops_rd": 0, "detect_zeroes": "off", "image": {"virtual-size": 524288000, "filename": "/dev/sda", "format": "raw", "actual-size": 0, "dirty-flag": false}, "iops_wr": 0, "ro": false, "node-name": "libvirt-1-format", "backing_file_depth": 0, "drv": "raw", "iops": 0, "bps_wr": 0, "write_threshold": 0, "encrypted": false, "bps": 0, "bps_rd": 0, "cache": {"no-flush": false, "direct": false, "writeback": true}, "file": "/dev/sda"}, {"iops_rd": 0, "detect_zeroes": "off", "image": {"virtual-size": 524288000, "filename": "/dev/sda", "format": "host_device", "actual-size": 0, "format-specific": {"type": "file", "data": {}}, "dirty-flag": false}, "iops_wr": 0, "ro": false, "node-name": "libvirt-1-storage", "backing_file_depth": 0, "drv": "host_device", "iops": 0, "bps_wr": 0, "write_threshold": 0, "encrypted": false, "bps": 0, "bps_rd": 0, "cache": {"no-flush": false, "direct": false, "writeback": true}, "file": "/dev/sda"}], "id": "libvirt-10"} >2023-07-04 07:38:31.741+0000: 293937: 
debug : qemuDomainObjExitMonitor:6340 : Exited monitor (mon=0x3ff70043010 vm=0x3ff400ad100 name=rhel) >2023-07-04 07:38:31.741+0000: 293937: debug : virDomainObjEndJob:614 : Stopping job: async nested (async=start vm=0x3ff400ad100 name=rhel) >2023-07-04 07:38:31.741+0000: 293937: debug : qemuProcessLaunch:7900 : Setting handling of lifecycle actions >2023-07-04 07:38:31.741+0000: 293937: debug : qemuProcessRefreshState:7951 : Fetching list of active devices >2023-07-04 07:38:31.741+0000: 293937: debug : virDomainObjBeginJobInternal:318 : Starting job: API=remoteDispatchDomainCreate job=async nested agentJob=none asyncJob=none (vm=0x3ff400ad100 name=rhel, current job=none agentJob=none async=start) >2023-07-04 07:38:31.741+0000: 293937: debug : virDomainObjBeginJobInternal:382 : Started job: async nested (async=start vm=0x3ff400ad100 name=rhel) >2023-07-04 07:38:31.741+0000: 293937: debug : qemuDomainObjEnterMonitorInternal:6311 : Entering monitor (mon=0x3ff70043010 vm=0x3ff400ad100 name=rhel) >2023-07-04 07:38:31.741+0000: 293937: debug : qemuMonitorGetDeviceAliases:3565 : aliases=0x3ff7b87b148 >2023-07-04 07:38:31.741+0000: 293937: debug : qemuMonitorGetDeviceAliases:3567 : mon:0x3ff70043010 vm:0x3ff400ad100 fd:26 >2023-07-04 07:38:31.741+0000: 293937: info : qemuMonitorSend:864 : QEMU_MONITOR_SEND_MSG: mon=0x3ff70043010 msg={"execute":"qom-list","arguments":{"path":"/machine/peripheral"},"id":"libvirt-11"} > fd=-1 >2023-07-04 07:38:31.741+0000: 294055: info : qemuMonitorIOWrite:366 : QEMU_MONITOR_IO_WRITE: mon=0x3ff70043010 buf={"execute":"qom-list","arguments":{"path":"/machine/peripheral"},"id":"libvirt-11"} > len=85 ret=85 errno=0 >2023-07-04 07:38:31.741+0000: 294055: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"return": [{"name": "type", "type": "string"}, {"name": "rng0", "type": "child<virtio-rng-ccw>"}, {"name": "console0", "type": "child<sclpconsole>"}, {"name": "balloon0", "type": "child<virtio-balloon-ccw>"}, {"name": "channel0", "type": 
"child<virtserialport>"}, {"name": "net0", "type": "child<virtio-net-ccw>"}, {"name": "virtio-serial0", "type": "child<virtio-serial-ccw>"}, {"name": "virtio-disk1", "type": "child<virtio-blk-ccw>"}], "id": "libvirt-11"}] >2023-07-04 07:38:31.741+0000: 294055: info : qemuMonitorJSONIOProcessLine:210 : QEMU_MONITOR_RECV_REPLY: mon=0x3ff70043010 reply={"return": [{"name": "type", "type": "string"}, {"name": "rng0", "type": "child<virtio-rng-ccw>"}, {"name": "console0", "type": "child<sclpconsole>"}, {"name": "balloon0", "type": "child<virtio-balloon-ccw>"}, {"name": "channel0", "type": "child<virtserialport>"}, {"name": "net0", "type": "child<virtio-net-ccw>"}, {"name": "virtio-serial0", "type": "child<virtio-serial-ccw>"}, {"name": "virtio-disk1", "type": "child<virtio-blk-ccw>"}], "id": "libvirt-11"} >2023-07-04 07:38:31.741+0000: 293937: debug : qemuDomainObjExitMonitor:6340 : Exited monitor (mon=0x3ff70043010 vm=0x3ff400ad100 name=rhel) >2023-07-04 07:38:31.741+0000: 293937: debug : virDomainObjEndJob:614 : Stopping job: async nested (async=start vm=0x3ff400ad100 name=rhel) >2023-07-04 07:38:31.741+0000: 293937: debug : qemuProcessRefreshState:7955 : Updating info of memory devices >2023-07-04 07:38:31.741+0000: 293937: debug : qemuProcessRefreshState:7959 : Detecting actual memory size for video device >2023-07-04 07:38:31.741+0000: 293937: debug : virDomainObjBeginJobInternal:318 : Starting job: API=remoteDispatchDomainCreate job=async nested agentJob=none asyncJob=none (vm=0x3ff400ad100 name=rhel, current job=none agentJob=none async=start) >2023-07-04 07:38:31.741+0000: 293937: debug : virDomainObjBeginJobInternal:382 : Started job: async nested (async=start vm=0x3ff400ad100 name=rhel) >2023-07-04 07:38:31.741+0000: 293937: debug : qemuDomainObjEnterMonitorInternal:6311 : Entering monitor (mon=0x3ff70043010 vm=0x3ff400ad100 name=rhel) >2023-07-04 07:38:31.741+0000: 293937: debug : qemuDomainObjExitMonitor:6340 : Exited monitor (mon=0x3ff70043010 
vm=0x3ff400ad100 name=rhel) >2023-07-04 07:38:31.741+0000: 293937: debug : virDomainObjEndJob:614 : Stopping job: async nested (async=start vm=0x3ff400ad100 name=rhel) >2023-07-04 07:38:31.741+0000: 293937: debug : qemuProcessRefreshState:7963 : Updating disk data >2023-07-04 07:38:31.741+0000: 293937: debug : virDomainObjBeginJobInternal:318 : Starting job: API=remoteDispatchDomainCreate job=async nested agentJob=none asyncJob=none (vm=0x3ff400ad100 name=rhel, current job=none agentJob=none async=start) >2023-07-04 07:38:31.741+0000: 293937: debug : virDomainObjBeginJobInternal:382 : Started job: async nested (async=start vm=0x3ff400ad100 name=rhel) >2023-07-04 07:38:31.741+0000: 293937: debug : qemuDomainObjEnterMonitorInternal:6311 : Entering monitor (mon=0x3ff70043010 vm=0x3ff400ad100 name=rhel) >2023-07-04 07:38:31.741+0000: 293937: debug : qemuMonitorGetBlockInfo:1938 : mon:0x3ff70043010 vm:0x3ff400ad100 fd:26 >2023-07-04 07:38:31.741+0000: 293937: info : qemuMonitorSend:864 : QEMU_MONITOR_SEND_MSG: mon=0x3ff70043010 msg={"execute":"query-block","id":"libvirt-12"} > fd=-1 >2023-07-04 07:38:31.741+0000: 294055: info : qemuMonitorIOWrite:366 : QEMU_MONITOR_IO_WRITE: mon=0x3ff70043010 buf={"execute":"query-block","id":"libvirt-12"} > len=45 ret=45 errno=0 >2023-07-04 07:38:31.742+0000: 294055: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"return": [{"io-status": "ok", "device": "", "locked": false, "removable": false, "inserted": {"iops_rd": 0, "detect_zeroes": "off", "image": {"virtual-size": 524288000, "filename": "/dev/sda", "format": "raw", "actual-size": 0, "dirty-flag": false}, "iops_wr": 0, "ro": false, "node-name": "libvirt-1-format", "backing_file_depth": 0, "drv": "raw", "iops": 0, "bps_wr": 0, "write_threshold": 0, "encrypted": false, "bps": 0, "bps_rd": 0, "cache": {"no-flush": false, "direct": false, "writeback": true}, "file": "/dev/sda"}, "qdev": "/machine/peripheral/virtio-disk1/virtio-backend", "type": "unknown"}], "id": "libvirt-12"}] 
>2023-07-04 07:38:31.742+0000: 294055: info : qemuMonitorJSONIOProcessLine:210 : QEMU_MONITOR_RECV_REPLY: mon=0x3ff70043010 reply={"return": [{"io-status": "ok", "device": "", "locked": false, "removable": false, "inserted": {"iops_rd": 0, "detect_zeroes": "off", "image": {"virtual-size": 524288000, "filename": "/dev/sda", "format": "raw", "actual-size": 0, "dirty-flag": false}, "iops_wr": 0, "ro": false, "node-name": "libvirt-1-format", "backing_file_depth": 0, "drv": "raw", "iops": 0, "bps_wr": 0, "write_threshold": 0, "encrypted": false, "bps": 0, "bps_rd": 0, "cache": {"no-flush": false, "direct": false, "writeback": true}, "file": "/dev/sda"}, "qdev": "/machine/peripheral/virtio-disk1/virtio-backend", "type": "unknown"}], "id": "libvirt-12"} >2023-07-04 07:38:31.742+0000: 293937: debug : qemuDomainObjExitMonitor:6340 : Exited monitor (mon=0x3ff70043010 vm=0x3ff400ad100 name=rhel) >2023-07-04 07:38:31.742+0000: 293937: debug : virDomainObjEndJob:614 : Stopping job: async nested (async=start vm=0x3ff400ad100 name=rhel) >2023-07-04 07:38:31.742+0000: 293937: debug : qemuProcessRefreshState:7967 : Updating rx-filter data >2023-07-04 07:38:31.742+0000: 293937: debug : qemuProcessFinishStartup:7990 : Starting domain CPUs >2023-07-04 07:38:31.742+0000: 293937: debug : qemuProcessStartCPUs:3193 : Using lock state '<null>' >2023-07-04 07:38:31.742+0000: 293937: debug : virDomainLockProcessResume:220 : plugin=0x3ff40025ee0 dom=0x3ff400ad100 state=<null> >2023-07-04 07:38:31.742+0000: 293937: debug : virDomainLockManagerNew:130 : plugin=0x3ff40025ee0 dom=0x3ff400ad100 withResources=1 >2023-07-04 07:38:31.742+0000: 293937: debug : virLockManagerPluginGetDriver:275 : plugin=0x3ff40025ee0 >2023-07-04 07:38:31.742+0000: 293937: debug : virLockManagerNew:298 : driver=0x3ff886626e8 type=0 nparams=5 params=0x3ff7b87af30 flags=0x0 >2023-07-04 07:38:31.742+0000: 293937: debug : virLockManagerLogParams:96 : key=uuid type=uuid value=e2a90373-48e4-4bfc-8f57-07e84139bd67 >2023-07-04 
07:38:31.742+0000: 293937: debug : virLockManagerLogParams:89 : key=name type=string value=rhel >2023-07-04 07:38:31.742+0000: 293937: debug : virLockManagerLogParams:77 : key=id type=uint value=1 >2023-07-04 07:38:31.742+0000: 293937: debug : virLockManagerLogParams:77 : key=pid type=uint value=294044 >2023-07-04 07:38:31.742+0000: 293937: debug : virLockManagerLogParams:92 : key=uri type=cstring value=qemu:///system >2023-07-04 07:38:31.742+0000: 293937: debug : virDomainLockManagerNew:143 : Adding leases >2023-07-04 07:38:31.742+0000: 293937: debug : virDomainLockManagerNew:148 : Adding disks >2023-07-04 07:38:31.742+0000: 293937: debug : virDomainLockManagerAddImage:87 : Add disk /dev/sda >2023-07-04 07:38:31.742+0000: 293937: debug : virLockManagerAddResource:324 : lock=0x3ff700457a0 type=0 name=/dev/sda nparams=0 params=(nil) flags=0x0 >2023-07-04 07:38:31.742+0000: 293937: debug : virLockManagerAcquire:342 : lock=0x3ff700457a0 state='<null>' flags=0x0 action=0 fd=(nil) >2023-07-04 07:38:31.742+0000: 293937: debug : virLockManagerFree:380 : lock=0x3ff700457a0 >2023-07-04 07:38:31.742+0000: 293937: debug : virDomainObjBeginJobInternal:318 : Starting job: API=remoteDispatchDomainCreate job=async nested agentJob=none asyncJob=none (vm=0x3ff400ad100 name=rhel, current job=none agentJob=none async=start) >2023-07-04 07:38:31.742+0000: 293937: debug : virDomainObjBeginJobInternal:382 : Started job: async nested (async=start vm=0x3ff400ad100 name=rhel) >2023-07-04 07:38:31.742+0000: 293937: debug : qemuDomainObjEnterMonitorInternal:6311 : Entering monitor (mon=0x3ff70043010 vm=0x3ff400ad100 name=rhel) >2023-07-04 07:38:31.742+0000: 293937: debug : qemuMonitorStartCPUs:1432 : mon:0x3ff70043010 vm:0x3ff400ad100 fd:26 >2023-07-04 07:38:31.742+0000: 293937: info : qemuMonitorSend:864 : QEMU_MONITOR_SEND_MSG: mon=0x3ff70043010 msg={"execute":"cont","id":"libvirt-13"} > fd=-1 >2023-07-04 07:38:31.742+0000: 294055: info : qemuMonitorIOWrite:366 : QEMU_MONITOR_IO_WRITE: 
mon=0x3ff70043010 buf={"execute":"cont","id":"libvirt-13"} > len=38 ret=38 errno=0 >2023-07-04 07:38:31.742+0000: 294055: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"timestamp": {"seconds": 1688456311, "microseconds": 742303}, "event": "RESUME"}] >2023-07-04 07:38:31.742+0000: 294055: info : qemuMonitorJSONIOProcessLine:205 : QEMU_MONITOR_RECV_EVENT: mon=0x3ff70043010 event={"timestamp": {"seconds": 1688456311, "microseconds": 742303}, "event": "RESUME"} >2023-07-04 07:38:31.742+0000: 294055: debug : qemuMonitorJSONIOProcessEvent:154 : mon=0x3ff70043010 obj=0x3ff400912b0 >2023-07-04 07:38:31.742+0000: 294055: debug : qemuMonitorEmitEvent:1072 : mon=0x3ff70043010 event=RESUME >2023-07-04 07:38:31.742+0000: 294055: debug : qemuProcessHandleEvent:546 : vm=0x3ff400ad100 >2023-07-04 07:38:31.742+0000: 294055: debug : qemuMonitorJSONIOProcessEvent:177 : handle RESUME handler=0x3ff80723800 data=(nil) >2023-07-04 07:38:31.742+0000: 294055: debug : qemuMonitorEmitResume:1109 : mon=0x3ff70043010 >2023-07-04 07:38:31.742+0000: 294055: debug : qemuProcessHandleResume:710 : Transitioned guest rhel into running state, reason 'booted', event detail 0 >2023-07-04 07:38:31.742+0000: 294055: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"return": {}, "id": "libvirt-13"}] >2023-07-04 07:38:31.742+0000: 294055: info : qemuMonitorJSONIOProcessLine:210 : QEMU_MONITOR_RECV_REPLY: mon=0x3ff70043010 reply={"return": {}, "id": "libvirt-13"} >2023-07-04 07:38:31.742+0000: 293937: debug : qemuDomainObjExitMonitor:6340 : Exited monitor (mon=0x3ff70043010 vm=0x3ff400ad100 name=rhel) >2023-07-04 07:38:31.742+0000: 293937: debug : virDomainObjEndJob:614 : Stopping job: async nested (async=start vm=0x3ff400ad100 name=rhel) >2023-07-04 07:38:31.742+0000: 293937: debug : qemuProcessFinishStartup:8003 : Writing domain status to disk >2023-07-04 07:38:31.742+0000: 293937: debug : qemuDomainLogContextFinalize:502 : ctxt=0x3ff4000f080 >2023-07-04 07:38:31.743+0000: 293937: debug : 
virDomainObjEndAsyncJob:652 : Stopping async job: start (vm=0x3ff400ad100 name=rhel) >2023-07-04 07:38:31.743+0000: 293937: debug : virDomainDispose:326 : release domain 0x3ff4001fa40 rhel e2a90373-48e4-4bfc-8f57-07e84139bd67 >2023-07-04 07:38:31.743+0000: 293937: debug : virThreadJobClear:118 : Thread 293937 (rpc-virtqemud) finished job remoteDispatchDomainCreate with ret=0 >2023-07-04 07:38:31.743+0000: 293938: debug : virThreadJobSet:93 : Thread 293938 (rpc-virtqemud) is now running job remoteDispatchDomainLookupByUUID >2023-07-04 07:38:31.744+0000: 293938: debug : virDomainLookupByUUID:369 : conn=0x3ff40014190, uuid=e2a90373-48e4-4bfc-8f57-07e84139bd67 >2023-07-04 07:38:31.744+0000: 293938: debug : virDomainDispose:326 : release domain 0x3ff2c006950 rhel e2a90373-48e4-4bfc-8f57-07e84139bd67 >2023-07-04 07:38:31.744+0000: 293938: debug : virThreadJobClear:118 : Thread 293938 (rpc-virtqemud) finished job remoteDispatchDomainLookupByUUID with ret=0 >2023-07-04 07:38:31.744+0000: 293934: debug : virThreadJobSet:93 : Thread 293934 (rpc-virtqemud) is now running job remoteDispatchConnectUnregisterCloseCallback >2023-07-04 07:38:31.744+0000: 293934: debug : virConnectUnregisterCloseCallback:1538 : conn=0x3ff40014190 >2023-07-04 07:38:31.744+0000: 293934: debug : virThreadJobClear:118 : Thread 293934 (rpc-virtqemud) finished job remoteDispatchConnectUnregisterCloseCallback with ret=0 >2023-07-04 07:38:31.744+0000: 293939: debug : virThreadJobSet:93 : Thread 293939 (prio-rpc-virtqemud) is now running job remoteDispatchConnectClose >2023-07-04 07:38:31.744+0000: 293939: debug : virThreadJobClear:118 : Thread 293939 (prio-rpc-virtqemud) finished job remoteDispatchConnectClose with ret=0 >2023-07-04 07:38:31.744+0000: 293933: debug : daemonRemoveAllClientStreams:519 : stream=(nil) >2023-07-04 07:38:31.745+0000: 293933: debug : virConnectClose:1320 : conn=0x3ff40014190 >2023-07-04 07:38:31.745+0000: 293933: debug : virCloseCallbacksDomainRunForConn:346 : conn=0x3ff40014190 
>2023-07-04 07:38:31.754+0000: 294055: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"timestamp": {"seconds": 1688456311, "microseconds": 754183}, "event": "GUEST_PANICKED", "data": {"action": "pause", "info": {"core": 0, "psw-addr": 0, "reason": "disabled-wait", "psw-mask": 562956395872256, "type": "s390"}}}] >2023-07-04 07:38:31.754+0000: 294055: info : qemuMonitorJSONIOProcessLine:205 : QEMU_MONITOR_RECV_EVENT: mon=0x3ff70043010 event={"timestamp": {"seconds": 1688456311, "microseconds": 754183}, "event": "GUEST_PANICKED", "data": {"action": "pause", "info": {"core": 0, "psw-addr": 0, "reason": "disabled-wait", "psw-mask": 562956395872256, "type": "s390"}}} >2023-07-04 07:38:31.754+0000: 294055: debug : qemuMonitorJSONIOProcessEvent:154 : mon=0x3ff70043010 obj=0x3ff40036fa0 >2023-07-04 07:38:31.754+0000: 294055: debug : qemuMonitorEmitEvent:1072 : mon=0x3ff70043010 event=GUEST_PANICKED >2023-07-04 07:38:31.754+0000: 294055: debug : qemuProcessHandleEvent:546 : vm=0x3ff400ad100 >2023-07-04 07:38:31.754+0000: 294055: debug : qemuMonitorJSONIOProcessEvent:177 : handle GUEST_PANICKED handler=0x3ff80723810 data=0x3ff400ab4b0 >2023-07-04 07:38:31.754+0000: 294055: debug : qemuMonitorEmitGuestPanic:1119 : mon=0x3ff70043010 >2023-07-04 07:38:31.754+0000: 294055: debug : qemuMonitorJSONIOProcessLine:191 : Line [{"timestamp": {"seconds": 1688456311, "microseconds": 754310}, "event": "STOP"}] >2023-07-04 07:38:31.754+0000: 294055: info : qemuMonitorJSONIOProcessLine:205 : QEMU_MONITOR_RECV_EVENT: mon=0x3ff70043010 event={"timestamp": {"seconds": 1688456311, "microseconds": 754310}, "event": "STOP"} >2023-07-04 07:38:31.754+0000: 294055: debug : qemuMonitorJSONIOProcessEvent:154 : mon=0x3ff70043010 obj=0x3ff40036fa0 >2023-07-04 07:38:31.754+0000: 294055: debug : qemuMonitorEmitEvent:1072 : mon=0x3ff70043010 event=STOP >2023-07-04 07:38:31.754+0000: 294055: debug : qemuProcessHandleEvent:546 : vm=0x3ff400ad100 >2023-07-04 07:38:31.754+0000: 294070: debug : 
virThreadJobSetWorker:75 : Thread 294070 is running worker qemu-event >2023-07-04 07:38:31.754+0000: 294070: debug : qemuProcessEventHandler:4038 : vm=0x3ff400ad100, event=1 >2023-07-04 07:38:31.754+0000: 294055: debug : qemuMonitorJSONIOProcessEvent:177 : handle STOP handler=0x3ff807237f0 data=(nil) >2023-07-04 07:38:31.754+0000: 294070: debug : virDomainObjBeginJobInternal:318 : Starting job: API=qemu-event job=none agentJob=none asyncJob=dump (vm=0x3ff400ad100 name=rhel, current job=none agentJob=none async=none) >2023-07-04 07:38:31.754+0000: 294055: debug : qemuMonitorEmitStop:1100 : mon=0x3ff70043010 >2023-07-04 07:38:31.754+0000: 294070: debug : virDomainObjBeginJobInternal:391 : Started async job: dump (vm=0x3ff400ad100 name=rhel) >2023-07-04 07:38:31.754+0000: 294070: debug : qemuDomainLogAppendMessage:7202 : Append log message (vm='rhel' message='2023-07-04 07:38:31.754+0000: panic s390: core='0' psw-mask='0x0002000180000000' psw-addr='0x0000000000000000' reason='disabled-wait' >) stdioLogD=1 >2023-07-04 07:38:31.755+0000: 294070: debug : virDomainLockProcessPause:200 : plugin=0x3ff40025ee0 dom=0x3ff400ad100 state=0x3ff400819e0 >2023-07-04 07:38:31.755+0000: 294070: debug : virDomainLockManagerNew:130 : plugin=0x3ff40025ee0 dom=0x3ff400ad100 withResources=1 >2023-07-04 07:38:31.755+0000: 294070: debug : virLockManagerPluginGetDriver:275 : plugin=0x3ff40025ee0 >2023-07-04 07:38:31.755+0000: 294070: debug : virLockManagerNew:298 : driver=0x3ff886626e8 type=0 nparams=5 params=0x3ff3affc718 flags=0x0 >2023-07-04 07:38:31.755+0000: 294070: debug : virLockManagerLogParams:96 : key=uuid type=uuid value=e2a90373-48e4-4bfc-8f57-07e84139bd67 >2023-07-04 07:38:31.755+0000: 294070: debug : virLockManagerLogParams:89 : key=name type=string value=rhel >2023-07-04 07:38:31.755+0000: 294070: debug : virLockManagerLogParams:77 : key=id type=uint value=1 >2023-07-04 07:38:31.755+0000: 294070: debug : virLockManagerLogParams:77 : key=pid type=uint value=294044 >2023-07-04 
07:38:31.755+0000: 294070: debug : virLockManagerLogParams:92 : key=uri type=cstring value=(null) >2023-07-04 07:38:31.755+0000: 294070: debug : virDomainLockManagerNew:143 : Adding leases >2023-07-04 07:38:31.755+0000: 294070: debug : virDomainLockManagerNew:148 : Adding disks >2023-07-04 07:38:31.755+0000: 294070: debug : virDomainLockManagerAddImage:87 : Add disk /dev/sda >2023-07-04 07:38:31.755+0000: 294070: debug : virLockManagerAddResource:324 : lock=0x3ff30008fb0 type=0 name=/dev/sda nparams=0 params=(nil) flags=0x0 >2023-07-04 07:38:31.755+0000: 294070: debug : virLockManagerRelease:358 : lock=0x3ff30008fb0 state=0x3ff400819e0 flags=0x0 >2023-07-04 07:38:31.755+0000: 294070: debug : virLockManagerFree:380 : lock=0x3ff30008fb0 >2023-07-04 07:38:31.755+0000: 294070: debug : processGuestPanicEvent:3552 : Preserving lock state '<null>' >2023-07-04 07:38:31.755+0000: 294070: debug : qemuProcessStop:8259 : Shutting down vm=0x3ff400ad100 name=rhel id=1 pid=294044, reason=crashed, asyncJob=dump, flags=0x0 >2023-07-04 07:38:31.755+0000: 294070: debug : virDomainObjBeginJobInternal:318 : Starting job: API=qemu-event job=async nested agentJob=none asyncJob=none (vm=0x3ff400ad100 name=rhel, current job=none agentJob=none async=dump) >2023-07-04 07:38:31.755+0000: 294070: debug : virDomainObjBeginJobInternal:382 : Started job: async nested (async=dump vm=0x3ff400ad100 name=rhel) >2023-07-04 07:38:31.755+0000: 294070: debug : qemuDomainLogAppendMessage:7202 : Append log message (vm='rhel' message='2023-07-04 07:38:31.755+0000: shutting down, reason=crashed >) stdioLogD=1 >2023-07-04 07:38:31.755+0000: 294070: info : qemuMonitorClose:785 : QEMU_MONITOR_CLOSE: mon=0x3ff70043010 >2023-07-04 07:38:31.755+0000: 294055: debug : qemuMonitorDispose:214 : mon=0x3ff70043010 >2023-07-04 07:38:31.755+0000: 294070: debug : qemuProcessKill:8175 : vm=0x3ff400ad100 name=rhel pid=294044 flags=0x5 >2023-07-04 07:38:31.755+0000: 294070: debug : virProcessKillPainfullyDelay:377 : 
vpid=294044 force=1 extradelay=0 group=0
>2023-07-04 07:38:32.356+0000: 294070: debug : qemuDomainCleanupRun:7652 : driver=0x3ff40021560, vm=rhel
>2023-07-04 07:38:32.356+0000: 294070: debug : virSecuritySELinuxRestoreAllLabel:2860 : Restoring security label on rhel migrated=0
>2023-07-04 07:38:32.356+0000: 294070: info : virSecuritySELinuxRestoreFileLabel:1492 : Restoring SELinux context on '/dev/sda'
>2023-07-04 07:38:32.356+0000: 294070: debug : virSecurityDACRestoreAllLabel:1928 : Restoring security label on rhel migrated=0
>2023-07-04 07:38:32.450+0000: 294070: debug : virConnectOpen:1204 : name=network:///system
>2023-07-04 07:38:32.450+0000: 294070: debug : virConfLoadConfig:1515 : Loading config file '/etc/libvirt/libvirt.conf'
>2023-07-04 07:38:32.450+0000: 294070: debug : virConfReadFile:723 : filename=/etc/libvirt/libvirt.conf
>2023-07-04 07:38:32.450+0000: 294070: debug : virConfGetValueStringList:913 : Get value string list (nil) 0
>2023-07-04 07:38:32.450+0000: 294070: debug : virConnectOpenInternal:966 : Split "network:///system" to URI components:
> scheme network
> server <null>
> user <null>
> port 0
> path /system
>2023-07-04 07:38:32.450+0000: 294070: debug : virConnectOpenInternal:1072 : trying driver 0 (Test) ...
>2023-07-04 07:38:32.450+0000: 294070: debug : virConnectOpenInternal:1094 : Checking for supported URI schemes
>2023-07-04 07:38:32.450+0000: 294070: debug : virConnectOpenInternal:1103 : No matching URI scheme
>2023-07-04 07:38:32.450+0000: 294070: debug : virConnectOpenInternal:1072 : trying driver 1 (ESX) ...
>2023-07-04 07:38:32.450+0000: 294070: debug : virConnectOpenInternal:1094 : Checking for supported URI schemes
>2023-07-04 07:38:32.450+0000: 294070: debug : virConnectOpenInternal:1103 : No matching URI scheme
>2023-07-04 07:38:32.450+0000: 294070: debug : virConnectOpenInternal:1072 : trying driver 2 (remote) ...
>2023-07-04 07:38:32.450+0000: 294070: debug : virConnectOpenInternal:1111 : Matching any URI scheme for 'network'
>2023-07-04 07:38:32.450+0000: 294070: debug : virConfGetValueString:865 : Get value string (nil) 0
>2023-07-04 07:38:32.450+0000: 294070: debug : virConfGetValueString:865 : Get value string (nil) 0
>2023-07-04 07:38:32.451+0000: 294070: debug : virConnectOpenInternal:1137 : driver 2 remote returned SUCCESS
>2023-07-04 07:38:32.451+0000: 294070: debug : virGetConnectGeneric:157 : Opened new network connection 0x3ff40014c90
>2023-07-04 07:38:32.451+0000: 294070: debug : virGetConnectGeneric:164 : Attempting to delegate current identity
>2023-07-04 07:38:32.451+0000: 294070: debug : virConnectSetIdentity:99 : conn=0x3ff40014c90 params=0x3ff3000d000 nparams=8 flags=0x0
>2023-07-04 07:38:32.451+0000: 294070: debug : virConnectSetIdentity:100 : params["process-id"]=(llong)293933
>2023-07-04 07:38:32.451+0000: 294070: debug : virConnectSetIdentity:100 : params["process-time"]=(ullong)1507058
>2023-07-04 07:38:32.451+0000: 294070: debug : virConnectSetIdentity:100 : params["user-name"]=(string)root
>2023-07-04 07:38:32.451+0000: 294070: debug : virConnectSetIdentity:100 : params["unix-user-id"]=(ullong)0
>2023-07-04 07:38:32.451+0000: 294070: debug : virConnectSetIdentity:100 : params["group-name"]=(string)root
>2023-07-04 07:38:32.451+0000: 294070: debug : virConnectSetIdentity:100 : params["unix-group-id"]=(ullong)0
>2023-07-04 07:38:32.451+0000: 294070: debug : virConnectSetIdentity:100 : params["selinux-context"]=(string)system_u:system_r:virtd_t:s0-s0:c0.c1023
>2023-07-04 07:38:32.451+0000: 294070: debug : virConnectSetIdentity:100 : params["system.token"]=(string)d4bcdce23037f18cfafbaf2a54e34d4e
>2023-07-04 07:38:32.451+0000: 294070: debug : virNetworkLookupByName:306 : conn=0x3ff40014c90, name=default
>2023-07-04 07:38:32.451+0000: 294070: debug : virNetworkPortLookupByUUID:1439 : conn=0x3ff30004da0, uuid=a0907b01-7e72-4679-bd47-fc483727b78c
>2023-07-04 07:38:32.451+0000: 294070: debug : virNetworkPortDelete:1789 : port=0x3ff2c008390, flags=0x0
>2023-07-04 07:38:32.453+0000: 294070: debug : virNetworkPortDispose:446 : release network port 0x3ff2c008390 a0907b01-7e72-4679-bd47-fc483727b78c
>2023-07-04 07:38:32.453+0000: 294070: debug : virNetworkDispose:388 : release network 0x3ff30004df0 default 03ea3bcd-1a0d-40d7-a648-a9e7ce820d7a
>2023-07-04 07:38:32.453+0000: 294070: debug : virNetworkDispose:388 : release network 0x3ff30004da0 default 03ea3bcd-1a0d-40d7-a648-a9e7ce820d7a
>2023-07-04 07:38:32.453+0000: 294070: debug : virSystemdTerminateMachine:585 : Attempting to terminate machine via systemd
>2023-07-04 07:38:32.542+0000: 294070: debug : virCPUDataFree:331 : data=(nil)
>2023-07-04 07:38:32.542+0000: 294070: debug : virDomainObjEndJob:614 : Stopping job: async nested (async=dump vm=0x3ff400ad100 name=rhel)
>2023-07-04 07:38:32.543+0000: 294070: debug : virDomainObjEndAsyncJob:652 : Stopping async job: dump (vm=0x3ff400ad100 name=rhel)
>2023-07-04 07:38:34.394+0000: 293935: debug : virThreadJobSet:93 : Thread 293935 (rpc-virtqemud) is now running job remoteDispatchAuthList
>2023-07-04 07:38:34.394+0000: 293935: debug : virThreadJobClear:118 : Thread 293935 (rpc-virtqemud) finished job remoteDispatchAuthList with ret=0
>2023-07-04 07:38:34.394+0000: 293942: debug : virThreadJobSet:93 : Thread 293942 (prio-rpc-virtqemud) is now running job remoteDispatchConnectSupportsFeature
>2023-07-04 07:38:34.394+0000: 293942: debug : virThreadJobClear:118 : Thread 293942 (prio-rpc-virtqemud) finished job remoteDispatchConnectSupportsFeature with ret=0
>2023-07-04 07:38:34.394+0000: 293943: debug : virThreadJobSet:93 : Thread 293943 (prio-rpc-virtqemud) is now running job remoteDispatchConnectOpen
>2023-07-04 07:38:34.394+0000: 293943: debug : virConnectOpenAuth:1277 : name=, auth=(nil), flags=0x0
>2023-07-04 07:38:34.394+0000: 293943: debug : virConfLoadConfig:1515 : Loading config file '/etc/libvirt/libvirt.conf'
>2023-07-04 07:38:34.394+0000: 293943: debug : virConfReadFile:723 : filename=/etc/libvirt/libvirt.conf
>2023-07-04 07:38:34.394+0000: 293943: debug : virConfGetValueString:865 : Get value string (nil) 0
>2023-07-04 07:38:34.394+0000: 293943: debug : virConnectOpenInternal:933 : Trying to probe for default URI
>2023-07-04 07:38:34.394+0000: 293943: debug : virConnectOpenInternal:938 : QEMU driver URI probe returned 'qemu:///system'
>2023-07-04 07:38:34.394+0000: 293943: debug : virConfGetValueStringList:913 : Get value string list (nil) 0
>2023-07-04 07:38:34.394+0000: 293943: debug : virConnectOpenInternal:966 : Split "qemu:///system" to URI components:
> scheme qemu
> server <null>
> user <null>
> port 0
> path /system
>2023-07-04 07:38:34.394+0000: 293943: debug : virConnectOpenInternal:1072 : trying driver 0 (Test) ...
>2023-07-04 07:38:34.394+0000: 293943: debug : virConnectOpenInternal:1094 : Checking for supported URI schemes
>2023-07-04 07:38:34.394+0000: 293943: debug : virConnectOpenInternal:1103 : No matching URI scheme
>2023-07-04 07:38:34.394+0000: 293943: debug : virConnectOpenInternal:1072 : trying driver 1 (ESX) ...
>2023-07-04 07:38:34.394+0000: 293943: debug : virConnectOpenInternal:1094 : Checking for supported URI schemes
>2023-07-04 07:38:34.394+0000: 293943: debug : virConnectOpenInternal:1103 : No matching URI scheme
>2023-07-04 07:38:34.394+0000: 293943: debug : virConnectOpenInternal:1072 : trying driver 2 (remote) ...
>2023-07-04 07:38:34.394+0000: 293943: debug : virConnectOpenInternal:1111 : Matching any URI scheme for 'qemu'
>2023-07-04 07:38:34.394+0000: 293943: debug : virConnectOpenInternal:1137 : driver 2 remote returned DECLINED
>2023-07-04 07:38:34.394+0000: 293943: debug : virConnectOpenInternal:1072 : trying driver 3 (QEMU) ...
>2023-07-04 07:38:34.394+0000: 293943: debug : virConnectOpenInternal:1094 : Checking for supported URI schemes
>2023-07-04 07:38:34.394+0000: 293943: debug : virConnectOpenInternal:1097 : Matched URI scheme 'qemu'
>2023-07-04 07:38:34.395+0000: 293943: debug : virConnectOpenInternal:1137 : driver 3 QEMU returned SUCCESS
>2023-07-04 07:38:34.395+0000: 293943: debug : virConnectGetType:163 : conn=0x3ff3000c3b0
>2023-07-04 07:38:34.395+0000: 293943: debug : virThreadJobClear:118 : Thread 293943 (prio-rpc-virtqemud) finished job remoteDispatchConnectOpen with ret=0
>2023-07-04 07:38:34.395+0000: 293934: debug : virThreadJobSet:93 : Thread 293934 (rpc-virtqemud) is now running job remoteDispatchConnectGetURI
>2023-07-04 07:38:34.395+0000: 293934: debug : virConnectGetURI:316 : conn=0x3ff3000c3b0
>2023-07-04 07:38:34.395+0000: 293934: debug : virThreadJobClear:118 : Thread 293934 (rpc-virtqemud) finished job remoteDispatchConnectGetURI with ret=0
>2023-07-04 07:38:34.395+0000: 293936: debug : virThreadJobSet:93 : Thread 293936 (rpc-virtqemud) is now running job remoteDispatchConnectSupportsFeature
>2023-07-04 07:38:34.395+0000: 293936: debug : virThreadJobClear:118 : Thread 293936 (rpc-virtqemud) finished job remoteDispatchConnectSupportsFeature with ret=0
>2023-07-04 07:38:34.395+0000: 293935: debug : virThreadJobSet:93 : Thread 293935 (rpc-virtqemud) is now running job remoteDispatchConnectSupportsFeature
>2023-07-04 07:38:34.395+0000: 293935: debug : virThreadJobClear:118 : Thread 293935 (rpc-virtqemud) finished job remoteDispatchConnectSupportsFeature with ret=0
>2023-07-04 07:38:34.405+0000: 293942: debug : virThreadJobSet:93 : Thread 293942 (prio-rpc-virtqemud) is now running job remoteDispatchConnectRegisterCloseCallback
>2023-07-04 07:38:34.405+0000: 293942: debug : virConnectRegisterCloseCallback:1501 : conn=0x3ff3000c3b0
>2023-07-04 07:38:34.405+0000: 293942: debug : virThreadJobClear:118 : Thread 293942 (prio-rpc-virtqemud) finished job remoteDispatchConnectRegisterCloseCallback with ret=0
>2023-07-04 07:38:34.405+0000: 293938: debug : virThreadJobSet:93 : Thread 293938 (rpc-virtqemud) is now running job remoteDispatchConnectListAllDomains
>2023-07-04 07:38:34.405+0000: 293938: debug : virConnectListAllDomains:6964 : conn=0x3ff3000c3b0, domains=0x3ff8387b7b8, flags=0x3
>2023-07-04 07:38:34.405+0000: 293938: debug : virDomainDispose:326 : release domain 0x3ff2c0069a0 rhel e2a90373-48e4-4bfc-8f57-07e84139bd67
>2023-07-04 07:38:34.405+0000: 293938: debug : virDomainDispose:326 : release domain 0x3ff2c0069f0 vm2 86ed79ae-cc77-41f1-86a4-d13e1347e166
>2023-07-04 07:38:34.405+0000: 293938: debug : virDomainDispose:326 : release domain 0x3ff2c006a40 avocado-vt-vm1 e7d89b6f-e68e-48fa-9971-d12affa1237c
>2023-07-04 07:38:34.405+0000: 293938: debug : virThreadJobClear:118 : Thread 293938 (rpc-virtqemud) finished job remoteDispatchConnectListAllDomains with ret=0
>2023-07-04 07:38:34.406+0000: 293940: debug : virThreadJobSet:93 : Thread 293940 (prio-rpc-virtqemud) is now running job remoteDispatchDomainGetState
>2023-07-04 07:38:34.406+0000: 293940: debug : virDomainGetState:2714 : dom=0x3ff30006290, (VM: name=avocado-vt-vm1, uuid=e7d89b6f-e68e-48fa-9971-d12affa1237c), state=0x3ff58000ed0, reason=0x3ff58000ed4, flags=0x0
>2023-07-04 07:38:34.406+0000: 293940: debug : virDomainDispose:326 : release domain 0x3ff30006290 avocado-vt-vm1 e7d89b6f-e68e-48fa-9971-d12affa1237c
>2023-07-04 07:38:34.406+0000: 293940: debug : virThreadJobClear:118 : Thread 293940 (prio-rpc-virtqemud) finished job remoteDispatchDomainGetState with ret=0
>2023-07-04 07:38:34.406+0000: 293939: debug : virThreadJobSet:93 : Thread 293939 (prio-rpc-virtqemud) is now running job remoteDispatchDomainGetState
>2023-07-04 07:38:34.406+0000: 293939: debug : virDomainGetState:2714 : dom=0x3ff58002150, (VM: name=rhel, uuid=e2a90373-48e4-4bfc-8f57-07e84139bd67), state=0x3ff68002950, reason=0x3ff68002954, flags=0x0
>2023-07-04 07:38:34.406+0000: 293939: debug : virDomainDispose:326 : release domain 0x3ff58002150 rhel e2a90373-48e4-4bfc-8f57-07e84139bd67
>2023-07-04 07:38:34.406+0000: 293939: debug : virThreadJobClear:118 : Thread 293939 (prio-rpc-virtqemud) finished job remoteDispatchDomainGetState with ret=0
>2023-07-04 07:38:34.406+0000: 293935: debug : virThreadJobSet:93 : Thread 293935 (rpc-virtqemud) is now running job remoteDispatchDomainGetState
>2023-07-04 07:38:34.406+0000: 293935: debug : virDomainGetState:2714 : dom=0x3ff74006010, (VM: name=vm2, uuid=86ed79ae-cc77-41f1-86a4-d13e1347e166), state=0x3ff740036c0, reason=0x3ff740036c4, flags=0x0
>2023-07-04 07:38:34.406+0000: 293935: debug : virDomainDispose:326 : release domain 0x3ff74006010 vm2 86ed79ae-cc77-41f1-86a4-d13e1347e166
>2023-07-04 07:38:34.406+0000: 293935: debug : virThreadJobClear:118 : Thread 293935 (rpc-virtqemud) finished job remoteDispatchDomainGetState with ret=0
>2023-07-04 07:38:34.406+0000: 293942: debug : virThreadJobSet:93 : Thread 293942 (prio-rpc-virtqemud) is now running job remoteDispatchConnectUnregisterCloseCallback
>2023-07-04 07:38:34.406+0000: 293942: debug : virConnectUnregisterCloseCallback:1538 : conn=0x3ff3000c3b0
>2023-07-04 07:38:34.406+0000: 293942: debug : virThreadJobClear:118 : Thread 293942 (prio-rpc-virtqemud) finished job remoteDispatchConnectUnregisterCloseCallback with ret=0
>2023-07-04 07:38:34.406+0000: 293938: debug : virThreadJobSet:93 : Thread 293938 (rpc-virtqemud) is now running job remoteDispatchConnectClose
>2023-07-04 07:38:34.406+0000: 293938: debug : virThreadJobClear:118 : Thread 293938 (rpc-virtqemud) finished job remoteDispatchConnectClose with ret=0
>2023-07-04 07:38:34.406+0000: 293933: debug : daemonRemoveAllClientStreams:519 : stream=(nil)
>2023-07-04 07:38:34.406+0000: 293933: debug : virConnectClose:1320 : conn=0x3ff3000c3b0
>2023-07-04 07:38:34.406+0000: 293933: debug : virCloseCallbacksDomainRunForConn:346 : conn=0x3ff3000c3b0