Bug 1605071 - On machines where /dev/kvm exists but KVM doesn't work, libguestfs will not fall back to TCG
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libguestfs
Version: 7.6
Hardware: ppc64le
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: rc
Target Release: 7.7
Assignee: Richard W.M. Jones
QA Contact: YongkuiGuo
URL:
Whiteboard:
Depends On: 1621895 1628468 1628469
Blocks: 1619379 1598750
 
Reported: 2018-07-20 06:20 UTC by Xianghua Chen
Modified: 2019-08-06 12:44 UTC
CC: 18 users

Fixed In Version: libguestfs-1.40.1-1.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-08-06 12:44:11 UTC
Target Upstream Version:




Links
System ID Priority Status Summary Last Updated
IBM Linux Technology Center 170634 None None None 2019-08-06 02:28:20 UTC
Red Hat Product Errata RHEA-2019:2096 None None None 2019-08-06 12:44:35 UTC

Description Xianghua Chen 2018-07-20 06:20:16 UTC
Description of problem:
libguestfs-test-tool failed with "ioctl(KVM_CREATE_VM) failed: 22 Invalid argument" on ppc64le(p9+rhel-alt)


Version-Release number of selected component (if applicable):
libguestfs-1.38.2-5.el7.ppc64le
qemu-kvm-ma-2.12.0-7.el7.ppc64le
libvirt-4.4.0-2.el7.ppc64le
kernel-4.14.0-87.el7a.ppc64le

How reproducible:
100%

Steps:
1. Prepare a "ppc64le (p9) + RHEL-ALT-7.6-20180626.3 Server ppc64le" env.
2. Install libvirt* libguestfs* qemu-kvm* qemu-img*, start libvirt service
# systemctl start libvirtd virtlogd
3. Run libguestfs-test-tool
# libguestfs-test-tool 
     ************************************************************
     *                    IMPORTANT NOTICE
     *
     * When reporting bugs, include the COMPLETE, UNEDITED
     * output below in your bug report.
     *
     ************************************************************
PATH=/usr/lib64/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin
XDG_RUNTIME_DIR=/run/user/0
SELinux: Enforcing
guestfs_get_append: (null)
guestfs_get_autosync: 1
guestfs_get_backend: libvirt
guestfs_get_backend_settings: []
guestfs_get_cachedir: /var/tmp
guestfs_get_hv: /usr/libexec/qemu-kvm
guestfs_get_memsize: 768
guestfs_get_network: 0
guestfs_get_path: /usr/lib64/guestfs
guestfs_get_pgroup: 0
guestfs_get_program: libguestfs-test-tool
guestfs_get_recovery_proc: 1
guestfs_get_smp: 1
guestfs_get_sockdir: /tmp
guestfs_get_tmpdir: /tmp
guestfs_get_trace: 0
guestfs_get_verbose: 1
host_cpu: powerpc64le
Launching appliance, timeout set to 600 seconds.
libguestfs: launch: program=libguestfs-test-tool
libguestfs: launch: version=1.38.2rhel=7,release=5.el7,libvirt
libguestfs: launch: backend registered: unix
libguestfs: launch: backend registered: uml
libguestfs: launch: backend registered: libvirt
libguestfs: launch: backend registered: direct
libguestfs: launch: backend=libvirt
libguestfs: launch: tmpdir=/tmp/libguestfs6n3SlG
libguestfs: launch: umask=0022
libguestfs: launch: euid=0
libguestfs: libvirt version = 4004000 (4.4.0)
libguestfs: guest random name = guestfs-aktm88krwkjgpz0o
libguestfs: connect to libvirt
libguestfs: opening libvirt handle: URI = qemu:///system, auth = default+wrapper, flags = 0
libguestfs: successfully opened libvirt handle: conn = 0x10038872c40
libguestfs: qemu version (reported by libvirt) = 2012000 (2.12.0)
libguestfs: get libvirt capabilities
libguestfs: parsing capabilities XML
libguestfs: build appliance
libguestfs: begin building supermin appliance
libguestfs: run supermin
libguestfs: command: run: /usr/bin/supermin5
libguestfs: command: run: \ --build
libguestfs: command: run: \ --verbose
libguestfs: command: run: \ --if-newer
libguestfs: command: run: \ --lock /var/tmp/.guestfs-0/lock
libguestfs: command: run: \ --copy-kernel
libguestfs: command: run: \ -f ext2
libguestfs: command: run: \ --host-cpu powerpc64le
libguestfs: command: run: \ /usr/lib64/guestfs/supermin.d
libguestfs: command: run: \ -o /var/tmp/.guestfs-0/appliance.d
supermin: version: 5.1.19
supermin: rpm: detected RPM version 4.11
supermin: package handler: fedora/rpm
supermin: acquiring lock on /var/tmp/.guestfs-0/lock
supermin: build: /usr/lib64/guestfs/supermin.d
supermin: reading the supermin appliance
supermin: build: visiting /usr/lib64/guestfs/supermin.d/base.tar.gz type gzip base image (tar)
supermin: build: visiting /usr/lib64/guestfs/supermin.d/daemon.tar.gz type gzip base image (tar)
supermin: build: visiting /usr/lib64/guestfs/supermin.d/excludefiles type uncompressed excludefiles
supermin: build: visiting /usr/lib64/guestfs/supermin.d/hostfiles type uncompressed hostfiles
supermin: build: visiting /usr/lib64/guestfs/supermin.d/init.tar.gz type gzip base image (tar)
supermin: build: visiting /usr/lib64/guestfs/supermin.d/packages type uncompressed packages
supermin: build: visiting /usr/lib64/guestfs/supermin.d/udev-rules.tar.gz type gzip base image (tar)
supermin: build: visiting /usr/lib64/guestfs/supermin.d/zz-packages-rescue type uncompressed packages
supermin: build: visiting /usr/lib64/guestfs/supermin.d/zz-packages-rsync type uncompressed packages
supermin: build: visiting /usr/lib64/guestfs/supermin.d/zz-packages-xfs type uncompressed packages
supermin: mapping package names to installed packages
supermin: resolving full list of package dependencies
supermin: build: 200 packages, including dependencies
supermin: build: 31059 files
supermin: build: 7544 files, after matching excludefiles
supermin: build: 7552 files, after adding hostfiles
supermin: build: 7546 files, after removing unreadable files
supermin: build: 7570 files, after munging
supermin: kernel: looking for kernel using environment variables ...
supermin: kernel: looking for kernels in /lib/modules/*/vmlinuz ...
supermin: kernel: looking for kernels in /boot ...
supermin: kernel: kernel version of /boot/vmlinuz-4.14.0-87.el7a.ppc64le = 4.14.0-87.el7a.ppc64le (from filename)
supermin: kernel: picked modules path /lib/modules/4.14.0-87.el7a.ppc64le
supermin: kernel: kernel version of /boot/vmlinuz-0-rescue-a76ecfc0ff7f4403b8e811fbf1ad0949 = error, no modpath
supermin: kernel: picked vmlinuz /boot/vmlinuz-4.14.0-87.el7a.ppc64le
supermin: kernel: kernel_version 4.14.0-87.el7a.ppc64le
supermin: kernel: modpath /lib/modules/4.14.0-87.el7a.ppc64le
supermin: ext2: creating empty ext2 filesystem '/var/tmp/.guestfs-0/appliance.d.qf4zo3g3/root'
supermin: ext2: populating from base image
supermin: ext2: copying files from host filesystem
supermin: ext2: copying kernel modules
supermin: ext2: creating minimal initrd '/var/tmp/.guestfs-0/appliance.d.qf4zo3g3/initrd'
supermin: ext2: wrote 25 modules to minimal initrd
supermin: renaming /var/tmp/.guestfs-0/appliance.d.qf4zo3g3 to /var/tmp/.guestfs-0/appliance.d
libguestfs: finished building supermin appliance
libguestfs: command: run: qemu-img
libguestfs: command: run: \ create
libguestfs: command: run: \ -f qcow2
libguestfs: command: run: \ -o backing_file=/var/tmp/.guestfs-0/appliance.d/root,backing_fmt=raw
libguestfs: command: run: \ /tmp/libguestfs6n3SlG/overlay2.qcow2
Formatting '/tmp/libguestfs6n3SlG/overlay2.qcow2', fmt=qcow2 size=4294967296 backing_file=/var/tmp/.guestfs-0/appliance.d/root backing_fmt=raw cluster_size=65536 lazy_refcounts=off refcount_bits=16
libguestfs: create libvirt XML
libguestfs: libvirt XML:
<?xml version="1.0"?>
<domain type="kvm" xmlns:qemu="http://libvirt.org/schemas/domain/qemu/1.0">
  <name>guestfs-aktm88krwkjgpz0o</name>
  <memory unit="MiB">768</memory>
  <currentMemory unit="MiB">768</currentMemory>
  <cpu mode="host-passthrough">
    <model fallback="allow"/>
  </cpu>
  <vcpu>1</vcpu>
  <clock offset="utc">
    <timer name="rtc" tickpolicy="catchup"/>
    <timer name="pit" tickpolicy="delay"/>
  </clock>
  <os>
    <type machine="pseries">hvm</type>
    <kernel>/var/tmp/.guestfs-0/appliance.d/kernel</kernel>
    <initrd>/var/tmp/.guestfs-0/appliance.d/initrd</initrd>
    <cmdline>panic=1 console=hvc0 console=ttyS0 edd=off udevtimeout=6000 udev.event-timeout=6000 no_timer_check printk.time=1 cgroup_disable=memory usbcore.nousb cryptomgr.notests tsc=reliable 8250.nr_uarts=1 root=/dev/sdb selinux=0 guestfs_verbose=1 TERM=xterm-256color</cmdline>
  </os>
  <on_reboot>destroy</on_reboot>
  <devices>
    <rng model="virtio">
      <backend model="random">/dev/urandom</backend>
    </rng>
    <controller type="scsi" index="0" model="virtio-scsi"/>
    <disk device="disk" type="file">
      <source file="/tmp/libguestfs6n3SlG/scratch1.img"/>
      <target dev="sda" bus="scsi"/>
      <driver name="qemu" type="raw" cache="unsafe"/>
      <address type="drive" controller="0" bus="0" target="0" unit="0"/>
    </disk>
    <disk type="file" device="disk">
      <source file="/tmp/libguestfs6n3SlG/overlay2.qcow2"/>
      <target dev="sdb" bus="scsi"/>
      <driver name="qemu" type="qcow2" cache="unsafe"/>
      <address type="drive" controller="0" bus="0" target="1" unit="0"/>
    </disk>
    <serial type="unix">
      <source mode="connect" path="/tmp/libguestfscU7lOR/console.sock"/>
      <target port="0"/>
    </serial>
    <channel type="unix">
      <source mode="connect" path="/tmp/libguestfscU7lOR/guestfsd.sock"/>
      <target type="virtio" name="org.libguestfs.channel.0"/>
    </channel>
    <controller type="usb" model="none"/>
    <memballoon model="none"/>
  </devices>
  <qemu:commandline>
    <qemu:env name="TMPDIR" value="/var/tmp"/>
  </qemu:commandline>
</domain>
libguestfs: command: run: ls
libguestfs: command: run: \ -a
libguestfs: command: run: \ -l
libguestfs: command: run: \ -R
libguestfs: command: run: \ -Z /var/tmp/.guestfs-0
libguestfs: /var/tmp/.guestfs-0:
libguestfs: drwxr-xr-x. root root unconfined_u:object_r:user_tmp_t:s0 .
libguestfs: drwxrwxrwt. root root system_u:object_r:tmp_t:s0       ..
libguestfs: drwxr-xr-x. root root unconfined_u:object_r:user_tmp_t:s0 appliance.d
libguestfs: -rw-r--r--. root root unconfined_u:object_r:user_tmp_t:s0 lock
libguestfs: 
libguestfs: /var/tmp/.guestfs-0/appliance.d:
libguestfs: drwxr-xr-x. root root unconfined_u:object_r:user_tmp_t:s0 .
libguestfs: drwxr-xr-x. root root unconfined_u:object_r:user_tmp_t:s0 ..
libguestfs: -rw-r--r--. root root unconfined_u:object_r:user_tmp_t:s0 initrd
libguestfs: -rwxr-xr-x. root root unconfined_u:object_r:user_tmp_t:s0 kernel
libguestfs: -rw-r--r--. root root unconfined_u:object_r:user_tmp_t:s0 root
libguestfs: command: run: ls
libguestfs: command: run: \ -a
libguestfs: command: run: \ -l
libguestfs: command: run: \ -Z /tmp/libguestfscU7lOR
libguestfs: drwxr-xr-x. root root unconfined_u:object_r:user_tmp_t:s0 .
libguestfs: drwxrwxrwt. root root system_u:object_r:tmp_t:s0       ..
libguestfs: srw-rw----. root qemu unconfined_u:object_r:user_tmp_t:s0 console.sock
libguestfs: srw-rw----. root qemu unconfined_u:object_r:user_tmp_t:s0 guestfsd.sock
libguestfs: launch libvirt guest
libguestfs: error: could not create appliance through libvirt.

Try running qemu directly without libvirt using this environment variable:
export LIBGUESTFS_BACKEND=direct

Original error from libvirt: internal error: qemu unexpectedly closed the monitor: ioctl(KVM_CREATE_VM) failed: 22 Invalid argument
2018-07-20T06:16:36.044562Z qemu-kvm: failed to initialize KVM: Invalid argument [code=1 int1=-1]
libguestfs: closing guestfs handle 0x10038871250 (state 0)
libguestfs: command: run: rm
libguestfs: command: run: \ -rf /tmp/libguestfs6n3SlG
libguestfs: command: run: rm
libguestfs: command: run: \ -rf /tmp/libguestfscU7lOR

Architecture:          ppc64le
Byte Order:            Little Endian
CPU(s):                8
On-line CPU(s) list:   0-7
Thread(s) per core:    8
Core(s) per socket:    1
Socket(s):             1
NUMA node(s):          1
Model:                 2.2 (pvr 004e 0202)
Model name:            POWER9 (architected), altivec supported
Hypervisor vendor:     pHyp
Virtualization type:   para
L1d cache:             32K
L1i cache:             32K
L2 cache:              512K
L3 cache:              10240K
NUMA node0 CPU(s):     0-7

Actual results:
libguestfs-test-tool failed.

Expected results:
libguestfs-test-tool is ok.


Additional info:
Replacing qemu-kvm-ma with qemu-kvm-rhev gives the same error.
p8 + rhel7.6 is OK.

Comment 2 Dan Zheng 2018-07-20 08:47:23 UTC
The libvirt team only tests on PPC bare metal machines, not on VMs. ibm-p9z-07-lp1.pnr.lab.eng.bos.redhat.com is a VM on a physical host, so please try again on a bare metal host.

Comment 3 Richard W.M. Jones 2018-07-20 09:34:05 UTC
(In reply to Dan Zheng from comment #2)
> libvirt team only test on PPC bare metal machine, not on a VM.
> ibm-p9z-07-lp1.pnr.lab.eng.bos.redhat.com is a VM on physical host, so
> please try a bare metal host again.

It's supposed to work in the nested case too (using TCG).

(In reply to Xianghua Chen from comment #0)
There may be more information in the qemu log file.

Take the guest name (guestfs-aktm88krwkjgpz0o) and look for
a file called something like:

  ~/.cache/libvirt/log/qemu/guestfs-aktm88krwkjgpz0o.log
  /var/log/libvirt/qemu/log/guestfs-aktm88krwkjgpz0o.log

Comment 5 Xianghua Chen 2018-07-23 05:52:31 UTC
This is the log from direct mode. Unlike the libvirt backend, it seems to use kvm:tcg instead of kvm, but it still fails with the same error:

# export  LIBGUESTFS_BACKEND=direct;libguestfs-test-tool
     ************************************************************
     *                    IMPORTANT NOTICE
     *
     * When reporting bugs, include the COMPLETE, UNEDITED
     * output below in your bug report.
     *
     ************************************************************
LIBGUESTFS_BACKEND=direct
PATH=/usr/lib64/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin
XDG_RUNTIME_DIR=/run/user/0
SELinux: Enforcing
guestfs_get_append: (null)
guestfs_get_autosync: 1
guestfs_get_backend: direct
guestfs_get_backend_settings: []
guestfs_get_cachedir: /var/tmp
guestfs_get_hv: /usr/libexec/qemu-kvm
guestfs_get_memsize: 768
guestfs_get_network: 0
guestfs_get_path: /usr/lib64/guestfs
guestfs_get_pgroup: 0
guestfs_get_program: libguestfs-test-tool
guestfs_get_recovery_proc: 1
guestfs_get_smp: 1
guestfs_get_sockdir: /tmp
guestfs_get_tmpdir: /tmp
guestfs_get_trace: 0
guestfs_get_verbose: 1
host_cpu: powerpc64le
Launching appliance, timeout set to 600 seconds.
libguestfs: launch: program=libguestfs-test-tool
libguestfs: launch: version=1.38.2rhel=7,release=5.el7,libvirt
libguestfs: launch: backend registered: unix
libguestfs: launch: backend registered: uml
libguestfs: launch: backend registered: libvirt
libguestfs: launch: backend registered: direct
libguestfs: launch: backend=direct
libguestfs: launch: tmpdir=/tmp/libguestfsHwV5IM
libguestfs: launch: umask=0022
libguestfs: launch: euid=0
libguestfs: begin building supermin appliance
libguestfs: run supermin
libguestfs: command: run: /usr/bin/supermin5
libguestfs: command: run: \ --build
libguestfs: command: run: \ --verbose
libguestfs: command: run: \ --if-newer
libguestfs: command: run: \ --lock /var/tmp/.guestfs-0/lock
libguestfs: command: run: \ --copy-kernel
libguestfs: command: run: \ -f ext2
libguestfs: command: run: \ --host-cpu powerpc64le
libguestfs: command: run: \ /usr/lib64/guestfs/supermin.d
libguestfs: command: run: \ -o /var/tmp/.guestfs-0/appliance.d
supermin: version: 5.1.19
supermin: rpm: detected RPM version 4.11
supermin: package handler: fedora/rpm
supermin: acquiring lock on /var/tmp/.guestfs-0/lock
supermin: if-newer: output does not need rebuilding
libguestfs: finished building supermin appliance
libguestfs: begin testing qemu features
libguestfs: checking for previously cached test results of /usr/libexec/qemu-kvm, in /var/tmp/.guestfs-0
libguestfs: loading previously cached test results
libguestfs: qemu version: 2.12
libguestfs: qemu mandatory locking: yes
libguestfs: finished testing qemu features
/usr/libexec/qemu-kvm \
    -global virtio-blk-pci.scsi=off \
    -enable-fips \
    -nodefaults \
    -display none \
    -machine pseries,accel=kvm:tcg \
    -cpu host \
    -m 768 \
    -no-reboot \
    -rtc driftfix=slew \
    -kernel /var/tmp/.guestfs-0/appliance.d/kernel \
    -initrd /var/tmp/.guestfs-0/appliance.d/initrd \
    -object rng-random,filename=/dev/urandom,id=rng0 \
    -device virtio-rng-pci,rng=rng0 \
    -device virtio-scsi-pci,id=scsi \
    -drive file=/tmp/libguestfsHwV5IM/scratch1.img,cache=unsafe,format=raw,id=hd0,if=none \
    -device scsi-hd,drive=hd0 \
    -drive file=/var/tmp/.guestfs-0/appliance.d/root,snapshot=on,id=appliance,cache=unsafe,if=none,format=raw \
    -device scsi-hd,drive=appliance \
    -device virtio-serial-pci \
    -serial stdio \
    -chardev socket,path=/tmp/libguestfsaoGNrh/guestfsd.sock,id=channel0 \
    -device virtserialport,chardev=channel0,name=org.libguestfs.channel.0 \
    -append "panic=1 console=hvc0 console=ttyS0 edd=off udevtimeout=6000 udev.event-timeout=6000 no_timer_check printk.time=1 cgroup_disable=memory usbcore.nousb cryptomgr.notests tsc=reliable 8250.nr_uarts=1 root=/dev/sdb selinux=0 guestfs_verbose=1 TERM=xterm-256color"
ioctl(KVM_CREATE_VM) failed: 22 Invalid argument
qemu-kvm: failed to initialize KVM: Invalid argument
qemu-kvm: Back to tcg accelerator
qemu-kvm: unable to find CPU model 'host'
libguestfs: error: appliance closed the connection unexpectedly, see earlier error messages
libguestfs: child_cleanup: 0x1000c6e1250: child process died
libguestfs: sending SIGTERM to process 1079
libguestfs: error: /usr/libexec/qemu-kvm exited with error status 1, see debug messages above
libguestfs: error: guestfs_launch failed, see earlier error messages
libguestfs: closing guestfs handle 0x1000c6e1250 (state 0)
libguestfs: command: run: rm
libguestfs: command: run: \ -rf /tmp/libguestfsHwV5IM
libguestfs: command: run: rm
libguestfs: command: run: \ -rf /tmp/libguestfsaoGNrh

Comment 6 Xianghua Chen 2018-07-23 06:01:54 UTC
If I force the direct and libvirt backends to use TCG:
# export LIBGUESTFS_BACKEND_SETTINGS=force_tcg

Then everything is ok.
So the question is: is it reasonable to have to export the variable above manually each time, or should it be chosen automatically for us?
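For context, the effect of that variable can be sketched like this (a Python model of my understanding; the actual libguestfs implementation is C):

```python
# Hypothetical sketch (the real libguestfs code is C) of what
# LIBGUESTFS_BACKEND_SETTINGS=force_tcg changes: instead of asking qemu
# for "kvm:tcg" (try KVM, fall back to TCG), it requests plain "tcg".

def accel_option(backend_settings):
    """Return the accel= value for qemu's -machine option."""
    if "force_tcg" in backend_settings:
        return "tcg"            # skip KVM entirely
    return "kvm:tcg"            # let qemu fall back by itself

print(accel_option([]))              # -> kvm:tcg
print(accel_option(["force_tcg"]))   # -> tcg
```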

Comment 7 Richard W.M. Jones 2018-08-09 06:38:01 UTC
(In reply to Xianghua Chen from comment #6)
> If I force the direct and libvirt backends to use TCG:
> # export LIBGUESTFS_BACKEND_SETTINGS=force_tcg

This forces TCG, ie. -machine accel=tcg.  However qemu is supposed
to fall back to TCG if KVM fails (-machine accel=kvm:tcg), and that's
not happening for some reason.  See the full command line in comment 5.

As this appears to be a qemu bug, reassigning.
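The fallback chain, and where it breaks, can be modeled in a short sketch; everything here is a toy illustration of my reading of the logs, not qemu internals:

```python
# Toy model of "-machine accel=kvm:tcg": qemu tries each accelerator in
# order, but "-cpu host" can only be resolved under KVM, so the TCG
# fallback then dies looking up the CPU model -- exactly the sequence
# of errors seen in comment 5.  The function is invented.

def start_vm(accels, cpu, kvm_works):
    for accel in accels.split(":"):
        if accel == "kvm" and not kvm_works:
            continue            # "qemu-kvm: failed to initialize KVM"
        if accel == "tcg" and cpu == "host":
            raise ValueError("unable to find CPU model 'host'")
        return accel            # this accelerator initialises

print(start_vm("kvm:tcg", "power9", kvm_works=False))   # -> tcg
```

Without "-cpu host" the same call succeeds on TCG, which is why removing the option (comment 13) makes the guest boot.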

Comment 8 Karen Noel 2018-08-15 23:36:24 UTC
Xianghua, Is this a regression from 7.5? Thanks.

Comment 9 David Gibson 2018-08-16 02:12:44 UTC
The reason KVM is failing is pretty simple: since this is under another hypervisor (pHyp), KVM HV can't work, and KVM PR isn't supported on POWER9.

So, indeed, qemu should be falling back to TCG but isn't.  My first guess is that we're discovering the failure too late, but I'll investigate this.
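That constraint can be summarized in a small sketch (my own simplification, not kernel code):

```python
# KVM HV must own the hypervisor mode, so it cannot run inside a
# PowerVM (pHyp) partition, and KVM PR is not supported on POWER9.
# The function and its arguments are illustrative.

def ppc_kvm_usable(cpu, under_phyp):
    hv_ok = not under_phyp          # HV needs bare metal
    pr_ok = cpu == "POWER8"         # PR exists up to POWER8 only
    return hv_ok or pr_ok

print(ppc_kvm_usable("POWER9", under_phyp=True))   # -> False: TCG only
```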

Comment 10 Xianghua Chen 2018-08-16 07:57:25 UTC
(In reply to Karen Noel from comment #8)
> Xianghua, Is this a regression from 7.5? Thanks.

No, I don't think so, since the hardware is not exactly the same.
This case used to be tested on ppc64le (bare metal p9 + alt), but we never tested it on an LPAR.
I happened to reserve a p9 env in beaker and didn't notice it was an LPAR, so I raised it here to check whether this should be supported.

Comment 11 IBM Bug Proxy 2018-08-21 22:50:16 UTC
------- Comment From lagarcia@br.ibm.com 2018-08-21 18:40 EDT-------
Although I agree we should investigate this one and eventually fix it, I would not hold RHV 4.2 release because of this issue as we are definitely not supporting RHV on a PowerVM environment anyway.

Comment 12 IBM Bug Proxy 2018-08-22 12:30:20 UTC
------- Comment From seg@us.ibm.com 2018-08-22 08:23 EDT-------
Just making clear that this is actually a RHEL 7.6 bug and NOT a RHV 4.2 bug, so it would not delay RHV 4.2.6. The confusion is that the KVM "host" is an instance of RHEL 7.6 running inside a PowerVM LPAR but using the RHEL 7.6 Qemu (i.e. based on 2.12 not 2.10).

------- Comment From seg@us.ibm.com 2018-08-22 08:24 EDT-------
Correction to previous comment: ...running qemu from RHV 4.2, i.e. 2.10 based.

Comment 13 Serhii Popovych 2018-08-27 05:31:58 UTC
On P8 I'm only able to reproduce issue when "-cpu host" is given and "chmod go-rwx /dev/kvm"
to force TCG fallback (accel=kvm:tcg):

$ ./qemu-kvm-bz1605071.sh
Could not access KVM kernel module: Permission denied
qemu-kvm: failed to initialize KVM: Permission denied
qemu-kvm: Back to tcg accelerator
qemu-kvm: unable to find CPU model 'host'

With '-cpu host' being removed from command line shown in comment 5:

$ ./qemu-kvm.sh
Could not access KVM kernel module: Permission denied
qemu-kvm: failed to initialize KVM: Permission denied
qemu-kvm: Back to tcg accelerator
<normal guest boot>

$ /usr/libexec/qemu-kvm -M pseries,accel=kvm:tcg -cpu help
PowerPC power7_v2.3      PVR 003f0203
PowerPC power7           (alias for power7_v2.3)
PowerPC power7+_v2.1     PVR 004a0201
PowerPC power7+          (alias for power7+_v2.1)
PowerPC power8e_v2.1     PVR 004b0201
PowerPC power8e          (alias for power8e_v2.1)
PowerPC power8nvl_v1.0   PVR 004c0100
PowerPC power8nvl        (alias for power8nvl_v1.0)
PowerPC power8_v2.0      PVR 004d0200
PowerPC power8           (alias for power8_v2.0)
PowerPC power9_v1.0      PVR 004e0100
PowerPC power9_v2.0      PVR 004e1200
PowerPC power9           (alias for power9_v2.0)

PowerPC host

# lscpu
Architecture:          ppc64le
Byte Order:            Little Endian
CPU(s):                96
On-line CPU(s) list:   0,8,16,24,32,40,48,56,64,72,80,88
Off-line CPU(s) list:  1-7,9-15,17-23,25-31,33-39,41-47,49-55,57-63,65-71,73-79,81-87,89-95
Thread(s) per core:    1
Core(s) per socket:    6
Socket(s):             2
NUMA node(s):          2
Model:                 2.1 (pvr 004b 0201)
Model name:            POWER8E (raw), altivec supported
CPU max MHz:           3325.0000
CPU min MHz:           2061.0000
L1d cache:             64K
L1i cache:             32K
L2 cache:              512K
L3 cache:              8192K
NUMA node0 CPU(s):     0,8,16,24,32,40
NUMA node1 CPU(s):     48,56,64,72,80,88

So at this time I suspect something is wrong with "-cpu host", but I still haven't found what it is.

Comment 14 Serhii Popovych 2018-08-27 05:39:11 UTC
Xianghua, could you try to start the qemu-kvm machine with the command from comment 5 but with "-cpu host" removed? Or give me access to a p9 env to test this further.

Comment 16 Laurent Vivier 2018-08-27 14:08:27 UTC
There are two problems here:

1- nested KVM doesn't work on P9, so the test cannot start in KVM mode on P9 (as ibm-p9z-07-lp1.pnr.lab.eng.bos.redhat.com is a virtual machine,  see comment 9)

2- when the test switches to TCG mode, it passes the "-cpu host" parameter to QEMU, which is not supported by TCG.

Point 1 is already tracked by BZ 1505999, and point 2 is not a bug.

Comment 17 David Gibson 2018-08-28 01:41:48 UTC
Ah, nice catch.  -cpu host indeed can't work with TCG.

So the question is: what's adding the "-cpu host" in the first place? Is it a bug in libvirt, or in libguestfs?

Next step is to get the libvirt XML that libguestfs generates and see if that's specifying a cpu.

Comment 18 Serhii Popovych 2018-08-28 04:05:43 UTC
Here is libvirt XML dump extracted from comment 0:
--------------------------------------------------

<?xml version="1.0"?>
<domain type="kvm" xmlns:qemu="http://libvirt.org/schemas/domain/qemu/1.0">
  <name>guestfs-aktm88krwkjgpz0o</name>
  <memory unit="MiB">768</memory>
  <currentMemory unit="MiB">768</currentMemory>
  <cpu mode="host-passthrough">
    <model fallback="allow"/>
  </cpu>
  <vcpu>1</vcpu>
  <clock offset="utc">
    <timer name="rtc" tickpolicy="catchup"/>
    <timer name="pit" tickpolicy="delay"/>
  </clock>
  <os>
    <type machine="pseries">hvm</type>
    <kernel>/var/tmp/.guestfs-0/appliance.d/kernel</kernel>
    <initrd>/var/tmp/.guestfs-0/appliance.d/initrd</initrd>
    <cmdline>panic=1 console=hvc0 console=ttyS0 edd=off udevtimeout=6000 udev.event-timeout=6000 no_timer_check printk.time=1 cgroup_disable=memory usbcore.nousb cryptomgr.notests tsc=reliable 8250.nr_uarts=1 root=/dev/sdb selinux=0 guestfs_verbose=1 TERM=xterm-256color</cmdline>
  </os>
  <on_reboot>destroy</on_reboot>
  <devices>
    <rng model="virtio">
      <backend model="random">/dev/urandom</backend>
    </rng>
    <controller type="scsi" index="0" model="virtio-scsi"/>
    <disk device="disk" type="file">
      <source file="/tmp/libguestfs6n3SlG/scratch1.img"/>
      <target dev="sda" bus="scsi"/>
      <driver name="qemu" type="raw" cache="unsafe"/>
      <address type="drive" controller="0" bus="0" target="0" unit="0"/>
    </disk>
    <disk type="file" device="disk">
      <source file="/tmp/libguestfs6n3SlG/overlay2.qcow2"/>
      <target dev="sdb" bus="scsi"/>
      <driver name="qemu" type="qcow2" cache="unsafe"/>
      <address type="drive" controller="0" bus="0" target="1" unit="0"/>
    </disk>
    <serial type="unix">
      <source mode="connect" path="/tmp/libguestfscU7lOR/console.sock"/>
      <target port="0"/>
    </serial>
    <channel type="unix">
      <source mode="connect" path="/tmp/libguestfscU7lOR/guestfsd.sock"/>
      <target type="virtio" name="org.libguestfs.channel.0"/>
    </channel>
    <controller type="usb" model="none"/>
    <memballoon model="none"/>
  </devices>
  <qemu:commandline>
    <qemu:env name="TMPDIR" value="/var/tmp"/>
  </qemu:commandline>
</domain>

Looks like it is generated by libguestfs:
-----------------------------------------

libguestfs: create libvirt XML
libguestfs: libvirt XML:
<above_guest_xml>

Here are the vCPU settings:
---------------------------

  <cpu mode="host-passthrough">
    <model fallback="allow"/>
  </cpu>
  <vcpu>1</vcpu>

It seems clear that libguestfs requests the "host-passthrough" CPU mode.

Comment 19 David Gibson 2018-08-28 06:47:28 UTC
Yes, it does indeed look like a libguestfs bug then.

I wonder why it hasn't hit on x86.

Anyway, reassigning to libguestfs.

Comment 20 Richard W.M. Jones 2018-08-28 10:04:07 UTC
(In reply to Laurent Vivier from comment #16)
> 2- when test switches to TCG mode, it passes the "-cpu host" parameter to
> QEMU that is not supported by TCG.

What can we practically do to make it work?

Only QEMU knows that nested KVM isn't supported on this platform.
Many times I have asked for "-cpu best" upstream, but to no avail.

(In reply to David Gibson from comment #19)
> I wonder why it hasn't hit on x86.

Because on x86 we use a hack to guess if TCG is going to be used:
https://github.com/libguestfs/libguestfs/blob/0d0b5511309182f163ae263e862ddb0235780917/lib/launch-direct.c#L390

In the absence of "-cpu best" there's nothing else we can do.
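The x86 hack amounts to something like the following Python approximation (the real check is C in lib/launch-direct.c; this paraphrase is mine):

```python
import os

# Guess that qemu will end up on TCG when /dev/kvm is missing or not
# accessible, and only pass "-cpu host" when KVM looks usable.  This is
# an approximation for illustration, not the libguestfs source.

def probably_tcg(dev_kvm="/dev/kvm"):
    """Guess whether qemu will fall back to TCG."""
    return not (os.path.exists(dev_kvm)
                and os.access(dev_kvm, os.R_OK | os.W_OK))
```

Note that this is exactly the guess that fails in this bug: on the P9 LPAR /dev/kvm exists and is accessible, yet KVM_CREATE_VM still returns EINVAL, so the heuristic predicts KVM and "-cpu host" gets passed anyway.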

Comment 21 Laurent Vivier 2018-08-28 11:32:24 UTC
(In reply to Richard W.M. Jones from comment #20)
...
> In the absence of "-cpu best" there's nothing else we can do.

"best" has no real sense, "default" should be better.

If you don't provide the "-cpu" parameter, qemu will use the default one defined for the machine type (defined in default_cpu_type): it's "power8_v2.0" for the latest, "power7_v2.3" for qemu-2.7 and earlier, and "host" with kvm (POWER8 or POWER9).

Why do you need to provide the "-cpu" parameter to override the default cpu definition?

But I think it should be possible to add a "-cpu default" to act as if there is no "-cpu" parameter.
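Those defaults can be tabulated in a quick sketch (the values are copied from the comment above; the function itself is invented, not qemu's default_cpu_type):

```python
# pseries default CPU model per qemu version, plus the KVM special case.

def default_pseries_cpu(qemu_version, kvm):
    if kvm:
        return "host"               # KVM defaults to the host CPU
    if qemu_version <= (2, 7):
        return "power7_v2.3"        # qemu-2.7 and earlier
    return "power8_v2.0"            # latest machine types

print(default_pseries_cpu((2, 12), kvm=False))   # -> power8_v2.0
```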

Comment 22 Richard W.M. Jones 2018-08-28 11:57:00 UTC
It's possible for us to omit the -cpu parameter per architecture
(https://github.com/libguestfs/libguestfs/blob/0d0b5511309182f163ae263e862ddb0235780917/lib/appliance-cpu.c#L31)

The reason we normally want -cpu host is because we want the best
possible CPU features for things like RAID, encryption, etc.

I'll cook up a patch which omits -cpu on ppc64le.
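The shape of such a patch might look like this Python sketch (the real fix is C in lib/appliance-cpu.c, using preprocessor guards such as __powerpc64__; this model is illustrative only):

```python
# Choose the -cpu argument for the appliance per architecture; None
# means omit the flag so qemu's machine default works under both KVM
# and TCG, unlike "host".

def appliance_cpu_model(host_arch):
    """Return the -cpu argument, or None to omit it."""
    if host_arch.startswith("ppc64"):
        return None          # omit -cpu on ppc64/ppc64le
    return "host"            # elsewhere, prefer the richest CPU features

print(appliance_cpu_model("ppc64le"))   # -> None
print(appliance_cpu_model("x86_64"))    # -> host
```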

Comment 24 Richard W.M. Jones 2018-08-28 12:00:56 UTC
I guess that should be __powerpc64le__ instead of __powerpc64__?

Comment 25 Richard W.M. Jones 2018-08-28 12:54:01 UTC
I tested the patch in comment 23 (using __powerpc64__ as __powerpc64le__
is apparently not a thing), and it works.  I was also able to
reproduce the original bug without this patch.

Comment 26 Laurent Vivier 2018-08-28 13:04:33 UTC
(In reply to Richard W.M. Jones from comment #25)
> I tested the patch in comment 23 (using __powerpc64__ as __powerpc64le__
> is apparently not a thing), and it works.  I was also able to
> reproduce the original bug without this patch.

Yes, __powerpc64__ covers both big-endian and little-endian 64-bit CPUs.

Comment 27 Richard W.M. Jones 2018-08-28 22:35:46 UTC
Fixed upstream in commit 56318f0b5ffc287fed71cc7cdd2007dff2b8fb17.

Comment 33 Xianghua Chen 2018-09-04 08:20:56 UTC
I tried to verify this bug with:
libguestfs-1.38.2-12.el7.ppc64le.rpm

But libguestfs-test-tool passes in direct mode and fails in libvirt mode. Is that expected?

=======================================================
# export  LIBGUESTFS_BACKEND=direct;libguestfs-test-tool
... ...
libguestfs: finished testing qemu features
/usr/libexec/qemu-kvm \
    -global virtio-blk-pci.scsi=off \
    -enable-fips \
    -nodefaults \
    -display none \
    -machine pseries,accel=kvm:tcg \
    -m 768 \
    -no-reboot \
    -rtc driftfix=slew \
    -kernel /var/tmp/.guestfs-0/appliance.d/kernel \
    -initrd /var/tmp/.guestfs-0/appliance.d/initrd \
    -object rng-random,filename=/dev/urandom,id=rng0 \
    -device virtio-rng-pci,rng=rng0 \
    -device virtio-scsi-pci,id=scsi \
    -drive file=/tmp/libguestfsNJczwU/scratch1.img,cache=unsafe,format=raw,id=hd0,if=none \
    -device scsi-hd,drive=hd0 \
    -drive file=/var/tmp/.guestfs-0/appliance.d/root,snapshot=on,id=appliance,cache=unsafe,if=none,format=raw \
    -device scsi-hd,drive=appliance \
    -device virtio-serial-pci \
    -serial stdio \
    -chardev socket,path=/tmp/libguestfsmhXEju/guestfsd.sock,id=channel0 \
    -device virtserialport,chardev=channel0,name=org.libguestfs.channel.0 \
    -append "panic=1 console=hvc0 console=ttyS0 edd=off udevtimeout=6000 udev.event-timeout=6000 no_timer_check printk.time=1 cgroup_disable=memory usbcore.nousb cryptomgr.notests tsc=reliable 8250.nr_uarts=1 root=/dev/sdb selinux=0 guestfs_verbose=1 TERM=xterm-256color"
ioctl(KVM_CREATE_VM) failed: 22 Invalid argument
qemu-kvm: failed to initialize KVM: Invalid argument
qemu-kvm: Back to tcg accelerator


SLOF ******************************************************************
QEMU Starting
 Build Date = Jun  1 2018 06:24:16
 FW Version = mockbuild@ release 20171214
 Press "s" to enter Open Firmware.
... ...
===== TEST FINISHED OK =====
=========================================================

=========================================================
# export  LIBGUESTFS_BACKEND=libvirt;libguestfs-test-tool
... ...
libguestfs: error: could not create appliance through libvirt.

Try running qemu directly without libvirt using this environment variable:
export LIBGUESTFS_BACKEND=direct

Original error from libvirt: internal error: qemu unexpectedly closed the monitor: ioctl(KVM_CREATE_VM) failed: 22 Invalid argument
2018-09-04T08:16:17.341737Z qemu-kvm: failed to initialize KVM: Invalid argument [code=1 int1=-1]
libguestfs: closing guestfs handle 0x10032e91250 (state 0)
libguestfs: command: run: rm
libguestfs: command: run: \ -rf /tmp/libguestfsirGypz
libguestfs: command: run: rm
libguestfs: command: run: \ -rf /tmp/libguestfsAbysD2
=========================================================

Comment 34 Richard W.M. Jones 2018-09-04 09:38:08 UTC
For the libvirt case, can you find the qemu log file.
Under /var/log/libvirt/qemu/ and called something like
guestfs-XXXX.log.

Comment 35 Xianghua Chen 2018-09-04 10:02:50 UTC
Yes, of course.
# cat /var/log/libvirt/qemu/guestfs-28z6lh6er29ftjco.log
2018-09-04 10:00:22.549+0000: starting up libvirt version: 4.5.0, package: 7.el7 (Red Hat, Inc. <http://bugzilla.redhat.com/bugzilla>, 2018-08-21-09:08:44, ppc-059.build.eng.bos.redhat.com), qemu version: 2.12.0qemu-kvm-ma-2.12.0-12.el7, kernel: 4.14.0-106.el7a.ppc64le, hostname: ibm-p9z-06-lp7.pnr.lab.eng.bos.redhat.com
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin QEMU_AUDIO_DRV=none TMPDIR=/var/tmp /usr/libexec/qemu-kvm -name guest=guestfs-28z6lh6er29ftjco,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-3-guestfs-28z6lh6er29f/master-key.aes -machine pseries-rhel7.6.0,accel=kvm,usb=off,dump-guest-core=off -m 768 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid f9356155-0e4b-482a-b164-5c5c2e3c98f3 -display none -no-user-config -nodefaults -chardev socket,id=charmonitor,fd=25,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -no-reboot -boot strict=on -kernel /var/tmp/.guestfs-0/appliance.d/kernel -initrd /var/tmp/.guestfs-0/appliance.d/initrd -append 'panic=1 console=hvc0 console=ttyS0 edd=off udevtimeout=6000 udev.event-timeout=6000 no_timer_check printk.time=1 cgroup_disable=memory usbcore.nousb cryptomgr.notests tsc=reliable 8250.nr_uarts=1 root=/dev/sdb selinux=0 guestfs_verbose=1 TERM=xterm-256color' -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x1 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x2 -drive file=/tmp/libguestfsyLDvXv/scratch1.img,format=raw,if=none,id=drive-scsi0-0-0-0,cache=unsafe -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0,bootindex=1,write-cache=on -drive file=/tmp/libguestfsyLDvXv/overlay2.qcow2,format=qcow2,if=none,id=drive-scsi0-0-1-0,cache=unsafe -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=1,lun=0,drive=drive-scsi0-0-1-0,id=scsi0-0-1-0,write-cache=on -chardev socket,id=charserial0,path=/tmp/libguestfsoEBtXh/console.sock -device spapr-vty,chardev=charserial0,id=serial0,reg=0x30000000 -chardev socket,id=charchannel0,path=/tmp/libguestfsoEBtXh/guestfsd.sock -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.libguestfs.channel.0 -object rng-random,id=objrng0,filename=/dev/urandom -device 
virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x3 -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny -msg timestamp=on
2018-09-04 10:00:22.549+0000: Domain id=3 is tainted: custom-argv
ioctl(KVM_CREATE_VM) failed: 22 Invalid argument
2018-09-04T10:00:23.252774Z qemu-kvm: failed to initialize KVM: Invalid argument
2018-09-04 10:00:23.306+0000: shutting down, reason=failed

Comment 36 Richard W.M. Jones 2018-09-04 10:19:50 UTC
> -machine pseries-rhel7.6.0,accel=kvm

It's using accel=kvm (instead of accel=kvm:tcg) for some reason.
Really need to see the full libguestfs log from the failing run.

Comment 37 Xianghua Chen 2018-09-05 02:24:59 UTC
(In reply to Richard W.M. Jones from comment #36)
> > -machine pseries-rhel7.6.0,accel=kvm
> 
> It's using accel=kvm (instead of accel=kvm:tcg) for some reason.
> Really need to see the full libguestfs log from the failing run.
This is the failing log:

# export  LIBGUESTFS_BACKEND=libvirt;libguestfs-test-tool
     ************************************************************
     *                    IMPORTANT NOTICE
     *
     * When reporting bugs, include the COMPLETE, UNEDITED
     * output below in your bug report.
     *
     ************************************************************
LIBGUESTFS_BACKEND=libvirt
PATH=/usr/lib64/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin
XDG_RUNTIME_DIR=/run/user/0
SELinux: Enforcing
guestfs_get_append: (null)
guestfs_get_autosync: 1
guestfs_get_backend: libvirt
guestfs_get_backend_settings: []
guestfs_get_cachedir: /var/tmp
guestfs_get_hv: /usr/libexec/qemu-kvm
guestfs_get_memsize: 768
guestfs_get_network: 0
guestfs_get_path: /usr/lib64/guestfs
guestfs_get_pgroup: 0
guestfs_get_program: libguestfs-test-tool
guestfs_get_recovery_proc: 1
guestfs_get_smp: 1
guestfs_get_sockdir: /tmp
guestfs_get_tmpdir: /tmp
guestfs_get_trace: 0
guestfs_get_verbose: 1
host_cpu: powerpc64le
Launching appliance, timeout set to 600 seconds.
libguestfs: launch: program=libguestfs-test-tool
libguestfs: launch: version=1.38.2rhel=7,release=12.el7,libvirt
libguestfs: launch: backend registered: unix
libguestfs: launch: backend registered: uml
libguestfs: launch: backend registered: libvirt
libguestfs: launch: backend registered: direct
libguestfs: launch: backend=libvirt
libguestfs: launch: tmpdir=/tmp/libguestfsyLDvXv
libguestfs: launch: umask=0022
libguestfs: launch: euid=0
libguestfs: libvirt version = 4005000 (4.5.0)
libguestfs: guest random name = guestfs-28z6lh6er29ftjco
libguestfs: connect to libvirt
libguestfs: opening libvirt handle: URI = qemu:///system, auth = default+wrapper, flags = 0
libguestfs: successfully opened libvirt handle: conn = 0x100205d2c70
libguestfs: qemu version (reported by libvirt) = 2012000 (2.12.0)
libguestfs: get libvirt capabilities
libguestfs: parsing capabilities XML
libguestfs: build appliance
libguestfs: begin building supermin appliance
libguestfs: run supermin
libguestfs: command: run: /usr/bin/supermin5
libguestfs: command: run: \ --build
libguestfs: command: run: \ --verbose
libguestfs: command: run: \ --if-newer
libguestfs: command: run: \ --lock /var/tmp/.guestfs-0/lock
libguestfs: command: run: \ --copy-kernel
libguestfs: command: run: \ -f ext2
libguestfs: command: run: \ --host-cpu powerpc64le
libguestfs: command: run: \ /usr/lib64/guestfs/supermin.d
libguestfs: command: run: \ -o /var/tmp/.guestfs-0/appliance.d
supermin: version: 5.1.19
supermin: rpm: detected RPM version 4.11
supermin: package handler: fedora/rpm
supermin: acquiring lock on /var/tmp/.guestfs-0/lock
supermin: if-newer: output does not need rebuilding
libguestfs: finished building supermin appliance
libguestfs: command: run: qemu-img
libguestfs: command: run: \ create
libguestfs: command: run: \ -f qcow2
libguestfs: command: run: \ -o backing_file=/var/tmp/.guestfs-0/appliance.d/root,backing_fmt=raw
libguestfs: command: run: \ /tmp/libguestfsyLDvXv/overlay2.qcow2
Formatting '/tmp/libguestfsyLDvXv/overlay2.qcow2', fmt=qcow2 size=4294967296 backing_file=/var/tmp/.guestfs-0/appliance.d/root backing_fmt=raw cluster_size=65536 lazy_refcounts=off refcount_bits=16
libguestfs: create libvirt XML
libguestfs: libvirt XML:\n<?xml version="1.0"?>\n<domain type="kvm" xmlns:qemu="http://libvirt.org/schemas/domain/qemu/1.0">\n  <name>guestfs-28z6lh6er29ftjco</name>\n  <memory unit="MiB">768</memory>\n  <currentMemory unit="MiB">768</currentMemory>\n  <vcpu>1</vcpu>\n  <clock offset="utc">\n    <timer name="rtc" tickpolicy="catchup"/>\n    <timer name="pit" tickpolicy="delay"/>\n  </clock>\n  <os>\n    <type machine="pseries">hvm</type>\n    <kernel>/var/tmp/.guestfs-0/appliance.d/kernel</kernel>\n    <initrd>/var/tmp/.guestfs-0/appliance.d/initrd</initrd>\n    <cmdline>panic=1 console=hvc0 console=ttyS0 edd=off udevtimeout=6000 udev.event-timeout=6000 no_timer_check printk.time=1 cgroup_disable=memory usbcore.nousb cryptomgr.notests tsc=reliable 8250.nr_uarts=1 root=/dev/sdb selinux=0 guestfs_verbose=1 TERM=xterm-256color</cmdline>\n  </os>\n  <on_reboot>destroy</on_reboot>\n  <devices>\n    <rng model="virtio">\n      <backend model="random">/dev/urandom</backend>\n    </rng>\n    <controller type="scsi" index="0" model="virtio-scsi"/>\n    <disk device="disk" type="file">\n      <source file="/tmp/libguestfsyLDvXv/scratch1.img"/>\n      <target dev="sda" bus="scsi"/>\n      <driver name="qemu" type="raw" cache="unsafe"/>\n      <address type="drive" controller="0" bus="0" target="0" unit="0"/>\n    </disk>\n    <disk type="file" device="disk">\n      <source file="/tmp/libguestfsyLDvXv/overlay2.qcow2"/>\n      <target dev="sdb" bus="scsi"/>\n      <driver name="qemu" type="qcow2" cache="unsafe"/>\n      <address type="drive" controller="0" bus="0" target="1" unit="0"/>\n    </disk>\n    <serial type="unix">\n      <source mode="connect" path="/tmp/libguestfsoEBtXh/console.sock"/>\n      <target port="0"/>\n    </serial>\n    <channel type="unix">\n      <source mode="connect" path="/tmp/libguestfsoEBtXh/guestfsd.sock"/>\n      <target type="virtio" name="org.libguestfs.channel.0"/>\n    </channel>\n    <controller type="usb" model="none"/>\n    <memballoon 
model="none"/>\n  </devices>\n  <qemu:commandline>\n    <qemu:env name="TMPDIR" value="/var/tmp"/>\n  </qemu:commandline>\n</domain>\n
libguestfs: command: run: ls
libguestfs: command: run: \ -a
libguestfs: command: run: \ -l
libguestfs: command: run: \ -R
libguestfs: command: run: \ -Z /var/tmp/.guestfs-0
libguestfs: /var/tmp/.guestfs-0:
libguestfs: drwxr-xr-x. root root unconfined_u:object_r:user_tmp_t:s0 .
libguestfs: drwxrwxrwt. root root system_u:object_r:tmp_t:s0       ..
libguestfs: drwxr-xr-x. root root unconfined_u:object_r:user_tmp_t:s0 appliance.d
libguestfs: -rw-r--r--. root root unconfined_u:object_r:user_tmp_t:s0 lock
libguestfs: -rw-r--r--. root root unconfined_u:object_r:user_tmp_t:s0 qemu-11001208-1535470171.devices
libguestfs: -rw-r--r--. root root unconfined_u:object_r:user_tmp_t:s0 qemu-11001208-1535470171.help
libguestfs: -rw-r--r--. root root unconfined_u:object_r:user_tmp_t:s0 qemu-11001208-1535470171.qmp-schema
libguestfs: -rw-r--r--. root root unconfined_u:object_r:user_tmp_t:s0 qemu-11001208-1535470171.stat
libguestfs: 
libguestfs: /var/tmp/.guestfs-0/appliance.d:
libguestfs: drwxr-xr-x. root root unconfined_u:object_r:user_tmp_t:s0 .
libguestfs: drwxr-xr-x. root root unconfined_u:object_r:user_tmp_t:s0 ..
libguestfs: -rw-r--r--. qemu qemu system_u:object_r:virt_content_t:s0 initrd
libguestfs: -rwxr-xr-x. qemu qemu system_u:object_r:virt_content_t:s0 kernel
libguestfs: -rw-r--r--. qemu qemu system_u:object_r:virt_content_t:s0 root
libguestfs: command: run: ls
libguestfs: command: run: \ -a
libguestfs: command: run: \ -l
libguestfs: command: run: \ -Z /tmp/libguestfsoEBtXh
libguestfs: drwxr-xr-x. root root unconfined_u:object_r:user_tmp_t:s0 .
libguestfs: drwxrwxrwt. root root system_u:object_r:tmp_t:s0       ..
libguestfs: srw-rw----. root qemu unconfined_u:object_r:user_tmp_t:s0 console.sock
libguestfs: srw-rw----. root qemu unconfined_u:object_r:user_tmp_t:s0 guestfsd.sock
libguestfs: launch libvirt guest
libguestfs: error: could not create appliance through libvirt.

Try running qemu directly without libvirt using this environment variable:
export LIBGUESTFS_BACKEND=direct

Original error from libvirt: internal error: qemu unexpectedly closed the monitor: ioctl(KVM_CREATE_VM) failed: 22 Invalid argument
2018-09-04T10:00:23.252774Z qemu-kvm: failed to initialize KVM: Invalid argument [code=1 int1=-1]
libguestfs: closing guestfs handle 0x100205d1250 (state 0)
libguestfs: command: run: rm
libguestfs: command: run: \ -rf /tmp/libguestfsyLDvXv
libguestfs: command: run: rm
libguestfs: command: run: \ -rf /tmp/libguestfsoEBtXh

Comment 38 Richard W.M. Jones 2018-09-05 08:56:34 UTC
I think what's happening is that the libvirt capabilities are
advertising KVM, but KVM is not actually being provided by the
hardware, so this would be a libvirt bug.

Could you attach the output from:

# virsh -c qemu:///system capabilities

Comment 39 Xianghua Chen 2018-09-05 09:49:52 UTC
(In reply to Richard W.M. Jones from comment #38)
> I think what's happening is that the libvirt capabilities are
> advertising KVM, but KVM is not actually being provided by the
> hardware, so this would be a libvirt bug.
> 
> Could you attach the output from:
> 
> # virsh -c qemu:///system capabilities

# virsh -c qemu:///system capabilities
<capabilities>

  <host>
    <uuid>57366057-6a42-48da-a578-e2ddee7b8648</uuid>
    <cpu>
      <arch>ppc64le</arch>
      <model>POWER9</model>
      <vendor>IBM</vendor>
      <topology sockets='1' cores='1' threads='8'/>
      <pages unit='KiB' size='64'/>
      <pages unit='KiB' size='16384'/>
      <pages unit='KiB' size='16777216'/>
    </cpu>
    <power_management>
      <suspend_mem/>
    </power_management>
    <iommu support='no'/>
    <migration_features>
      <live/>
      <uri_transports>
        <uri_transport>tcp</uri_transport>
        <uri_transport>rdma</uri_transport>
      </uri_transports>
    </migration_features>
    <topology>
      <cells num='1'>
        <cell id='0'>
          <memory unit='KiB'>32448704</memory>
          <pages unit='KiB' size='64'>507011</pages>
          <pages unit='KiB' size='16384'>0</pages>
          <pages unit='KiB' size='16777216'>0</pages>
          <distances>
            <sibling id='0' value='10'/>
          </distances>
          <cpus num='8'>
            <cpu id='0' socket_id='0' core_id='0' siblings='0-7'/>
            <cpu id='1' socket_id='0' core_id='0' siblings='0-7'/>
            <cpu id='2' socket_id='0' core_id='0' siblings='0-7'/>
            <cpu id='3' socket_id='0' core_id='0' siblings='0-7'/>
            <cpu id='4' socket_id='0' core_id='0' siblings='0-7'/>
            <cpu id='5' socket_id='0' core_id='0' siblings='0-7'/>
            <cpu id='6' socket_id='0' core_id='0' siblings='0-7'/>
            <cpu id='7' socket_id='0' core_id='0' siblings='0-7'/>
          </cpus>
        </cell>
      </cells>
    </topology>
    <secmodel>
      <model>selinux</model>
      <doi>0</doi>
      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
    </secmodel>
    <secmodel>
      <model>dac</model>
      <doi>0</doi>
      <baselabel type='kvm'>+107:+107</baselabel>
      <baselabel type='qemu'>+107:+107</baselabel>
    </secmodel>
  </host>

  <guest>
    <os_type>hvm</os_type>
    <arch name='ppc64'>
      <wordsize>64</wordsize>
      <emulator>/usr/libexec/qemu-kvm</emulator>
      <machine maxCpus='240'>pseries-rhel7.6.0</machine>
      <machine canonical='pseries-rhel7.6.0' maxCpus='240'>pseries</machine>
      <machine maxCpus='240'>pseries-rhel7.6.0-sxxm</machine>
      <machine maxCpus='240'>pseries-rhel7.5.0</machine>
      <machine maxCpus='240'>pseries-rhel7.5.0-sxxm</machine>
      <domain type='qemu'/>
      <domain type='kvm'>
        <emulator>/usr/libexec/qemu-kvm</emulator>
      </domain>
    </arch>
    <features>
      <cpuselection/>
      <deviceboot/>
      <disksnapshot default='off' toggle='no'/>
    </features>
  </guest>

  <guest>
    <os_type>hvm</os_type>
    <arch name='ppc64le'>
      <wordsize>64</wordsize>
      <emulator>/usr/libexec/qemu-kvm</emulator>
      <machine maxCpus='240'>pseries-rhel7.6.0</machine>
      <machine canonical='pseries-rhel7.6.0' maxCpus='240'>pseries</machine>
      <machine maxCpus='240'>pseries-rhel7.6.0-sxxm</machine>
      <machine maxCpus='240'>pseries-rhel7.5.0</machine>
      <machine maxCpus='240'>pseries-rhel7.5.0-sxxm</machine>
      <domain type='qemu'/>
      <domain type='kvm'>
        <emulator>/usr/libexec/qemu-kvm</emulator>
      </domain>
    </arch>
    <features>
      <cpuselection/>
      <deviceboot/>
      <disksnapshot default='off' toggle='no'/>
    </features>
  </guest>

</capabilities>

Comment 40 Richard W.M. Jones 2018-09-05 10:52:40 UTC
<guest>
    <os_type>hvm</os_type>
    <arch name='ppc64le'>
      <wordsize>64</wordsize>
      <emulator>/usr/libexec/qemu-kvm</emulator>
      <machine maxCpus='240'>pseries-rhel7.6.0</machine>
      <machine canonical='pseries-rhel7.6.0' maxCpus='240'>pseries</machine>
      <machine maxCpus='240'>pseries-rhel7.6.0-sxxm</machine>
      <machine maxCpus='240'>pseries-rhel7.5.0</machine>
      <machine maxCpus='240'>pseries-rhel7.5.0-sxxm</machine>
      <domain type='qemu'/>
      <domain type='kvm'>       <---

libvirt is telling us that KVM is available, so we request it.

I suspect a bug in libvirt, but let me ask the libvirt team first.

Comment 41 Richard W.M. Jones 2018-09-05 11:02:05 UTC
libvirt detects if KVM is available using this test which basically
looks for /dev/kvm and if the qemu binary has support for KVM:

https://github.com/libvirt/libvirt/blob/e9e904b3b70533982954ab39ccb81122e8dad338/src/qemu/qemu_capabilities.c#L837

Because /dev/kvm is present (but not working) libvirt thinks
KVM is available, and so advertises it.  libguestfs picks this
up and tries to run the appliance with KVM (and without the
fallback to TCG since that's not possible for running VMs
through libvirt).
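
The difference between checking that the node exists and checking that it can be opened can be sketched as follows (a hypothetical illustration in Python, not libvirt's actual code):

```python
import os

def kvm_node_exists(path="/dev/kvm"):
    # The weak check: the device node exists.  On hosts where udev creates
    # /dev/kvm as a static node, this succeeds even when no KVM module is
    # loaded, so it is not evidence that KVM works.
    return os.path.exists(path)

def kvm_node_openable(path="/dev/kvm"):
    # A stronger check: the node can actually be opened.  Note that even
    # this can pass on the affected machines -- qemu only fails later, at
    # the KVM_CREATE_VM ioctl -- so it is still not a definitive test.
    try:
        fd = os.open(path, os.O_RDWR)
    except OSError:
        return False
    os.close(fd)
    return True
```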

Comment 42 Laurent Vivier 2018-09-05 14:25:08 UTC
(In reply to Richard W.M. Jones from comment #41)
> libvirt detects if KVM is available using this test which basically
> looks for /dev/kvm and if the qemu binary has support for KVM:
> 
> https://github.com/libvirt/libvirt/blob/
> e9e904b3b70533982954ab39ccb81122e8dad338/src/qemu/qemu_capabilities.c#L837
> 
> Because /dev/kvm is present (but not working) libvirt thinks
> KVM is available, and so advertises it.  libguestfs picks this
> up and tries to run the appliance with KVM (and without the
> fallback to TCG since that's not possible for running VMs
> through libvirt).

kvm-pr.ko is not loaded on POWER9; this is checked by:

arch/powerpc/kvm/book3s_pr.c:

static int kvmppc_core_check_processor_compat_pr(void)
{
        /*
         * PR KVM can work on POWER9 inside a guest partition
         * running in HPT mode.  It can't work if we are using
         * radix translation (because radix provides no way for
         * a process to have unique translations in quadrant 3).
         */
        if (cpu_has_feature(CPU_FTR_ARCH_300) && radix_enabled())
                return -EIO;
        return 0;
}

But /dev/kvm is always created, because of the udev rule from the qemu-kvm-rhev package:

/usr/lib/udev/rules.d/80-kvm.rules

KERNEL=="kvm", GROUP="kvm", MODE="0666", OPTIONS+="static_node=kvm"

So, checking for /dev/kvm is not the right way to check for kvm module availability.

udev(7)

          static_node=
...
               The static nodes might not have a corresponding kernel device;
               they are used to trigger automatic kernel module loading when
               they are accessed.

Looks like a bug in libvirt?
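
A probe that actually exercises the KVM ioctls, rather than just the device node, would catch this case.  Here is a rough sketch; the ioctl numbers are the standard ones from <linux/kvm.h>, but the helper itself is made up for illustration:

```python
import fcntl
import os

# KVM ioctl numbers from <linux/kvm.h>; KVMIO is 0xAE.
KVM_GET_API_VERSION = 0xAE00  # _IO(KVMIO, 0x00)
KVM_CREATE_VM = 0xAE01        # _IO(KVMIO, 0x01)

def probe_kvm(path="/dev/kvm"):
    """Return None if KVM looks usable, otherwise a short reason string."""
    try:
        fd = os.open(path, os.O_RDWR | os.O_CLOEXEC)
    except OSError as e:
        return "cannot open %s: %s" % (path, e.strerror)
    try:
        if fcntl.ioctl(fd, KVM_GET_API_VERSION, 0) != 12:
            return "unexpected KVM API version"
        try:
            # 0 selects the default machine type.  On the affected hosts
            # this is exactly where qemu fails, with EINVAL.
            vm_fd = fcntl.ioctl(fd, KVM_CREATE_VM, 0)
        except OSError as e:
            return "KVM_CREATE_VM failed: %s" % e.strerror
        os.close(vm_fd)
    finally:
        os.close(fd)
    return None
```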

Comment 43 Richard W.M. Jones 2018-09-05 14:51:12 UTC
FWIW libguestfs in direct mode checks that /dev/kvm is openable
rather than merely the node exists:
https://github.com/libguestfs/libguestfs/blob/2c349a00d27957911ea0ee7704420aec0715eea9/lib/launch-direct.c#L396
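
The practical effect of that check in the direct backend is the choice of qemu's accel= value.  A minimal sketch of the idea (a hypothetical helper, not the actual launch-direct.c logic):

```python
import os

def choose_accel(force_tcg=False, path="/dev/kvm"):
    # If /dev/kvm cannot be opened, request plain TCG.  If it can, request
    # "kvm:tcg", which lets qemu itself fall back to TCG when KVM fails at
    # runtime -- this is why the direct backend recovers ("Back to tcg
    # accelerator") while the libvirt backend, which pins accel=kvm, aborts.
    if force_tcg:
        return "tcg"
    try:
        fd = os.open(path, os.O_RDWR)
    except OSError:
        return "tcg"
    os.close(fd)
    return "kvm:tcg"
```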

Comment 44 Pino Toscano 2018-09-05 15:37:12 UTC
FWIW, the static kvm node was introduced recently:
https://bugzilla.redhat.com/show_bug.cgi?id=1532382
https://github.com/systemd/systemd/commit/d35d6249d5a7ed3228
(The bug was originally reported for s390x, bug 1527947 is related to it.)

Comment 45 Xianghua Chen 2018-09-10 03:09:19 UTC
(In reply to Richard W.M. Jones from comment #43)
> FWIW libguestfs in direct mode checks that /dev/kvm is openable
> rather than merely the node exists:
> https://github.com/libguestfs/libguestfs/blob/
> 2c349a00d27957911ea0ee7704420aec0715eea9/lib/launch-direct.c#L396

Hi, should I change the status back to "ASSIGNED" so that you can do something about the "libvirt" mode?
Or should I just verify this bug, since direct mode is OK and there will be a separate libvirt bug to resolve this problem?

Comment 46 Richard W.M. Jones 2018-09-13 07:45:21 UTC
This isn't really a bug in libguestfs, probably not even in libvirt.

We need a way to detect reliably if KVM is working.  Both are
currently looking at /dev/kvm to see if that's true, and that's
not really a good test.  But there's nothing better to replace it
with.

For this reason I have opened some new bugs:

https://bugzilla.redhat.com/show_bug.cgi?id=1628468
No reliable way to detect that KVM is working

https://bugzilla.redhat.com/show_bug.cgi?id=1628469
libvirt uses incorrect method to detect that KVM is working

Comment 47 Andrea Bolognani 2018-09-13 08:39:39 UTC
(In reply to Richard W.M. Jones from comment #46)
> This isn't really a bug in libguestfs, probably not even in libvirt.
> 
> We need a way to detect reliably if KVM is working.  Both are
> currently looking at /dev/kvm to see if that's true, and that's
> not really a good test.  But there's nothing better to replace it
> with.

I looked into this yesterday and managed to reproduce it;
unfortunately I didn't get to updating the bug before leaving
for the day :(

There *is* a better way to detect whether KVM is working, which
is the query-kvm QMP command: in a POWER9 guest, it correctly
reports that KVM is not enabled.

Unfortunately it looks like libvirt is messing with the result
and reporting incorrect information through capabilities, which
in turn leads to libguestfs attempting to create a KVM guest
instead of a TCG one.
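
As an illustration of what query-kvm reports, here is a sketch that parses a QMP reply.  The sample reply is illustrative, but the 'enabled' and 'present' fields are the real ones returned by the query-kvm command:

```python
import json

def kvm_usable(qmp_reply):
    """Parse the JSON reply to the QMP 'query-kvm' command and report
    whether KVM is both present and actually enabled."""
    ret = json.loads(qmp_reply)["return"]
    return ret["present"] and ret["enabled"]

# On the affected POWER9 guest the reply would look roughly like this:
# present (the /dev/kvm node exists) but not enabled (KVM init failed).
sample = '{"return": {"enabled": false, "present": true}}'
# kvm_usable(sample) -> False
```

The present=true / enabled=false combination is exactly the "advertised but not working" state that trips up the capabilities-based detection.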

> For this reason I have opened some new bugs:
> 
> https://bugzilla.redhat.com/show_bug.cgi?id=1628468
> No reliable way to detect that KVM is working

So this bug is not needed. I'll add a comment and close it
accordingly.

> https://bugzilla.redhat.com/show_bug.cgi?id=1628469
> libvirt uses incorrect method to detect that KVM is working

This one is a real bug, though, and I've already started looking
into it.

Comment 48 Richard W.M. Jones 2018-09-13 11:30:16 UTC
Patch posted:
https://www.redhat.com/archives/libguestfs/2018-September/msg00072.html

Comment 49 Xianghua Chen 2018-09-14 02:34:55 UTC
(In reply to Richard W.M. Jones from comment #48)
> Patch posted:
> https://www.redhat.com/archives/libguestfs/2018-September/msg00072.html

Thank you for your reply.
I see you've changed:
  Target Release: 7.6 → 7.7

So, should it be removed from the errata advisory?
 https://errata.devel.redhat.com/advisory/33252 
Can you work on it? Thanks a lot.

Comment 50 Richard W.M. Jones 2018-09-14 07:51:13 UTC
Yes it should, I'll remove it now.

Comment 53 Joseph Kachuck 2018-10-17 19:09:13 UTC
Hello,
This BZ has been approved for RHEL 7.7. 

RHEL ALT 7.6 is the last release of RHEL ALT. There will be one final release of RHEL ALT 7.6.z, and only critical bugs will be accepted for it.
If this bug is required for RHEL ALT 7.6.z, please provide a justification for why it is needed in the Z stream, and confirm what a client would see in the field from this issue.

Thank You
Joe Kachuck

Comment 54 Richard W.M. Jones 2018-10-17 19:56:46 UTC
I don't believe any request has been made for z-stream.

Comment 55 David Gibson 2018-10-17 23:20:23 UTC
In any case libguestfs is in mainstream RHEL, only the kernel is different in RHEL-ALT-7.6, so the end of the alt series isn't relevant here.

Comment 57 Pino Toscano 2019-01-17 12:09:59 UTC
This bug will be fixed by the rebase scheduled for RHEL 7.7, see bug 1621895.

Comment 59 YongkuiGuo 2019-01-25 11:19:13 UTC
rjones, I tried to verify this bug with the fixed version 'libguestfs-1.40.1-1.el7' on ppc64le (p9+alt), but it still failed with the libvirt backend.

1.
#LIBGUESTFS_BACKEND=libvirt libguestfs-test-tool 
...
libguestfs: error: could not create appliance through libvirt.

Try running qemu directly without libvirt using this environment variable:
export LIBGUESTFS_BACKEND=direct

Original error from libvirt: internal error: qemu unexpectedly closed the monitor: ioctl(KVM_CREATE_VM) failed: 22 Invalid argument
2019-01-25T11:07:33.930070Z qemu-kvm: failed to initialize KVM: Invalid argument [code=1 int1=-1]
libguestfs: closing guestfs handle 0x10023261250 (state 0) 

2.
#LIBGUESTFS_BACKEND=direct libguestfs-test-tool   --- OK


Should we wait for the fix of bug 1628469 and update the libvirt version?

Comment 60 Richard W.M. Jones 2019-01-25 11:20:37 UTC
Difficult to tell.  In all cases we need to see the full output of libguestfs-test-tool
to determine what's going on.

Comment 61 YongkuiGuo 2019-01-25 11:30:26 UTC
# LIBGUESTFS_BACKEND=libvirt libguestfs-test-tool 
LIBGUESTFS_BACKEND=libvirt
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin
XDG_RUNTIME_DIR=/run/user/0
SELinux: Enforcing
guestfs_get_append: (null)
guestfs_get_autosync: 1
guestfs_get_backend: libvirt
guestfs_get_backend_settings: []
guestfs_get_cachedir: /var/tmp
guestfs_get_hv: /usr/libexec/qemu-kvm
guestfs_get_memsize: 1024
guestfs_get_network: 0
guestfs_get_path: /usr/lib64/guestfs
guestfs_get_pgroup: 0
guestfs_get_program: libguestfs-test-tool
guestfs_get_recovery_proc: 1
guestfs_get_smp: 1
guestfs_get_sockdir: /tmp
guestfs_get_tmpdir: /tmp
guestfs_get_trace: 0
guestfs_get_verbose: 1
host_cpu: powerpc64le
Launching appliance, timeout set to 600 seconds.
libguestfs: launch: program=libguestfs-test-tool
libguestfs: launch: version=1.40.1rhel=7,release=1.el7,libvirt
libguestfs: launch: backend registered: unix
libguestfs: launch: backend registered: uml
libguestfs: launch: backend registered: libvirt
libguestfs: launch: backend registered: direct
libguestfs: launch: backend=libvirt
libguestfs: launch: tmpdir=/tmp/libguestfsko3SYD
libguestfs: launch: umask=0022
libguestfs: launch: euid=0
libguestfs: libvirt version = 4005000 (4.5.0)
libguestfs: guest random name = guestfs-cnihfkhvgx8au7ci
libguestfs: connect to libvirt
libguestfs: opening libvirt handle: URI = qemu:///system, auth = default+wrapper, flags = 0
libguestfs: successfully opened libvirt handle: conn = 0x10023b32c80
libguestfs: qemu version (reported by libvirt) = 2012000 (2.12.0)
libguestfs: get libvirt capabilities
libguestfs: parsing capabilities XML
libguestfs: build appliance
libguestfs: begin building supermin appliance
libguestfs: run supermin
libguestfs: command: run: /usr/bin/supermin5
libguestfs: command: run: \ --build
libguestfs: command: run: \ --verbose
libguestfs: command: run: \ --if-newer
libguestfs: command: run: \ --lock /var/tmp/.guestfs-0/lock
libguestfs: command: run: \ --copy-kernel
libguestfs: command: run: \ -f ext2
libguestfs: command: run: \ --host-cpu powerpc64le
libguestfs: command: run: \ /usr/lib64/guestfs/supermin.d
libguestfs: command: run: \ -o /var/tmp/.guestfs-0/appliance.d
supermin: version: 5.1.19
supermin: rpm: detected RPM version 4.11
supermin: package handler: fedora/rpm
supermin: acquiring lock on /var/tmp/.guestfs-0/lock
supermin: build: /usr/lib64/guestfs/supermin.d
supermin: reading the supermin appliance
supermin: build: visiting /usr/lib64/guestfs/supermin.d/base.tar.gz type gzip base image (tar)
supermin: build: visiting /usr/lib64/guestfs/supermin.d/daemon.tar.gz type gzip base image (tar)
supermin: build: visiting /usr/lib64/guestfs/supermin.d/excludefiles type uncompressed excludefiles
supermin: build: visiting /usr/lib64/guestfs/supermin.d/hostfiles type uncompressed hostfiles
supermin: build: visiting /usr/lib64/guestfs/supermin.d/init.tar.gz type gzip base image (tar)
supermin: build: visiting /usr/lib64/guestfs/supermin.d/packages type uncompressed packages
supermin: build: visiting /usr/lib64/guestfs/supermin.d/udev-rules.tar.gz type gzip base image (tar)
supermin: build: visiting /usr/lib64/guestfs/supermin.d/zz-packages-rescue type uncompressed packages
supermin: build: visiting /usr/lib64/guestfs/supermin.d/zz-packages-rsync type uncompressed packages
supermin: build: visiting /usr/lib64/guestfs/supermin.d/zz-packages-xfs type uncompressed packages
supermin: mapping package names to installed packages
supermin: resolving full list of package dependencies
supermin: build: 206 packages, including dependencies
supermin: build: 31239 files
supermin: build: 7583 files, after matching excludefiles
supermin: build: 7591 files, after adding hostfiles
supermin: build: 7580 files, after removing unreadable files
supermin: build: 7604 files, after munging
supermin: kernel: looking for kernel using environment variables ...
supermin: kernel: looking for kernels in /lib/modules/*/vmlinuz ...
supermin: kernel: looking for kernels in /boot ...
supermin: kernel: kernel version of /boot/vmlinuz-4.14.0-115.el7a.ppc64le = 4.14.0-115.el7a.ppc64le (from filename)
supermin: kernel: picked modules path /lib/modules/4.14.0-115.el7a.ppc64le
supermin: kernel: kernel version of /boot/vmlinuz-3.10.0-957.el7.ppc64le = 3.10.0-957.el7.ppc64le (from filename)
supermin: kernel: picked modules path /lib/modules/3.10.0-957.el7.ppc64le
supermin: kernel: kernel version of /boot/vmlinuz-0-rescue-973b5f23631e48ecb9df78cd49ed737e = error, no modpath
supermin: kernel: picked vmlinuz /boot/vmlinuz-4.14.0-115.el7a.ppc64le
supermin: kernel: kernel_version 4.14.0-115.el7a.ppc64le
supermin: kernel: modpath /lib/modules/4.14.0-115.el7a.ppc64le
supermin: ext2: creating empty ext2 filesystem '/var/tmp/.guestfs-0/appliance.d.lxctv9sy/root'
supermin: ext2: populating from base image
supermin: ext2: copying files from host filesystem
supermin: ext2: copying kernel modules
supermin: ext2: creating minimal initrd '/var/tmp/.guestfs-0/appliance.d.lxctv9sy/initrd'
supermin: ext2: wrote 25 modules to minimal initrd
supermin: renaming /var/tmp/.guestfs-0/appliance.d.lxctv9sy to /var/tmp/.guestfs-0/appliance.d
libguestfs: finished building supermin appliance
libguestfs: command: run: qemu-img
libguestfs: command: run: \ create
libguestfs: command: run: \ -f qcow2
libguestfs: command: run: \ -o backing_file=/var/tmp/.guestfs-0/appliance.d/root,backing_fmt=raw
libguestfs: command: run: \ /tmp/libguestfsko3SYD/overlay2.qcow2
Formatting '/tmp/libguestfsko3SYD/overlay2.qcow2', fmt=qcow2 size=4294967296 backing_file=/var/tmp/.guestfs-0/appliance.d/root backing_fmt=raw cluster_size=65536 lazy_refcounts=off refcount_bits=16
libguestfs: create libvirt XML
libguestfs: libvirt XML:\n<?xml version="1.0"?>\n<domain type="kvm" xmlns:qemu="http://libvirt.org/schemas/domain/qemu/1.0">\n  <name>guestfs-cnihfkhvgx8au7ci</name>\n  <memory unit="MiB">1024</memory>\n  <currentMemory unit="MiB">1024</currentMemory>\n  <vcpu>1</vcpu>\n  <clock offset="utc">\n    <timer name="rtc" tickpolicy="catchup"/>\n    <timer name="pit" tickpolicy="delay"/>\n  </clock>\n  <os>\n    <type machine="pseries">hvm</type>\n    <kernel>/var/tmp/.guestfs-0/appliance.d/kernel</kernel>\n    <initrd>/var/tmp/.guestfs-0/appliance.d/initrd</initrd>\n    <cmdline>panic=1 console=hvc0 console=ttyS0 edd=off udevtimeout=6000 udev.event-timeout=6000 no_timer_check printk.time=1 cgroup_disable=memory usbcore.nousb cryptomgr.notests tsc=reliable 8250.nr_uarts=1 root=/dev/sdb selinux=0 guestfs_verbose=1 TERM=xterm-256color</cmdline>\n  </os>\n  <on_reboot>destroy</on_reboot>\n  <devices>\n    <rng model="virtio">\n      <backend model="random">/dev/urandom</backend>\n    </rng>\n    <controller type="scsi" index="0" model="virtio-scsi"/>\n    <disk device="disk" type="file">\n      <source file="/tmp/libguestfsko3SYD/scratch1.img"/>\n      <target dev="sda" bus="scsi"/>\n      <driver name="qemu" type="raw" cache="unsafe"/>\n      <address type="drive" controller="0" bus="0" target="0" unit="0"/>\n    </disk>\n    <disk type="file" device="disk">\n      <source file="/tmp/libguestfsko3SYD/overlay2.qcow2"/>\n      <target dev="sdb" bus="scsi"/>\n      <driver name="qemu" type="qcow2" cache="unsafe"/>\n      <address type="drive" controller="0" bus="0" target="1" unit="0"/>\n    </disk>\n    <serial type="unix">\n      <source mode="connect" path="/tmp/libguestfsQ07zf9/console.sock"/>\n      <target port="0"/>\n    </serial>\n    <channel type="unix">\n      <source mode="connect" path="/tmp/libguestfsQ07zf9/guestfsd.sock"/>\n      <target type="virtio" name="org.libguestfs.channel.0"/>\n    </channel>\n    <controller type="usb" model="none"/>\n    <memballoon 
model="none"/>\n  </devices>\n  <qemu:commandline>\n    <qemu:env name="TMPDIR" value="/var/tmp"/>\n  </qemu:commandline>\n</domain>\n
libguestfs: command: run: ls
libguestfs: command: run: \ -a
libguestfs: command: run: \ -l
libguestfs: command: run: \ -R
libguestfs: command: run: \ -Z /var/tmp/.guestfs-0
libguestfs: /var/tmp/.guestfs-0:
libguestfs: drwxr-xr-x. root root unconfined_u:object_r:user_tmp_t:s0 .
libguestfs: drwxrwxrwt. root root system_u:object_r:tmp_t:s0       ..
libguestfs: drwxr-xr-x. root root unconfined_u:object_r:user_tmp_t:s0 appliance.d
libguestfs: -rw-r--r--. root root unconfined_u:object_r:user_tmp_t:s0 lock
libguestfs: 
libguestfs: /var/tmp/.guestfs-0/appliance.d:
libguestfs: drwxr-xr-x. root root unconfined_u:object_r:user_tmp_t:s0 .
libguestfs: drwxr-xr-x. root root unconfined_u:object_r:user_tmp_t:s0 ..
libguestfs: -rw-r--r--. root root unconfined_u:object_r:user_tmp_t:s0 initrd
libguestfs: -rwxr-xr-x. root root unconfined_u:object_r:user_tmp_t:s0 kernel
libguestfs: -rw-r--r--. root root unconfined_u:object_r:user_tmp_t:s0 root
libguestfs: command: run: ls
libguestfs: command: run: \ -a
libguestfs: command: run: \ -l
libguestfs: command: run: \ -Z /tmp/libguestfsQ07zf9
libguestfs: drwxr-xr-x. root root unconfined_u:object_r:user_tmp_t:s0 .
libguestfs: drwxrwxrwt. root root system_u:object_r:tmp_t:s0       ..
libguestfs: srw-rw----. root qemu unconfined_u:object_r:user_tmp_t:s0 console.sock
libguestfs: srw-rw----. root qemu unconfined_u:object_r:user_tmp_t:s0 guestfsd.sock
libguestfs: launch libvirt guest
libguestfs: error: could not create appliance through libvirt.

Try running qemu directly without libvirt using this environment variable:
export LIBGUESTFS_BACKEND=direct

Original error from libvirt: internal error: qemu unexpectedly closed the monitor: ioctl(KVM_CREATE_VM) failed: 22 Invalid argument
2019-01-25T11:29:32.459698Z qemu-kvm: failed to initialize KVM: Invalid argument [code=1 int1=-1]
libguestfs: closing guestfs handle 0x10023b31250 (state 0)
libguestfs: command: run: rm
libguestfs: command: run: \ -rf /tmp/libguestfsko3SYD
libguestfs: command: run: rm
libguestfs: command: run: \ -rf /tmp/libguestfsQ07zf9

Comment 62 Richard W.M. Jones 2019-01-25 12:03:45 UTC
The problem here is that two different bugs have been fixed, but we only have this one BZ for both.

Bug A: (described fully in comment 20 and comment 21)

This applies to both LIBGUESTFS_BACKEND=direct and LIBGUESTFS_BACKEND=libvirt:
libguestfs should not pass -cpu host (or the libvirt equivalent "host-passthrough") on
ppc64le, because this is incompatible with TCG.  In 1.40 we fixed this by never using
"host"/"host-passthrough" if the platform is ppc64 or ppc64le.

Bug B: (described fully in comment 47)

With LIBGUESTFS_BACKEND=direct: libguestfs used an incorrect test to decide
whether KVM is working, falling back to TCG if not.  On ppc64le this test
reported that KVM was always available.  The test was fixed in 1.40.

With LIBGUESTFS_BACKEND=libvirt: the same thing had to be fixed in libvirt,
tracked in bug 1628469.  Since this bug depends on bug 1628469 you probably want
the version of libvirt that fixes that before testing.  Unfortunately that is not
available yet.

In the trace in comment 61 you can see that it's using the libvirt backend and
libguestfs is not passing "host-passthrough", but libvirt still uses KVM anyway
and it still fails.  So libguestfs bug A has been fixed, but the libvirt bug
1628469 has not, and nothing about bug B has been tested.

Comment 63 YongkuiGuo 2019-01-28 02:34:51 UTC
(In reply to Richard W.M. Jones from comment #62)
> [...]
> With LIBGUESTFS_BACKEND=libvirt: the same thing had to be fixed in libvirt,
> tracked in bug 1628469.  Since this bug depends on bug 1628469 you probably
> want the version of libvirt that fixes that before testing.  Unfortunately
> that is not available yet.
> [...]

I see. I will verify bug B once the libvirt bug 1628469 is fixed. Thanks for the detailed explanation.

Comment 64 YongkuiGuo 2019-04-25 03:10:18 UTC
Verified with packages:
libguestfs-1.40.2-3.el7.ppc64le
libvirt-4.5.0-13.el7.ppc64le

Steps:

1. On ppc64le(p9+alt) vm env
# LIBGUESTFS_BACKEND=direct libguestfs-test-tool   --- OK
...
/usr/libexec/qemu-kvm \
    -global virtio-blk-pci.scsi=off \
    -no-user-config \
    -enable-fips \
    -nodefaults \
    -display none \
    -machine pseries,accel=kvm:tcg \
    -m 1024 \
    -no-reboot \
    -rtc driftfix=slew \
    -kernel /var/tmp/.guestfs-0/appliance.d/kernel \
    -initrd /var/tmp/.guestfs-0/appliance.d/initrd \
    -object rng-random,filename=/dev/urandom,id=rng0 \
    -device virtio-rng-pci,rng=rng0 \
    -device virtio-scsi-pci,id=scsi \
    -drive file=/tmp/libguestfs9pxlCB/scratch1.img,cache=unsafe,format=raw,id=hd0,if=none \
    -device scsi-hd,drive=hd0 \
    -drive file=/var/tmp/.guestfs-0/appliance.d/root,snapshot=on,id=appliance,cache=unsafe,if=none,format=raw \
    -device scsi-hd,drive=appliance \
    -device virtio-serial-pci \
    -serial stdio \
    -chardev socket,path=/tmp/libguestfsxTF6sc/guestfsd.sock,id=channel0 \
    -device virtserialport,chardev=channel0,name=org.libguestfs.channel.0 \
    -append "panic=1 console=hvc0 console=ttyS0 edd=off udevtimeout=6000 udev.event-timeout=6000 no_timer_check printk.time=1 cgroup_disable=memory usbcore.nousb cryptomgr.notests tsc=reliable 8250.nr_uarts=1 root=/dev/sdb selinux=0 guestfs_verbose=1 TERM=xterm-256color"
ioctl(KVM_CREATE_VM) failed: 22 Invalid argument
qemu-kvm: failed to initialize KVM: Invalid argument
qemu-kvm: Back to tcg accelerator
...

libguestfs doesn't pass '-cpu host', and falls back to TCG when KVM does not work.


2.
# LIBGUESTFS_BACKEND=libvirt libguestfs-test-tool  --- OK
...
libguestfs: libvirt XML:
<?xml version="1.0"?>
<domain type="qemu" xmlns:qemu="http://libvirt.org/schemas/domain/qemu/1.0">
  <name>guestfs-vl57proi7s3w2xf9</name>
  <memory unit="MiB">1024</memory>
  <currentMemory unit="MiB">1024</currentMemory>
  <vcpu>1</vcpu>
  <clock offset="utc">
    <timer name="rtc" tickpolicy="catchup"/>
    <timer name="pit" tickpolicy="delay"/>
  </clock>
  <os>
    <type machine="pseries">hvm</type>
...

libguestfs doesn't pass 'host-passthrough', and libvirt falls back to plain qemu (TCG) rather than KVM.
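The fix is visible directly in the domain XML. In the failing trace earlier in this report libvirt was asked to create a KVM domain, while with the fixed packages the appliance domain is created as plain qemu (TCG). Both opening tags below are taken verbatim from the traces in this report:

```xml
<!-- Before the fix (failing trace): KVM requested -->
<domain type="kvm" xmlns:qemu="http://libvirt.org/schemas/domain/qemu/1.0">

<!-- With fixed packages: plain qemu (TCG) -->
<domain type="qemu" xmlns:qemu="http://libvirt.org/schemas/domain/qemu/1.0">
```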

So this bug is verified.

Comment 66 errata-xmlrpc 2019-08-06 12:44:11 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:2096

