Bug 1843865 - on POWER 9, libvirt says KVM is available in an LPAR, but it's KVM-PR which cannot run guests because: No large decrementer support, try appending -machine cap-large-decr=off
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux Advanced Virtualization
Classification: Red Hat
Component: libvirt
Version: 8.2
Hardware: ppc64le
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: 8.4
Assignee: Daniel Henrique Barboza (IBM)
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On: 1861941
Blocks: TRACKER-bugs-affecting-libguestfs
 
Reported: 2020-06-04 10:38 UTC by YongkuiGuo
Modified: 2020-10-19 11:24 UTC
CC: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-10-19 11:24:52 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments
virsh capabilities output (4.67 KB, text/plain)
2020-06-05 09:56 UTC, YongkuiGuo
virsh domcapabilities output (2.68 KB, text/plain)
2020-06-05 09:57 UTC, YongkuiGuo

Description YongkuiGuo 2020-06-04 10:38:51 UTC
Description of problem:
The libguestfs-test-tool command fails on ppc64le (POWER9) when running the gating test for module-virt-8.2-8020020200604071508-4cda2c84.


Version-Release number of selected component (if applicable):
libguestfs-1.40.2-22.module+el8.2.0+6029+618ef2ec.ppc64le
qemu-kvm-4.2.0-19.module+el8.2.0+6296+6b821950.ppc64le
kernel-4.18.0-193.el8.ppc64le


How reproducible:
100%


Steps to Reproduce:

1. Prepare a RHEL 8.2 environment with the RHEL-8.2.0-20200404.0 compose and install the RHEL-8.2.0.z AV-related packages.
Hostname of the environment: ibm-p9z-25-lp8.virt.pnr.lab.eng.rdu2.redhat.com

2.
# libguestfs-test-tool
     ************************************************************
     *                    IMPORTANT NOTICE
     *
     * When reporting bugs, include the COMPLETE, UNEDITED
     * output below in your bug report.
     *
     ************************************************************
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin
XDG_RUNTIME_DIR=/run/user/0
SELinux: Enforcing
guestfs_get_append: (null)
guestfs_get_autosync: 1
guestfs_get_backend: libvirt
guestfs_get_backend_settings: []
guestfs_get_cachedir: /var/tmp
guestfs_get_hv: /usr/libexec/qemu-kvm
guestfs_get_memsize: 1024
guestfs_get_network: 0
guestfs_get_path: /usr/lib64/guestfs
guestfs_get_pgroup: 0
guestfs_get_program: libguestfs-test-tool
guestfs_get_recovery_proc: 1
guestfs_get_smp: 1
guestfs_get_sockdir: /tmp
guestfs_get_tmpdir: /tmp
guestfs_get_trace: 0
guestfs_get_verbose: 1
host_cpu: powerpc64le
Launching appliance, timeout set to 600 seconds.
libguestfs: launch: program=libguestfs-test-tool
libguestfs: launch: version=1.40.2rhel=8,release=22.module+el8.2.0+6029+618ef2ec,libvirt
libguestfs: launch: backend registered: unix
libguestfs: launch: backend registered: uml
libguestfs: launch: backend registered: libvirt
libguestfs: launch: backend registered: direct
libguestfs: launch: backend=libvirt
libguestfs: launch: tmpdir=/tmp/libguestfsoLK2aR
libguestfs: launch: umask=0022
libguestfs: launch: euid=0
libguestfs: libvirt version = 6000000 (6.0.0)
libguestfs: guest random name = guestfs-ahqk4c8fzchlj62w
libguestfs: connect to libvirt
libguestfs: opening libvirt handle: URI = qemu:///system, auth = default+wrapper, flags = 0
libguestfs: successfully opened libvirt handle: conn = 0x10025a2e510
libguestfs: qemu version (reported by libvirt) = 4002000 (4.2.0)
libguestfs: get libvirt capabilities
libguestfs: parsing capabilities XML
libguestfs: build appliance
libguestfs: begin building supermin appliance
libguestfs: run supermin
libguestfs: command: run: /usr/bin/supermin
libguestfs: command: run: \ --build
libguestfs: command: run: \ --verbose
libguestfs: command: run: \ --if-newer
libguestfs: command: run: \ --lock /var/tmp/.guestfs-0/lock
libguestfs: command: run: \ --copy-kernel
libguestfs: command: run: \ -f ext2
libguestfs: command: run: \ --host-cpu powerpc64le
libguestfs: command: run: \ /usr/lib64/guestfs/supermin.d
libguestfs: command: run: \ -o /var/tmp/.guestfs-0/appliance.d
supermin: version: 5.1.19
supermin: rpm: detected RPM version 4.14
supermin: package handler: fedora/rpm
supermin: acquiring lock on /var/tmp/.guestfs-0/lock
supermin: build: /usr/lib64/guestfs/supermin.d
supermin: reading the supermin appliance
supermin: build: visiting /usr/lib64/guestfs/supermin.d/base.tar.gz type gzip base image (tar)
supermin: build: visiting /usr/lib64/guestfs/supermin.d/daemon.tar.gz type gzip base image (tar)
supermin: build: visiting /usr/lib64/guestfs/supermin.d/excludefiles type uncompressed excludefiles
supermin: build: visiting /usr/lib64/guestfs/supermin.d/hostfiles type uncompressed hostfiles
supermin: build: visiting /usr/lib64/guestfs/supermin.d/init.tar.gz type gzip base image (tar)
supermin: build: visiting /usr/lib64/guestfs/supermin.d/packages type uncompressed packages
supermin: build: visiting /usr/lib64/guestfs/supermin.d/udev-rules.tar.gz type gzip base image (tar)
supermin: build: visiting /usr/lib64/guestfs/supermin.d/zz-packages-gfs2 type uncompressed packages
supermin: build: visiting /usr/lib64/guestfs/supermin.d/zz-packages-rescue type uncompressed packages
supermin: build: visiting /usr/lib64/guestfs/supermin.d/zz-packages-rsync type uncompressed packages
supermin: build: visiting /usr/lib64/guestfs/supermin.d/zz-packages-xfs type uncompressed packages
supermin: build: visiting /usr/lib64/guestfs/supermin.d/zz-winsupport.tar.gz type gzip base image (tar)
supermin: mapping package names to installed packages
supermin: resolving full list of package dependencies
supermin: build: 199 packages, including dependencies
supermin: build: 38194 files
supermin: build: 12511 files, after matching excludefiles
supermin: build: 12525 files, after adding hostfiles
supermin: build: 12510 files, after removing unreadable files
supermin: build: 12544 files, after munging
supermin: kernel: looking for kernel using environment variables ...
supermin: kernel: looking for kernels in /lib/modules/*/vmlinuz ...
supermin: kernel: picked vmlinuz /lib/modules/4.18.0-193.el8.ppc64le/vmlinuz
supermin: kernel: kernel_version 4.18.0-193.el8.ppc64le
supermin: kernel: modpath /lib/modules/4.18.0-193.el8.ppc64le
supermin: ext2: creating empty ext2 filesystem '/var/tmp/.guestfs-0/appliance.d.it6irzeu/root'
supermin: ext2: populating from base image
supermin: ext2: copying files from host filesystem
supermin: ext2: copying kernel modules
supermin: ext2: creating minimal initrd '/var/tmp/.guestfs-0/appliance.d.it6irzeu/initrd'
supermin: ext2: wrote 24 modules to minimal initrd
supermin: renaming /var/tmp/.guestfs-0/appliance.d.it6irzeu to /var/tmp/.guestfs-0/appliance.d
libguestfs: finished building supermin appliance
libguestfs: command: run: qemu-img
libguestfs: command: run: \ create
libguestfs: command: run: \ -f qcow2
libguestfs: command: run: \ -o backing_file=/var/tmp/.guestfs-0/appliance.d/root,backing_fmt=raw
libguestfs: command: run: \ /tmp/libguestfsoLK2aR/overlay2.qcow2
Formatting '/tmp/libguestfsoLK2aR/overlay2.qcow2', fmt=qcow2 size=4294967296 backing_file=/var/tmp/.guestfs-0/appliance.d/root backing_fmt=raw cluster_size=65536 lazy_refcounts=off refcount_bits=16
libguestfs: create libvirt XML
libguestfs: libvirt XML:\n<?xml version="1.0"?>\n<domain type="kvm" xmlns:qemu="http://libvirt.org/schemas/domain/qemu/1.0">\n  <name>guestfs-ahqk4c8fzchlj62w</name>\n  <memory unit="MiB">1024</memory>\n  <currentMemory unit="MiB">1024</currentMemory>\n  <vcpu>1</vcpu>\n  <clock offset="utc">\n    <timer name="rtc" tickpolicy="catchup"/>\n    <timer name="pit" tickpolicy="delay"/>\n  </clock>\n  <os>\n    <type machine="pseries">hvm</type>\n    <kernel>/var/tmp/.guestfs-0/appliance.d/kernel</kernel>\n    <initrd>/var/tmp/.guestfs-0/appliance.d/initrd</initrd>\n    <cmdline>panic=1 console=hvc0 console=ttyS0 edd=off udevtimeout=6000 udev.event-timeout=6000 no_timer_check printk.time=1 cgroup_disable=memory usbcore.nousb cryptomgr.notests tsc=reliable 8250.nr_uarts=1 root=/dev/sdb selinux=0 guestfs_verbose=1 TERM=xterm-256color</cmdline>\n  </os>\n  <on_reboot>destroy</on_reboot>\n  <devices>\n    <rng model="virtio">\n      <backend model="random">/dev/urandom</backend>\n    </rng>\n    <controller type="scsi" index="0" model="virtio-scsi"/>\n    <disk device="disk" type="file">\n      <source file="/tmp/libguestfsoLK2aR/scratch1.img"/>\n      <target dev="sda" bus="scsi"/>\n      <driver name="qemu" type="raw" cache="unsafe"/>\n      <address type="drive" controller="0" bus="0" target="0" unit="0"/>\n    </disk>\n    <disk type="file" device="disk">\n      <source file="/tmp/libguestfsoLK2aR/overlay2.qcow2"/>\n      <target dev="sdb" bus="scsi"/>\n      <driver name="qemu" type="qcow2" cache="unsafe"/>\n      <address type="drive" controller="0" bus="0" target="1" unit="0"/>\n    </disk>\n    <serial type="unix">\n      <source mode="connect" path="/tmp/libguestfs5KB8SX/console.sock"/>\n      <target port="0"/>\n    </serial>\n    <channel type="unix">\n      <source mode="connect" path="/tmp/libguestfs5KB8SX/guestfsd.sock"/>\n      <target type="virtio" name="org.libguestfs.channel.0"/>\n    </channel>\n    <controller type="usb" model="none"/>\n    <memballoon model="none"/>\n  </devices>\n  <qemu:commandline>\n    <qemu:env name="TMPDIR" value="/var/tmp"/>\n  </qemu:commandline>\n</domain>\n
libguestfs: command: run: ls
libguestfs: command: run: \ -a
libguestfs: command: run: \ -l
libguestfs: command: run: \ -R
libguestfs: command: run: \ -Z /var/tmp/.guestfs-0
libguestfs: /var/tmp/.guestfs-0:
libguestfs: total 0
libguestfs: drwxr-xr-x. 3 root root unconfined_u:object_r:user_tmp_t:s0  37 Jun  4 06:32 .
libguestfs: drwxrwxrwt. 4 root root system_u:object_r:tmp_t:s0          103 Jun  4 06:32 ..
libguestfs: drwxr-xr-x. 2 root root unconfined_u:object_r:user_tmp_t:s0  46 Jun  4 06:32 appliance.d
libguestfs: -rw-r--r--. 1 root root unconfined_u:object_r:user_tmp_t:s0   0 Jun  4 06:32 lock
libguestfs:
libguestfs: /var/tmp/.guestfs-0/appliance.d:
libguestfs: total 501292
libguestfs: drwxr-xr-x. 2 root root unconfined_u:object_r:user_tmp_t:s0         46 Jun  4 06:32 .
libguestfs: drwxr-xr-x. 3 root root unconfined_u:object_r:user_tmp_t:s0         37 Jun  4 06:32 ..
libguestfs: -rw-r--r--. 1 root root unconfined_u:object_r:user_tmp_t:s0    5000192 Jun  4 06:32 initrd
libguestfs: -rwxr-xr-x. 1 root root unconfined_u:object_r:user_tmp_t:s0   26837261 Jun  4 06:32 kernel
libguestfs: -rw-r--r--. 1 root root unconfined_u:object_r:user_tmp_t:s0 4294967296 Jun  4 06:32 root
libguestfs: command: run: ls
libguestfs: command: run: \ -a
libguestfs: command: run: \ -l
libguestfs: command: run: \ -Z /tmp/libguestfs5KB8SX
libguestfs: total 4
libguestfs: drwxr-xr-x.  2 root root unconfined_u:object_r:user_tmp_t:s0   47 Jun  4 06:32 .
libguestfs: drwxrwxrwt. 13 root root system_u:object_r:tmp_t:s0          4096 Jun  4 06:32 ..
libguestfs: srw-rw----.  1 root qemu unconfined_u:object_r:user_tmp_t:s0    0 Jun  4 06:32 console.sock
libguestfs: srw-rw----.  1 root qemu unconfined_u:object_r:user_tmp_t:s0    0 Jun  4 06:32 guestfsd.sock
libguestfs: launch libvirt guest
libguestfs: error: could not create appliance through libvirt.

Try running qemu directly without libvirt using this environment variable:
export LIBGUESTFS_BACKEND=direct

Original error from libvirt: internal error: qemu unexpectedly closed the monitor: 2020-06-04T10:32:27.634628Z qemu-kvm: No large decrementer support, try appending -machine cap-large-decr=off [code=1 int1=-1]
libguestfs: closing guestfs handle 0x10025a2c6f0 (state 0)
libguestfs: command: run: rm
libguestfs: command: run: \ -rf /tmp/libguestfsoLK2aR
libguestfs: command: run: rm
libguestfs: command: run: \ -rf /tmp/libguestfs5KB8SX


Actual results:
The libguestfs-test-tool command fails.

Expected results:
The libguestfs-test-tool command runs successfully.


Additional info:

# cat /proc/cpuinfo
processor        : 0
cpu                : POWER9 (architected), altivec supported
clock                : 2900.000000MHz
revision        : 2.2 (pvr 004e 0202)

processor        : 1
cpu                : POWER9 (architected), altivec supported
clock                : 2900.000000MHz
revision        : 2.2 (pvr 004e 0202)
...
processor        : 15
cpu                : POWER9 (architected), altivec supported
clock                : 2900.000000MHz
revision        : 2.2 (pvr 004e 0202)

timebase        : 512000000
platform        : pSeries
model                : IBM,8375-42A
machine                : CHRP IBM,8375-42A
MMU

Comment 1 Richard W.M. Jones 2020-06-04 11:14:07 UTC
David: Another firmware update required :-?

Comment 2 David Gibson 2020-06-05 01:12:21 UTC
Um.. probably not.

ibm-p9z-25-lp8.virt.pnr.lab.eng.rdu2.redhat.com

So this looks to be an LPAR (i.e. a PowerVM guest rather than a bare metal machine or KVM guest).

Which means we don't support KVM at all on such a machine.  I can't immediately tell from the logs if it's attempting to use KVM, or if it's using TCG.

If the latter, I guess we need large decr support in TCG.  If the former... uh... probably need to have it not do that.

Comment 3 YongkuiGuo 2020-06-05 01:39:02 UTC
(In reply to David Gibson from comment #2)
> Um.. probably not.
> 
> ibm-p9z-25-lp8.virt.pnr.lab.eng.rdu2.redhat.com
> 
> So this looks to be an LPAR (i.e. a PowerVM guest rather than a bare metal
> machine or KVM guest).

Yes, it should be an LPAR.

# virt-what
ibm_power-lpar_dedicated

> 
> Which means we don't support KVM at all on such a machine.  I can't
> immediately tell from the logs if it's attempting to use KVM, or if it's
> using TCG.
> 
> If the latter, I guess we need large decr support in TCG.  If the former...
> uh... probably need to have it not do that.

libguestfs-test-tool uses KVM by default (domain type="kvm" in the log above). I tried TCG and libguestfs-test-tool works well.

# LIBGUESTFS_BACKEND_SETTINGS=force_tcg libguestfs-test-tool
     ************************************************************
     *                    IMPORTANT NOTICE
     *
     * When reporting bugs, include the COMPLETE, UNEDITED
     * output below in your bug report.
     *
     ************************************************************
LIBGUESTFS_BACKEND_SETTINGS=force_tcg
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin
XDG_RUNTIME_DIR=/run/user/0
SELinux: Enforcing
guestfs_get_append: (null)
guestfs_get_autosync: 1
guestfs_get_backend: libvirt
guestfs_get_backend_settings: [force_tcg]
guestfs_get_cachedir: /var/tmp
guestfs_get_hv: /usr/libexec/qemu-kvm
guestfs_get_memsize: 1024
guestfs_get_network: 0
guestfs_get_path: /usr/lib64/guestfs
guestfs_get_pgroup: 0
guestfs_get_program: libguestfs-test-tool
guestfs_get_recovery_proc: 1
guestfs_get_smp: 1
guestfs_get_sockdir: /tmp
guestfs_get_tmpdir: /tmp
guestfs_get_trace: 0
guestfs_get_verbose: 1
host_cpu: powerpc64le
Launching appliance, timeout set to 600 seconds.
libguestfs: launch: program=libguestfs-test-tool
libguestfs: launch: version=1.40.2rhel=8,release=22.module+el8.2.0+6029+618ef2ec,libvirt
libguestfs: launch: backend registered: unix
libguestfs: launch: backend registered: uml
libguestfs: launch: backend registered: libvirt
libguestfs: launch: backend registered: direct
libguestfs: launch: backend=libvirt
libguestfs: launch: tmpdir=/tmp/libguestfs2bfz2S
libguestfs: launch: umask=0022
libguestfs: launch: euid=0
libguestfs: libvirt version = 6000000 (6.0.0)
libguestfs: guest random name = guestfs-t6w6vkpf3406b9jm
libguestfs: connect to libvirt
libguestfs: opening libvirt handle: URI = qemu:///system, auth = default+wrapper, flags = 0
libguestfs: successfully opened libvirt handle: conn = 0x10033f9e510
libguestfs: qemu version (reported by libvirt) = 4002000 (4.2.0)
libguestfs: get libvirt capabilities
libguestfs: parsing capabilities XML
libguestfs: build appliance
libguestfs: begin building supermin appliance
libguestfs: run supermin
libguestfs: command: run: /usr/bin/supermin
libguestfs: command: run: \ --build
libguestfs: command: run: \ --verbose
libguestfs: command: run: \ --if-newer
libguestfs: command: run: \ --lock /var/tmp/.guestfs-0/lock
libguestfs: command: run: \ --copy-kernel
libguestfs: command: run: \ -f ext2
libguestfs: command: run: \ --host-cpu powerpc64le
libguestfs: command: run: \ /usr/lib64/guestfs/supermin.d
libguestfs: command: run: \ -o /var/tmp/.guestfs-0/appliance.d
supermin: version: 5.1.19
supermin: rpm: detected RPM version 4.14
supermin: package handler: fedora/rpm
supermin: acquiring lock on /var/tmp/.guestfs-0/lock
supermin: build: /usr/lib64/guestfs/supermin.d
supermin: reading the supermin appliance
supermin: build: visiting /usr/lib64/guestfs/supermin.d/base.tar.gz type gzip base image (tar)
supermin: build: visiting /usr/lib64/guestfs/supermin.d/daemon.tar.gz type gzip base image (tar)
supermin: build: visiting /usr/lib64/guestfs/supermin.d/excludefiles type uncompressed excludefiles
supermin: build: visiting /usr/lib64/guestfs/supermin.d/hostfiles type uncompressed hostfiles
supermin: build: visiting /usr/lib64/guestfs/supermin.d/init.tar.gz type gzip base image (tar)
supermin: build: visiting /usr/lib64/guestfs/supermin.d/packages type uncompressed packages
supermin: build: visiting /usr/lib64/guestfs/supermin.d/udev-rules.tar.gz type gzip base image (tar)
supermin: build: visiting /usr/lib64/guestfs/supermin.d/zz-packages-gfs2 type uncompressed packages
supermin: build: visiting /usr/lib64/guestfs/supermin.d/zz-packages-rescue type uncompressed packages
supermin: build: visiting /usr/lib64/guestfs/supermin.d/zz-packages-rsync type uncompressed packages
supermin: build: visiting /usr/lib64/guestfs/supermin.d/zz-packages-xfs type uncompressed packages
supermin: build: visiting /usr/lib64/guestfs/supermin.d/zz-winsupport.tar.gz type gzip base image (tar)
supermin: mapping package names to installed packages
supermin: resolving full list of package dependencies
supermin: build: 199 packages, including dependencies
supermin: build: 38194 files
supermin: build: 12511 files, after matching excludefiles
supermin: build: 12525 files, after adding hostfiles
supermin: build: 12510 files, after removing unreadable files
supermin: build: 12544 files, after munging
supermin: kernel: looking for kernel using environment variables ...
supermin: kernel: looking for kernels in /lib/modules/*/vmlinuz ...
supermin: kernel: picked vmlinuz /lib/modules/4.18.0-193.el8.ppc64le/vmlinuz
supermin: kernel: kernel_version 4.18.0-193.el8.ppc64le
supermin: kernel: modpath /lib/modules/4.18.0-193.el8.ppc64le
supermin: ext2: creating empty ext2 filesystem '/var/tmp/.guestfs-0/appliance.d.ikit2ht4/root'
supermin: ext2: populating from base image
supermin: ext2: copying files from host filesystem
supermin: ext2: copying kernel modules
supermin: ext2: creating minimal initrd '/var/tmp/.guestfs-0/appliance.d.ikit2ht4/initrd'
supermin: ext2: wrote 24 modules to minimal initrd
supermin: renaming /var/tmp/.guestfs-0/appliance.d.ikit2ht4 to /var/tmp/.guestfs-0/appliance.d
libguestfs: finished building supermin appliance
libguestfs: command: run: qemu-img
libguestfs: command: run: \ create
libguestfs: command: run: \ -f qcow2
libguestfs: command: run: \ -o backing_file=/var/tmp/.guestfs-0/appliance.d/root,backing_fmt=raw
libguestfs: command: run: \ /tmp/libguestfs2bfz2S/overlay2.qcow2
Formatting '/tmp/libguestfs2bfz2S/overlay2.qcow2', fmt=qcow2 size=4294967296 backing_file=/var/tmp/.guestfs-0/appliance.d/root backing_fmt=raw cluster_size=65536 lazy_refcounts=off refcount_bits=16
libguestfs: create libvirt XML
libguestfs: command: run: dmesg | grep -Eoh 'lpj=[[:digit:]]+'
libguestfs: read_lpj_from_dmesg: external command exited with error status 1
libguestfs: read_lpj_from_files: no boot messages files are readable
libguestfs: libvirt XML:\n<?xml version="1.0"?>\n<domain type="qemu" xmlns:qemu="http://libvirt.org/schemas/domain/qemu/1.0">\n  <name>guestfs-t6w6vkpf3406b9jm</name>\n  <memory unit="MiB">1024</memory>\n  <currentMemory unit="MiB">1024</currentMemory>\n  <vcpu>1</vcpu>\n  <clock offset="utc">\n    <timer name="rtc" tickpolicy="catchup"/>\n    <timer name="pit" tickpolicy="delay"/>\n  </clock>\n  <os>\n    <type machine="pseries">hvm</type>\n    <kernel>/var/tmp/.guestfs-0/appliance.d/kernel</kernel>\n    <initrd>/var/tmp/.guestfs-0/appliance.d/initrd</initrd>\n    <cmdline>panic=1 console=hvc0 console=ttyS0 edd=off udevtimeout=6000 udev.event-timeout=6000 no_timer_check printk.time=1 cgroup_disable=memory usbcore.nousb cryptomgr.notests tsc=reliable 8250.nr_uarts=1 root=/dev/sdb selinux=0 guestfs_verbose=1 TERM=xterm-256color</cmdline>\n  </os>\n  <on_reboot>destroy</on_reboot>\n  <devices>\n    <rng model="virtio">\n      <backend model="random">/dev/urandom</backend>\n    </rng>\n    <controller type="scsi" index="0" model="virtio-scsi"/>\n    <disk device="disk" type="file">\n      <source file="/tmp/libguestfs2bfz2S/scratch1.img"/>\n      <target dev="sda" bus="scsi"/>\n      <driver name="qemu" type="raw" cache="unsafe"/>\n      <address type="drive" controller="0" bus="0" target="0" unit="0"/>\n    </disk>\n    <disk type="file" device="disk">\n      <source file="/tmp/libguestfs2bfz2S/overlay2.qcow2"/>\n      <target dev="sdb" bus="scsi"/>\n      <driver name="qemu" type="qcow2" cache="unsafe"/>\n      <address type="drive" controller="0" bus="0" target="1" unit="0"/>\n    </disk>\n    <serial type="unix">\n      <source mode="connect" path="/tmp/libguestfsrVfJmE/console.sock"/>\n      <target port="0"/>\n    </serial>\n    <channel type="unix">\n      <source mode="connect" path="/tmp/libguestfsrVfJmE/guestfsd.sock"/>\n      <target type="virtio" name="org.libguestfs.channel.0"/>\n    </channel>\n    <controller type="usb" model="none"/>\n    <memballoon model="none"/>\n  </devices>\n  <qemu:commandline>\n    <qemu:env name="TMPDIR" value="/var/tmp"/>\n  </qemu:commandline>\n</domain>\n
libguestfs: command: run: ls
libguestfs: command: run: \ -a
libguestfs: command: run: \ -l
libguestfs: command: run: \ -R
libguestfs: command: run: \ -Z /var/tmp/.guestfs-0
libguestfs: /var/tmp/.guestfs-0:
libguestfs: total 0
libguestfs: drwxr-xr-x. 3 root root unconfined_u:object_r:user_tmp_t:s0  37 Jun  4 21:24 .
libguestfs: drwxrwxrwt. 4 root root system_u:object_r:tmp_t:s0          103 Jun  4 21:24 ..
libguestfs: drwxr-xr-x. 2 root root unconfined_u:object_r:user_tmp_t:s0  46 Jun  4 21:24 appliance.d
libguestfs: -rw-r--r--. 1 root root unconfined_u:object_r:user_tmp_t:s0   0 Jun  4 21:24 lock
libguestfs: 
libguestfs: /var/tmp/.guestfs-0/appliance.d:
libguestfs: total 501292
libguestfs: drwxr-xr-x. 2 root root unconfined_u:object_r:user_tmp_t:s0         46 Jun  4 21:24 .
libguestfs: drwxr-xr-x. 3 root root unconfined_u:object_r:user_tmp_t:s0         37 Jun  4 21:24 ..
libguestfs: -rw-r--r--. 1 root root unconfined_u:object_r:user_tmp_t:s0    5000192 Jun  4 21:24 initrd
libguestfs: -rwxr-xr-x. 1 root root unconfined_u:object_r:user_tmp_t:s0   26837261 Jun  4 21:24 kernel
libguestfs: -rw-r--r--. 1 root root unconfined_u:object_r:user_tmp_t:s0 4294967296 Jun  4 21:24 root
libguestfs: command: run: ls
libguestfs: command: run: \ -a
libguestfs: command: run: \ -l
libguestfs: command: run: \ -Z /tmp/libguestfsrVfJmE
libguestfs: total 4
libguestfs: drwxr-xr-x.  2 root root unconfined_u:object_r:user_tmp_t:s0   47 Jun  4 21:24 .
libguestfs: drwxrwxrwt. 13 root root system_u:object_r:tmp_t:s0          4096 Jun  4 21:24 ..
libguestfs: srw-rw----.  1 root qemu unconfined_u:object_r:user_tmp_t:s0    0 Jun  4 21:24 console.sock
libguestfs: srw-rw----.  1 root qemu unconfined_u:object_r:user_tmp_t:s0    0 Jun  4 21:24 guestfsd.sock
libguestfs: launch libvirt guest


SLOF\x1b[0m\x1b[?25l **********************************************************************
\x1b[1mQEMU Starting
\x1b[0m Build Date = Jan 15 2020 18:38:09
 FW Version = mockbuild@ release 20191022
 Press "s" to enter Open Firmware.

Populating /vdevice methods
Populating /vdevice/vty@30000000
Populating /vdevice/nvram@71000000
Populating /pci@800000020000000
                     00 0800 (D) : 1af4 1004    virtio [ scsi ]
Populating /pci@800000020000000/scsi@1
       SCSI: Looking for devices
          100000000000000 DISK     : "QEMU     QEMU HARDDISK    2.5+"
          101000000000000 DISK     : "QEMU     QEMU HARDDISK    2.5+"
                     00 1000 (D) : 1af4 1003    virtio [ serial ]
                     00 1800 (D) : 1af4 1005    legacy-device*
No NVRAM common partition, re-initializing...
Scanning USB 
Using default console: /vdevice/vty@30000000
Detected RAM kernel at 400000 (1d3d3d0 bytes) 
     
  Welcome to Open Firmware

  Copyright (c) 2004, 2017 IBM Corporation All rights reserved.
  This program and the accompanying materials are made available
  under the terms of the BSD License available at
  http://www.opensource.org/licenses/bsd-license.php

Booting from memory...
OF stdout device is: /vdevice/vty@30000000
Preparing to boot Linux version 4.18.0-193.el8.ppc64le (mockbuild.eng.bos.redhat.com) (gcc version 8.3.1 20191121 (Red Hat 8.3.1-5) (GCC)) #1 SMP Fri Mar 27 14:40:12 UTC 2020
Detected machine type: 0000000000000101
command line: panic=1 console=hvc0 console=ttyS0 edd=off udevtimeout=6000 udev.event-timeout=6000 no_timer_check printk.time=1 cgroup_disable=memory usbcore.nousb cryptomgr.notests tsc=reliable 8250.nr_uarts=1 root=/dev/sdb selinux=0 guestfs_verbose=1 TERM=xterm-256color
...
===== TEST FINISHED OK =====


So we should probably filter out this kind of machine. Thanks David.

Comment 4 Richard W.M. Jones 2020-06-05 09:27:47 UTC
What happens precisely is we get the libvirt capabilities and process
them to see if the machine supports KVM.  In this case we decided that
the machine should support KVM, which is why we added: <domain type="kvm" ...>
to the XML.

Could you run:

# virsh capabilities
# virsh domcapabilities

and attach the output of those (which may be quite large) to this bug?

Here is the logic we use to decide if KVM is supported:

https://github.com/libguestfs/libguestfs/blob/6670dc0fbf318768636f5246cfe9fa4b9f3188a9/lib/launch-libvirt.c#L765

I guess the comment "XXX It ignores architecture, but let's not worry about that." may be
relevant here.

Comment 5 YongkuiGuo 2020-06-05 09:56:25 UTC
Created attachment 1695400 [details]
virsh capabilities output

Comment 6 YongkuiGuo 2020-06-05 09:57:07 UTC
Created attachment 1695401 [details]
virsh domcapabilities output

Comment 7 YongkuiGuo 2020-06-05 09:58:07 UTC
(In reply to Richard W.M. Jones from comment #4)
> What happens precisely is we get the libvirt capabilities and process
> them to see if the machine supports KVM.  In this case we decided that
> the machine should support KVM, which is why we added: <domain type="kvm"
> ...>
> to the XML.
> 
> Could you run:
> 
> # virsh capabilities
> # virsh domcapabilities
> 
> and attach the output of those (which may be quite large) to this bug?

The output files have been attached.

> 
> Here is the logic we use to decide if KVM is supported:
> 
> https://github.com/libguestfs/libguestfs/blob/
> 6670dc0fbf318768636f5246cfe9fa4b9f3188a9/lib/launch-libvirt.c#L765
> 
> I guess the comment "XXX It ignores architecture, but let's not worry about
> that." may be
> relevant here.

Thanks for the details.

Comment 8 Richard W.M. Jones 2020-06-05 10:30:31 UTC
The virsh capabilities output has:
  <guest>
    <os_type>hvm</os_type>
    <arch name='ppc64'>
...
      <domain type='qemu'/>
      <domain type='kvm'/>
...
  <guest>
    <os_type>hvm</os_type>
    <arch name='ppc64le'>
...
      <domain type='qemu'/>
      <domain type='kvm'/>

Because this matches the XPath expression
/capabilities/guest/arch/domain/@type == "kvm"
I think libguestfs is doing the right thing here, and (assuming this
hardware really cannot support KVM) libvirt is wrong.
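
For the record, the same check can be reproduced from the shell (a sketch, assuming xmllint from libxml2 is available):

# virsh capabilities | xmllint --xpath "/capabilities/guest/arch/domain[@type='kvm']" -

This prints the matching <domain type='kvm'/> nodes, or reports an empty node set when libvirt does not advertise KVM.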

Comment 9 Richard W.M. Jones 2020-06-05 10:32:59 UTC
(In reply to David Gibson from comment #2)
> Um.. probably not.
> 
> ibm-p9z-25-lp8.virt.pnr.lab.eng.rdu2.redhat.com
> 
> So this looks to be an LPAR (i.e. a PowerVM guest rather than a bare metal
> machine or KVM guest).
> 
> Which means we don't support KVM at all on such a machine.

David - when you say "we don't support KVM", do you mean KVM won't work at
all, or Red Hat doesn't support KVM but it might work?  And if we did
enable KVM and it's not supposed to work at all, is the error message we
are seeing symptomatic of that (I would expect something more like
"error: KVM does not work on this platform").

Comment 10 Greg Kurz 2020-06-05 13:36:58 UTC
(In reply to Richard W.M. Jones from comment #9)
> (In reply to David Gibson from comment #2)
> > Um.. probably not.
> > 
> > ibm-p9z-25-lp8.virt.pnr.lab.eng.rdu2.redhat.com
> > 
> > So this looks to be an LPAR (i.e. a PowerVM guest rather than a bare metal
> > machine or KVM guest).
> > 
> > Which means we don't support KVM at all on such a machine.
> 
> David - when you say "we don't support KVM", do you mean KVM won't work at
> all, or Red Hat doesn't support KVM but it might work?  And if we did
> enable KVM and it's not supposed to work at all, is the error message we
> are seeing symptomatic of that (I would expect something more like
> "error: KVM does not work on this platform").

KVM on POWER comes in two flavors: PR KVM (user mode) and HV KVM (hypervisor
mode). HV KVM is what we use in the field with baremetal P8 and P9 systems.
PR KVM is a _legacy_ implementation to be used when you don't have access
to hypervisor mode, typically the case within an LPAR.

HV KVM cannot work currently in an LPAR because IBM's pHyp hypervisor
doesn't support it (but I've heard rumors that this could possibly
evolve).

PR KVM is definitely not a business priority for IBM and is very poorly
maintained, but it might work in an LPAR. The problem is that PR and HV
have different levels of support for various features and the default
settings of our pseries machine types in QEMU largely favor HV KVM.
So you might need to turn some knobs to have PR working,
e.g. cap-large-decr=off.
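
For instance, that knob can be set directly on the QEMU command line, as the error message above suggests (illustrative invocation, not a supported configuration):

# /usr/libexec/qemu-kvm -machine pseries,cap-large-decr=off ...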

Comment 11 David Gibson 2020-06-09 05:35:57 UTC
Yeah, I was meaning "KVM won't work at all" under an LPAR, but I'd forgotten about KVM PR.  The existence of KVM PR is why we're not getting a "no KVM on this platform" error.  But it looks like large decr was never implemented in KVM PR.  That doesn't surprise me; KVM PR was always kind of flaky and is now basically unmaintained.

We never supported KVM PR in RHEL, but did include it in the build because it had some internal test uses.  That's becoming less and less so, since we do have nested KVM (KVM on KVM, though not KVM on LPAR, so far) on POWER9 and more and more stuff is being implemented in HV but not PR.

I'm a bit surprised the kvm_pr module was loaded at all.  Do you know if that is happening automatically, or is something in your test setup loading that manually, probably for some old reasons that don't make sense any more?

I'm going to look into dropping kvm_pr from the build entirely.  In the meantime, I think unloading and/or blacklisting the kvm_pr module should work around the problem.
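
A rough sketch of that workaround (assuming the standard module names and modprobe configuration; untested):

# rmmod kvm_pr
# echo "blacklist kvm_pr" > /etc/modprobe.d/kvm_pr-blacklist.conf

The first command unloads the module immediately; the blacklist entry stops udev from auto-loading it again on the next boot.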

Comment 12 Richard W.M. Jones 2020-06-09 08:22:33 UTC
I'm going to reassign this bug to libvirt since it's essentially a libvirt issue.

I don't think libvirt should be loading modules, but a quick look at the
source shows that it can run the commands “modprobe -c” (shows kernel config), 
“modprobe nbd” (loads NBD module), and it can modprobe some PCI drivers.  It
doesn't appear to be loading any KVM related modules, but of course it will
look for /dev/kvm which might cause udev to load the modules instead.

Comment 13 Richard W.M. Jones 2020-06-09 08:25:22 UTC
I wonder if libvirt can tell it's KVM PR and add an extra flag
to the XML, like:

      <domain type='kvm' subtype='pr'/>

We could then avoid using it in libguestfs by a simple adjustment to
the XPath expr.

Of course if KVM PR is really unusable and won't be maintained in future
then your other idea of dropping it works too, although we'll fall back
to using TCG for nested then.
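
For example, the XPath test from comment 4 could then become something like this (hypothetical, assuming the proposed subtype attribute existed):

/capabilities/guest/arch/domain[@type='kvm'][not(@subtype='pr')]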

Comment 14 Daniel Henrique Barboza (IBM) 2020-06-15 21:11:37 UTC
(In reply to Richard W.M. Jones from comment #13)
> I wonder if libvirt can tell it's KVM PR and add an extra flag
> to the XML, like:
> 
>       <domain type='kvm' subtype='pr'/>
> 
> We could then avoid using it in libguestfs by a simple adjustment to
> the XPath expr.


I worked on virt-host-validate code that detects whether KVM is enabled or
not on PPC64 by checking if the kvm_hv module is loaded. If needed, I can
grab pieces of it and check for kvm_hv/kvm_pr in the Libvirt domain code. The
flag itself, and whether it would go on the root of the XML or another
place, would need some thought. Perhaps even a new domain type='kvm-pr'
would need to be considered.


Another alternative, more in line with the idea of deprecating kvm_pr, would be
to consider kvm_pr "not real KVM". This would mean that "<domain type='kvm'/>"
would not appear in the capabilities XML if the loaded kvm module is kvm_pr.
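
A quick way to see which KVM flavor is actually loaded on the host (assuming the standard kvm_hv/kvm_pr module names):

# lsmod | grep -E '^kvm(_hv|_pr)'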

Comment 15 Greg Kurz 2020-06-16 09:00:30 UTC
Hi Daniel ! :)

(In reply to dbarboza from comment #14)
> (In reply to Richard W.M. Jones from comment #13)
> > I wonder if libvirt can tell it's KVM PR and add an extra flag
> > to the XML, like:
> > 
> >       <domain type='kvm' subtype='pr'/>
> > 
> > We could then avoid using it in libguestfs by a simple adjustment to
> > the XPath expr.
> 
> 
> I worked in a virt-host-validate code that detects whether KVM is enabled or
> not in PPC64 by checking if the kvm_hv module is loaded. If needed, I can
> grab pieces of it and check for kvm_hv/kvm_pr in Libvirt domain code. The
> flag itself and whether it would go on the root of the XML or another
> place would need some thought. Perhaps even a new domain type='kvm-pr'
> would need to be considered.
> 

On some setups, eg. powernv+POWER8, both kvm_hv and kvm_pr can be loaded
and used at the same time. Not sure what libvirt could add to the XML
in this case.

> 
> Another alternative, more in line with the deprecating kvm_pr idea, would be
> to consider kvm_pr "not real KVM". This would mean that "<domain
> type='kvm'/>"
> would not appear in the output of the capabilities XML if the kvm module
> loaded
> is kvm_pr.

Or rather that kvm_hv isn't loaded for the same reason as above.

Comment 16 Daniel Henrique Barboza (IBM) 2020-06-16 11:08:01 UTC
(In reply to Greg Kurz from comment #15)
> Hi Daniel ! :)

Hey Greg! Fancy seeing you here :)

> 
> (In reply to dbarboza from comment #14)
> > (In reply to Richard W.M. Jones from comment #13)
> > > I wonder if libvirt can tell it's KVM PR and add an extra flag
> > > to the XML, like:
> > > 
> > >       <domain type='kvm' subtype='pr'/>
> > > 
> > > We could then avoid using it in libguestfs by a simple adjustment to
> > > the XPath expr.
> > 
> > 
> > I worked in a virt-host-validate code that detects whether KVM is enabled or
> > not in PPC64 by checking if the kvm_hv module is loaded. If needed, I can
> > grab pieces of it and check for kvm_hv/kvm_pr in Libvirt domain code. The
> > flag itself and whether it would go on the root of the XML or another
> > place would need some thought. Perhaps even a new domain type='kvm-pr'
> > would need to be considered.
> > 
> 
> On some setups, eg. powernv+POWER8, both kvm_hv and kvm_pr can be loaded
> and used at the same time. Not sure what libvirt could add to the XML
> in this case.


Good point. I forgot that there is a valid use case of kvm_hv + kvm_pr.
In this case, what Libvirt can do is to not declare KVM support in the
capabilities if kvm_hv isn't loaded - which is something that virt-host-validate
already does.

Comment 17 Greg Kurz 2020-06-16 15:01:10 UTC
(In reply to Daniel Henrique Barboza from comment #16)
> (In reply to Greg Kurz from comment #15)
> > Hi Daniel ! :)
> 
> Hey Greg! Fancy seeing you here :)
> 
> > 
> > (In reply to dbarboza from comment #14)
> > > (In reply to Richard W.M. Jones from comment #13)
> > > > I wonder if libvirt can tell it's KVM PR and add an extra flag
> > > > to the XML, like:
> > > > 
> > > >       <domain type='kvm' subtype='pr'/>
> > > > 
> > > > We could then avoid using it in libguestfs by a simple adjustment to
> > > > the XPath expr.
> > > 
> > > 
> > > I worked in a virt-host-validate code that detects whether KVM is enabled or
> > > not in PPC64 by checking if the kvm_hv module is loaded. If needed, I can
> > > grab pieces of it and check for kvm_hv/kvm_pr in Libvirt domain code. The
> > > flag itself and whether it would go on the root of the XML or another
> > > place would need some thought. Perhaps even a new domain type='kvm-pr'
> > > would need to be considered.
> > > 
> > 
> > On some setups, eg. powernv+POWER8, both kvm_hv and kvm_pr can be loaded
> > and used at the same time. Not sure what libvirt could add to the XML
> > in this case.
> 
> 
> Good point. I forgot that there is a valid use case of kvm_hv + kvm_pr.
> In this case, what Libvirt can do is to not declare KVM support in the
> capabilities if kvm_hv isn't loaded - which is something that
> virt-host-validate
> already does.

If this doesn't prevent an experienced user who knows about PR from
still being able to use it, I guess it's okay.

Comment 18 Daniel Henrique Barboza (IBM) 2020-06-19 21:10:48 UTC
Richard, Greg, David,

I took the liberty of sending a Libvirt patch that attempts to address this
bug:

https://www.redhat.com/archives/libvir-list/2020-June/msg00919.html


My idea is to not claim KVM support in Libvirt for kvm_pr, except on POWER8
hosts where there are (unsupported) cases of usage. When kvm_pr gets removed
from the tree, QEMU will stop reporting KVM support for kvm_pr anyway. Might
as well let any existing POWER8 guests that are using kvm_pr still run
while they can.

Comment 19 David Gibson 2020-07-27 05:46:30 UTC
Aiming to sort this out in the 8.4 timeframe.

Comment 20 David Gibson 2020-10-19 02:04:57 UTC
We've now removed KVM PR entirely from the RHEL kernel - it was more trouble than it was worth.

Does that remove the confusing behaviour in this case?

Comment 22 YongkuiGuo 2020-10-19 07:28:04 UTC
(In reply to David Gibson from comment #20)
> We've now removed KVM PR entirely from the RHEL kernel - it was more trouble
> than it was worth.
> 
> Does that remove the confusing behaviour in this case?

Hi David,

Yes, it works with kernel-4.18.0-240.2.el8.dt3.

# virsh capabilities
  <guest>
    <os_type>hvm</os_type>
    <arch name='ppc64'>
      ...
      <domain type='qemu'/>
    </arch>
  </guest>
  <guest>
    <os_type>hvm</os_type>
    <arch name='ppc64le'>
      ...
      <domain type='qemu'/>
    </arch>
  </guest>

# libguestfs-test-tool
...
Launching appliance, timeout set to 600 seconds.
libguestfs: launch: program=libguestfs-test-tool
libguestfs: launch: version=1.40.2rhel=8,release=25.module+el8.3.0+7421+642fe24f,libvirt
libguestfs: launch: backend registered: unix
libguestfs: launch: backend registered: uml
libguestfs: launch: backend registered: libvirt
libguestfs: launch: backend registered: direct
libguestfs: launch: backend=libvirt
libguestfs: launch: tmpdir=/tmp/libguestfsXXdxCi
libguestfs: launch: umask=0022
libguestfs: launch: euid=0
libguestfs: libvirt version = 6000000 (6.0.0)
libguestfs: guest random name = guestfs-rpz3oubjn5wljj1l
libguestfs: connect to libvirt
libguestfs: opening libvirt handle: URI = qemu:///system, auth = default+wrapper, flags = 0
libguestfs: successfully opened libvirt handle: conn = 0x100149f5040
libguestfs: qemu version (reported by libvirt) = 4002000 (4.2.0)
libguestfs: get libvirt capabilities
libguestfs: parsing capabilities XML
libguestfs: build appliance
libguestfs: begin building supermin appliance
libguestfs: run supermin
libguestfs: command: run: /usr/bin/supermin
libguestfs: command: run: \ --build
libguestfs: command: run: \ --verbose
libguestfs: command: run: \ --if-newer
libguestfs: command: run: \ --lock /var/tmp/.guestfs-0/lock
libguestfs: command: run: \ --copy-kernel
libguestfs: command: run: \ -f ext2
libguestfs: command: run: \ --host-cpu powerpc64le
libguestfs: command: run: \ /usr/lib64/guestfs/supermin.d
libguestfs: command: run: \ -o /var/tmp/.guestfs-0/appliance.d
supermin: version: 5.1.19
supermin: rpm: detected RPM version 4.14
supermin: package handler: fedora/rpm
supermin: acquiring lock on /var/tmp/.guestfs-0/lock
supermin: if-newer: output does not need rebuilding
libguestfs: finished building supermin appliance
libguestfs: command: run: qemu-img
libguestfs: command: run: \ create
libguestfs: command: run: \ -f qcow2
libguestfs: command: run: \ -o backing_file=/var/tmp/.guestfs-0/appliance.d/root,backing_fmt=raw
libguestfs: command: run: \ /tmp/libguestfsXXdxCi/overlay2.qcow2
Formatting '/tmp/libguestfsXXdxCi/overlay2.qcow2', fmt=qcow2 size=4294967296 backing_file=/var/tmp/.guestfs-0/appliance.d/root backing_fmt=raw cluster_size=65536 lazy_refcounts=off refcount_bits=16
libguestfs: create libvirt XML
libguestfs: command: run: dmesg | grep -Eoh 'lpj=[[:digit:]]+'
libguestfs: read_lpj_from_dmesg: external command exited with error status 1
libguestfs: read_lpj_from_files: no boot messages files are readable
libguestfs: libvirt XML:\n<?xml version="1.0"?>\n<domain type="qemu" xmlns:qemu="http://libvirt.org/schemas/domain/qemu/1.0">\n  <name>guestfs-rpz3oubjn5wljj1l</name>\n  <memory unit="MiB">1024</memory>\n  <currentMemory unit="MiB">1024</currentMemory>\n  <vcpu>1</vcpu>\n  <clock offset="utc">\n    <timer name="rtc" tickpolicy="catchup"/>\n    <timer name="pit" tickpolicy="delay"/>\n  </clock>\n  <os>\n    <type machine="pseries">hvm</type>\n    <kernel>/var/tmp/.guestfs-0/appliance.d/kernel</kernel>\n    <initrd>/var/tmp/.guestfs-0/appliance.d/initrd</initrd>\n    <cmdline>panic=1 console=hvc0 console=ttyS0 edd=off udevtimeout=6000 udev.event-timeout=6000 no_timer_check printk.time=1 cgroup_disable=memory usbcore.nousb cryptomgr.notests tsc=reliable 8250.nr_uarts=1 root=/dev/sdb selinux=0 guestfs_verbose=1 TERM=xterm-256color</cmdline>\n  </os>\n  <on_reboot>destroy</on_reboot>\n  <devices>\n    <rng model="virtio">\n      <backend model="random">/dev/urandom</backend>\n    </rng>\n    <controller type="scsi" index="0" model="virtio-scsi"/>\n    <disk device="disk" type="file">\n      <source file="/tmp/libguestfsXXdxCi/scratch1.img"/>\n      <target dev="sda" bus="scsi"/>\n      <driver name="qemu" type="raw" cache="unsafe"/>\n      <address type="drive" controller="0" bus="0" target="0" unit="0"/>\n    </disk>\n    <disk type="file" device="disk">\n      <source file="/tmp/libguestfsXXdxCi/overlay2.qcow2"/>\n      <target dev="sdb" bus="scsi"/>\n      <driver name="qemu" type="qcow2" cache="unsafe"/>\n      <address type="drive" controller="0" bus="0" target="1" unit="0"/>\n    </disk>\n    <serial type="unix">\n      <source mode="connect" path="/tmp/libguestfstvWgb0/console.sock"/>\n      <target port="0"/>\n    </serial>\n    <channel type="unix">\n      <source mode="connect" path="/tmp/libguestfstvWgb0/guestfsd.sock"/>\n      <target type="virtio" name="org.libguestfs.channel.0"/>\n    </channel>\n    <controller type="usb" model="none"/>\n    <memballoon model="none"/>\n  </devices>\n  <qemu:commandline>\n    <qemu:env name="TMPDIR" value="/var/tmp"/>\n  </qemu:commandline>\n</domain>\n
libguestfs: command: run: ls
libguestfs: command: run: \ -a
libguestfs: command: run: \ -l
libguestfs: command: run: \ -R
libguestfs: command: run: \ -Z /var/tmp/.guestfs-0
libguestfs: /var/tmp/.guestfs-0:
libguestfs: total 0
libguestfs: drwxr-xr-x. 3 root root unconfined_u:object_r:user_tmp_t:s0  37 Oct 19 02:48 .
libguestfs: drwxrwxrwt. 4 root root system_u:object_r:tmp_t:s0          103 Oct 19 02:48 ..
libguestfs: drwxr-xr-x. 2 root root unconfined_u:object_r:user_tmp_t:s0  46 Oct 19 02:38 appliance.d
libguestfs: -rw-r--r--. 1 root root unconfined_u:object_r:user_tmp_t:s0   0 Oct 18 23:21 lock
libguestfs: 
libguestfs: /var/tmp/.guestfs-0/appliance.d:
libguestfs: total 365968
libguestfs: drwxr-xr-x. 2 root root unconfined_u:object_r:user_tmp_t:s0         46 Oct 19 02:38 .
libguestfs: drwxr-xr-x. 3 root root unconfined_u:object_r:user_tmp_t:s0         37 Oct 19 02:48 ..
libguestfs: -rw-r--r--. 1 root root unconfined_u:object_r:user_tmp_t:s0    4841984 Oct 19 02:48 initrd
libguestfs: -rwxr-xr-x. 1 root root unconfined_u:object_r:user_tmp_t:s0   27893786 Oct 19 02:48 kernel
libguestfs: -rw-r--r--. 1 qemu qemu system_u:object_r:virt_content_t:s0 4294967296 Oct 19 02:48 root
libguestfs: command: run: ls
libguestfs: command: run: \ -a
libguestfs: command: run: \ -l
libguestfs: command: run: \ -Z /tmp/libguestfstvWgb0
libguestfs: total 4
libguestfs: drwxr-xr-x.  2 root root unconfined_u:object_r:user_tmp_t:s0   47 Oct 19 02:48 .
libguestfs: drwxrwxrwt. 10 root root system_u:object_r:tmp_t:s0          4096 Oct 19 02:48 ..
libguestfs: srw-rw----.  1 root qemu unconfined_u:object_r:user_tmp_t:s0    0 Oct 19 02:48 console.sock
libguestfs: srw-rw----.  1 root qemu unconfined_u:object_r:user_tmp_t:s0    0 Oct 19 02:48 guestfsd.sock
libguestfs: launch libvirt guest


SLOF\x1b[0m\x1b[?25l **********************************************************************
\x1b[1mQEMU Starting
\x1b[0m Build Date = Apr 28 2020 01:44:26
 FW Version = mockbuild@ release 20191022
 Press "s" to enter Open Firmware.

Populating /vdevice methods
Populating /vdevice/vty@30000000
Populating /vdevice/nvram@71000000
Populating /pci@800000020000000
                     00 0800 (D) : 1af4 1004    virtio [ scsi ]
Populating /pci@800000020000000/scsi@1
       SCSI: Looking for devices
          100000000000000 DISK     : "QEMU     QEMU HARDDISK    2.5+"
          101000000000000 DISK     : "QEMU     QEMU HARDDISK    2.5+"
                     00 1000 (D) : 1af4 1003    virtio [ serial ]
                     00 1800 (D) : 1af4 1005    legacy-device*
No NVRAM common partition, re-initializing...
Scanning USB 
Using default console: /vdevice/vty@30000000
Detected RAM kernel at 400000 (2668e10 bytes) 
     
  Welcome to Open Firmware

  Copyright (c) 2004, 2017 IBM Corporation All rights reserved.
  This program and the accompanying materials are made available
  under the terms of the BSD License available at
  http://www.opensource.org/licenses/bsd-license.php

Booting from memory...
OF stdout device is: /vdevice/vty@30000000
Preparing to boot Linux version 4.18.0-240.2.el8.dt3.ppc64le (mockbuild.eng.bos.redhat.com) (gcc version 8.4.1 20200928 (Red Hat 8.4.1-1) (GCC)) #1 SMP Sun Oct 11 14:39:58 EDT 2020
...
===== TEST FINISHED OK =====

Comment 23 David Gibson 2020-10-19 11:24:52 UTC
Great, thanks.

