Bug 1779656 - Add gating test that runs libguestfs-test-tool
Summary: Add gating test that runs libguestfs-test-tool
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux Advanced Virtualization
Classification: Red Hat
Component: libguestfs
Version: 8.4
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: 8.0
Assignee: Richard W.M. Jones
QA Contact: YongkuiGuo
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-12-04 13:07 UTC by Richard W.M. Jones
Modified: 2020-11-17 17:46 UTC
CC List: 6 users

Fixed In Version: virt-8.3-8030020200526090020-30b713e6
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-11-17 17:46:15 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments
0001-gating-tests-Run-libguestfs-test-tool.patch (1.19 KB, patch)
2020-01-28 13:16 UTC, Richard W.M. Jones
no flags

Description Richard W.M. Jones 2019-12-04 13:07:07 UTC
Description of problem:

We keep building broken kernels and qemus which cannot boot one
on the other.  Unfortunately this frequently manifests itself
in the basic libguestfs %check section, where it tries to boot
the current kernel on the current qemu.  Doubly unfortunately, if
this fails it blocks the entire module build.

The right way to do this would be to have a gating test in the
Virt module which runs libguestfs-test-tool.  We can then
remove the %check section, but still ensure we are not shipping
a broken Virt module (since the whole module would still be
gated if libguestfs doesn't work because of the broken kernels
and qemus).
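
For context, the boot smoke test in %check essentially boils down to launching the appliance once. A minimal sketch of what that amounts to (not the literal spec file contents; the timeout value is just an example, using the -t option shown later in this bug):

  # Boot the appliance once; a non-zero exit (or a run killed by the
  # timeout) indicates the current kernel/qemu combination is broken.
  libguestfs-test-tool -t 600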

Version-Release number of selected component (if applicable):

RHEL AV 8.1.0

How reproducible:

Quite often.

eg:
http://download.eng.bos.redhat.com/brewroot/work/tasks/8191/25138191/build.log

Comment 1 Richard W.M. Jones 2019-12-04 13:16:56 UTC
I think we should add several tests actually:

libguestfs-test-tool
LIBGUESTFS_BACKEND=direct libguestfs-test-tool
LIBGUESTFS_BACKEND_SETTINGS=force_tcg libguestfs-test-tool

These should run before any other libguestfs tests (maybe before any other
Virt module tests).
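
A minimal sketch of how these three variants might be chained in a single gating script (illustrative only; the real test layout in the Virt module may differ):

  #!/bin/sh
  # Run libguestfs-test-tool under each proposed configuration; any
  # failure aborts the script, so the module build would be gated.
  set -e

  # 1. Default backend (libvirt on RHEL).
  libguestfs-test-tool

  # 2. Direct qemu backend, bypassing libvirt.
  LIBGUESTFS_BACKEND=direct libguestfs-test-tool

  # 3. Force TCG (software emulation) instead of KVM.
  LIBGUESTFS_BACKEND_SETTINGS=force_tcg libguestfs-test-tool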

Comment 2 Richard W.M. Jones 2019-12-04 13:17:17 UTC
Perhaps root vs non-root too?

Comment 3 Danilo de Paula 2019-12-04 15:46:43 UTC
Please, do it.

Remember that not every test needs to be a gating test, especially at the beginning.
@Yash, can you instruct rjones how to do this?

Comment 4 Richard W.M. Jones 2019-12-05 08:50:10 UTC
I realized last night that the hang we saw in the nbdkit tests
yesterday is likely to be the same thing as the problem observed in
the libguestfs tests.  nbdkit uses libguestfs for some tests.  The
libguestfs %check section includes an alarm signal to stop the test
from hanging, but regular use of libguestfs will hang - hence the
nbdkit hang.

I'll note this is still evidence that the kernel doesn't boot in qemu, and
has nothing to do with libguestfs.

Anyway my point is that even removing the libguestfs %check section
won't necessarily make module builds any easier.

But, I think adding gating tests is still a good idea because it
allows us to test kernel on qemu in a number of different scenarios,
notably using both KVM and TCG.

I would really like to talk to Yash about the proposed tests because
the yaml syntax is awkward and confusing and I don't know any way to
run the tests before pushing them.
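
As an aside on the alarm/hang point above: outside of %check, a caller can impose the same kind of bound with timeout(1). A sketch (the 600-second value is just an example):

  # Kill the run after 600 seconds so a kernel/qemu boot hang becomes a
  # visible failure instead of an indefinite hang; timeout(1) exits with
  # status 124 when it has to kill the command.
  timeout 600 libguestfs-test-tool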

Comment 5 YongkuiGuo 2019-12-17 12:07:03 UTC
The test case that runs libguestfs-test-tool (libvirt mode and direct mode) has been included in the libguestfs tier1 gating test (x86_64). We will modify the case and add the force_tcg mode to the gating test as soon as possible.

Comment 6 Pino Toscano 2019-12-17 12:18:39 UTC
(In reply to YongkuiGuo from comment #5)
> The test case for running libguestfs-test-tool(libvirt mode and direct mode)
> has been included in libguestfs tier1 gating test(x86_64). We will modify
> the case and add the force_tcg mode into the gating test asap.

Thanks!

OTOH, running libguestfs-test-tool as part of %check is a very good thing, and it ought not to be removed.

Comment 7 Yash Mankad 2019-12-18 20:32:15 UTC
(In reply to Richard W.M. Jones from comment #4)
> 
> I would really like to talk to Yash about the proposed tests because
> the yaml syntax is awkward and confusing and I don't know any way to
> run the tests before pushing them.

Sorry for the delay.

So, YongKui seems to have added the tests, but I will work with him to
make sure we are gating on them in the forthcoming builds.

Thanks for making the addition Rich!

@YongKui - do you need any help in merging a PR for the update? Or has it
already been done?

Thanks.

Comment 8 YongkuiGuo 2019-12-19 06:48:14 UTC
Hi Yash,
There is no need to create a PR; we just need to modify the current test case (add force_tcg mode).

@Yash, @Rjones, @Pino
However, running libguestfs-test-tool in force_tcg mode sometimes failed on the OpenStack platform in the recent gating test (Build 5296). I have tried 3 instances and run the command manually; it still failed from time to time. It's easy to reproduce this issue, but not 100% of the time. There is no problem on a Beaker bare-metal machine.


# LIBGUESTFS_BACKEND_SETTINGS=force_tcg libguestfs-test-tool
     ************************************************************
     *                    IMPORTANT NOTICE
     *
     * When reporting bugs, include the COMPLETE, UNEDITED
     * output below in your bug report.
     *
     ************************************************************
LIBGUESTFS_BACKEND_SETTINGS=force_tcg
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin
SELinux: Enforcing
guestfs_get_append: (null)
guestfs_get_autosync: 1
guestfs_get_backend: libvirt
guestfs_get_backend_settings: [force_tcg]
guestfs_get_cachedir: /var/tmp
guestfs_get_hv: /usr/libexec/qemu-kvm
guestfs_get_memsize: 500
guestfs_get_network: 0
guestfs_get_path: /usr/lib64/guestfs
guestfs_get_pgroup: 0
guestfs_get_program: libguestfs-test-tool
guestfs_get_recovery_proc: 1
guestfs_get_smp: 1
guestfs_get_sockdir: /tmp
guestfs_get_tmpdir: /tmp
guestfs_get_trace: 0
guestfs_get_verbose: 1
host_cpu: x86_64
Launching appliance, timeout set to 600 seconds.
libguestfs: launch: program=libguestfs-test-tool
libguestfs: launch: version=1.38.4rhel=8,release=15.module+el8.2.0+5296+deb8203b,libvirt
libguestfs: launch: backend registered: unix
libguestfs: launch: backend registered: uml
libguestfs: launch: backend registered: libvirt
libguestfs: launch: backend registered: direct
libguestfs: launch: backend=libvirt
libguestfs: launch: tmpdir=/tmp/libguestfsywPFwG
libguestfs: launch: umask=0022
libguestfs: launch: euid=0
libguestfs: libvirt version = 4005000 (4.5.0)
libguestfs: guest random name = guestfs-6wquu66pya8mz2n7
libguestfs: connect to libvirt
libguestfs: opening libvirt handle: URI = qemu:///system, auth = default+wrapper, flags = 0
libguestfs: successfully opened libvirt handle: conn = 0x55baaa9c56e0
libguestfs: qemu version (reported by libvirt) = 2012000 (2.12.0)
libguestfs: get libvirt capabilities
libguestfs: parsing capabilities XML
libguestfs: build appliance
libguestfs: begin building supermin appliance
libguestfs: run supermin
libguestfs: command: run: /usr/bin/supermin
libguestfs: command: run: \ --build
libguestfs: command: run: \ --verbose
libguestfs: command: run: \ --if-newer
libguestfs: command: run: \ --lock /var/tmp/.guestfs-0/lock
libguestfs: command: run: \ --copy-kernel
libguestfs: command: run: \ -f ext2
libguestfs: command: run: \ --host-cpu x86_64
libguestfs: command: run: \ /usr/lib64/guestfs/supermin.d
libguestfs: command: run: \ -o /var/tmp/.guestfs-0/appliance.d
supermin: version: 5.1.19
supermin: rpm: detected RPM version 4.14
supermin: package handler: fedora/rpm
supermin: acquiring lock on /var/tmp/.guestfs-0/lock
supermin: if-newer: output does not need rebuilding
libguestfs: finished building supermin appliance
libguestfs: command: run: qemu-img
libguestfs: command: run: \ create
libguestfs: command: run: \ -f qcow2
libguestfs: command: run: \ -o backing_file=/var/tmp/.guestfs-0/appliance.d/root,backing_fmt=raw
libguestfs: command: run: \ /tmp/libguestfsywPFwG/overlay2.qcow2
Formatting '/tmp/libguestfsywPFwG/overlay2.qcow2', fmt=qcow2 size=4294967296 backing_file=/var/tmp/.guestfs-0/appliance.d/root backing_fmt=raw cluster_size=65536 lazy_refcounts=off refcount_bits=16
libguestfs: create libvirt XML
libguestfs: command: run: dmesg | grep -Eoh 'lpj=[[:digit:]]+'
libguestfs: read_lpj_from_dmesg: external command exited with error status 1
libguestfs: read_lpj_from_files: no boot messages files are readable
libguestfs: libvirt XML:\n<?xml version="1.0"?>\n<domain type="qemu" xmlns:qemu="http://libvirt.org/schemas/domain/qemu/1.0">\n  <name>guestfs-6wquu66pya8mz2n7</name>\n  <memory unit="MiB">500</memory>\n  <currentMemory unit="MiB">500</currentMemory>\n  <vcpu>1</vcpu>\n  <clock offset="utc">\n    <timer name="rtc" tickpolicy="catchup"/>\n    <timer name="pit" tickpolicy="delay"/>\n    <timer name="hpet" present="no"/>\n  </clock>\n  <os>\n    <type>hvm</type>\n    <kernel>/var/tmp/.guestfs-0/appliance.d/kernel</kernel>\n    <initrd>/var/tmp/.guestfs-0/appliance.d/initrd</initrd>\n    <cmdline>panic=1 console=ttyS0 edd=off udevtimeout=6000 udev.event-timeout=6000 no_timer_check printk.time=1 cgroup_disable=memory usbcore.nousb cryptomgr.notests tsc=reliable 8250.nr_uarts=1 root=/dev/sdb selinux=0 guestfs_verbose=1 TERM=xterm-256color</cmdline>\n    <bios useserial="yes"/>\n  </os>\n  <on_reboot>destroy</on_reboot>\n  <devices>\n    <rng model="virtio">\n      <backend model="random">/dev/urandom</backend>\n    </rng>\n    <controller type="scsi" index="0" model="virtio-scsi"/>\n    <disk device="disk" type="file">\n      <source file="/tmp/libguestfsywPFwG/scratch1.img"/>\n      <target dev="sda" bus="scsi"/>\n      <driver name="qemu" type="raw" cache="unsafe"/>\n      <address type="drive" controller="0" bus="0" target="0" unit="0"/>\n    </disk>\n    <disk type="file" device="disk">\n      <source file="/tmp/libguestfsywPFwG/overlay2.qcow2"/>\n      <target dev="sdb" bus="scsi"/>\n      <driver name="qemu" type="qcow2" cache="unsafe"/>\n      <address type="drive" controller="0" bus="0" target="1" unit="0"/>\n    </disk>\n    <serial type="unix">\n      <source mode="connect" path="/tmp/libguestfsv8IuFR/console.sock"/>\n      <target port="0"/>\n    </serial>\n    <channel type="unix">\n      <source mode="connect" path="/tmp/libguestfsv8IuFR/guestfsd.sock"/>\n      <target type="virtio" name="org.libguestfs.channel.0"/>\n    </channel>\n    <controller type="usb" model="none"/>\n    <memballoon model="none"/>\n  </devices>\n  <qemu:commandline>\n    <qemu:env name="TMPDIR" value="/var/tmp"/>\n  </qemu:commandline>\n</domain>\n
libguestfs: command: run: ls
libguestfs: command: run: \ -a
libguestfs: command: run: \ -l
libguestfs: command: run: \ -R
libguestfs: command: run: \ -Z /var/tmp/.guestfs-0
libguestfs: /var/tmp/.guestfs-0:
libguestfs: total 164
libguestfs: drwxr-xr-x. 3 root root unconfined_u:object_r:user_tmp_t:s0    194 Dec 19 01:08 .
libguestfs: drwxrwxrwt. 5 root root system_u:object_r:tmp_t:s0             121 Dec 19 01:08 ..
libguestfs: drwxr-xr-x. 2 root root unconfined_u:object_r:user_tmp_t:s0     46 Dec 18 22:52 appliance.d
libguestfs: -rw-r--r--. 1 root root unconfined_u:object_r:user_tmp_t:s0      0 Dec 18 22:51 lock
libguestfs: -rw-r--r--. 1 root root unconfined_u:object_r:user_tmp_t:s0   6313 Dec 18 22:53 qemu-13191528-1576080514.devices
libguestfs: -rw-r--r--. 1 root root unconfined_u:object_r:user_tmp_t:s0  26176 Dec 18 22:53 qemu-13191528-1576080514.help
libguestfs: -rw-r--r--. 1 root root unconfined_u:object_r:user_tmp_t:s0 123635 Dec 18 22:53 qemu-13191528-1576080514.qmp-schema
libguestfs: -rw-r--r--. 1 root root unconfined_u:object_r:user_tmp_t:s0     44 Dec 18 22:53 qemu-13191528-1576080514.stat
libguestfs: 
libguestfs: /var/tmp/.guestfs-0/appliance.d:
libguestfs: total 385644
libguestfs: drwxr-xr-x. 2 root root unconfined_u:object_r:user_tmp_t:s0         46 Dec 18 22:52 .
libguestfs: drwxr-xr-x. 3 root root unconfined_u:object_r:user_tmp_t:s0        194 Dec 19 01:08 ..
libguestfs: -rw-r--r--. 1 qemu qemu system_u:object_r:virt_content_t:s0    4594176 Dec 19 01:08 initrd
libguestfs: -rwxr-xr-x. 1 qemu qemu system_u:object_r:virt_content_t:s0    8250000 Dec 19 01:08 kernel
libguestfs: -rw-r--r--. 1 qemu qemu system_u:object_r:virt_content_t:s0 4294967296 Dec 19 01:08 root
libguestfs: command: run: ls
libguestfs: command: run: \ -a
libguestfs: command: run: \ -l
libguestfs: command: run: \ -Z /tmp/libguestfsv8IuFR
libguestfs: total 4
libguestfs: drwxr-xr-x.  2 root root unconfined_u:object_r:user_tmp_t:s0   47 Dec 19 01:08 .
libguestfs: drwxrwxrwt. 13 root root system_u:object_r:tmp_t:s0          4096 Dec 19 01:08 ..
libguestfs: srw-rw----.  1 root qemu unconfined_u:object_r:user_tmp_t:s0    0 Dec 19 01:08 console.sock
libguestfs: srw-rw----.  1 root qemu unconfined_u:object_r:user_tmp_t:s0    0 Dec 19 01:08 guestfsd.sock
libguestfs: launch libvirt guest
libguestfs: responding to serial console Device Status Report
\x1b[1;256r\x1b[256;256H\x1b[6n
Google, Inc.
Serial Graphics Adapter 08/26/19
SGABIOS $Id$ (mockbuild@) Mon Aug 26 10:27:06 UTC 2019
Term: 80x24
4 0
SeaBIOS (version 1.11.1-4.module+el8.1.0+4066+0f1aadab)
Machine UUID 52ab6375-5f5a-495e-a262-c89f8ae931ed
Booting from ROM...
\x1b[2JAlarm clock    ----  stuck 


So I have to remove the force_tcg mode and keep only the libvirt and direct modes. I guess the issue is related to the OpenStack environment.

Comment 9 Richard W.M. Jones 2019-12-19 08:48:38 UTC
This is just caused by a timeout, probably because it's using nested virt.

Try adding -t 6000 to the libguestfs-test-tool command, ie:
  LIBGUESTFS_BACKEND_SETTINGS=force_tcg libguestfs-test-tool -t 6000

(You can try even larger values if it still times out, but 6000 seconds = 100
minutes so that should be enough)
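
To check whether the environment is in fact nested (i.e. whether hardware virtualization is available to the instance at all), a few standard commands could be run on the OpenStack guest. This is a diagnostic suggestion, not part of the gating test:

  # Does the guest CPU advertise VMX/SVM (hardware virt) at all?
  egrep -c '(vmx|svm)' /proc/cpuinfo

  # Is /dev/kvm present and accessible to qemu?
  ls -l /dev/kvm

  # libvirt's own host sanity check.
  virt-host-validate qemu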

Comment 10 YongkuiGuo 2019-12-19 10:49:35 UTC
(In reply to Richard W.M. Jones from comment #9)
> This is just caused by a timeout, probably because it's using nested virt.
> 
> Try adding -t 6000 to the libguestfs-test-tool command, ie:
>   LIBGUESTFS_BACKEND_SETTINGS=force_tcg libguestfs-test-tool -t 6000
> 
> (You can try even larger values if it still times out, but 6000 seconds = 100
> minutes so that should be enough)

It doesn't work even if the '-t 6000' option is added to the libguestfs-test-tool command.

Comment 11 Richard W.M. Jones 2019-12-19 11:29:40 UTC
To my mind this indicates a real and potentially serious bug.  Please don't
remove the test - we need to investigate TCG failures.  Does it only happen
on x86_64?  Do you have a test log with the -t option added?

Comment 12 YongkuiGuo 2019-12-20 02:35:55 UTC
It only happens in OpenStack instances with x86_64. There is no arch other than x86_64 on the OpenStack platform.

# LIBGUESTFS_BACKEND_SETTINGS=force_tcg libguestfs-test-tool -t 6000
     ************************************************************
     *                    IMPORTANT NOTICE
     *
     * When reporting bugs, include the COMPLETE, UNEDITED
     * output below in your bug report.
     *
     ************************************************************
LIBGUESTFS_BACKEND_SETTINGS=force_tcg
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin
SELinux: Enforcing
guestfs_get_append: (null)
guestfs_get_autosync: 1
guestfs_get_backend: libvirt
guestfs_get_backend_settings: [force_tcg]
guestfs_get_cachedir: /var/tmp
guestfs_get_hv: /usr/libexec/qemu-kvm
guestfs_get_memsize: 500
guestfs_get_network: 0
guestfs_get_path: /usr/lib64/guestfs
guestfs_get_pgroup: 0
guestfs_get_program: libguestfs-test-tool
guestfs_get_recovery_proc: 1
guestfs_get_smp: 1
guestfs_get_sockdir: /tmp
guestfs_get_tmpdir: /tmp
guestfs_get_trace: 0
guestfs_get_verbose: 1
host_cpu: x86_64
Launching appliance, timeout set to 6000 seconds.
libguestfs: launch: program=libguestfs-test-tool
libguestfs: launch: version=1.38.4rhel=8,release=15.module+el8.2.0+5296+deb8203b,libvirt
libguestfs: launch: backend registered: unix
libguestfs: launch: backend registered: uml
libguestfs: launch: backend registered: libvirt
libguestfs: launch: backend registered: direct
libguestfs: launch: backend=libvirt
libguestfs: launch: tmpdir=/tmp/libguestfsUTHvoK
libguestfs: launch: umask=0022
libguestfs: launch: euid=0
libguestfs: libvirt version = 4005000 (4.5.0)
libguestfs: guest random name = guestfs-eg3gmq017xq292hk
libguestfs: connect to libvirt
libguestfs: opening libvirt handle: URI = qemu:///system, auth = default+wrapper, flags = 0
libguestfs: successfully opened libvirt handle: conn = 0x5638db72f6e0
libguestfs: qemu version (reported by libvirt) = 2012000 (2.12.0)
libguestfs: get libvirt capabilities
libguestfs: parsing capabilities XML
libguestfs: build appliance
libguestfs: begin building supermin appliance
libguestfs: run supermin
libguestfs: command: run: /usr/bin/supermin
libguestfs: command: run: \ --build
libguestfs: command: run: \ --verbose
libguestfs: command: run: \ --if-newer
libguestfs: command: run: \ --lock /var/tmp/.guestfs-0/lock
libguestfs: command: run: \ --copy-kernel
libguestfs: command: run: \ -f ext2
libguestfs: command: run: \ --host-cpu x86_64
libguestfs: command: run: \ /usr/lib64/guestfs/supermin.d
libguestfs: command: run: \ -o /var/tmp/.guestfs-0/appliance.d
supermin: version: 5.1.19
supermin: rpm: detected RPM version 4.14
supermin: package handler: fedora/rpm
supermin: acquiring lock on /var/tmp/.guestfs-0/lock
supermin: if-newer: output does not need rebuilding
libguestfs: finished building supermin appliance
libguestfs: command: run: qemu-img
libguestfs: command: run: \ create
libguestfs: command: run: \ -f qcow2
libguestfs: command: run: \ -o backing_file=/var/tmp/.guestfs-0/appliance.d/root,backing_fmt=raw
libguestfs: command: run: \ /tmp/libguestfsUTHvoK/overlay2.qcow2
Formatting '/tmp/libguestfsUTHvoK/overlay2.qcow2', fmt=qcow2 size=4294967296 backing_file=/var/tmp/.guestfs-0/appliance.d/root backing_fmt=raw cluster_size=65536 lazy_refcounts=off refcount_bits=16
libguestfs: create libvirt XML
libguestfs: command: run: dmesg | grep -Eoh 'lpj=[[:digit:]]+'
libguestfs: read_lpj_from_dmesg: external command exited with error status 1
libguestfs: read_lpj_from_files: no boot messages files are readable
libguestfs: libvirt XML:\n<?xml version="1.0"?>\n<domain type="qemu" xmlns:qemu="http://libvirt.org/schemas/domain/qemu/1.0">\n  <name>guestfs-eg3gmq017xq292hk</name>\n  <memory unit="MiB">500</memory>\n  <currentMemory unit="MiB">500</currentMemory>\n  <vcpu>1</vcpu>\n  <clock offset="utc">\n    <timer name="rtc" tickpolicy="catchup"/>\n    <timer name="pit" tickpolicy="delay"/>\n    <timer name="hpet" present="no"/>\n  </clock>\n  <os>\n    <type>hvm</type>\n    <kernel>/var/tmp/.guestfs-0/appliance.d/kernel</kernel>\n    <initrd>/var/tmp/.guestfs-0/appliance.d/initrd</initrd>\n    <cmdline>panic=1 console=ttyS0 edd=off udevtimeout=6000 udev.event-timeout=6000 no_timer_check printk.time=1 cgroup_disable=memory usbcore.nousb cryptomgr.notests tsc=reliable 8250.nr_uarts=1 root=/dev/sdb selinux=0 guestfs_verbose=1 TERM=xterm-256color</cmdline>\n    <bios useserial="yes"/>\n  </os>\n  <on_reboot>destroy</on_reboot>\n  <devices>\n    <rng model="virtio">\n      <backend model="random">/dev/urandom</backend>\n    </rng>\n    <controller type="scsi" index="0" model="virtio-scsi"/>\n    <disk device="disk" type="file">\n      <source file="/tmp/libguestfsUTHvoK/scratch1.img"/>\n      <target dev="sda" bus="scsi"/>\n      <driver name="qemu" type="raw" cache="unsafe"/>\n      <address type="drive" controller="0" bus="0" target="0" unit="0"/>\n    </disk>\n    <disk type="file" device="disk">\n      <source file="/tmp/libguestfsUTHvoK/overlay2.qcow2"/>\n      <target dev="sdb" bus="scsi"/>\n      <driver name="qemu" type="qcow2" cache="unsafe"/>\n      <address type="drive" controller="0" bus="0" target="1" unit="0"/>\n    </disk>\n    <serial type="unix">\n      <source mode="connect" path="/tmp/libguestfsoAmZKC/console.sock"/>\n      <target port="0"/>\n    </serial>\n    <channel type="unix">\n      <source mode="connect" path="/tmp/libguestfsoAmZKC/guestfsd.sock"/>\n      <target type="virtio" name="org.libguestfs.channel.0"/>\n    </channel>\n    <controller type="usb" model="none"/>\n    <memballoon model="none"/>\n  </devices>\n  <qemu:commandline>\n    <qemu:env name="TMPDIR" value="/var/tmp"/>\n  </qemu:commandline>\n</domain>\n
libguestfs: command: run: ls
libguestfs: command: run: \ -a
libguestfs: command: run: \ -l
libguestfs: command: run: \ -R
libguestfs: command: run: \ -Z /var/tmp/.guestfs-0
libguestfs: /var/tmp/.guestfs-0:
libguestfs: total 0
libguestfs: drwxr-xr-x. 3 root root unconfined_u:object_r:user_tmp_t:s0  37 Dec 19 04:16 .
libguestfs: drwxrwxrwt. 5 root root system_u:object_r:tmp_t:s0          121 Dec 19 04:16 ..
libguestfs: drwxr-xr-x. 2 root root unconfined_u:object_r:user_tmp_t:s0  46 Dec 19 03:56 appliance.d
libguestfs: -rw-r--r--. 1 root root unconfined_u:object_r:user_tmp_t:s0   0 Dec 19 03:56 lock
libguestfs: 
libguestfs: /var/tmp/.guestfs-0/appliance.d:
libguestfs: total 385644
libguestfs: drwxr-xr-x. 2 root root unconfined_u:object_r:user_tmp_t:s0         46 Dec 19 03:56 .
libguestfs: drwxr-xr-x. 3 root root unconfined_u:object_r:user_tmp_t:s0         37 Dec 19 04:16 ..
libguestfs: -rw-r--r--. 1 qemu qemu system_u:object_r:virt_content_t:s0    4594176 Dec 19 04:16 initrd
libguestfs: -rwxr-xr-x. 1 qemu qemu system_u:object_r:virt_content_t:s0    8250000 Dec 19 04:16 kernel
libguestfs: -rw-r--r--. 1 qemu qemu system_u:object_r:virt_content_t:s0 4294967296 Dec 19 04:16 root
libguestfs: command: run: ls
libguestfs: command: run: \ -a
libguestfs: command: run: \ -l
libguestfs: command: run: \ -Z /tmp/libguestfsoAmZKC
libguestfs: total 4
libguestfs: drwxr-xr-x.  2 root root unconfined_u:object_r:user_tmp_t:s0   47 Dec 19 04:16 .
libguestfs: drwxrwxrwt. 21 root root system_u:object_r:tmp_t:s0          4096 Dec 19 04:16 ..
libguestfs: srw-rw----.  1 root qemu unconfined_u:object_r:user_tmp_t:s0    0 Dec 19 04:16 console.sock
libguestfs: srw-rw----.  1 root qemu unconfined_u:object_r:user_tmp_t:s0    0 Dec 19 04:16 guestfsd.sock
libguestfs: launch libvirt guest
libguestfs: responding to serial console Device Status Report
\x1b[1;256r\x1b[256;256H\x1b[6n
Google, Inc.
Serial Graphics Adapter 08/26/19
SGABIOS $Id$ (mockbuild@) Mon Aug 26 10:27:06 UTC 2019
Term: 80x24
4 0
SeaBIOS (version 1.11.1-4.module+el8.1.0+4066+0f1aadab)
Machine UUID a1d29f35-42e5-48a8-ab49-0c5b5638ff04
Booting from ROM...
\x1b[2JAlarm clock

Comment 14 YongkuiGuo 2020-01-16 10:14:01 UTC
Yash, rjones,
Actually this issue is already tracked in JIRA PSI-12425. I tested it again and found that it only exists in the RHEL-8.2.0-20191206.3 compose. It cannot be reproduced in RHEL-8.2.0-20191219.0. Currently the command (LIBGUESTFS_BACKEND_SETTINGS=force_tcg libguestfs-test-tool) is not included in the gating test because I removed it when the issue happened, but I will now consider adding it back.

Comment 15 Richard W.M. Jones 2020-01-28 13:16:26 UTC
Created attachment 1656012 [details]
0001-gating-tests-Run-libguestfs-test-tool.patch

Proposed patch to the virt module to add these tests.

Comment 16 Richard W.M. Jones 2020-03-02 12:21:32 UTC
(In reply to YongkuiGuo from comment #12)
> It only happens in openstack instance with x86_64. There is no other arch
> except x86_64 on openstack platform.
> 
> # LIBGUESTFS_BACKEND_SETTINGS=force_tcg libguestfs-test-tool -t 6000
[...]
> SeaBIOS (version 1.11.1-4.module+el8.1.0+4066+0f1aadab)
> Machine UUID a1d29f35-42e5-48a8-ab49-0c5b5638ff04
> Booting from ROM...
> \x1b[2JAlarm clock

What happens here is that we set an alarm clock for 6000 seconds (100 minutes)
and then try to boot a small VM.  After 100 minutes it times out somewhere
inside SeaBIOS or very early in the Linux kernel.

IOW: Yes, this is really a bug.

Comment 17 YongkuiGuo 2020-03-03 06:20:36 UTC
(In reply to Richard W.M. Jones from comment #16)
> What happens here is that we set an alarm clock for 6000 seconds (100
> minutes)
> and then try to boot a small VM.  After 100 minutes it times out somewhere
> inside SeaBIOS or very early in the Linux kernel.
> 
> IOW: Yes, this is really a bug.

This issue only exists in the RHEL-8.2.0-20191206.3 compose (comment 14). I have added the force_tcg mode back into the gating test, and the timeout error has not occurred in the past month.

Comment 18 Richard W.M. Jones 2020-03-09 14:00:58 UTC
As this is just a gating test, move to 8.2.1.

Comment 19 Richard W.M. Jones 2020-05-26 09:06:51 UTC
I added this to the virt module in RHEL AV 8.3.0.

We'll have to see if the new test fails regularly, and if so either
examine and fix those failures or rework the test.

Note that the *real* fix here would be to add a gating test to both
the kernel and qemu so they didn't break our stuff ...

Comment 20 Danilo de Paula 2020-07-16 14:10:23 UTC
Richard, if this is fixed, would you mind including it in the errata?
https://errata.devel.redhat.com/errata/edit/53162

Comment 23 YongkuiGuo 2020-07-21 10:18:04 UTC
rjones, in the tier0 gating test results I saw that the libguestfs-test-tool commands (libvirt and force_tcg backends) failed because the libvirtd service was not running, so we need to start the libvirtd service before running libguestfs-test-tool. Please refer to this URL: https://dashboard.osci.redhat.com/#/artifact/redhat-module/aid/7379?focus=id:93ce28bc7be4-1

Comment 24 Richard W.M. Jones 2020-07-21 10:38:13 UTC
This is really a bug in libvirt actually.  I notice it frequently on
real machines.  Try installing libguestfs on a fresh install of RHEL
(which doesn't have libguestfs or libvirt installed already), and
libguestfs-test-tool will fail in the same way.

It's been a bug since .. forever.  Not sure what you want to do about it.
We could work around it by (re-)starting libvirtd or we could get libvirt
to fix it.
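
A minimal sketch of the work-around mentioned above, ensuring libvirtd is running before the test is invoked (where exactly this would live in the gating test is an assumption):

  # Start libvirtd if it is not already active, then run the test.
  systemctl is-active --quiet libvirtd || systemctl start libvirtd
  libguestfs-test-tool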

Comment 25 YongkuiGuo 2020-07-21 11:25:05 UTC
(In reply to Richard W.M. Jones from comment #24)
> This is really a bug in libvirt actually.  I notice it frequently on
> real machines.  Try installing libguestfs on a fresh install of RHEL
> (which doesn't have libguestfs or libvirt installed already), and
> libguestfs-test-tool will fail in the same way.
> 
> It's been a bug since .. forever.  Not sure what you want to do about it.
> We could work around it by (re-)starting libvirtd or we could get libvirt
> to fix it.

Ok, I got it. Thanks. I will verify this bug.

Comment 28 errata-xmlrpc 2020-11-17 17:46:15 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (virt:8.3 bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:5137

