Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 2032472

Summary: On s390x, the VM is shut off immediately
Product: Red Hat Enterprise Linux 9
Reporter: YunmingYang <yunyang>
Component: virt-manager
Sub component: Common
Assignee: Virtualization Maintenance <virt-maint>
QA Contact: virt-qe-z
Status: CLOSED UPSTREAM
Severity: low
Priority: low
CC: crobinso, hongzliu, jjongsma, jsuchane, juzhou, kkoukiou, mpitt, skobyda, smitterl, thuth, tyan, tzheng, virt-maint, xchen
Version: 9.0
Keywords: Reopened, Triaged
Target Milestone: rc
Flags: pm-rhel: mirror+
Target Release: ---
Hardware: s390x
OS: Linux
Doc Type: If docs needed, set a value
Last Closed: 2023-04-04 15:08:15 UTC
Type: Bug

Description YunmingYang 2021-12-14 14:41:47 UTC
Description of problem:
On s390x, if you create a VM with "virt-install --connect qemu:///system --name subVmTest1 --os-variant fedora31 --boot hd,network --vcpus 2 --memory 2048 --import --pxe --network type=direct,source=enc600 --graphics vnc,listen=127.0.0.1 --console pty,target.type=virtio --noautoconsole" and then check its state with "virsh list --all", the VM's state is "shut off".


Version-Release number of selected components (if applicable):
virt-manager-common-3.2.0-8.el9.noarch
libvirt-libs-7.9.0-1.el9.s390x
libvirt-glib-4.0.0-3.el9.s390x
libvirt-dbus-1.4.1-5.el9.s390x
python3-libvirt-7.9.0-1.el9.s390x
libvirt-client-7.9.0-1.el9.s390x
libvirt-daemon-7.9.0-1.el9.s390x
libvirt-daemon-driver-storage-core-7.9.0-1.el9.s390x
libvirt-daemon-driver-storage-disk-7.9.0-1.el9.s390x
libvirt-daemon-driver-storage-iscsi-7.9.0-1.el9.s390x
libvirt-daemon-driver-storage-logical-7.9.0-1.el9.s390x
libvirt-daemon-driver-storage-mpath-7.9.0-1.el9.s390x
libvirt-daemon-driver-storage-scsi-7.9.0-1.el9.s390x
libvirt-daemon-driver-interface-7.9.0-1.el9.s390x
libvirt-daemon-driver-nodedev-7.9.0-1.el9.s390x
libvirt-daemon-driver-nwfilter-7.9.0-1.el9.s390x
libvirt-daemon-driver-qemu-7.9.0-1.el9.s390x
libvirt-daemon-driver-secret-7.9.0-1.el9.s390x
libvirt-daemon-driver-network-7.9.0-1.el9.s390x
libvirt-daemon-driver-storage-rbd-7.9.0-1.el9.s390x
libvirt-daemon-driver-storage-7.9.0-1.el9.s390x
libvirt-daemon-kvm-7.9.0-1.el9.s390x
libvirt-daemon-config-network-7.9.0-1.el9.s390x
qemu-kvm-common-6.1.0-7.el9.s390x
qemu-img-6.1.0-7.el9.s390x
qemu-kvm-audio-pa-6.1.0-7.el9.s390x
qemu-pr-helper-6.1.0-7.el9.s390x
qemu-virtiofsd-6.1.0-7.el9.s390x
qemu-kvm-tools-6.1.0-7.el9.s390x
qemu-kvm-docs-6.1.0-7.el9.s390x
qemu-kvm-core-6.1.0-7.el9.s390x
qemu-kvm-block-rbd-6.1.0-7.el9.s390x
qemu-kvm-6.1.0-7.el9.s390x


How reproducible:
100%


Steps to Reproduce:
1 Create a VM by "virt-install --connect qemu:///system --name subVmTest1 --os-variant fedora31 --boot hd,network --vcpus 2 --memory 2048 --import --pxe --network type=direct,source=enc600 --graphics vnc,listen=127.0.0.1 --console pty,target.type=virtio --noautoconsole"
2 Check the state of the VM by "virsh list --all"


Actual results:
1 After step 2, the VM is shutoff


Expected results:
1 After step 2, the VM should be running


Additional info:

Comment 1 Thomas Huth 2021-12-15 09:15:29 UTC
The guest firmware on s390x shuts down the VM if it cannot boot, so this is likely expected behavior: your guest probably failed to boot.

I see a couple of problems with your command line:

1) It looks like you're trying to boot your guest via the network, since you're using --pxe? In that case, please use "--boot network" instead of "--boot hd,network". The s390x firmware can only boot from one device; there is no boot order as on x86. See https://www.qemu.org/docs/master/system/s390x/bootdevices.html for more information. Thus, with "--boot hd,network", your guest will *only* try to boot from the hard disk and never from the network. Since the hard disk is not installed yet, booting fails and the VM shuts down.

2) s390x does not use graphics by default, and it uses the SCLP console instead of virtio-console. I recommend using "--nographics --console pty,target_type=sclp --wait -1 --debug" if you want to see what's going on in the guest.
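Putting the two suggestions together, a corrected invocation might look like the following sketch (the name subVmTest1 and the device enc600 come from the reporter's original command; the command is built as a string and printed rather than executed, so the flags can be reviewed first):

```shell
# Sketch of a corrected invocation, combining both suggestions above.
# subVmTest1 and enc600 come from the original report; adjust for your host.
#   --boot network: the s390x firmware boots exactly one device (no boot order)
#   --nographics:   the s390x firmware has no graphical console
#   --console pty,target_type=sclp: SCLP is the native s390x console
# Printed rather than executed here so the flags can be reviewed first.
cmd="virt-install --connect qemu:///system --name subVmTest1 \
  --os-variant fedora31 --boot network --vcpus 2 --memory 2048 \
  --pxe --network type=direct,source=enc600 \
  --nographics --console pty,target_type=sclp --wait -1 --debug"
echo "$cmd"
```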

Does that help?

Comment 2 Thomas Huth 2021-12-15 09:46:10 UTC
Anyway, I wonder whether virt-install should be tweaked to stop with a proper error message if the "--boot" parameter contains more than one device on s390x?

Comment 3 YunmingYang 2021-12-15 12:06:43 UTC
Yes, that helps a lot, thanks. But I still need to confirm one thing. After modifying the command to "virt-install --connect qemu:///system --name subVmTest1 --os-variant fedora31 --boot network --vcpus 2 --memory 2048 --pxe --network type=direct,source=enc600 --console pty,target_type=sclp --noautoconsole", I could attach to the console with "virsh console subVmTest1" and saw the VM trying to get a response via TFTP; it was then shut off after some failed tries. But if I use "--graphics vnc,listen=127.0.0.1 --wait -1 --debug" instead of "--noautoconsole", there are no boot logs at all, and the VM seems to be restarted directly after "creation completed". Is this the behavior s390x expects? It looks fine when I create a VM with a disk and graphics.

Comment 4 Thomas Huth 2021-12-15 12:51:32 UTC
(In reply to YunmingYang from comment #3)
> [...] But if I use "--graphics vnc,listen=127.0.0.1 --wait -1 --debug" instead of
> "--noautoconsole", there are no boot logs at all, and the VM seems to be restarted
> directly after "creation completed". Is this the behavior s390x expects?
> It looks fine when I create a VM with a disk and graphics.

The guest firmware on s390x does not support the graphics console, so it is expected that you don't see the TFTP messages there.

Comment 5 YunmingYang 2021-12-17 11:47:05 UTC
Many thanks. I have now filed an issue against cockpit-machines (https://bugzilla.redhat.com/show_bug.cgi?id=2033595), since cockpit-machines adds a graphics console by default on s390x.

Comment 6 Thomas Huth 2021-12-21 11:12:59 UTC
@crobinso , @jjongsma : Do you think it would be feasible to limit the "--boot" parameter to just one device on s390x (i.e. to bail out with an error if the user tried to specify multiple boot devices)?
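For illustration, the kind of check being discussed might look like the following sketch (the validate_boot wrapper and its arguments are invented here; this is not virt-install's or libvirt's actual code):

```shell
# Hypothetical sketch of the proposed check: reject a multi-device --boot
# value when the target architecture is s390x, accept it everywhere else.
# validate_boot is an invented name for illustration only.
validate_boot() {
    arch=$1
    boot=$2
    case "$arch" in
        s390x)
            case "$boot" in
                *,*)
                    echo "error: s390x firmware boots exactly one device; got --boot $boot" >&2
                    return 1 ;;
            esac ;;
    esac
    return 0
}

validate_boot s390x hd,network || echo "rejected: hd,network"    # multi-device on s390x fails
validate_boot s390x network && echo "accepted: network"          # single device is fine
validate_boot x86_64 hd,network && echo "accepted on x86_64"     # other arches unaffected
```

A real implementation would live in virt-install's or libvirt's own validation code; the trade-offs of adding such a check there are discussed below.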

Comment 7 Cole Robinson 2022-01-10 15:38:01 UTC
@thuth If multiple boot devices will never work on s390x, then we could add a validation check to libvirt. But it would break a lot of virt-install usage: we always append the 'hd' boot option if the VM has a disk, which is needed on x86 to get the 'boot from disk' PXE option to work, plus some other corner cases. We would need to make that conditional on s390x and coordinate package updates. I'm not sure it's worth the effort, TBH.

Comment 8 Thomas Huth 2022-01-13 16:52:06 UTC
(In reply to Cole Robinson from comment #7)
> @thuth If multiple boot devices will never work on s390x, then we could add a
> validation check to libvirt. But it would break a lot of virt-install usage:
> we always append the 'hd' boot option if the VM has a disk, which is needed
> on x86 to get the 'boot from disk' PXE option to work, plus some other corner
> cases. We would need to make that conditional on s390x and coordinate package
> updates. I'm not sure it's worth the effort, TBH.

Ok, true, if it breaks the workflow of people, that's certainly a bad idea.

What about printing a warning instead? Or at least adding some words about this issue to the documentation of the --boot parameter?

Comment 9 Cole Robinson 2022-01-14 17:51:39 UTC
(In reply to Thomas Huth from comment #8)
> 
> What about printing a warning instead? Or at least adding some words about
> this issue to the documentation of the --boot parameter ?

Is this a common pitfall on s390x?
Is this fundamental to s390x and will never change, or is this something that could be fixed in the future?
Does it affect all s390x configs or only certain scenarios?

Generally, we only put runtime warnings or man page warnings in virt-install if the problem is pretty common and has high impact. Otherwise it's a long-term maintenance pain for not a lot of gain, and it encourages a slippery slope of adding yet more warnings.

If the limitation is unlikely to ever change and is relevant for all arch == s390x, then that reduces the long-term maintenance, but I'm guessing this is not a common issue.

A starting place could be documenting this in the libvirt boot docs; I don't see any mention there.

Comment 10 Thomas Huth 2022-01-26 12:23:05 UTC
(In reply to Cole Robinson from comment #9)
> (In reply to Thomas Huth from comment #8)
> > 
> > What about printing a warning instead? Or at least adding some words about
> > this issue to the documentation of the --boot parameter ?
> 
> Is this a common pitfall on s390x?

Yes. People keep running into this issue, also at the QEMU-only level, which is why I already added some description to the QEMU documentation:

https://www.qemu.org/docs/master/system/s390x/bootdevices.html

> Is this fundamental to s390x and will never change, or is this something
> that could be fixed in the future?

It's likely never going to change, at least not in the near future, since the standard interfaces between hypervisor and guest are only defined for one boot device.

> Does it affect all s390x configs or only certain scenarios?

It's for all devices.

> Generally we only put runtime warnings or man page warnings in virt-install
> if the problem is pretty common and have high impact. Otherwise its a long
> term maintenance pain for not a lot of gain, and it encourages a slippery
> slope of adding yet more warnings.
> 
> If the limitation is unlikely to ever change and is relevant for all arch ==
> s390x then that reduces long term maintenance, but I'm guessing this is not
> a common issue.
> 
> A starting place could be documenting this in libvirt boot docs, I don't see
> any mention there

I also think that we should improve documentation at all layers... I'll try to have a closer look when time permits...

Comment 11 Thomas Huth 2023-01-31 13:16:37 UTC
(In reply to Cole Robinson from comment #9)
> A starting place could be documenting this in libvirt boot docs, I don't see
> any mention there

The libvirt docs mention it in the https://libvirt.org/formatdomain.html#hard-drives-floppy-disks-cdroms section (scroll down to the description of the "boot" tag).

For virt-install, I now opened a request here:

https://github.com/virt-manager/virt-manager/pull/489

Comment 16 Thomas Huth 2023-04-04 15:08:15 UTC
OK, closing now since the change has been accepted upstream. We'll get it downstream via a rebase in a future release.