Bug 2112296

Summary: virtio-blk: Can't boot fresh installation from used 512 cluster_size image under certain conditions
Product: Red Hat Enterprise Linux 8
Reporter: bfu <bfu>
Component: qemu-kvm
Assignee: Thomas Huth <thuth>
qemu-kvm sub component: virtio-blk,scsi
QA Contact: bfu <bfu>
Status: CLOSED ERRATA
Docs Contact:
Severity: medium
Priority: high
CC: cohuck, coli, dgilbert, hannsj_uhl, jinzhao, juzhang, knoel, lijin, mrezanin, pbonzini, ribarry, smitterl, stefanha, thuth, vgoyal, virt-maint, virt-qe-z, yiwei
Version: 8.7
Keywords: Regression, Triaged
Target Milestone: rc
Target Release: 8.7
Hardware: s390x
OS: Linux
Whiteboard:
Fixed In Version: qemu-kvm-6.2.0-19.module+el8.7.0+16358+eef3c6a2
Doc Type: No Doc Update
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2022-11-08 09:20:55 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description bfu 2022-07-29 09:24:01 UTC
Description of problem:
When using a qcow2 image with a 512k cluster_size and installing the guest with physical_block_size=4096 and logical_block_size=512, the guest cannot reboot after the installation finishes.

Version-Release number of selected component (if applicable):
kernel version: 4.18.0-409.el8.s390x
qemu version: qemu-img-6.2.0-17.module+el8.7.0+15924+b11d8c3f.s390x
libvirt version: libvirt-8.0.0-9.module+el8.7.0+15830+85788ab7.s390x

How reproducible:
100%

Steps to Reproduce:
1. create an image file that has a cluster size of 512k
qemu-img create -f qcow2 -o cluster_size=512k test.qcow2 20G

2. install guest into the created image file with
physical_block_size = 4096, logical_block_size = 512

3. reboot guest after installation finished
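The steps above can be sketched as a command sequence. This is illustrative only: the image path, memory/CPU sizing, and node names are assumptions, and the installer media options (CD-ROM, kickstart) from the actual test are omitted. The block-size mismatch is configured on the virtio-blk device, which on s390x is virtio-blk-ccw:

```shell
# Step 1: create the qcow2 image with a 512k cluster size (path is illustrative)
qemu-img create -f qcow2 -o cluster_size=512k test.qcow2 20G

# Step 2: start the guest with physical_block_size=4096 and logical_block_size=512
# on the virtio-blk device (RHEL ships the binary as /usr/libexec/qemu-kvm)
qemu-kvm -machine s390-ccw-virtio -m 4G -smp 2 -nographic \
    -blockdev driver=file,filename=test.qcow2,node-name=file0 \
    -blockdev driver=qcow2,file=file0,node-name=disk0 \
    -device virtio-blk-ccw,drive=disk0,physical_block_size=4096,logical_block_size=512,bootindex=1
```

Step 3 is then a normal `reboot` from inside the guest once the installation completes.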

Actual results:
guest cannot reboot
2022-07-28 07:30:29: Powering off.
2022-07-28 07:32:40: (Process terminated with status 0)
2022-07-28 07:32:44: LOADPARM=[        ]
2022-07-28 07:32:44: Using virtio-blk.
2022-07-28 07:32:44: Using SCSI scheme.
2022-07-28 07:32:44: 
2022-07-28 07:32:44: ! No zIPL magic in PT !
2022-07-28 07:45:24: (Process terminated with status 0)

Expected results:
guest reboots successfully after installation

Additional info:
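For reference, the block sizes the virtio-blk device advertises can be confirmed from inside the guest via sysfs. The device name vda is an assumption (the first virtio-blk disk typically appears as /dev/vda):

```shell
# Inside the guest: block sizes advertised by the installation disk.
# With the configuration above these should report 4096 and 512 respectively.
cat /sys/block/vda/queue/physical_block_size
cat /sys/block/vda/queue/logical_block_size
```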

Comment 3 bfu 2022-08-11 17:13:38 UTC
Test result:
JOB ID     : 72a1c81e13e7b75d3bcf1ee12ec7bd23433aad23
JOB LOG    : /root/avocado/job-results/job-2022-08-11T11.22-72a1c81/job.log
 (1/2) Host_RHEL.m8.u7.nographic.qcow2.virtio_blk.up.virtio_net.Guest.RHEL.8.7.0.s390x.io-github-autotest-qemu.unattended_install.cdrom.extra_cdrom_ks.default_install.aio_threads.s390-virtio: STARTED
 (1/2) Host_RHEL.m8.u7.nographic.qcow2.virtio_blk.up.virtio_net.Guest.RHEL.8.7.0.s390x.io-github-autotest-qemu.unattended_install.cdrom.extra_cdrom_ks.default_install.aio_threads.s390-virtio: PASS (775.23 s)
 (2/2) Host_RHEL.m8.u7.nographic.qcow2.virtio_blk.up.virtio_net.Guest.RHEL.8.7.0.s390x.io-github-autotest-qemu.check_block_size.4096_512.extra_cdrom_ks.s390-virtio: STARTED
 (2/2) Host_RHEL.m8.u7.nographic.qcow2.virtio_blk.up.virtio_net.Guest.RHEL.8.7.0.s390x.io-github-autotest-qemu.check_block_size.4096_512.extra_cdrom_ks.s390-virtio: PASS (809.91 s)
RESULTS    : PASS 2 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 0
JOB HTML   : /root/avocado/job-results/job-2022-08-11T11.22-72a1c81/results.html
JOB TIME   : 1589.68 s

Based on this test result, adding "verified: tested" to this bz.

Comment 6 bfu 2022-08-18 10:16:02 UTC
Test result:
JOB ID     : b46d5622d781594a03201628207d6065954236e3
JOB LOG    : /root/avocado/job-results/job-2022-08-18T05.23-b46d562/job.log
 (1/5) Host_RHEL.m8.u7.product_rhel.nographic.qcow2.virtio_blk.up.virtio_net.Guest.RHEL.8.7.0.s390x.io-github-autotest-qemu.unattended_install.cdrom.extra_cdrom_ks.default_install.aio_threads.s390-virtio: STARTED
 (1/5) Host_RHEL.m8.u7.product_rhel.nographic.qcow2.virtio_blk.up.virtio_net.Guest.RHEL.8.7.0.s390x.io-github-autotest-qemu.unattended_install.cdrom.extra_cdrom_ks.default_install.aio_threads.s390-virtio: PASS (620.88 s)
 (2/5) Host_RHEL.m8.u7.product_rhel.nographic.qcow2.virtio_blk.up.virtio_net.Guest.RHEL.8.7.0.s390x.io-github-autotest-qemu.check_block_size.4096_4096.base.s390-virtio: STARTED
 (2/5) Host_RHEL.m8.u7.product_rhel.nographic.qcow2.virtio_blk.up.virtio_net.Guest.RHEL.8.7.0.s390x.io-github-autotest-qemu.check_block_size.4096_4096.base.s390-virtio: PASS (25.54 s)
 (3/5) Host_RHEL.m8.u7.product_rhel.nographic.qcow2.virtio_blk.up.virtio_net.Guest.RHEL.8.7.0.s390x.io-github-autotest-qemu.check_block_size.4096_512.extra_cdrom_ks.s390-virtio: STARTED
 (3/5) Host_RHEL.m8.u7.product_rhel.nographic.qcow2.virtio_blk.up.virtio_net.Guest.RHEL.8.7.0.s390x.io-github-autotest-qemu.check_block_size.4096_512.extra_cdrom_ks.s390-virtio: PASS (647.89 s)
 (4/5) Host_RHEL.m8.u7.product_rhel.nographic.qcow2.virtio_blk.up.virtio_net.Guest.RHEL.8.7.0.s390x.io-github-autotest-qemu.check_block_size.512_512.extra_cdrom_ks.s390-virtio: STARTED
 (4/5) Host_RHEL.m8.u7.product_rhel.nographic.qcow2.virtio_blk.up.virtio_net.Guest.RHEL.8.7.0.s390x.io-github-autotest-qemu.check_block_size.512_512.extra_cdrom_ks.s390-virtio: PASS (644.79 s)
 (5/5) Host_RHEL.m8.u7.product_rhel.nographic.qcow2.virtio_blk.up.virtio_net.Guest.RHEL.8.7.0.s390x.io-github-autotest-qemu.check_block_size.4096_4096_cluster_install.extra_cdrom_ks.s390-virtio: STARTED
 (5/5) Host_RHEL.m8.u7.product_rhel.nographic.qcow2.virtio_blk.up.virtio_net.Guest.RHEL.8.7.0.s390x.io-github-autotest-qemu.check_block_size.4096_4096_cluster_install.extra_cdrom_ks.s390-virtio: PASS (656.66 s)
RESULTS    : PASS 5 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 0
JOB HTML   : /root/avocado/job-results/job-2022-08-18T05.23-b46d562/results.html
JOB TIME   : 2603.17 s

qemu version: qemu-kvm-6.2.0-19.module+el8.7.0+16358+eef3c6a2.s390x

Based on this test result, setting this bz to VERIFIED.

Comment 11 errata-xmlrpc 2022-11-08 09:20:55 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Low: virt:rhel and virt-devel:rhel security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:7472