Bug 1749134 - I/O error when virtio-blk disk is backed by a raw image on 4k disk
Summary: I/O error when virtio-blk disk is backed by a raw image on 4k disk
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux Advanced Virtualization
Classification: Red Hat
Component: qemu-kvm
Version: 8.1
Hardware: All
OS: Linux
Severity: high
Priority: high
Target Milestone: rc
Target Release: 8.1
Assignee: Hanna Czenczek
QA Contact: Xueqiang Wei
URL:
Whiteboard:
Duplicates: 1743360
Depends On: 1738839
Blocks: 1744207
 
Reported: 2019-09-05 01:31 UTC by David Gibson
Modified: 2019-11-06 07:19 UTC (History)
CC: 23 users

Fixed In Version: qemu-kvm-4.1.0-8.module+el8.1.0+4199+446e40fc
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1738839
Environment:
Last Closed: 2019-11-06 07:19:21 UTC
Type: Bug
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2019:3723 0 None None None 2019-11-06 07:19:50 UTC

Comment 1 David Gibson 2019-09-05 01:33:37 UTC
Thomas, I see you had an exception flagged for the original bug 1738839, although the priority was only medium.  I'm not very clear on what the consequences of the bug are beyond a surprising error message.

Should we consider this for an exception, or can we just wait until RHEL-AV-8.2, in which case we'll get the fix via rebase?

Comment 2 CongLi 2019-09-05 02:08:30 UTC
*** Bug 1743360 has been marked as a duplicate of this bug. ***

Comment 3 Thomas Huth 2019-09-05 06:57:18 UTC
(In reply to David Gibson from comment #1)
> Thomas, I see you had an exception flagged for the original bug 1738839,
> although the priority was only medium.  I'm not very clear on what the
> consequences of the bug are beyond a surprising error message.

The consequence was that mkfs.xfs failed completely, i.e. it was not just a surprising error message: the whole installation of the guest failed in that case.

> Should we consider this for an exception, or can we just wait until RHEL-AV-8.2, in which case we'll get the fix via rebase?

It depends on whether you urgently need this for BZ 1747110 in RHEL-AV-8.1 already or not. For s390x we don't really care about RHEL-AV yet, since RHEL-AV is not supported on s390x so far.
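The failure mode can be sketched in shell arithmetic (an illustration only, under the assumption that the root cause is O_DIRECT request alignment; this is not QEMU's actual code): with cache=none the backing file is opened with O_DIRECT, so every host request must be aligned to the disk's 4096-byte sectors, while the guest issues I/O in 512-byte logical blocks and its sub-sector requests have to be widened to the enclosing host sectors before submission.

```shell
# Illustration (assumed mechanism, not QEMU source): a 512-byte guest write
# at offset 512 must be widened to the enclosing 4096-byte host sector
# before it can be submitted with O_DIRECT to a 4k disk.
offset=512; length=512; align=4096
lo=$(( offset / align * align ))                        # round start down
hi=$(( (offset + length + align - 1) / align * align )) # round end up
echo "host request: offset=$lo length=$(( hi - lo ))"
```

If the raw 512-byte request were submitted as-is, the host kernel would reject it for misalignment, which surfaces in the guest as the I/O error that made mkfs.xfs fail.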

Comment 4 Ademar Reis 2019-09-05 14:57:25 UTC
Bug 1743360 ended up being closed as a duplicate of this one; it was already fully acked and assigned. So just to clarify: yes, we need the patches in AV-8.1.

Comment 8 Xueqiang Wei 2019-09-16 10:14:42 UTC
Tested on qemu-kvm-4.1.0-8.module+el8.1.0+4199+446e40fc and did not hit this issue, so setting the status to VERIFIED.


Details as below:

Host:
kernel-4.18.0-144.el8.x86_64
qemu-kvm-4.1.0-8.module+el8.1.0+4199+446e40fc

Guest:
kernel-4.18.0-138.el8.x86_64


1. Create a raw image on a 4k disk on the host (e.g. /dev/sdc):

# fdisk -l /dev/sdc 
Disk /dev/sdc: 558.4 GiB, 599550590976 bytes, 146374656 sectors
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
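The fdisk numbers above are self-consistent; as a quick arithmetic cross-check, the sector count times the 4096-byte sector size must equal the reported size in bytes:

```shell
# Cross-check of the fdisk output for /dev/sdc:
# 146374656 sectors * 4096 bytes/sector = reported byte size.
sectors=146374656
sector_size=4096
echo $(( sectors * sector_size ))   # 599550590976, as fdisk reports
```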

# mkdir /mnt/test
# mount /dev/sdc /mnt/test/
# qemu-img create -f raw /mnt/test/test.raw 1G


2. Boot the guest with the command line below:

/usr/libexec/qemu-kvm \
    -S  \
    -name 'avocado-vt-vm1' \
    -machine q35  \
    -nodefaults \
    -device VGA,bus=pcie.0,addr=0x1  \
    -chardev socket,id=qmp_id_qmpmonitor1,path=/var/tmp/avocado_vkzzzsjy/monitor-qmpmonitor1-20190827-054125-X8YHvELh,server,nowait \
    -mon chardev=qmp_id_qmpmonitor1,mode=control  \
    -chardev socket,id=qmp_id_catch_monitor,path=/var/tmp/avocado_vkzzzsjy/monitor-catch_monitor-20190827-054125-X8YHvELh,server,nowait \
    -mon chardev=qmp_id_catch_monitor,mode=control \
    -device pvpanic,ioport=0x505,id=idnZn1j7 \
    -chardev socket,nowait,server,path=/var/tmp/avocado_vkzzzsjy/serial-serial0-20190827-054125-X8YHvELh,id=chardev_serial0 \
    -device isa-serial,id=serial0,chardev=chardev_serial0  \
    -chardev socket,id=seabioslog_id_20190827-054125-X8YHvELh,path=/var/tmp/avocado_vkzzzsjy/seabios-20190827-054125-X8YHvELh,server,nowait \
    -device isa-debugcon,chardev=seabioslog_id_20190827-054125-X8YHvELh,iobase=0x402 \
    -device pcie-root-port,id=pcie.0-root-port-2,slot=2,chassis=2,addr=0x2,bus=pcie.0 \
    -device qemu-xhci,id=usb1,bus=pcie.0-root-port-2,addr=0x0 \
    -drive id=drive_image1,if=none,snapshot=off,cache=none,format=qcow2,file=/home/kvm_autotest_root/images/rhel810-64-virtio.qcow2 \
    -device pcie-root-port,id=pcie.0-root-port-3,slot=3,chassis=3,addr=0x3,bus=pcie.0 \
    -device virtio-blk-pci,id=image1,drive=drive_image1,bootindex=0,bus=pcie.0-root-port-3,addr=0x0 \
    -device pcie-root-port,id=pcie.0-root-port-4,slot=4,chassis=4,addr=0x4,bus=pcie.0 \
    -device virtio-net-pci,mac=9a:c8:3a:2f:3f:1c,id=idXNk6ZE,netdev=id595yhy,bus=pcie.0-root-port-4,addr=0x0  \
    -netdev tap,id=id595yhy,vhost=on \
    -m 14336  \
    -smp 24,maxcpus=24,cores=12,threads=1,sockets=2  \
    -cpu 'Skylake-Server',+kvm_pv_unhalt \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1  \
    -vnc :0  \
    -rtc base=localtime,clock=host,driftfix=slew  \
    -boot order=cdn,once=c,menu=off,strict=off \
    -enable-kvm \
    -monitor stdio \
    -drive id=drive_data,if=none,snapshot=off,cache=none,format=raw,file=/mnt/test/test.raw \
    -device pcie-root-port,id=pcie.0-root-port-5,slot=5,chassis=5,addr=0x5,bus=pcie.0 \
    -device virtio-blk-pci,id=data1,drive=drive_data,bus=pcie.0-root-port-5,addr=0x0
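A possible variant of the data-disk part of this command line (an untested sketch; logical_block_size and physical_block_size are standard virtio-blk device properties) would pin the guest-visible block size to 4096 so that guest requests are natively aligned with the 4k host disk:

```shell
# Sketch only: same drive as above, but exposing 4k blocks to the guest.
# The guest would then see 4096-byte logical sectors on vdb instead of the
# default 512, so its requests are always aligned to the host sector size.
    -drive id=drive_data,if=none,snapshot=off,cache=none,format=raw,file=/mnt/test/test.raw \
    -device virtio-blk-pci,id=data1,drive=drive_data,bus=pcie.0-root-port-5,addr=0x0,logical_block_size=4096,physical_block_size=4096
```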

3. Create a partition in the guest and format it:

# lsblk
NAME                             MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda                              252:0    0  20G  0 disk 
├─vda1                           252:1    0   1G  0 part /boot
└─vda2                           252:2    0  19G  0 part 
  ├─rhel_bootp--73--75--125-root 253:0    0  17G  0 lvm  /
  └─rhel_bootp--73--75--125-swap 253:1    0   2G  0 lvm  [SWAP]
vdb                              252:16   0   1G  0 disk 

# fdisk -l /dev/vdb 
Disk /dev/vdb: 1 GiB, 1073741824 bytes, 2097152 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

# parted /dev/vdb mktable gpt
# parted /dev/vdb mkpart primary xfs "0%" "100%"
# mkfs.xfs /dev/vdb1                             
meta-data=/dev/vdb1              isize=512    agcount=4, agsize=65408 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1
data     =                       bsize=4096   blocks=261632, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=1566, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

# dmesg |grep vdb
[    2.201486] virtio_blk virtio2: [vdb] 2097152 512-byte logical blocks (1.07 GB/1.00 GiB)
[   99.656990]  vdb:
[  107.029911]  vdb: vdb1

# mount /dev/vdb1 /mnt/
# lsblk
NAME                      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda                       252:0    0   20G  0 disk 
├─vda1                    252:1    0    1G  0 part /boot
└─vda2                    252:2    0   19G  0 part 
  ├─rhel_vm--198--95-root 253:0    0   17G  0 lvm  /
  └─rhel_vm--198--95-swap 253:1    0    2G  0 lvm  [SWAP]
vdb                       252:16   0    1G  0 disk 
└─vdb1                    252:17   0 1022M  0 part /mnt

# dmesg |grep vdb
[    2.201486] virtio_blk virtio2: [vdb] 2097152 512-byte logical blocks (1.07 GB/1.00 GiB)
[   99.656990]  vdb:
[  107.029911]  vdb: vdb1
[  197.136847] XFS (vdb1): Mounting V5 Filesystem
[  197.148264] XFS (vdb1): Ending clean mount

# dmesg |grep error

After step 3 the device was formatted successfully; no errors were hit.
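A further guest-side probe one could add (a hypothetical extra step, not part of the recorded test run, and it requires the guest and the mounted vdb1 from the steps above): direct I/O at 512-byte granularity is exactly the pattern that used to fail when the raw backing image sat on a 4k-sector host disk, so a clean dd run exercises the fixed path directly.

```shell
# Hypothetical extra check inside the guest: O_DIRECT writes in 512-byte
# units to the filesystem on vdb1 used to hit the I/O error this bug
# describes. /mnt/dd_probe is an example path on the mount from step 3.
dd if=/dev/zero of=/mnt/dd_probe bs=512 count=8 oflag=direct
rm -f /mnt/dd_probe
```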

Comment 10 errata-xmlrpc 2019-11-06 07:19:21 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:3723

