Bug 1996131 - VM remains in paused state when trying to write to a resized disk residing on iSCSI [rhel-8.4.0.z]
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux Advanced Virtualization
Classification: Red Hat
Component: qemu-kvm
Version: 8.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: 8.5
Assignee: Kevin Wolf
QA Contact: qing.wang
URL:
Whiteboard:
Depends On: 1994494 1997934
Blocks: 1996602 2000568
 
Reported: 2021-08-20 15:58 UTC by RHEL Program Management Team
Modified: 2021-12-28 09:20 UTC
CC: 21 users

Fixed In Version: qemu-kvm-5.2.0-16.module+el8.4.0+12368+54110afb.7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1994494
Environment:
Last Closed: 2021-08-31 08:07:47 UTC
Type: ---
Target Upstream Version:
Embargoed:




Links
Red Hat Issue Tracker RHELPLAN-94361 (last updated: 2021-08-22 11:07:50 UTC)
Red Hat Product Errata RHBA-2021:3340 (last updated: 2021-08-31 08:08:00 UTC)

Comment 5 qing.wang 2021-08-25 08:48:47 UTC
Passed test on
Red Hat Enterprise Linux release 8.4 (Ootpa)
4.18.0-305.el8.x86_64
qemu-kvm-common-5.2.0-16.module+el8.4.0+12368+54110afb.7.x86_64


Test steps:
1. Build the iSCSI target server:
root@dell-per440-07 /home/tmp $ targetcli ls
o- / ........................................................................................... [...]
  o- backstores ................................................................................ [...]
  | o- block .................................................................... [Storage Objects: 0]
  | o- fileio ................................................................... [Storage Objects: 1]
  | | o- one ................................... [/home/iscsi/onex.img (15.0GiB) write-back activated]
  | |   o- alua ..................................................................... [ALUA Groups: 1]
  | |     o- default_tg_pt_gp ......................................... [ALUA state: Active/optimized]
  | o- pscsi .................................................................... [Storage Objects: 0]
  | o- ramdisk .................................................................. [Storage Objects: 0]
  o- iscsi .............................................................................. [Targets: 1]
  | o- iqn.2016-06.one.server:one-a ........................................................ [TPGs: 1]
  |   o- tpg1 ................................................................. [no-gen-acls, no-auth]
  |     o- acls ............................................................................ [ACLs: 2]
  |     | o- iqn.1994-05.com.redhat:clienta ......................................... [Mapped LUNs: 1]
  |     | | o- mapped_lun0 .................................................... [lun0 fileio/one (rw)]
  |     | o- iqn.1994-05.com.redhat:clientb ......................................... [Mapped LUNs: 1]
  |     |   o- mapped_lun0 .................................................... [lun0 fileio/one (rw)]
  |     o- luns ............................................................................ [LUNs: 1]
  |     | o- lun0 ............................. [fileio/one (/home/iscsi/onex.img) (default_tg_pt_gp)]
  |     o- portals ...................................................................... [Portals: 1]
  |       o- 0.0.0.0:3260 ....................................................................... [OK]
  o- loopback ........................................................................... [Targets: 0]
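
For reference, the layout above can be rebuilt with one-shot targetcli commands; a minimal sketch assuming the paths and IQNs shown in the listing (recent targetcli versions add the 0.0.0.0:3260 portal automatically):

mkdir -p /home/iscsi
targetcli /backstores/fileio create name=one file_or_dev=/home/iscsi/onex.img size=15G
targetcli /iscsi create iqn.2016-06.one.server:one-a
targetcli /iscsi/iqn.2016-06.one.server:one-a/tpg1/luns create /backstores/fileio/one
targetcli /iscsi/iqn.2016-06.one.server:one-a/tpg1/acls create iqn.1994-05.com.redhat:clienta
targetcli /iscsi/iqn.2016-06.one.server:one-a/tpg1/acls create iqn.1994-05.com.redhat:clientb
targetcli saveconfig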

2. Attach the iSCSI disk on the host:
iscsiadm -m discovery -t st -p 127.0.0.1
iscsiadm -m node -T iqn.2016-06.one.server:one-a  -p 127.0.0.1:3260 -l
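
The new LUN shows up as an additional SCSI disk; the following steps assume it came up as /dev/sdd. A quick way to confirm the device node and size:

iscsiadm -m session -P 3 | grep 'Attached scsi disk'   # shows the assigned sdX name
lsblk /dev/sdd                                         # expect a 15G disk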

3. Limit the host's max_sectors_kb for the iSCSI disk to 64:

echo 64 > /sys/block/sdd/queue/max_sectors_kb
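
max_sectors_kb caps the largest single request the kernel will submit to the device, so this forces the block layer to split larger I/O into 64 KiB requests, which makes request-splitting paths easier to exercise. A quick sanity check, assuming the LUN is /dev/sdd:

cat /sys/block/sdd/queue/max_hw_sectors_kb   # hardware ceiling; must be >= 64
cat /sys/block/sdd/queue/max_sectors_kb      # should now report 64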

4. Create LVs on the disk and format them as qcow2:
pvcreate /dev/sdd
vgcreate vg /dev/sdd

lvcreate -L 2560M -n lv1 vg;lvcreate -L 2560M -n lv2 vg
lvcreate -L 2560M -n lv3 vg;lvcreate -L 2560M -n lv4 vg


qemu-img create -f qcow2 /dev/vg/lv1 2G;qemu-img create -f qcow2 /dev/vg/lv2 2G

qemu-img create -f qcow2 /dev/vg/lv3 2G;qemu-img create -f qcow2 /dev/vg/lv4 2G
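
The qcow2 images are written directly onto the LVs (no filesystem in between), so each 2G virtual disk sits inside a 2560M LV with headroom for qcow2 metadata growth. A quick sanity check before booting:

lvs vg                       # expect lv1..lv4, 2.50g each
qemu-img info /dev/vg/lv1    # expect: file format: qcow2, virtual size: 2 GiB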
 
5. Boot the VM with the four LVs attached as data disks (lv1/lv2 via virtio-blk, lv3/lv4 via virtio-scsi):



/usr/libexec/qemu-kvm \
  -name src_vm1 \
  -machine pc-q35-rhel8.4.0,accel=kvm,usb=off,dump-guest-core=off \
  -m 8g \
  -no-user-config -nodefaults \
  -vga qxl \
  -device pcie-root-port,id=pcie.0-root-port-2,slot=2,bus=pcie.0,multifunction=on \
  -device pcie-root-port,id=pcie.0-root-port-2-1,chassis=3,bus=pcie.0,addr=0x2.0x1 \
  -device pcie-root-port,id=pcie.0-root-port-2-2,chassis=4,bus=pcie.0,addr=0x2.0x2 \
  -device pcie-root-port,id=pcie.0-root-port-3,slot=3,bus=pcie.0 \
  -device pcie-root-port,id=pcie.0-root-port-4,slot=4,bus=pcie.0 \
  -device pcie-root-port,id=pcie.0-root-port-5,slot=5,bus=pcie.0 \
  -device pcie-root-port,id=pcie.0-root-port-6,slot=6,bus=pcie.0 \
  -device pcie-root-port,id=pcie.0-root-port-7,slot=7,bus=pcie.0 \
  -device pcie-root-port,id=pcie.0-root-port-8,slot=8,bus=pcie.0 \
  -device pcie-root-port,id=pcie.0-root-port-9,slot=9,bus=pcie.0 \
  -device qemu-xhci,id=usb1,bus=pcie.0-root-port-2-1,addr=0x0 \
  -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
  -object iothread,id=iothread0 \
  -object iothread,id=iothread1 \
  -device virtio-scsi-pci,id=scsi0,bus=pcie.0-root-port-2-2,addr=0x0,iothread=iothread0 \
  -device virtio-scsi-pci,id=scsi1,bus=pcie.0-root-port-8,addr=0x0 \
  -blockdev driver=qcow2,file.driver=file,cache.direct=off,cache.no-flush=on,file.filename=/home/kvm_autotest_root/images/rhel840-64-virtio-scsi.qcow2,node-name=drive_image1 \
  -device scsi-hd,id=os1,drive=drive_image1,bootindex=0 \
  \
  -blockdev node-name=host_device_stg,driver=host_device,aio=native,filename=/dev/vg/lv1,cache.direct=on,cache.no-flush=off,discard=unmap \
  -blockdev node-name=drive_stg,driver=raw,cache.direct=on,cache.no-flush=off,file=host_device_stg \
  -device virtio-blk-pci,iothread=iothread1,bus=pcie.0-root-port-4,addr=0x0,write-cache=on,id=stg,drive=drive_stg,rerror=stop,werror=stop \
  \
  -blockdev node-name=host_device_stg2,driver=host_device,aio=native,filename=/dev/vg/lv2,cache.direct=on,cache.no-flush=off,discard=unmap \
  -blockdev node-name=drive_stg2,driver=qcow2,cache.direct=on,cache.no-flush=off,file=host_device_stg2 \
  -device virtio-blk-pci,iothread=iothread1,bus=pcie.0-root-port-5,addr=0x0,write-cache=on,id=stg2,drive=drive_stg2,rerror=stop,werror=stop \
  \
  -blockdev node-name=host_device_stg3,driver=host_device,aio=native,filename=/dev/vg/lv3,cache.direct=on,cache.no-flush=off,discard=unmap \
  -blockdev node-name=drive_stg3,driver=raw,cache.direct=on,cache.no-flush=off,file=host_device_stg3 \
  -device scsi-hd,write-cache=on,id=stg3,drive=drive_stg3,rerror=stop,werror=stop \
  \
  -blockdev node-name=host_device_stg4,driver=host_device,aio=native,filename=/dev/vg/lv4,cache.direct=on,cache.no-flush=off,discard=unmap \
  -blockdev node-name=drive_stg4,driver=qcow2,cache.direct=on,cache.no-flush=off,file=host_device_stg4 \
  -device scsi-hd,write-cache=on,id=stg4,drive=drive_stg4,rerror=stop,werror=stop \
  \
  -vnc :5 \
  -qmp tcp:0:5955,server,nowait \
  -monitor stdio \
  -device virtio-net-pci,mac=9a:b5:b6:b1:b4:b5,id=idMmq1jH,vectors=4,netdev=idxgXAlm,bus=pcie.0-root-port-6,addr=0x0 \
  -netdev tap,id=idxgXAlm
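
Note that with rerror=stop,werror=stop any I/O error pauses the guest, which is how the original bug manifested after a disk resize. The recorded steps above exercise plain I/O; for reference, a resize of one data disk could be driven through the QMP socket on port 5955. A minimal sketch, not part of the recorded test (the new size is illustrative):

# On the host: grow the backing LV, then tell QEMU about the new size.
lvextend -L 3G /dev/vg/lv1
# Connect to QMP (e.g. "nc 127.0.0.1 5955") and send:
{"execute": "qmp_capabilities"}
{"execute": "block_resize", "arguments": {"node-name": "drive_stg", "size": 3221225472}}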


6. Execute I/O on the data disks in the guest:

root@bootp-73-199-218 /home $ cat test.sh
for i in $(seq 100); do
    printf "pass %03d ... " $i
    dd if=/dev/zero bs=1M count=2048 of=$1 conv=fsync status=none
    if [[ "$?" != "0" ]]; then break; fi
    echo "ok"
done

./test.sh /dev/vda    # virtio-blk disk backed by lv1
./test.sh /dev/vdb    # virtio-blk disk backed by lv2
./test.sh /dev/sdb    # virtio-scsi disk backed by lv3
./test.sh /dev/sdc    # virtio-scsi disk backed by lv4

All I/O passes completed and the guest stayed in the running state (it never paused).

Comment 7 errata-xmlrpc 2021-08-31 08:07:47 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (virt:av bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3340

