Bug 1976256 - Pivot failed: Job ... in state 'standby' cannot accept command verb 'complete' [rhel-8.4.0.z]
Summary: Pivot failed: Job ... in state 'standby' cannot accept command verb 'complete' [rhel-8.4.0.z]
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux Advanced Virtualization
Classification: Red Hat
Component: qemu-kvm
Version: 8.4
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: rc
Target Release: 8.5
Assignee: Kevin Wolf
QA Contact: aihua liang
URL:
Whiteboard:
Depends On: 1945635
Blocks:
 
Reported: 2021-06-25 15:59 UTC by RHEL Program Management Team
Modified: 2021-08-31 08:14 UTC
CC: 13 users

Fixed In Version: qemu-kvm-5.2.0-16.module+el8.4.0+11923+e8b883e4.4
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1945635
Environment:
Last Closed: 2021-08-31 08:07:47 UTC
Type: ---
Target Upstream Version:
Embargoed:




Links
Red Hat Product Errata RHBA-2021:3340 (last updated 2021-08-31 08:08:00 UTC)

Comment 2 Danilo de Paula 2021-07-15 14:11:08 UTC
Kevin needs to send the backport.

Comment 8 Yanan Fu 2021-07-23 06:40:21 UTC
QE bot(pre verify): Set 'Verified:Tested,SanityOnly' as gating/tier1 test pass.

Comment 9 aihua liang 2021-07-23 08:31:56 UTC
Tested with qemu-kvm-5.2.0-16.module+el8.4.0+11923+e8b883e4.4: the bug has been fixed. Setting the bug's status to "Verified".

Test Steps:
 1. Start the guest with the following qemu command line:
    /usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm1'  \
    -sandbox on  \
    -machine q35,memory-backend=mem-machine_mem \
    -device pcie-root-port,id=pcie-root-port-0,multifunction=on,bus=pcie.0,addr=0x1,chassis=1 \
    -device pcie-pci-bridge,id=pcie-pci-bridge-0,addr=0x0,bus=pcie-root-port-0  \
    -nodefaults \
    -device VGA,bus=pcie.0,addr=0x2 \
    -m 30720 \
    -object memory-backend-ram,size=30720M,id=mem-machine_mem  \
    -smp 10,maxcpus=10,cores=5,threads=1,dies=1,sockets=2  \
    -cpu 'Cascadelake-Server-noTSX',+kvm_pv_unhalt \
    -chardev socket,server=on,path=/tmp/monitor-qmpmonitor1-20210723-033759-Kitvz8Hs,wait=off,id=qmp_id_qmpmonitor1  \
    -mon chardev=qmp_id_qmpmonitor1,mode=control \
    -chardev socket,server=on,path=/tmp/monitor-catch_monitor-20210723-033759-Kitvz8Hs,wait=off,id=qmp_id_catch_monitor  \
    -mon chardev=qmp_id_catch_monitor,mode=control \
    -device pvpanic,ioport=0x505,id=idSISJK3 \
    -chardev socket,server=on,path=/tmp/serial-serial0-20210723-033759-Kitvz8Hs,wait=off,id=chardev_serial0 \
    -device isa-serial,id=serial0,chardev=chardev_serial0  \
    -chardev socket,id=seabioslog_id_20210723-033759-Kitvz8Hs,path=/tmp/seabios-20210723-033759-Kitvz8Hs,server=on,wait=off \
    -device isa-debugcon,chardev=seabioslog_id_20210723-033759-Kitvz8Hs,iobase=0x402 \
    -device pcie-root-port,id=pcie-root-port-1,port=0x1,addr=0x1.0x1,bus=pcie.0,chassis=2 \
    -device qemu-xhci,id=usb1,bus=pcie-root-port-1,addr=0x0 \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
    -device pcie-root-port,id=pcie-root-port-2,port=0x2,addr=0x1.0x2,bus=pcie.0,chassis=3 \
    -device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pcie-root-port-2,addr=0x0 \
    -blockdev node-name=file_image1,driver=file,auto-read-only=on,discard=unmap,aio=threads,filename=/home/kvm_autotest_root/images/rhel840-64-virtio-scsi.qcow2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_image1,driver=qcow2,read-only=off,cache.direct=on,cache.no-flush=off,file=file_image1 \
    -device scsi-hd,id=image1,drive=drive_image1,write-cache=on \
    -device pcie-root-port,id=pcie-root-port-3,port=0x3,addr=0x1.0x3,bus=pcie.0,chassis=4 \
    -device virtio-net-pci,mac=9a:83:4d:3b:4c:c8,id=idEPrsK1,netdev=idHubNSm,bus=pcie-root-port-3,addr=0x0  \
    -netdev tap,id=idHubNSm,vhost=on  \
    -vnc :0  \
    -rtc base=utc,clock=host,driftfix=slew  \
    -boot menu=off,order=cdn,once=c,strict=off \
    -enable-kvm \
    -device pcie-root-port,id=pcie_extra_root_port_0,multifunction=on,bus=pcie.0,addr=0x3,chassis=5 \
    -monitor stdio \

 2. Create the snapshot node and take the snapshot
   # create snapshot node
    {'execute':'blockdev-create','arguments':{'options': {'driver':'file','filename':'/root/sn1','size':21474836480},'job-id':'job1'}}
    {'execute':'blockdev-add','arguments':{'driver':'file','node-name':'drive_sn1','filename':'/root/sn1'}}
    {'execute':'blockdev-create','arguments':{'options': {'driver': 'qcow2','file':'drive_sn1','size':21474836480,'backing-file':'/home/kvm_autotest_root/images/rhel840-64-virtio-scsi.qcow2','backing-fmt':'qcow2'},'job-id':'job2'}}
    {'execute':'blockdev-add','arguments':{'driver':'qcow2','node-name':'sn1','file':'drive_sn1','backing':null}}
    {'execute':'job-dismiss','arguments':{'id':'job1'}}
    {'execute':'job-dismiss','arguments':{'id':'job2'}}
  #do snapshot
    {'execute':'blockdev-snapshot','arguments':{'node':'drive_image1','overlay':'sn1'}}
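The node-creation sequence above (blockdev-create for the raw file, blockdev-add, blockdev-create for the qcow2 format layer, blockdev-add, job dismissal, then blockdev-snapshot) can be generated programmatically. A minimal Python sketch, where `build_snapshot_cmds` and its parameter names are illustrative (not part of any QEMU tooling), with values taken from the commands above:

```python
import json

def build_snapshot_cmds(path, size, backing_file, source_node, overlay_node):
    """Build the QMP command sequence for adding an external snapshot
    overlay and switching the guest disk to it (illustrative sketch)."""
    protocol_node = "drive_" + overlay_node
    return [
        {"execute": "blockdev-create",
         "arguments": {"options": {"driver": "file",
                                   "filename": path, "size": size},
                       "job-id": "job1"}},
        {"execute": "blockdev-add",
         "arguments": {"driver": "file", "node-name": protocol_node,
                       "filename": path}},
        {"execute": "blockdev-create",
         "arguments": {"options": {"driver": "qcow2", "file": protocol_node,
                                   "size": size, "backing-file": backing_file,
                                   "backing-fmt": "qcow2"},
                       "job-id": "job2"}},
        {"execute": "blockdev-add",
         "arguments": {"driver": "qcow2", "node-name": overlay_node,
                       "file": protocol_node, "backing": None}},
        {"execute": "job-dismiss", "arguments": {"id": "job1"}},
        {"execute": "job-dismiss", "arguments": {"id": "job2"}},
        {"execute": "blockdev-snapshot",
         "arguments": {"node": source_node, "overlay": overlay_node}},
    ]

cmds = build_snapshot_cmds(
    "/root/sn1", 21474836480,
    "/home/kvm_autotest_root/images/rhel840-64-virtio-scsi.qcow2",
    "drive_image1", "sn1")
# Each command serializes to one JSON line on the QMP socket.
wire = [json.dumps(c) for c in cmds]
```

Note the two blockdev-add calls are not dismissed (they are synchronous), while the two blockdev-create jobs must be dismissed before the overlay is attached with blockdev-snapshot.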

 3. Write a test file in sn1 with dd
    (guest)# dd if=/dev/urandom of=test bs=1M count=100

 4. Start the commit job
    {'execute': 'block-commit', 'arguments': { 'device':'sn1','job-id':'j1'}}
{"timestamp": {"seconds": 1627028445, "microseconds": 983141}, "event": "JOB_STATUS_CHANGE", "data": {"status": "created", "id": "j1"}}
{"timestamp": {"seconds": 1627028445, "microseconds": 983219}, "event": "JOB_STATUS_CHANGE", "data": {"status": "running", "id": "j1"}}
{"return": {}}
{"timestamp": {"seconds": 1627028446, "microseconds": 18655}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "j1"}}
{"timestamp": {"seconds": 1627028446, "microseconds": 18713}, "event": "BLOCK_JOB_READY", "data": {"device": "j1", "len": 110624768, "offset": 110624768, "speed": 0, "type": "commit"}}

 5. Pause the block job
    {'execute': 'job-pause', 'arguments': {'id':'j1'}}

 6. Complete the block job while it is paused
    {"execute":"job-complete","arguments":{"id":"j1"}}

 7. Resume the block job
    {"execute":"job-resume","arguments":{"id":"j1"}}
    {"timestamp": {"seconds": 1627028495, "microseconds": 760718}, "event": "JOB_STATUS_CHANGE", "data": {"status": "ready", "id": "j1"}}
{"return": {}}
{"timestamp": {"seconds": 1627028495, "microseconds": 761954}, "event": "JOB_STATUS_CHANGE", "data": {"status": "waiting", "id": "j1"}}
{"timestamp": {"seconds": 1627028495, "microseconds": 761997}, "event": "JOB_STATUS_CHANGE", "data": {"status": "pending", "id": "j1"}}
{"timestamp": {"seconds": 1627028495, "microseconds": 762059}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "j1", "len": 113115136, "offset": 113115136, "speed": 0, "type": "commit"}}
{"timestamp": {"seconds": 1627028495, "microseconds": 762092}, "event": "JOB_STATUS_CHANGE", "data": {"status": "concluded", "id": "j1"}}
{"timestamp": {"seconds": 1627028495, "microseconds": 762117}, "event": "JOB_STATUS_CHANGE", "data": {"status": "null", "id": "j1"}}

Comment 11 errata-xmlrpc 2021-08-31 08:07:47 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (virt:av bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3340

