Bug 1584982 - qemu abort()-s on lock failure even if it was running previously
Summary: qemu abort()-s on lock failure even if it was running previously
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: qemu-kvm-rhev
Version: 7.5
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Fam Zheng
QA Contact: CongLi
URL:
Whiteboard:
Depends On: 1524792 1569500
Blocks:
 
Reported: 2018-06-01 05:58 UTC by Peter Krempa
Modified: 2018-06-11 02:33 UTC
CC List: 18 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1569500
Environment:
Last Closed: 2018-06-05 05:09:13 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 1589627 0 high CLOSED Improve image locking error handling 2021-02-22 00:41:40 UTC

Internal Links: 1589627

Description Peter Krempa 2018-06-01 05:58:38 UTC
Lock failure should not result in qemu aborting. Stopping the CPUs should be sufficient.
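The request above can be sketched in a few lines of standalone C. This is not QEMU code: lock_bytes() is a made-up stand-in for the real locking call, and "stopping the CPUs" is only represented by returning the error to the caller instead of calling abort().

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

/* Stand-in for the real byte-locking call; pretend another process holds it. */
static int lock_bytes(void)
{
    return -EAGAIN;
}

/* Behaviour this bug complains about: any lock failure kills the guest. */
static void apply_locks_or_abort(void)
{
    if (lock_bytes() < 0) {
        fprintf(stderr, "Failed to get \"write\" lock\n");
        abort();
    }
}

/* Behaviour requested here: report the failure and let the caller stop
 * the CPUs (or roll back the operation) while the process keeps running. */
static int apply_locks(void)
{
    int ret = lock_bytes();
    if (ret < 0) {
        fprintf(stderr, "Failed to get \"write\" lock, not aborting\n");
    }
    return ret;
}

int main(void)
{
    if (apply_locks() < 0) {
        /* Here a VMM would pause vCPUs instead of exiting. */
        return 1;
    }
    apply_locks_or_abort();
    return 0;
}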

+++ This bug was initially created as a clone of Bug #1569500 +++

Description of problem:
Guest shuts off after attaching the same disk twice and then detaching the disk.

Version-Release number of selected component (if applicable):
# rpm -q libvirt qemu-kvm-rhev
libvirt-3.9.0-14.el7_5.3.x86_64
qemu-kvm-rhev-2.10.0-21.el7_5.2.x86_64

How reproducible:
100%

Steps to Reproduce:
1) Hotplug a disk into a running guest.

# virt-xml rhel7.3-1005 --add-device --disk /var/lib/libvirt/images/b.img --update 
Device hotplug successful.
Domain 'rhel7.3-1005' defined successfully.

2) Hotplug same disk to the guest again.
# virt-xml rhel7.3-1005 --add-device --disk /var/lib/libvirt/images/b.img --update --debug
[Thu, 19 Apr 2018 20:04:58 virt-xml 30379] DEBUG (cli:264) Launched with command line: /usr/share/virt-manager/virt-xml rhel7.3-1005 --add-device --disk /var/lib/libvirt/images/b.img --update --debug
[Thu, 19 Apr 2018 20:04:58 virt-xml 30379] DEBUG (cli:278) Requesting libvirt URI default
[Thu, 19 Apr 2018 20:04:58 virt-xml 30379] DEBUG (cli:281) Received libvirt URI qemu:///system
[Thu, 19 Apr 2018 20:04:58 virt-xml 30379] DEBUG (virt-xml:64) XML diff:
--- Original XML
+++ Altered XML
@@ -151,6 +151,11 @@
       <alias name="balloon0"/>
       <address type="pci" domain="0x0000" bus="0x00" slot="0x08" function="0x0"/>
     </memballoon>
+    <disk type="file" device="disk">
+      <driver name="qemu" type="qcow2"/>
+      <source file="/var/lib/libvirt/images/b.img"/>
+      <target dev="vdc"/>
+    </disk>
   </devices>
   <seclabel type="dynamic" model="selinux" relabel="yes">
     <label>system_u:system_r:svirt_t:s0:c95,c361</label>

[Thu, 19 Apr 2018 20:04:58 virt-xml 30379] DEBUG (virt-xml:250) Doing setup for disk=<VirtualDisk disk 140150625953040>
[Thu, 19 Apr 2018 20:04:58 virt-xml 30379] DEBUG (cli:316)   File "/usr/share/virt-manager/virt-xml", line 479, in <module>
    sys.exit(main())
  File "/usr/share/virt-manager/virt-xml", line 463, in main
    update_changes(domain, devs, action, options.confirm)
  File "/usr/share/virt-manager/virt-xml", line 299, in update_changes
    fail(_("Error attempting device %s: %s") % (action, e))
  File "/usr/share/virt-manager/virtinst/cli.py", line 316, in fail
    logging.debug("".join(traceback.format_stack()))

[Thu, 19 Apr 2018 20:04:58 virt-xml 30379] ERROR (cli:317) Error attempting device hotplug: internal error: unable to execute QEMU command '__com.redhat_drive_add': Device 'drive-virtio-disk2' could not be initialized
[Thu, 19 Apr 2018 20:04:58 virt-xml 30379] DEBUG (cli:319) 
Traceback (most recent call last):
  File "/usr/share/virt-manager/virt-xml", line 293, in update_changes
    domain.attachDeviceFlags(xml, libvirt.VIR_DOMAIN_AFFECT_LIVE)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 594, in attachDeviceFlags
    if ret == -1: raise libvirtError ('virDomainAttachDeviceFlags() failed', dom=self)
libvirtError: internal error: unable to execute QEMU command '__com.redhat_drive_add': Device 'drive-virtio-disk2' could not be initialized

3) Detach the disk.
# virsh detach-disk rhel7.3-1005 vdb --live
error: Failed to detach disk
error: internal error: End of file from qemu monitor


Actual results:
The guest goes into the shutoff state.

Expected results:
The disk should be detached successfully and the guest should still be running normally.

Additional info:


--- Additional comment from Peter Krempa on 2018-04-19 14:33:54 CEST ---

So the problem most probably is that libvirt messes up the labelling of the disk if the second attach fails. This means that qemu will not have permission to access the file later (a tiny standalone illustration of this is sketched after the log below).

At any rate, this should not be a reason to abort.

main_channel_handle_message: agent start
Failed to get "write" lock
Unexpected error in raw_apply_lock_bytes() at block/file-posix.c:642:
2018-04-19T12:05:47.871711Z qemu-kvm: Failed to lock byte 100
2018-04-19 12:05:52.848+0000: shutting down, reason=crashed
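A tiny standalone illustration of that effect (not libvirt or QEMU code; the path is taken from the report and it should be run as an unprivileged user): once the image is owned by root:root with mode 0644, opening it for writing fails with EACCES, so any later reopen or re-lock attempt by the qemu process fails too.

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* Path from the report; run as an unprivileged (non-root) user. */
    const char *path = "/var/lib/libvirt/images/b.img";
    int fd = open(path, O_RDWR);
    if (fd < 0) {
        /* With root:root ownership and mode 0644 this fails with EACCES,
         * which is why qemu can no longer re-lock or reopen the image. */
        fprintf(stderr, "open(%s): %s\n", path, strerror(errno));
        return 1;
    }
    printf("image still writable by this user\n");
    close(fd);
    return 0;
}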

Comment 2 Fam Zheng 2018-06-04 03:23:41 UTC
I'll try to reproduce here. Meanwhile, do we have a backtrace of QEMU?

Comment 3 Peter Krempa 2018-06-04 06:51:15 UTC
The original reporter said that no core dump was generated. I assumed it was an abort based on the messages recorded in the VM log file:

Failed to get "write" lock
Unexpected error in raw_apply_lock_bytes() at block/file-posix.c:642:
2018-04-19T12:05:47.871711Z qemu-kvm: Failed to lock byte 100
2018-04-19 12:05:52.848+0000: shutting down, reason=crashed

The last line is generated by libvirt.

Comment 4 Fam Zheng 2018-06-04 09:08:01 UTC
I could reproduce this issue with libvirt + virt-xml with the same steps as the original report: 1) start a VM with an image file attached; 2) try to attach the image again, which results in an error; 3) detach the image.

Step 3) causes QEMU to abort. The error is rather unexpected from QEMU's perspective: it now cannot (re-)acquire the lock bytes that were already acquired when the VM started. This operation is technically not necessary, but it is there to keep the image locking code simple.
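For reference, the underlying mechanism is byte-range file locking via fcntl(). Below is a minimal standalone sketch of that primitive (an OFD lock on a single byte); it is not raw_apply_lock_bytes() from block/file-posix.c, the temporary file path is just for demonstration, and byte 100 is taken from the log quoted in comment 3.

#define _GNU_SOURCE             /* for F_OFD_SETLK on glibc */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#ifndef F_OFD_SETLK
#define F_OFD_SETLK 37          /* Linux value; older headers may lack it */
#endif

/* Take a non-blocking exclusive lock on a single byte of the file. */
static int lock_byte(int fd, off_t byte)
{
    struct flock fl = {
        .l_type   = F_WRLCK,
        .l_whence = SEEK_SET,
        .l_start  = byte,
        .l_len    = 1,
    };
    return fcntl(fd, F_OFD_SETLK, &fl);
}

int main(void)
{
    int fd = open("/tmp/lock-demo.img", O_RDWR | O_CREAT, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    if (lock_byte(fd, 100) < 0) {
        /* This is the point at which the log in comment 3 shows
         * "Failed to lock byte 100" before QEMU abort()ed. */
        fprintf(stderr, "Failed to lock byte 100: %s\n", strerror(errno));
        close(fd);
        return 1;
    }
    printf("byte 100 locked\n");
    close(fd);
    return 0;
}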

I think what happened is that libvirt changed the image file's permissions (and/or security context) by mistake in the error handling of step 2), which made it no longer accessible to the QEMU process.

Before step 2):

[root@tc-rhel7 rpm]# ls -lhZ /var/lib/libvirt/images/b.img
-rw-r--r--. qemu qemu system_u:object_r:svirt_image_t:s0:c600,c690 /var/lib/libvirt/images/b.img

After:

[root@tc-rhel7 rpm]# ls -lhZ /var/lib/libvirt/images/b.img
-rw-r--r--. root root system_u:object_r:virt_image_t:s0 /var/lib/libvirt/images/b.img

Peter, is this a known problem?

Comment 5 Peter Krempa 2018-06-04 09:20:53 UTC
(In reply to Fam Zheng from comment #4)
[...]
> 
> Peter, is this a known problem?

Yes, it is. I've cloned this bug from the original report on libvirt:

https://bugzilla.redhat.com/show_bug.cgi?id=1569500

Which is a duplicate of a known problem:

https://bugzilla.redhat.com/show_bug.cgi?id=1524792

