Bug 1430634 - Attach lun type disk report error and crash guest
Summary: Attach lun type disk report error and crash guest
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.4-Alt
Hardware: aarch64
OS: Linux
Priority: medium
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Michal Privoznik
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks: 1431224 1432057
 
Reported: 2017-03-09 07:43 UTC by weizhang
Modified: 2017-08-02 07:44 UTC
CC List: 10 users

Fixed In Version: libvirt-3.2.0-1.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1431224 (view as bug list)
Environment:
Last Closed: 2017-08-02 07:44:59 UTC
Target Upstream Version:
Embargoed:


Attachments
libvirtd.log (1.72 MB, text/plain), 2017-03-10 09:28 UTC, weizhang
avocado-vt-vm1.xml (9.60 KB, text/plain), 2017-03-10 12:04 UTC, weizhang

Description weizhang 2017-03-09 07:43:17 UTC
Description of problem:
Attaching a lun type disk reports an error and crashes the guest.
I am not sure whether this is a libvirt or qemu-kvm-rhev issue, so I am reporting it against libvirt first.

Version-Release number of selected component (if applicable):
kernel-4.9.0-10.el7.aarch64
qemu-kvm-rhev-2.8.0-6.el7.aarch64
libvirt-3.1.0-2.el7.aarch64


How reproducible:
100%

Steps to Reproduce:
1. virsh attach-disk --domain avocado-vt-vm1 --source /dev/sdb --target vdb --driver qemu --type lun
error: Failed to attach disk
error: internal error: child reported: Kernel does not provide mount namespace: No such file or directory
2. The guest is then found to be destroyed.

Actual results:
The attach fails and the guest crashes.

Expected results:
The attach succeeds and the guest does not crash.

Additional info:

Comment 2 Jaroslav Suchanek 2017-03-09 09:50:28 UTC
Libvirt debug logs would be fine, as well as qemu logs of the guest.

Also, I assume that the same command succeeds on x86. It is not clear to me what you mean by 'guest crash'. Was there a qemu process crash, or was the guest stopped due to a guest kernel panic? All in all, there might be no issue in libvirt unless the mount namespace code is involved. Adding Michal and Peter to the CC list.

Thanks.

Comment 3 weizhang 2017-03-10 09:28:46 UTC
Created attachment 1261872 [details]
libvirtd.log

Hi Jaroslav,

Sorry for not describing it clearly; I mean the qemu process crashes. I cannot see any crash info on the console. The libvirtd.log will be attached.

Comment 4 Michal Privoznik 2017-03-10 09:41:08 UTC
Also, if you could attach the domain status XML, that would be great. You can find it under /var/run/libvirt/qemu/avocado-vt-vm1.xml

Comment 5 Michal Privoznik 2017-03-10 10:22:44 UTC
Ah, so after careful examination of the logs, I think this is what is happening here:

0) the avocado VM is started with namespaces enabled
1) libvirt starts the hotplug routine
2) qemu dies right in the middle of it:

2017-03-10 09:23:15.853+0000: 22172: info : qemuMonitorIOWrite:534 : QEMU_MONITOR_IO_WRITE: mon=0xffff60005f80 buf={"execute":"device_add","arguments":{"driver":"virtio-blk-pci","scsi":"on","bus":"pci.2","addr":"0x0","drive":"drive-virtio-disk1","id":"virtio-disk1"},"id":"libvirt-12"}
 len=172 ret=172 errno=0
2017-03-10 09:23:16.011+0000: 22172: error : qemuAgentIO:652 : internal error: End of file from agent monitor

3) libvirt tries to roll back. Because it still thinks that the domain is using namespaces, it calls a function to enter the namespace of the qemu process and do all the work there. The namespace, however, no longer exists; the kernel cleaned it up (it always does when the last process in the namespace dies). Therefore our roll back attempts fail: we are trying to enter a non-existent namespace (see the illustration just below).
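
For illustration only (this is not libvirt code; the file name and structure are invented here), a minimal standalone C program showing why entering the mount namespace of an already-dead process fails, which is consistent with the "No such file or directory" in the attach-disk error above:

    /* enter_ns.c - try to join the mount namespace of a given PID.
     * Once the last process in a mount namespace has exited, the
     * kernel removes the namespace and /proc/<pid>/ns/mnt with it,
     * so open() fails with ENOENT ("No such file or directory"). */
    #define _GNU_SOURCE
    #include <errno.h>
    #include <fcntl.h>
    #include <sched.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        char path[64];
        int fd;

        if (argc != 2) {
            fprintf(stderr, "usage: %s <pid>\n", argv[0]);
            return EXIT_FAILURE;
        }

        snprintf(path, sizeof(path), "/proc/%s/ns/mnt", argv[1]);

        fd = open(path, O_RDONLY);
        if (fd < 0) {
            /* For a PID that is gone this reports ENOENT. */
            fprintf(stderr, "cannot open %s: %s\n", path, strerror(errno));
            return EXIT_FAILURE;
        }

        /* setns() needs CAP_SYS_ADMIN and a still-existing namespace. */
        if (setns(fd, CLONE_NEWNS) < 0) {
            fprintf(stderr, "setns(%s) failed: %s\n", path, strerror(errno));
            close(fd);
            return EXIT_FAILURE;
        }

        printf("entered mount namespace of pid %s\n", argv[1]);
        close(fd);
        return EXIT_SUCCESS;
    }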

So there are two bugs here:
1) libvirt shouldn't try to use namespace routines once a domain dies,
2) qemu should not crash on device_add.

Working on fixing the libvirt issue.
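
For clarity, a minimal sketch of that ordering in plain C; domain_t, ns_active, on_monitor_eof and rollback_attach are invented names for illustration and are not the actual libvirt symbols:

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct {
        bool ns_active;   /* does the guest still own a private mount namespace? */
    } domain_t;

    /* Called when EOF is read from the qemu monitor, i.e. qemu has died.
     * The kernel tears the namespace down together with its last process,
     * so forget about it before any cleanup or roll back code runs. */
    static void on_monitor_eof(domain_t *dom)
    {
        dom->ns_active = false;
    }

    /* Roll back a failed hotplug (cgroup ACLs, security labels, ...). */
    static void rollback_attach(domain_t *dom)
    {
        if (dom->ns_active)
            printf("enter the qemu mount namespace and undo changes there\n");
        else
            printf("namespace is gone, undo changes from the host side only\n");
    }

    int main(void)
    {
        domain_t dom = { .ns_active = true };

        on_monitor_eof(&dom);    /* qemu died in the middle of the hotplug */
        rollback_attach(&dom);   /* must not try to enter the dead namespace */
        return 0;
    }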

Comment 6 weizhang 2017-03-10 12:04:00 UTC
Created attachment 1261915 [details]
avocado-vt-vm1.xml

Hi Michal,

Attaching the XML as well, to help you confirm the problem :)

Comment 7 Michal Privoznik 2017-03-10 12:41:51 UTC
Patch proposed upstream:

https://www.redhat.com/archives/libvir-list/2017-March/msg00447.html

Comment 8 Michal Privoznik 2017-03-10 15:13:31 UTC
Patch pushed upstream:

commit e915942b05d3c97b9b2b412b0cce43045a5628d1
Author:     Michal Privoznik <mprivozn>
AuthorDate: Fri Mar 10 13:34:15 2017 +0100
Commit:     Michal Privoznik <mprivozn>
CommitDate: Fri Mar 10 16:02:34 2017 +0100

    qemuProcessHandleMonitorEOF: Disable namespace for domain
    
    https://bugzilla.redhat.com/show_bug.cgi?id=1430634
    
    If a qemu process has died, we get EOF on its monitor. At this
    point, since qemu process was the only one running in the
    namespace kernel has already cleaned the namespace up. Any
    attempt of ours to enter it has to fail.
    
    This really happened in the bug linked above. We've tried to
    attach a disk to qemu and while we were in the monitor talking to
    qemu it just died. Therefore our code tried to do some roll back
    (e.g. deny the device in cgroups again, restore labels, etc.).
    However, during the roll back (esp. when restoring labels) we
    still thought that domain has a namespace. So we used secdriver's
    transactions. This failed as there is no namespace to enter.
    
    Signed-off-by: Michal Privoznik <mprivozn>

v3.1.0-104-ge915942b0

Comment 10 Jaroslav Suchanek 2017-03-10 16:18:54 UTC
> So there are two bugs here:
> 1) libvirt shouldn't try to use namespace routines once a domain dies,
> 2) qemu should not crash on device_add.

So we need a qemu clone too, right?

Thanks.

Comment 11 Andrew Jones 2017-03-14 12:28:28 UTC
I'm curious what makes this AArch64/Pegas specific? Is it the machine model? Does this reproduce with the q35 model? I just want to be sure we're targeting the right builds with the fix.

Thanks,
drew

Comment 13 Andrea Bolognani 2017-06-13 08:53:40 UTC
(In reply to Andrew Jones from comment #11)
> I'm curious what makes this AArch64/Pegas specific? Is it the machine model?
> Does this reproduce with the q35 model?

If you look at Bug 1431224, Comment 7 you'll see the QEMU
crash couldn't be reproduced on x86.

With QEMU exiting cleanly instead of crashing, libvirt had
a chance to clean up after itself properly: that's why the
issue could only be reproduced on aarch64.

