Bug 1161831 - screenshot after qemu-attach crashes libvirtd
Summary: screenshot after qemu-attach crashes libvirtd
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.1
Hardware: x86_64
OS: All
Priority: high
Severity: high
Target Milestone: rc
Assignee: Ján Tomko
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-11-08 09:36 UTC by Luyao Huang
Modified: 2015-11-19 05:55 UTC (History)
3 users

Fixed In Version: libvirt-1.2.13-1.el7
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-11-19 05:55:34 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2015:2202 0 normal SHIPPED_LIVE libvirt bug fix and enhancement update 2015-11-19 08:17:58 UTC

Description Luyao Huang 2014-11-08 09:36:33 UTC
Description of problem:
libvirtd crashes when taking a screenshot of a VM that was connected to libvirt via qemu-attach.

Version-Release number of selected component (if applicable):
libvirt-1.2.8-6.el7.x86_64

How reproducible:
100%

Steps to Reproduce:

1.# /usr/libexec/qemu-kvm -m 512 -hda /var/lib/libvirt/images/test6.img -net nic -net tap,vlan=0,ifname=tap0,script=no --daemonize -monitor unix:/tmp/demo,server,nowait -vnc 127.0.0.1:2 -name sdsd2

2.# virsh qemu-attach 22094
Domain sdsd2 attached to pid 22094

3.# virsh screenshot sdsd2
error: could not take a screenshot of sdsd2
error: End of file while reading data: Input/output error
error: Failed to reconnect to the hypervisor

Actual results:
libvirtd crash

Expected results:
The screenshot succeeds (or the command fails gracefully) without crashing libvirtd.

Additional info:

(gdb) t a a bt

Thread 11 (Thread 0x7f114a34d700 (LWP 24159)):
#0  pthread_cond_wait@@GLIBC_2.3.2 ()
    at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
#1  0x00007f115a39d2a6 in virCondWait (c=c@entry=0x7f115c2f5440, 
    m=m@entry=0x7f115c2f5418) at util/virthread.c:153
#2  0x00007f115a39d75b in virThreadPoolWorker (
    opaque=opaque@entry=0x7f115c2d7a60) at util/virthreadpool.c:104
#3  0x00007f115a39d05e in virThreadHelper (data=<optimized out>)
    at util/virthread.c:197
#4  0x00007f1157bf7df3 in start_thread (arg=0x7f114a34d700)
    at pthread_create.c:308
#5  0x00007f115750e05d in clone ()
    at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113

Thread 10 (Thread 0x7f115ae66880 (LWP 24155)):
#0  0x00007f1157503a8d in poll () at ../sysdeps/unix/syscall-template.S:81
#1  0x00007f115a3625a1 in poll (__timeout=5000, __nfds=13, 
    __fds=<optimized out>) at /usr/include/bits/poll2.h:46
#2  virEventPollRunOnce () at util/vireventpoll.c:643
#3  0x00007f115a361092 in virEventRunDefaultImpl () at util/virevent.c:308
#4  0x00007f115aee96ad in virNetServerRun (srv=srv@entry=0x7f115c2f52c0)
    at rpc/virnetserver.c:1139
#5  0x00007f115aeb6548 in main (argc=<optimized out>, argv=<optimized out>)
    at libvirtd.c:1507

Thread 9 (Thread 0x7f114ab4e700 (LWP 24158)):
#0  pthread_cond_wait@@GLIBC_2.3.2 ()
    at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
#1  0x00007f115a39d2a6 in virCondWait (c=c@entry=0x7f115c2f5440, 
    m=m@entry=0x7f115c2f5418) at util/virthread.c:153
#2  0x00007f115a39d75b in virThreadPoolWorker (
    opaque=opaque@entry=0x7f115c2d8090) at util/virthreadpool.c:104
#3  0x00007f115a39d05e in virThreadHelper (data=<optimized out>)
    at util/virthread.c:197
#4  0x00007f1157bf7df3 in start_thread (arg=0x7f114ab4e700)
    at pthread_create.c:308
#5  0x00007f115750e05d in clone ()
    at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113

Thread 8 (Thread 0x7f1148349700 (LWP 24163)):
#0  pthread_cond_wait@@GLIBC_2.3.2 ()
    at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
#1  0x00007f115a39d2a6 in virCondWait (c=c@entry=0x7f115c2f54d8, 
    m=m@entry=0x7f115c2f5418) at util/virthread.c:153
#2  0x00007f115a39d77b in virThreadPoolWorker (
    opaque=opaque@entry=0x7f115c2d7a60) at util/virthreadpool.c:104
#3  0x00007f115a39d05e in virThreadHelper (data=<optimized out>)
    at util/virthread.c:197
#4  0x00007f1157bf7df3 in start_thread (arg=0x7f1148349700)
    at pthread_create.c:308
#5  0x00007f115750e05d in clone ()
    at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113

Thread 7 (Thread 0x7f1146b46700 (LWP 24166)):
#0  pthread_cond_wait@@GLIBC_2.3.2 ()
    at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
#1  0x00007f115a39d2a6 in virCondWait (c=c@entry=0x7f115c2f54d8, 
    m=m@entry=0x7f115c2f5418) at util/virthread.c:153
#2  0x00007f115a39d77b in virThreadPoolWorker (
    opaque=opaque@entry=0x7f115c2d8090) at util/virthreadpool.c:104
#3  0x00007f115a39d05e in virThreadHelper (data=<optimized out>)
    at util/virthread.c:197
#4  0x00007f1157bf7df3 in start_thread (arg=0x7f1146b46700)
    at pthread_create.c:308
#5  0x00007f115750e05d in clone ()
    at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113

Thread 6 (Thread 0x7f1147347700 (LWP 24165)):
#0  pthread_cond_wait@@GLIBC_2.3.2 ()
    at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
#1  0x00007f115a39d2a6 in virCondWait (c=c@entry=0x7f115c2f54d8, 
    m=m@entry=0x7f115c2f5418) at util/virthread.c:153
#2  0x00007f115a39d77b in virThreadPoolWorker (
    opaque=opaque@entry=0x7f115c2d7a60) at util/virthreadpool.c:104
#3  0x00007f115a39d05e in virThreadHelper (data=<optimized out>)
    at util/virthread.c:197
#4  0x00007f1157bf7df3 in start_thread (arg=0x7f1147347700)
    at pthread_create.c:308
#5  0x00007f115750e05d in clone ()
    at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113

Thread 5 (Thread 0x7f114b34f700 (LWP 24157)):
#0  pthread_cond_wait@@GLIBC_2.3.2 ()
    at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
#1  0x00007f115a39d2a6 in virCondWait (c=c@entry=0x7f115c2f5440, 
    m=m@entry=0x7f115c2f5418) at util/virthread.c:153
#2  0x00007f115a39d75b in virThreadPoolWorker (
    opaque=opaque@entry=0x7f115c2d7a60) at util/virthreadpool.c:104
#3  0x00007f115a39d05e in virThreadHelper (data=<optimized out>)
    at util/virthread.c:197
#4  0x00007f1157bf7df3 in start_thread (arg=0x7f114b34f700)
    at pthread_create.c:308
#5  0x00007f115750e05d in clone ()
    at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113

Thread 4 (Thread 0x7f1147b48700 (LWP 24164)):
#0  pthread_cond_wait@@GLIBC_2.3.2 ()
    at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
#1  0x00007f115a39d2a6 in virCondWait (c=c@entry=0x7f115c2f54d8, 
    m=m@entry=0x7f115c2f5418) at util/virthread.c:153
#2  0x00007f115a39d77b in virThreadPoolWorker (
    opaque=opaque@entry=0x7f115c2d8090) at util/virthreadpool.c:104
#3  0x00007f115a39d05e in virThreadHelper (data=<optimized out>)
    at util/virthread.c:197
#4  0x00007f1157bf7df3 in start_thread (arg=0x7f1147b48700)
    at pthread_create.c:308
#5  0x00007f115750e05d in clone ()
    at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113

Thread 3 (Thread 0x7f1148b4a700 (LWP 24162)):
#0  pthread_cond_wait@@GLIBC_2.3.2 ()
    at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
#1  0x00007f115a39d2a6 in virCondWait (c=c@entry=0x7f115c2f54d8, 
    m=m@entry=0x7f115c2f5418) at util/virthread.c:153
#2  0x00007f115a39d77b in virThreadPoolWorker (
    opaque=opaque@entry=0x7f115c2d8090) at util/virthreadpool.c:104
#3  0x00007f115a39d05e in virThreadHelper (data=<optimized out>)
    at util/virthread.c:197
#4  0x00007f1157bf7df3 in start_thread (arg=0x7f1148b4a700)
    at pthread_create.c:308
#5  0x00007f115750e05d in clone ()
    at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113

Thread 2 (Thread 0x7f114934b700 (LWP 24161)):
#0  pthread_cond_wait@@GLIBC_2.3.2 ()
    at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
#1  0x00007f115a39d2a6 in virCondWait (c=c@entry=0x7f115c2f5440, 
    m=m@entry=0x7f115c2f5418) at util/virthread.c:153
#2  0x00007f115a39d75b in virThreadPoolWorker (
    opaque=opaque@entry=0x7f115c2d7a60) at util/virthreadpool.c:104
#3  0x00007f115a39d05e in virThreadHelper (data=<optimized out>)
    at util/virthread.c:197
#4  0x00007f1157bf7df3 in start_thread (arg=0x7f114934b700)
    at pthread_create.c:308
#5  0x00007f115750e05d in clone ()
    at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113

Thread 1 (Thread 0x7f1149b4c700 (LWP 24160)):
#0  __strcmp_sse42 () at ../sysdeps/x86_64/multiarch/strcmp-sse42.S:164
#1  0x00007f115a50d0c9 in virSecuritySELinuxSetFileconHelper (
    path=0x7f1124000e50 "/var/cache/libvirt/qemu/qemu.screendump.m4QkFu", 
    tcon=0x0, optional=<optimized out>) at security/security_selinux.c:890
#2  0x00007f115a509513 in virSecurityManagerSetSavedStateLabel (
    mgr=0x7f113c10d630, vm=vm@entry=0x7f1138000cf0, 
    savefile=savefile@entry=0x7f1124000e50 "/var/cache/libvirt/qemu/qemu.screendump.m4QkFu") at security/security_manager.c:547
#3  0x00007f115a506476 in virSecurityStackSetSavedStateLabel (
    mgr=<optimized out>, vm=0x7f1138000cf0, 
    savefile=0x7f1124000e50 "/var/cache/libvirt/qemu/qemu.screendump.m4QkFu")
    at security/security_stack.c:351
#4  0x00007f115a509513 in virSecurityManagerSetSavedStateLabel (
    mgr=0x7f113c1680a0, vm=0x7f1138000cf0, 
    savefile=0x7f1124000e50 "/var/cache/libvirt/qemu/qemu.screendump.m4QkFu")
    at security/security_manager.c:547
#5  0x00007f11432ff94f in qemuDomainScreenshot (dom=<optimized out>, 
    st=0x7f11240009f0, screen=<optimized out>, flags=<optimized out>)
    at qemu/qemu_driver.c:3858
#6  0x00007f115a425b10 in virDomainScreenshot (
    domain=domain@entry=0x7f1124000930, stream=stream@entry=0x7f11240009f0, 
    screen=0, flags=0) at libvirt.c:3171
#7  0x00007f115aec8833 in remoteDispatchDomainScreenshot (
    server=<optimized out>, ret=0x7f11240008e0, args=0x7f1124000900, 
    rerr=0x7f1149b4bc80, msg=<optimized out>, client=0x7f115c2f6340)
    at remote_dispatch.h:7412
#8  remoteDispatchDomainScreenshotHelper (server=<optimized out>, 
    client=0x7f115c2f6340, msg=<optimized out>, rerr=0x7f1149b4bc80, 
    args=0x7f1124000900, ret=0x7f11240008e0) at remote_dispatch.h:7379
#9  0x00007f115a498ff2 in virNetServerProgramDispatchCall (msg=0x7f115c303db0, 
    client=0x7f115c2f6340, server=0x7f115c2f52c0, prog=0x7f115c300d20)
    at rpc/virnetserverprogram.c:437
#10 virNetServerProgramDispatch (prog=0x7f115c300d20, 
    server=server@entry=0x7f115c2f52c0, client=0x7f115c2f6340, 
    msg=0x7f115c303db0) at rpc/virnetserverprogram.c:307
#11 0x00007f115aee81fd in virNetServerProcessMsg (msg=<optimized out>, 
    prog=<optimized out>, client=<optimized out>, srv=0x7f115c2f52c0)
    at rpc/virnetserver.c:172
#12 virNetServerHandleJob (jobOpaque=<optimized out>, opaque=0x7f115c2f52c0)
    at rpc/virnetserver.c:193
#13 0x00007f115a39d6c5 in virThreadPoolWorker (
    opaque=opaque@entry=0x7f115c2d8090) at util/virthreadpool.c:145
#14 0x00007f115a39d05e in virThreadHelper (data=<optimized out>)
    at util/virthread.c:197
#15 0x00007f1157bf7df3 in start_thread (arg=0x7f1149b4c700)
    at pthread_create.c:308
#16 0x00007f115750e05d in clone ()
    at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113



The apparent cause of the libvirtd crash:

A qemu-attach VM ends up with an incorrect SELinux label:

  <seclabel type='static' model='selinux' relabel='yes'>
    <label>system_u:system_r:virtd_t:s0-s0:c0.c1023</label>
  </seclabel>
  <seclabel type='static' model='dac' relabel='yes'>
    <label>system_u:system_r:virtd_t:s0-s0:c0.c1023</label>
  </seclabel>

and has no image label.
When taking a screenshot, libvirtd calls the virSecuritySELinuxSetFileconHelper
function with tcon = NULL, and then passes that NULL as the first parameter to strcmp().
libvirt should check for NULL before calling strcmp().

Comment 1 Ján Tomko 2014-11-11 08:08:00 UTC
Upstream patch:
https://www.redhat.com/archives/libvir-list/2014-November/msg00308.html

Comment 3 Ján Tomko 2014-12-04 16:25:08 UTC
v1 of the fix by Luyao Huang:
https://www.redhat.com/archives/libvir-list/2014-December/msg00009.html

The first patch has been pushed as:
commit f8c1fb3d2e38f181912544e956af068acde0e900
Author:     Luyao Huang <lhuang>
AuthorDate: 2014-12-01 17:54:35 +0800
Commit:     Martin Kletzander <mkletzan>
CommitDate: 2014-12-01 12:04:38 +0100

    qemu: Make pid available for security managers in qemuProcessAttach
    
    There are some small issue in qemuProcessAttach:
    
    1.Fix virSecurityManagerGetProcessLabel always get pid = 0,
    move 'vm->pid = pid' before call virSecurityManagerGetProcessLabel.
    
    2.Use virSecurityManagerGenLabel to get image label.
    
    3.Fix always set selinux label for other security driver label.
    
    Signed-off-by: Luyao Huang <lhuang>

git describe: v1.2.10-221-gf8c1fb3

v2 of the second patch from the series on the list:
https://www.redhat.com/archives/libvir-list/2014-December/msg00207.html

Comment 4 Ján Tomko 2014-12-11 09:46:45 UTC
Fixed upstream by:
commit c7c96647e903f50273977d1514d3a2a8f713b6e7
Author:     Luyao Huang <lhuang>
AuthorDate: 2014-12-09 16:33:57 +0800
Commit:     Ján Tomko <jtomko>
CommitDate: 2014-12-11 10:29:43 +0100

    dac: Add a new func to get DAC label of a running process
    
    When using qemuProcessAttach to attach a qemu process,
    the DAC label is not filled correctly.
    
    Introduce a new function to get the uid:gid from the system
    and fill the label.
    
    This fixes the daemon crash when 'virsh screenshot' is called:
    https://bugzilla.redhat.com/show_bug.cgi?id=1161831
    
    It also fixes qemu-attach after the prerequisite of this patch
    (commit f8c1fb3) was pushed out of order.
    
    Signed-off-by: Luyao Huang <lhuang>
    Signed-off-by: Ján Tomko <jtomko>

git describe: v1.2.11-rc2-1-gc7c9664

Comment 6 vivian zhang 2015-04-28 09:47:16 UTC
I can reproduce this bug with build libvirt-1.2.8-6.el7.x86_64.

Verified with build libvirt-1.2.14-1.el7.x86_64.

Verification steps:

1. # /usr/libexec/qemu-kvm -hdb /var/lib/libvirt/images/new.img -monitor unix:/tmp/demo,server,nowait -name new -vnc 127.0.0.1:2

2.# ps aux |grep new
root     16819 19.9  0.2 589964 19276 pts/4    Sl+  17:38   0:03 /usr/libexec/qemu-kvm -hdb /var/lib/libvirt/images/new.img -monitor unix:/tmp/demo,server,nowait -name new -vnc 127.0.0.1:2
root     16828  0.0  0.0 112640   960 pts/3    S+   17:38   0:00 grep --color=auto new

3. # virsh qemu-attach 16819
Domain new attached to pid 16819


4. # virsh list
 Id    Name                           State
----------------------------------------------------
 44    new                            running


5. Check the DAC label with dumpxml; the DAC label is obtained correctly when running as root:

# virsh dumpxml new

...
 <seclabel type='static' model='selinux' relabel='yes'>
    <label>unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023</label>
    <imagelabel>system_u:object_r:svirt_image_t:s0-s0:c0.c1023</imagelabel>
  </seclabel>
  <seclabel type='static' model='dac' relabel='yes'>
    <label>+0:+0</label>
    <imagelabel>+0:+0</imagelabel>
  </seclabel>
</domain>
...

6. Check the libvirtd process and take a screenshot; libvirtd does not crash:
# ps aux |grep libvirtd
root     10010  0.0  0.4 1247060 32308 ?       Ssl  Apr24   1:53 /usr/sbin/libvirtd
root     16945  0.0  0.0 112644   960 pts/3    S+   17:44   0:00 grep --color=auto libvirtd

# virsh screenshot new /tmp/new.ppm
Screenshot saved to /tmp/new.ppm, with type of image/x-portable-pixmap

# ps aux |grep libvirtd
root     10010  0.0  0.4 1247508 32764 ?       Ssl  Apr24   1:53 /usr/sbin/libvirtd
root     16958  0.0  0.0 112644   964 pts/3    S+   17:44   0:00 grep --color=auto libvirtd
[root@server 1.2.14-1.el7]#

Comment 8 vivian zhang 2015-05-22 08:42:20 UTC
I can reproduce this with build libvirt-1.2.8-6.el7.x86_64.

Verified with build libvirt-1.2.15-2.el7.x86_64.

Steps:

1. # /usr/libexec/qemu-kvm -hdb /var/lib/libvirt/images/new.img -monitor unix:/tmp/demo,server,nowait -name ef
VNC server running on `::1:5900'

2. # ps aux |grep ef
root        93  0.0  0.0      0     0 ?        S<   May21   0:00 [deferwq]
root      2431 51.6  0.3 623652 28528 pts/1    Sl+  16:39   0:05 /usr/libexec/qemu-kvm -hdb /var/lib/libvirt/images/new.img -monitor unix:/tmp/demo,server,nowait -name ef

3. # virsh qemu-attach 2431
Domain ef attached to pid 2431

4. # virsh list
 Id    Name                           State
----------------------------------------------------
 17    ef                             running

5.# virsh screenshot ef
Screenshot saved to ef-2015-05-22-16:40:48.ppm, with type of image/x-portable-pixmap

Checked that libvirtd did not crash.

Moving to VERIFIED.

Comment 10 errata-xmlrpc 2015-11-19 05:55:34 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-2202.html

