Bug 1108590 - libvirtd crashes when starting a guest whose DAC seclabel has type='none' in the guest's XML
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: libvirt
Version: 6.6
Hardware: x86_64
OS: All
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Ján Tomko
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks: 1108593
 
Reported: 2014-06-12 09:48 UTC by zhenfeng wang
Modified: 2014-10-14 04:22 UTC
CC List: 5 users

Fixed In Version: libvirt-0.10.2-39.el6
Doc Type: Bug Fix
Doc Text:
Clone Of:
Clones: 1108593
Environment:
Last Closed: 2014-10-14 04:22:29 UTC
Target Upstream Version:
Embargoed:




Links:
Red Hat Product Errata RHBA-2014:1374 (priority: normal, status: SHIPPED_LIVE): libvirt bug fix and enhancement update. Last updated: 2014-10-14 08:11:54 UTC

Description zhenfeng wang 2014-06-12 09:48:34 UTC
Description of problem:
libvirtd crashes when starting a guest whose DAC seclabel has type='none' in the guest's XML.

Version-Release number of selected component (if applicable):
kernel-2.6.32-466.el6.x86_64
qemu-kvm-rhev-0.12.1.2-2.426.el6.x86_64
libvirt-0.10.2-38.el6.x86_64

How reproducible:


Steps to Reproduce:
1. Prepare a guest with the following content in the guest's XML:
# virsh dumpxml rhel6
--
  <seclabel type='none' model='dac'/>

2. Start the guest:
# virsh start rhel6

3. Save the guest; libvirtd crashes:
# virsh save rhel6 /tmp/rh6.save
error: Failed to save domain rhel6 to /tmp/rh6.save
error: End of file while reading data: Input/output error
error: One or more references were leaked after disconnect from the hypervisor
error: Failed to reconnect to the hypervisor

Actual results:
libvirtd crashed

Expected results:
The guest should be saved successfully and libvirtd shouldn't crash

Additional info:

Comment 1 zhenfeng wang 2014-06-12 09:49:18 UTC
(gdb) t a a bt

Thread 11 (Thread 0x7fffec0f1700 (LWP 15423)):
#0  0x0000003d14a0b5bc in pthread_cond_wait@@GLIBC_2.3.2 ()
   from /lib64/libpthread.so.0
#1  0x00007ffff7a0cd86 in virCondWait (c=<value optimized out>, 
    m=<value optimized out>) at util/threads-pthread.c:117
#2  0x00007ffff7a0d353 in virThreadPoolWorker (opaque=<value optimized out>)
    at util/threadpool.c:103
#3  0x00007ffff7a0cba9 in virThreadHelper (data=<value optimized out>)
    at util/threads-pthread.c:161
#4  0x0000003d14a079d1 in start_thread () from /lib64/libpthread.so.0
#5  0x0000003d146e8b6d in clone () from /lib64/libc.so.6

Thread 10 (Thread 0x7fffecaf2700 (LWP 15422)):
#0  0x0000003d14a0b5bc in pthread_cond_wait@@GLIBC_2.3.2 ()
   from /lib64/libpthread.so.0
#1  0x00007ffff7a0cd86 in virCondWait (c=<value optimized out>, 
    m=<value optimized out>) at util/threads-pthread.c:117
#2  0x00007ffff7a0d353 in virThreadPoolWorker (opaque=<value optimized out>)
    at util/threadpool.c:103
#3  0x00007ffff7a0cba9 in virThreadHelper (data=<value optimized out>)
    at util/threads-pthread.c:161
#4  0x0000003d14a079d1 in start_thread () from /lib64/libpthread.so.0
#5  0x0000003d146e8b6d in clone () from /lib64/libc.so.6

Thread 9 (Thread 0x7fffed4f3700 (LWP 15421)):
#0  0x0000003d14a0b5bc in pthread_cond_wait@@GLIBC_2.3.2 ()
   from /lib64/libpthread.so.0
#1  0x00007ffff7a0cd86 in virCondWait (c=<value optimized out>, 
    m=<value optimized out>) at util/threads-pthread.c:117
#2  0x00007ffff7a0d353 in virThreadPoolWorker (opaque=<value optimized out>)
    at util/threadpool.c:103
#3  0x00007ffff7a0cba9 in virThreadHelper (data=<value optimized out>)
    at util/threads-pthread.c:161
#4  0x0000003d14a079d1 in start_thread () from /lib64/libpthread.so.0
#5  0x0000003d146e8b6d in clone () from /lib64/libc.so.6

Thread 8 (Thread 0x7fffedef4700 (LWP 15420)):
#0  0x0000003d14a0b5bc in pthread_cond_wait@@GLIBC_2.3.2 ()
   from /lib64/libpthread.so.0
#1  0x00007ffff7a0cd86 in virCondWait (c=<value optimized out>, 
    m=<value optimized out>) at util/threads-pthread.c:117
#2  0x00007ffff7a0d353 in virThreadPoolWorker (opaque=<value optimized out>)
    at util/threadpool.c:103
#3  0x00007ffff7a0cba9 in virThreadHelper (data=<value optimized out>)
    at util/threads-pthread.c:161
#4  0x0000003d14a079d1 in start_thread () from /lib64/libpthread.so.0
#5  0x0000003d146e8b6d in clone () from /lib64/libc.so.6

Thread 7 (Thread 0x7fffee8f5700 (LWP 15419)):
#0  0x0000003d14a0b5bc in pthread_cond_wait@@GLIBC_2.3.2 ()
   from /lib64/libpthread.so.0
#1  0x00007ffff7a0cd86 in virCondWait (c=<value optimized out>, 
    m=<value optimized out>) at util/threads-pthread.c:117
#2  0x00007ffff7a0d353 in virThreadPoolWorker (opaque=<value optimized out>)
    at util/threadpool.c:103
#3  0x00007ffff7a0cba9 in virThreadHelper (data=<value optimized out>)
    at util/threads-pthread.c:161
#4  0x0000003d14a079d1 in start_thread () from /lib64/libpthread.so.0
#5  0x0000003d146e8b6d in clone () from /lib64/libc.so.6

Thread 6 (Thread 0x7fffef2f6700 (LWP 15418)):
#0  0x0000003d14a0b5bc in pthread_cond_wait@@GLIBC_2.3.2 ()
   from /lib64/libpthread.so.0
#1  0x00007ffff7a0cd86 in virCondWait (c=<value optimized out>, 
    m=<value optimized out>) at util/threads-pthread.c:117
#2  0x00007ffff7a0d353 in virThreadPoolWorker (opaque=<value optimized out>)
    at util/threadpool.c:103
#3  0x00007ffff7a0cba9 in virThreadHelper (data=<value optimized out>)
    at util/threads-pthread.c:161
#4  0x0000003d14a079d1 in start_thread () from /lib64/libpthread.so.0
#5  0x0000003d146e8b6d in clone () from /lib64/libc.so.6

Thread 5 (Thread 0x7fffefcf7700 (LWP 15417)):
#0  0x0000003d14681451 in __strlen_sse2 () from /lib64/libc.so.6
#1  0x0000003d14681166 in strdup () from /lib64/libc.so.6
#2  0x00007ffff7a1036f in virParseOwnershipIds (label=0x0, 
    uidPtr=0x7fffefcf679c, gidPtr=0x7fffefcf6798) at util/util.c:3446
#3  0x000000000044d63e in qemuOpenFile (driver=<value optimized out>, 
    vm=<value optimized out>, path=0x7fffdc000cc0 "/tmp/rh6.save", oflags=577, 
    needUnlink=0x7fffefcf682e, bypassSecurityDriver=0x7fffefcf682f)
    at qemu/qemu_driver.c:2751
#4  0x000000000046dc0c in qemuDomainSaveMemory (driver=0x7fffe400b860, 
    vm=0x7fffe40d1470, path=0x7fffdc000cc0 "/tmp/rh6.save", 
    domXML=<value optimized out>, compressed=0, 
    was_running=<value optimized out>, flags=0, asyncJob=QEMU_ASYNC_JOB_SAVE)
    at qemu/qemu_driver.c:2941
#5  0x000000000046e39f in qemuDomainSaveInternal (driver=0x7fffe400b860, 
    dom=0x7fffdc000c50, vm=0x7fffe40d1470, 
    path=0x7fffdc000cc0 "/tmp/rh6.save", compressed=0, xmlin=0x0, flags=0)
    at qemu/qemu_driver.c:3087
#6  0x000000000046e93e in qemuDomainSaveFlags (dom=0x7fffdc000c50, 
    path=0x7fffdc000cc0 "/tmp/rh6.save", dxml=0x0, flags=0)
    at qemu/qemu_driver.c:3196
#7  0x00007ffff7ab4135 in virDomainSave (domain=0x7fffdc000c50, 
    to=0x7fffdc000d30 "/tmp/rh6.save") at libvirt.c:2590
#8  0x000000000043cad6 in remoteDispatchDomainSave (
    server=<value optimized out>, client=<value optimized out>, 
    msg=<value optimized out>, rerr=0x7fffefcf6b80, 
    args=<value optimized out>, ret=<value optimized out>)
    at remote_dispatch.h:4630
#9  remoteDispatchDomainSaveHelper (server=<value optimized out>, 
    client=<value optimized out>, msg=<value optimized out>, 
    rerr=0x7fffefcf6b80, args=<value optimized out>, ret=<value optimized out>)
    at remote_dispatch.h:4608
#10 0x00007ffff7aec2c2 in virNetServerProgramDispatchCall (prog=0x79fa70, 
    server=0x796ff0, client=0x79ac00, msg=0x79b140)
    at rpc/virnetserverprogram.c:431
#11 virNetServerProgramDispatch (prog=0x79fa70, server=0x796ff0, 
    client=0x79ac00, msg=0x79b140) at rpc/virnetserverprogram.c:304
#12 0x00007ffff7aeab0e in virNetServerProcessMsg (srv=<value optimized out>, 
    client=0x79ac00, prog=<value optimized out>, msg=0x79b140)
    at rpc/virnetserver.c:170
#13 0x00007ffff7aeb1ac in virNetServerHandleJob (
    jobOpaque=<value optimized out>, opaque=<value optimized out>)
    at rpc/virnetserver.c:191
#14 0x00007ffff7a0d2bc in virThreadPoolWorker (opaque=<value optimized out>)
    at util/threadpool.c:144
#15 0x00007ffff7a0cba9 in virThreadHelper (data=<value optimized out>)
    at util/threads-pthread.c:161
#16 0x0000003d14a079d1 in start_thread () from /lib64/libpthread.so.0
#17 0x0000003d146e8b6d in clone () from /lib64/libc.so.6

Thread 4 (Thread 0x7ffff06f8700 (LWP 15416)):
#0  0x0000003d14a0b5bc in pthread_cond_wait@@GLIBC_2.3.2 ()
   from /lib64/libpthread.so.0
#1  0x00007ffff7a0cd86 in virCondWait (c=<value optimized out>, 
    m=<value optimized out>) at util/threads-pthread.c:117
#2  0x00007ffff7a0d353 in virThreadPoolWorker (opaque=<value optimized out>)
    at util/threadpool.c:103
#3  0x00007ffff7a0cba9 in virThreadHelper (data=<value optimized out>)
    at util/threads-pthread.c:161
#4  0x0000003d14a079d1 in start_thread () from /lib64/libpthread.so.0
#5  0x0000003d146e8b6d in clone () from /lib64/libc.so.6

Thread 3 (Thread 0x7ffff10f9700 (LWP 15415)):
#0  0x0000003d14a0b5bc in pthread_cond_wait@@GLIBC_2.3.2 ()
   from /lib64/libpthread.so.0
#1  0x00007ffff7a0cd86 in virCondWait (c=<value optimized out>, 
    m=<value optimized out>) at util/threads-pthread.c:117
#2  0x00007ffff7a0d353 in virThreadPoolWorker (opaque=<value optimized out>)
    at util/threadpool.c:103
#3  0x00007ffff7a0cba9 in virThreadHelper (data=<value optimized out>)
    at util/threads-pthread.c:161
#4  0x0000003d14a079d1 in start_thread () from /lib64/libpthread.so.0
#5  0x0000003d146e8b6d in clone () from /lib64/libc.so.6

Thread 2 (Thread 0x7ffff1afa700 (LWP 15414)):
#0  0x0000003d14a0b5bc in pthread_cond_wait@@GLIBC_2.3.2 ()
   from /lib64/libpthread.so.0
#1  0x00007ffff7a0cd86 in virCondWait (c=<value optimized out>, 
    m=<value optimized out>) at util/threads-pthread.c:117
#2  0x00007ffff7a0d353 in virThreadPoolWorker (opaque=<value optimized out>)
    at util/threadpool.c:103
#3  0x00007ffff7a0cba9 in virThreadHelper (data=<value optimized out>)
    at util/threads-pthread.c:161
#4  0x0000003d14a079d1 in start_thread () from /lib64/libpthread.so.0
#5  0x0000003d146e8b6d in clone () from /lib64/libc.so.6

Thread 1 (Thread 0x7ffff798c860 (LWP 15412)):
#0  0x0000003d146df343 in poll () from /lib64/libc.so.6
#1  0x00007ffff79fa72c in virEventPollRunOnce () at util/event_poll.c:615
#2  0x00007ffff79f9967 in virEventRunDefaultImpl () at util/event.c:247
#3  0x00007ffff7aea34d in virNetServerRun (srv=0x796ff0)
    at rpc/virnetserver.c:748
#4  0x0000000000423eb7 in main (argc=<value optimized out>, 
    argv=<value optimized out>) at libvirtd.c:1229
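
The crash site is in Thread 5: qemuOpenFile calls virParseOwnershipIds with label=0x0 (frame #2), which hands the NULL pointer straight to strdup (frame #1), and glibc's strdup dereferences its argument unconditionally, so the fault surfaces in __strlen_sse2 (frame #0). A minimal standalone illustration of that failure mode (this is a sketch, not libvirt code):

#include <stdio.h>
#include <string.h>

int main(void)
{
    /* With <seclabel type='none' model='dac'/>, no owner string such as
     * "107:107" is ever generated, so the label pointer stays NULL. */
    const char *label = NULL;

    /* strdup(NULL) is undefined behavior. glibc's strdup runs strlen()
     * on its argument first, so the NULL dereference faults inside
     * strlen (the __strlen_sse2 frame in the backtrace above). */
    char *copy = strdup(label);

    printf("%s\n", copy); /* never reached; the process dies above */
    return 0;
}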

Comment 3 Ján Tomko 2014-06-12 10:03:04 UTC
Fixed upstream:
commit 7eb0ee175b278a4439cee65a7a554767f0be9cd1
Author:     Ján Tomko <jtomko>
AuthorDate: 2014-06-12 10:50:43 +0200
Commit:     Ján Tomko <jtomko>
CommitDate: 2014-06-12 12:01:35 +0200

    Fix crash when saving a domain with type none dac label
    
    qemuDomainGetImageIds did not check if there was a label
    in the seclabel, thus crashing on
    <seclabel type='none' model='dac'/>
    
    https://bugzilla.redhat.com/show_bug.cgi?id=1108590

git describe: v1.2.5-112-g7eb0ee1
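
The fix adds the missing NULL check on the caller's side: when the DAC seclabel carries no label string, ownership parsing is skipped and the defaults are kept. Below is a compilable sketch of that guard; the struct and function names are simplified stand-ins for libvirt's virSecurityLabelDef and virParseOwnershipIds, not the verbatim patch:

#include <stdio.h>
#include <sys/types.h>

/* Simplified stand-in for libvirt's virSecurityLabelDef. */
struct seclabel {
    const char *model;  /* "dac" */
    const char *label;  /* NULL for <seclabel type='none' model='dac'/> */
};

/* Stand-in for virParseOwnershipIds(): parse a "uid:gid" string. */
static int parse_ownership_ids(const char *label, uid_t *uid, gid_t *gid)
{
    unsigned int u, g;
    if (sscanf(label, "%u:%u", &u, &g) != 2)
        return -1;
    *uid = u;
    *gid = g;
    return 0;
}

/* The shape of the fix in qemuDomainGetImageIds(): test sec->label
 * before handing it on, instead of testing only sec itself. */
static void get_image_ids(const struct seclabel *sec, uid_t *uid, gid_t *gid)
{
    if (sec && sec->label)  /* previously only 'sec' was checked */
        parse_ownership_ids(sec->label, uid, gid);
    /* otherwise leave the caller's default uid/gid untouched */
}

int main(void)
{
    uid_t uid = 0;
    gid_t gid = 0;
    struct seclabel none_label = { .model = "dac", .label = NULL };

    get_image_ids(&none_label, &uid, &gid);  /* no longer crashes */
    printf("uid=%u gid=%u\n", (unsigned)uid, (unsigned)gid);
    return 0;
}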

Comment 6 zhenfeng wang 2014-06-25 04:01:11 UTC
Verified this bug with libvirt-0.10.2-39.el6; the verification steps follow.

Package info:
libvirt-0.10.2-39.el6.x86_64
qemu-kvm-rhev-0.12.1.2-2.428.el6.x86_64
kernel-2.6.32-486.el6.x86_64

Steps:

1. Set security_driver='selinux' in qemu.conf.

2. Prepare a guest with the following content in the guest's XML:
# virsh dumpxml rhel6
--
  <seclabel type='none' model='dac'/>

3. Start the guest:
# virsh start rhel6

4. After the guest has started completely, save and restore it; both operations succeed:
# virsh save rhel6 /tmp/rhel6.save

Domain rhel6 saved to /tmp/rhel6.save

# virsh restore /tmp/rhel6.save
Domain restored from /tmp/rhel6.save

5. Migrate the guest to the target host with the storage; the guest migrates successfully.

6. Destroy the guest, then set security_driver='none' in qemu.conf and restart libvirtd:
# service libvirtd restart

7. Repeat steps 2-5; all steps complete successfully.

Based on the above test results, marking this bug verified.

Comment 8 errata-xmlrpc 2014-10-14 04:22:29 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2014-1374.html

